24 October 2025

12 ways to earn trust in your AI—with examples

AI is already shaping how kids think, play, and learn, often without them knowing it.

That’s why parents and educators are asking sharper questions about AI in kids’ lives. Questions that don’t come from fear alone. People want to trust the brands and products they bring into their homes and classrooms.

This article explores how you can build transparency into your AI-powered products to build and retain that trust.

The long-term risks of AI 

Kids often build quiet friendships with AI. They talk to it, confide in it, and start to believe it understands them. Many kids trust AI’s answers, praise, and fairness without realizing how much those systems shape what they see and believe.

This is what makes parents and educators worry. And they’re right to. 

Social and emotional learning (SEL) is how kids develop empathy, agency, and a sense of self. AI can disrupt this development, interfering with how kids understand emotions, relationships, and identity. That poses a real risk to their long-term health and wellbeing.

The social and emotional skills at stake

Kids learn by watching, listening, and connecting with real people. AI can mimic emotional cues without truly understanding them, muddling how kids perceive others and express themselves. 

AI also shapes how kids see themselves. Algorithmic recommendations and feedback loops encourage them to repeat what gets attention and avoid what doesn’t. That can limit experimentation, discourage healthy risk-taking, and shape a child’s self-worth around what the system rewards.

Decision-making is another core SEL skill. Kids need time and space to reflect, consider others’ perspectives, and think through consequences. But AI often responds instantly and confidently, which can shortcut the moments that help kids pause, question, and learn to make thoughtful choices.

There’s also a social cost. When kids lean on AI for guidance or comfort, they may shy away from the unpredictability of human relationships. Over time, this can weaken their ability to do what’s needed to make friends: build trust, resolve conflict, ask for help, and recognize when others need it.

The biggest risk? That these harms go unnoticed. AI doesn’t need to malfunction to cause damage. When it quietly reinforces stereotypes, rewards surface-level engagement, or discourages critical thinking and creative problem-solving, the effects can accumulate gradually, and at scale.

Transparency that promotes AI literacy can help kids’ brands build trust. For these reasons, Peppy believes it’s also their responsibility.

12 ways to build transparency into your AI-powered product 

Here are 12 practical ways you can build transparency into your AI-powered product and, in the process, address the red flags raised by kids and adults. Each one gives your brand a way to make your product clearer, safer, and more trustworthy by design.

Label AI interactions

What’s at stake
Kids often can’t tell when they’re talking to a bot. If they believe they’re chatting with a real person, they may form unrealistic expectations about what AI can understand, remember, or feel. This can undermine their ability to distinguish between real and simulated relationships.

What transparency looks like
Use visual, audio, or text-based cues to flag AI interactions, whether it’s a chatbot, content generator, or recommendation feed. Keep the explanation simple, consistent, and age-appropriate.

Example
In-game characters could include a sparkle icon, sound effect, or tooltip to flag that they’re AI-generated. Or make a big deal of it. In League of Legends, the use of bots isn’t concealed; it’s promoted as a feature of the newcomer’s experience.
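If your product is chat-based, the label can travel with the message itself rather than living in a settings page. Here’s a minimal sketch in TypeScript; the ChatMessage type, the badge text, and the renderMessage helper are illustrative placeholders, not a prescribed implementation.

```typescript
// Minimal sketch: attach a visible "AI" label to any message a bot sends.
// ChatMessage, renderMessage, and the badge text are all hypothetical names.

interface ChatMessage {
  sender: string;
  text: string;
  isAI: boolean; // set by the backend whenever a bot generates the reply
}

function renderMessage(msg: ChatMessage): string {
  // The label travels with the message itself, so it can't be dropped by a
  // later UI change or buried in a settings page.
  const badge = msg.isAI ? " ✨ [AI helper]" : "";
  return `${msg.sender}${badge}: ${msg.text}`;
}

console.log(renderMessage({ sender: "Robo-Guide", text: "Want a hint?", isAI: true }));
// Robo-Guide ✨ [AI helper]: Want a hint?
```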

Say what data you’re collecting and why

What’s at stake
Data is a big worry: how it’s collected, stored, and shared. Kids often don’t realize that their clicks, conversations, and choices leave behind a digital trail (and when they okay location sharing, a physical one). Without clear communication, families may feel manipulated or shut out.

What transparency looks like
Explain what data is being collected and how it’s used with simple language, visual cues, or interactive walk-throughs. Make it easy for kids and parents to understand what’s optional and what can be adjusted.

Example
Use tooltips or pop-up overlays to explain what’s being asked and why. The LEGO Group covers privacy information for kids on a single page, signposting readers with clear headers and subheaders and using simple language in snackable chunks.

Offer explanations of AI decisions

What’s at stake
When AI safety and moderation tools hide content, flag behavior, or make a suggestion, kids may not understand why. That can feel arbitrary, unfair, or even personal. Without context, they’re left guessing about the rules and may lose trust in the system or themselves.

What transparency looks like
Give short, clear explanations whenever these tools recommend, block, or remove something. These moments can help kids understand what’s happening and foster digital literacy.

Example
A message like “This video was hidden because it includes strong language” or “You’re seeing this because you liked similar games” helps kids connect cause and effect. YouTube Kids does this well. When results appear, a statement informs the user why and how parents can learn more.
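One lightweight way to keep this consistent is to pair every automated decision with a ready-made, kid-friendly explanation. The sketch below assumes hypothetical reason codes and wording; your moderation system’s real categories will differ.

```typescript
// Minimal sketch: map each moderation or recommendation decision to a short,
// kid-friendly explanation. Reason codes and wording are illustrative only.

type DecisionReason = "strong_language" | "similar_games_liked" | "age_rating";

const explanations: Record<DecisionReason, string> = {
  strong_language: "This video was hidden because it includes strong language.",
  similar_games_liked: "You're seeing this because you liked similar games.",
  age_rating: "This was hidden because it's made for older players.",
};

function explainDecision(reason: DecisionReason): string {
  // Every automated action ships with its explanation, so the UI never shows
  // a block or suggestion without saying why.
  return explanations[reason];
}

console.log(explainDecision("strong_language"));
```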

Let kids (and adults) challenge or override AI choices

What’s at stake
Kids can feel stuck or misrepresented when AI gets it wrong or repetitive. Over-personalization can lead to boredom, frustration, or a narrow view of what’s possible. Without a way to push back, kids lose a sense of control and choice. At the same time, a constant stream of tailored content can make excessive screen time all too easy, which carries well-documented risks.

What transparency looks like
Make it easy to reset recommendations and flag mismatches. Giving kids and families a say in how AI personalizes content reinforces agency and promotes exploration. And to reduce the risk of excessive screen time, make it easy for kids and parents to limit app usage, turn off autoplay, or otherwise control how long they spend on their devices.

Example
A “Show me something different” button or a one-click content blocker keeps kids in charge. YouTube shows a drop-down menu next to every content recommendation. Here, users can dismiss a suggestion, stop channels from appearing in future recommendations, and report the AI’s picks. YouTube Kids makes it easy for parents to steer AI-generated content recommendations, too. It’s also simple for them to restrict screen time.
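Under the hood, this can be as simple as storing learned preferences and dismissals in a structure a single tap can clear. A rough TypeScript sketch, with a hypothetical RecommendationProfile shape:

```typescript
// Minimal sketch of a "Show me something different" control, assuming a
// hypothetical recommendation service with resettable per-profile state.

interface RecommendationProfile {
  childId: string;
  dismissedTopics: string[];
  personalizationEnabled: boolean;
}

function dismissTopic(profile: RecommendationProfile, topic: string): RecommendationProfile {
  // Dismissals are stored explicitly, so parents can review and undo them later.
  return { ...profile, dismissedTopics: [...profile.dismissedTopics, topic] };
}

function resetRecommendations(profile: RecommendationProfile): RecommendationProfile {
  // One tap clears the learned preferences instead of burying the option in settings.
  return { ...profile, dismissedTopics: [] };
}

let profile: RecommendationProfile = { childId: "abc", dismissedTopics: [], personalizationEnabled: true };
profile = dismissTopic(profile, "unboxing videos");
profile = resetRecommendations(profile);
```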

Make reporting easy and follow up

What’s at stake
When something upsetting or inappropriate happens, kids need to know they can speak up and that someone will listen. They may stay silent if the reporting process is confusing or seems pointless. That can leave them feeling powerless or unsafe.

What transparency looks like
Use kid-friendly terms and simple steps to guide them through reporting. Whenever possible, close the loop. Let them know what action was taken and why so that they feel seen and supported.

Example
Thank users for reporting in a short reply. It’s an easy way to demonstrate that their voice matters. Roblox Help & Safety notifications do this well: they confirm that action has been taken following a user report and praise the user for speaking up.
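Closing the loop can be modelled as two separate messages: an instant acknowledgement and a follow-up once the review is done. The sketch below is illustrative; the Report shape and the wording are assumptions, not any platform’s actual API.

```typescript
// Minimal sketch: acknowledge a report immediately, then follow up once the
// review is complete. Types and wording are illustrative, not a real API.

interface Report {
  reporterId: string;
  contentId: string;
  reason: string;
}

function acknowledgeReport(report: Report): string {
  // Sent straight away, so the child knows their voice was heard.
  return "Thanks for telling us! A safety moderator will take a look.";
}

function closeTheLoop(report: Report, actionTaken: string): string {
  // Sent after review: confirms what happened and praises the reporter.
  return `Good call reporting that. We reviewed it and ${actionTaken}. Speaking up keeps everyone safer.`;
}

const report: Report = { reporterId: "kid-42", contentId: "post-99", reason: "bullying" };
console.log(acknowledgeReport(report));
console.log(closeTheLoop(report, "removed the message"));
```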

Explain how AI is trained

What’s at stake
AI can feel smart, confident, and all-knowing. But kids don’t always understand that it’s trained on data sets, which means it can get things wrong. When kids assume AI is always right, they may not learn to question what they’re told or seek answers for themselves.

What transparency looks like
Use age-appropriate language to explain that AI learns from patterns, and sometimes those patterns are flawed. Help kids understand that no system is perfect and that asking questions is part of using AI wisely.

Example
You might say, “This tool was trained on lots of writing from the internet, so sometimes it repeats mistakes or misses things. If something feels off, check with a grown-up and report anything that’s wrong.” Perplexity.ai displays sources alongside its results, which makes it easy for users of all ages to see where the AI-generated response comes from.

Show what’s human (and what’s not)

What’s at stake
Kids may assume that AI-generated media (e.g., images, voices, characters, videos) are created by people. When that line blurs, it can distort how they understand creativity, authorship, or even what counts as real.

What transparency looks like
Give kids visual or verbal cues that help them distinguish between human- and machine-created material. Clearly labeling AI-generated content, including characters and virtual content creators, won’t diminish the appeal, especially when it’s presented in an open and honest way rather than like a disclaimer you’re trying to hide. Simplicity is key, but you might try to make it fun and playful.

Example
Before playing an AI-generated story, include a quick voiceover or label that says, “This story was created with AI, not a person.” A simple line can flag this on the profile of virtual influencers. Meta’s AI labels couldn’t be simpler or more effective.

Invite feedback on AI tools

What’s at stake
If AI feels like a closed system, kids and families may assume it’s not changeable and their experiences and input don’t matter. When users don’t feel heard, frustration builds, and trust is diminished.

What transparency looks like
Give kids and adults a simple way to say what’s working, what’s confusing, or what feels off. Show that you’re listening by sharing updates or improvements based on their input.

Example
A “Help us improve this” button or a one-click thumbs up/down on AI outputs can make it easy for users to respond and start a feedback loop. Andraly Stories, an AI-powered storytelling platform, does this in a super-intuitive way. There’s also an email address for more detailed feedback.
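A feedback loop like this only works if the events are actually captured somewhere the team can review. Here’s a minimal sketch of what recording a thumbs up/down might look like; the FeedbackEvent shape and recordFeedback helper are hypothetical.

```typescript
// Minimal sketch of a one-click feedback loop on AI outputs. The event shape
// and in-memory storage are hypothetical stand-ins for a real backend.

type Rating = "up" | "down";

interface FeedbackEvent {
  outputId: string;   // which AI response is being rated
  rating: Rating;
  comment?: string;   // optional free text, e.g. "this felt confusing"
  timestamp: number;
}

const feedbackLog: FeedbackEvent[] = [];

function recordFeedback(outputId: string, rating: Rating, comment?: string): void {
  // Keeping the raw events lets the team report back on what changed as a result.
  feedbackLog.push({ outputId, rating, comment, timestamp: Date.now() });
}

recordFeedback("story-123", "down", "the ending was scary");
console.log(`Collected ${feedbackLog.length} piece(s) of feedback`);
```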

Reinforce the need for real-world support

What’s at stake
Some kids believe AI understands them or can offer real emotional support. That creates unrealistic expectations and potential harm when AI fails to respond with care or nuance.

What transparency looks like
Be clear about AI’s limitations. Explain that while it can mimic empathy or conversation, it doesn’t have feelings or real understanding.

Example
When a child types something emotional, a message like “I’m here to help, but I’m not a real person. If you’re upset, talking to someone you trust will help” sets healthy boundaries. ChatGPT has guardrails in place so that the AI suggests kids talk to trusted adults when certain keywords are detected. 
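A very simple version of such a guardrail can be sketched with keyword matching, as below. A production system would use a proper safety classifier rather than a word list; the keywords, function name, and message here are purely illustrative.

```typescript
// Minimal sketch of a keyword-based guardrail. The word list and wording are
// illustrative; real products would use a trained safety classifier.

const distressKeywords = ["sad", "scared", "lonely", "hate myself"];

function checkForDistress(message: string): string | null {
  const lower = message.toLowerCase();
  const flagged = distressKeywords.some((word) => lower.includes(word));
  // When something emotional is detected, remind the child that the AI isn't a
  // person and point them toward real-world support.
  return flagged
    ? "I'm here to help, but I'm not a real person. If you're upset, talking to someone you trust will help."
    : null;
}

console.log(checkForDistress("I feel really lonely today"));
```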

Disclose when AI is used behind the scenes

What’s at stake
If families don’t know when AI is influencing what kids see or do, it can feel manipulative, especially when decisions happen invisibly.

What transparency looks like
Let users know when AI is shaping content, behavior, or access. A brief notification or visual cue can go a long way toward rebuilding trust.

Example
When showing a content feed, a short note like “This order is based on your recent activity” helps clarify that an algorithm is at work. Netflix uses headings to explain to subscribers why recommendations appear. Their Help Center goes into more detail about their algorithms. 

Review and update AI disclosures

What’s at stake
AI is changing fast. Disclosures that felt clear a year ago might now be outdated, incomplete, or misleading. If your transparency doesn’t evolve, trust breaks down.

What transparency looks like
Commit to reviewing how you explain AI. Update language as the tools evolve, and let users know what’s changing and why.

Example
A message like “We’ve updated how our AI works. Here’s what’s new and how it affects your experience” helps keep users informed and involved. Google emailed parents to explain an update related to Gemini, their AI system. In it, they stated that kids' data wouldn’t be used to train the AI. They also flagged potential risks like errors or inappropriate content. Parents got a heads-up when their child first used Gemini and could turn it off anytime.

Directly empower kids

What’s at stake
Without active support, kids may use powerful AI tools without grasping how they work, what their limits are, or when to question their outputs.

What transparency looks like
Bake AI literacy into your experience. Use interactive prompts, learning moments, or embedded mini-lessons to help kids understand key ideas, like how recommendations work, what training data means, or why AI sometimes gets things wrong. Or make it a bigger deal with gamified training experiences that use fun to build AI literacy.

Example
Google Arts and Culture offers a gamified training experience to help users develop AI prompting skills to get more from creative content generation tools (and better understand how AI ‘thinks’).


Make transparency a shared experience

Transparency is more than a compliance task or a PR move. It builds AI literacy, which in turn helps kids develop the real-life skills they need: making friends, nurturing healthy relationships, building a positive self-image, and much more.

In a world where trust is fragile, that’s what earns lasting affinity.

Of course, designing for transparency goes beyond tooltips and disclosures. It’s about putting kids and families in dialogue with this technology. When AI becomes a shared experience, kids learn to ask better questions and stay connected to real-world support. And adults feel invited in, not left out.

Consider transparency a mindset that sets kids’ brands apart.

We hope you’ve enjoyed this article!

Any thoughts or questions on your mind?
Get in touch
