Smart Choices: A Parents’ Guide to Navigating the Risks of Generative AI
Artificial intelligence (AI) now shapes many of the digital experiences children have every day, from smart toys and voice assistants to image generators and learning apps. As these tools become more capable, they also introduce new questions for families about privacy, safety, advertising, and wellbeing.
To help, BBB National Programs’ Children’s Advertising Review Unit (CARU) created Smart Choices: A Parents’ Guide to Navigating the Risks of Generative AI, a concise tip sheet that pairs common generative AI risks with practical steps parents can take at home.
The guidance is informed by CARU’s longstanding advertising and privacy standards and reinforced by recent case work examining how AI intersects with children’s experiences online. This work was part of a larger effort by CARU’s AI Working Group, a group of responsible child-directed brands considering the potential issues raised by the use of generative AI in products, services, and advertising.
What’s Changing and Why it Matters
Generative AI can produce realistic text, images, and audio on demand. In the children’s space, that power shows up in chatbots, AI-enabled games, and personalized recommendations. These features can be engaging and educational, but they can also create misleading ad experiences, unnecessary data collection, biased outputs, and exposure to inappropriate content if not designed and used responsibly.
The known risks, made simple:
- Misleading or overly persuasive content. AI systems can generate advertising-like messages or “nudges” that blur the line between content and marketing, sometimes encouraging unwanted purchases.
- Data collection by default. Many AI tools begin collecting data at log-in; children may share sensitive information without realizing it.
- Bias in AI outputs. Because AI learns from human data, it can reflect cultural or demographic bias.
- Addictiveness and mental health concerns. Instant feedback, infinite feeds, and social companions can drive overuse and affect attention and mood.
- Imperfect filters and moderation. Even with controls, AI can surface unsafe or inaccurate results.
See these principles in practice in CARU’s recent AI-related cases, including KidGeni (August 2024) and Buddy AI (February 2025).
What Families Can Do Right Now
CARU’s Smart Choices guide offers clear, parent-friendly steps you can implement today:
- Tighten settings and payments. Use ad blockers where available, consider ad-free or paid versions of services, and disable in-app purchases or remove saved payment methods.
- Raise the privacy bar. Turn off mics and cameras when not needed; use child accounts with high-privacy defaults; read notices to understand what data is collected, why, and with whom it is shared.
- Build healthy skepticism. Talk with children about how AI works, why it can be wrong, and how bias can appear in outputs; practice simple fact-checking together.
- Balance time and talk. Set limits for AI tools, co-view when possible, and make online experiences a regular family conversation—at home and at school.
- Prepare for mistakes. Learn platform reporting tools and show children how to use them if they encounter unsafe or inappropriate content.
AI can be a powerful tool for learning and creativity when paired with informed parenting and responsible product design. With straightforward steps and ongoing dialogue, families can enjoy the benefits of innovation while minimizing the risks.
CARU’s Role
CARU helps companies comply with laws and guidelines that protect children from deceptive or inappropriate advertising and ensures children’s data is collected and handled responsibly. CARU also operates the nation’s first FTC-approved COPPA Safe Harbor Program and advances practical best practices for brands building in the children’s space.
Download and share Smart Choices: A Parents’ Guide to Navigating the Risks of Generative AI for quick, household-ready tips.