Warning: Use Caution with AI in the Children’s Space

Jun 5, 2024 by Debra Policarpo, Senior Attorney, Children’s Advertising Review Unit, BBB National Programs

Whether online or through an object in their hands, and whether they realize it or not, children are engaging with artificial intelligence (AI) in many forms. From toys that respond personally, to video games in the metaverse, to content in their classrooms, AI-driven products can give children personalized recommendations for movies, music, games, and toys based on their interests and preferences. These benefits can be accompanied by a range of risks, depending on how a company chooses to use AI.

In April, BBB National Programs' Children’s Advertising Review Unit (CARU), now in its 50th year of monitoring the child-directed marketplace and establishing the self-regulatory guidelines necessary to protect children, issued a compliance warning regarding the use of AI in online advertising and data collection practices directed to children.

In our warning, CARU reminds advertisers, brands, endorsers, developers, toy manufacturers, and others that CARU’s Advertising and Privacy Guidelines apply to the use of AI in online advertising to children and the online collection of personal information from children. 

When dealing with child-directed advertising, CARU reminds industry of its special responsibilities to children: children are more vulnerable to advertising messages due to their limited knowledge, experience, sophistication, and maturity, and thus require extra protections.

Similarly, with respect to privacy and data collection, companies that integrate AI technology in their products should remember that the existing rules of the road for the child-directed marketplace, such as compliance with the Children’s Online Privacy Protection Act (COPPA), still apply. That means that companies implementing AI in their products and services must clearly disclose data collection practices and obtain verifiable parental consent (VPC) before they collect personal information from children. 
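As an illustrative sketch only (neither COPPA nor CARU's warning prescribes a particular implementation), a child-directed service might gate data collection behind a verifiable-parental-consent check along these lines. Every name here (`ConsentStore`, `collect_pi`, the PI type labels) is hypothetical, not a real API:

```python
# Hypothetical sketch: collect a child's personal information (PI) only after
# verifiable parental consent (VPC) is on file, and only for PI types that
# were disclosed in the parental notice. All names are illustrative.

# PI types disclosed in the (hypothetical) parental notice.
DISCLOSED_PI_TYPES = {"voice_recording", "chat_transcript", "device_id"}

class ConsentStore:
    """Tracks which PI types a parent has verifiably consented to, per child."""
    def __init__(self):
        self._consents = {}  # child_id -> set of consented PI types

    def record_vpc(self, child_id, pi_types):
        # Consent can only cover PI types already disclosed to the parent.
        undisclosed = set(pi_types) - DISCLOSED_PI_TYPES
        if undisclosed:
            raise ValueError(f"PI types not in parental notice: {undisclosed}")
        self._consents.setdefault(child_id, set()).update(pi_types)

    def has_consent(self, child_id, pi_type):
        return pi_type in self._consents.get(child_id, set())

def collect_pi(store, child_id, pi_type, payload):
    """Return a PI record only if VPC exists; otherwise refuse and drop the data."""
    if not store.has_consent(child_id, pi_type):
        return None  # no consent: nothing is collected or retained
    return {"child_id": child_id, "type": pi_type, "data": payload}
```

The design point is simply that the consent check happens before any data is stored, and that consent cannot be recorded for a PI type the parental notice never mentioned.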

Transparency regarding data collection and parental consent remain the guiding standards to uphold privacy and safety for children in an online environment.

CARU believes the Federal Trade Commission (FTC) will focus its enforcement efforts on any uses of AI that may mislead or deceive consumers. In addition, the Biden administration’s Executive Order on Safe, Secure, and Trustworthy Artificial Intelligence, issued last fall, calls for more robust standards and practices for the development and use of AI, with an eye toward responsible innovation.

With CARU’s AI compliance warning and Advertising and Privacy Guidelines in mind, as well as the increased focus on AI by the FTC and the White House, CARU recommends that brands follow these best practices when developing or using AI in child-directed products, services, or advertising.

 

Avoid being misleading.

Ensuring your ads are not misleading sounds simple enough, but misleading ad claims are one of the most common mistakes CARU sees in its monitoring efforts. When you use AI technology in your advertising, be especially careful not to mislead children:

  • About product characteristics or performance.
  • About the distinction between real and imaginary or fantasy experiences.
  • To believe that they are engaging with a real person or have a personal relationship with a brand or brand character, celebrity, or influencer.
  • To believe that a celebrity or other person has endorsed a product when they have not. 

 

Be transparent.

The guiding principle that advertisements directed to children must be easily identifiable as advertising remains true, no matter the platform or technology involved. Of special concern for CARU is that AI technology allows for deep fakes and simulated realistic content, including simulated figures, to be developed so that they are virtually undetectable. 

In the children’s space, AI should not be used to generate images of fictitious people who appear to be endorsing a product. Such images would be misleading because they would appear to be third-party endorsements when they are actually messages from the advertiser itself.

 

Don’t use dark patterns, manipulation, or deception. 

Be aware that AI-generated deep fakes, simulated elements (including simulations of realistic people, places, or things), and AI-powered voice-cloning techniques within an ad could deceive an ordinary child. Advertising to children should not be deceptive about the inclusion, benefits, or features of AI technology in the products themselves.

Additionally, claims should not: 

  • Unduly exploit a child’s imagination.
  • Create unattainable performance expectations.
  • Exploit a child’s difficulty distinguishing between the real and the fanciful. 

 

Be safe and responsible.

In ads featuring AI-generated children and in AI-generated environments, CARU urges brands to ensure proper safety measures are depicted, including safety equipment, adult supervision, and age-appropriate play. 

In particular, advertisers should ensure that the use of AI in advertising, including AI-generated images, does not portray or promote harmful social stereotypes, prejudice, or discrimination. To the extent that generative AI is used to depict people, it is imperative that advertisers filter images and take measures to ensure the people depicted reflect the diversity of humanity.

 

Ensure a child’s data is protected. 

CARU wants advertisers to be aware that online data collection from children poses special concerns. AI offers unique opportunities to interact with children who may not understand the nature of the information being sought or its intended use. For instance, many products, including AI-powered toys, rely upon third-party generative AI technology to operate and process data. COPPA requires operators to provide a parental notice that outlines each type of personal information (PI) collected from children.

Companies that integrate AI technology in their products must clearly disclose their privacy and data collection practices. They must also appropriately obtain verifiable parental consent prior to any collection of personal information from children, including if that data is being used for machine learning processing and analysis. 

Transparency regarding privacy and data collection practices and parental consent remain the guiding standards for children’s data privacy, especially when personal information is collected, used, and/or disclosed for machine learning models via third-party processes integral to product functioning.

 

CARU is here to help. 

The team at CARU is staying abreast of the fast-developing use of AI in advertising to children and in the online collection of children’s personal information. CARU is here to help guide you to use AI tools and technology in a safe, responsible, and compliant manner.

Interested in becoming involved in this work? Set up a meeting to learn more about becoming a CARU Supporter and joining our AI Working Group. 

Need help with an upcoming campaign or a current product or service? Sign up for a pre-screen for one-on-one support. Need hands-on privacy support? Sign up for our COPPA Safe Harbor program. 
