Rewriting the Digital Childhood: How 2026 Is Driving a Global Shift in Tech and Privacy
For years, children’s privacy sat at the margins of technology policy, acknowledged as important but rarely the catalyst for sweeping regulatory change. That era is ending.
In 2026, children’s privacy is no longer a niche concern. It has become the organizing principle behind some of the most consequential tech policy reforms in the United States and around the world. From the Federal Trade Commission’s (FTC) revision of the Children’s Online Privacy Protection Rule (COPPA), to Australia’s modernization of its Privacy Act, to New York’s SAFE Kids Act and California’s regulation of AI companion chatbots, lawmakers are concluding that the digital environment is evolving faster than the rules governing it, and the youngest users have paid the price.
This year marks a global recalibration of what responsible data practices look like, one that will shape platform design, data governance, and cross-border data flows for years to come.
The FTC’s COPPA Overhaul: A New U.S. Baseline
The most significant U.S. development (as of now) is the FTC’s update to COPPA. Published in April 2025 and enforceable beginning April 2026, the new rule represents the most substantial revision to children’s privacy law in over a decade, showing that the definition of “personal information” must continue to evolve alongside the technologies that collect it.
The updated rule expands COPPA’s scope to include biometrics, government-issued identifiers, and other high-risk data categories that did not exist, or were not widely used, when the law was first written.
It requires explicit, separate parental consent before platforms can share children’s data with third parties or use it for targeted advertising. It defines a long-contested “mixed audience” category, closing loopholes that allowed platforms to claim they were not “directed to children” even when large numbers of minors used their services. And it authorizes new parental consent mechanisms, including text-based and facial verification methods, designed to meet families where they are.
These changes do more than modernize COPPA. They set a new baseline for what regulators expect from companies that build products used by children – even if those companies do not explicitly market to them. In practice, COPPA’s updated requirements will influence product design, data minimization strategies, and advertising models across the tech sector.
Global CBPR 2.0: Toward a Universal Standard
While the United States updates COPPA, the Global Cross-Border Privacy Rules (CBPR) Forum is preparing its own major revision. Expected in 2026, CBPR 2.0 aims to harmonize privacy expectations across more than ten jurisdictions spanning the Asia-Pacific and Western hemispheres. The forthcoming framework is poised to elevate global standards for handling sensitive data, especially children’s data.
Early indications suggest that CBPR 2.0 will introduce stronger requirements for risk assessments, marketing opt-outs, and breach notifications, heightened safeguards for cross-border transfers, and clearer obligations for companies that process minors’ information. It may also include new expectations for how organizations evaluate the impact of their data practices on vulnerable populations.
If implemented as anticipated, CBPR 2.0 could become the closest thing the world has to a universal baseline for children’s data protection. For multinational companies, this would be a significant development: a single, interoperable framework that reduces fragmentation while raising the bar for responsible data governance.
China Weighs in on Minors’ Personal Information Compliance Reporting
In December 2025, the Cyberspace Administration of China (CAC) issued the Announcement on the Reporting of Minors' Personal Information Protection Compliance Audit Status, which “requires personal information handlers to conduct annual compliance audits and submit audit status to local prefecture-level CACs.”
The announcement creates new obligations to report separately on the scale of data processing for minors under 18 and under 14. The first round of reporting is due on January 31, 2026, via the online Personal Information Protection Business System.
Australia’s Privacy Act Reforms: A Comprehensive Reset
Australia is undergoing one of the most sweeping privacy reforms in its history. Updates to the Privacy Act 1988 strengthen the Australian Privacy Principles, expand breach notification duties, and introduce new expectations for platforms serving minors. Among the most consequential changes are restrictions on social media access for users under 16, new age verification obligations, and significantly higher penalties for violations.
These reforms reflect a growing international consensus that protecting children online requires both structural safeguards and meaningful enforcement. Australia’s approach is notable not only for its breadth but also for its emphasis on accountability. Regulators have signaled that compliance will not be optional, and penalties for non-compliance will be substantial.
New York’s SAFE Kids Act and AI Chatbot Companion Laws: Regulating the Attention Economy
States are increasingly stepping into the regulatory vacuum. New York’s SAFE Kids Act and AI Companion Models Law (General Business Law Article 47) were passed at the end of 2025. The SAFE Kids Act is one of the most ambitious attempts yet to curb the addictive design features that dominate social media platforms. The law restricts algorithmic feeds for minors, limits nighttime notifications, and requires parental consent for features deemed “addictive.” The AI Companion Models Law, though not specific to minor users, establishes mandatory safety protocols, notification and disclosure requirements about the use of the AI program, and heightened obligations for companies operating AI companion models in the state. This could have direct impacts on mixed audiences, which often include younger users of chatbots.
With one of the largest child populations in the country, New York’s actions will reverberate nationally. Governor Hochul has already signaled that children’s digital well-being will remain a central policy priority in 2026. The SAFE Kids Act is not just a state-level experiment; it is a bellwether for how policymakers are beginning to regulate the attention economy itself.
California’s AI Companion Law: A New Frontier in Youth Protection
California’s SB 243, taking effect in 2026, mirrors the intent of New York’s AI Companion Models Law but focuses specifically on the risks AI companion chatbots pose to young users. These systems, designed for human-like interaction, raise novel risks for minors, from emotional dependency to exposure to inappropriate content. The law requires clear disclosure that users are interacting with AI, mandates crisis response protocols for self-harm scenarios, restricts sexually explicit content for minors, and introduces annual reporting requirements beginning in 2027.
California’s approach is notable for its forward-looking scope. Rather than waiting years after harms have materialized, lawmakers are attempting to mitigate the risks of AI-mediated relationships. As with New York, California’s large youth population means the law’s impact will extend far beyond state borders.
Utah, Arkansas, Virginia, and the Rise of State-Level Digital Safety Laws
Other states – including Utah, Arkansas, and Virginia – are rolling out their own children’s online safety and data portability laws throughout 2026. App Store Accountability Acts passed in Utah, Texas, Louisiana, and California put the onus on platforms to verify a user’s age. Utah, which has one of the highest proportions of children in the country, has positioned itself as a leader in age verification and social media access restrictions. These state-level efforts require platforms to take greater responsibility for the environments they create.
A Global Convergence
Across continents and political systems, policymakers are coalescing around a shared set of principles:
- Children’s data is inherently sensitive.
- Platforms must design with minors in mind.
- Age verification is becoming a global expectation.
- Enforcement mechanisms must have real consequences.
But there is also a warning embedded in this moment.
As jurisdictions race to protect children, the risk of fragmentation grows. Companies operating across borders face a patchwork of requirements that may be aligned in spirit but divergent in practice. Without coordination, the result could be confusion for families and compliance burdens that ultimately disadvantage smaller innovators.
The Opportunity Ahead
2026 offers a rare opportunity: the chance to build a coherent, global approach to children’s privacy before the next generation of technologies, such as AI companions, immersive environments, and biometric systems, becomes ubiquitous. If regulators, companies, and civil society groups can align on shared principles now, they can set a durable foundation for the decade ahead.
Children’s privacy is no longer a one-off issue. It is at the front line of tech policy, and the decisions made this year will shape the digital lives of millions of young people for years to come. Learn more about how we can help through our COPPA Safe Harbor.