Understanding the Global AI Risk-Reward Ratio
Remember Captain Planet? Different heroes, unique powers, one mission – saving the planet. Today’s global AI conversation feels similar: major powers, each with their own approach, big egos, but a shared goal of shaping AI’s future.
This year alone we have witnessed three unique positions on AI from around the globe:
- EU AI Act – years in the making, comprehensive regulation, took effect earlier this month.
- Japan AI Act – passed quickly this spring; a principles-based approach that leaves industry wide open to innovate without guardrails and includes no enforcement mechanisms.
- U.S. AI Action Plan – developed in weeks, focused on taking leadership, private-sector collaboration, funding for research and development, and scaling back stringent regulation that can impede AI progress.
Comparing the EU and Japan's AI regulation offers lessons for the U.S. in balancing legislative, regulatory, and self-regulatory approaches. The EU, an early mover in AI regulation, is now revising aspects of the GDPR and AI Act due to their restrictive effects on innovation. Its highly regulated model, once lauded for leadership in privacy and AI oversight, is being relaxed to avoid stifling economic opportunities in the digital marketplace.
Japan, by contrast, takes a principles-based approach emphasizing free trade, trusted data flows, and international cooperation. Its new AI legislation promotes innovation through regulatory sandboxes. Notably, the law includes no enforcement mechanisms, signaling a preference for flexibility over strict compliance.
So, should the U.S. be following in the EU’s or Japan’s footsteps?
One way to answer this question is by tracking AI model development, including the costs to build models and the performance they deliver. Global leaders must also weigh training costs against potential rewards.
Go, Planet! Comparing AI Model Growth
- Market Leadership: As of last year, the U.S. led with 40 innovative AI models, compared to 15 in China and 3 in Europe. Heavier EU regulation is likely to increase development costs. Japan, with $65 billion in government funding and companies like Rakuten driving innovation, has not disclosed model counts. The U.S. AI Action Plan prioritizes federal R&D and may include a dedicated strategic plan.
- Performance Gap: In January 2025, the top U.S. model outperformed the best Chinese model by 9.26%; by February, the gap had narrowed to 1.70%.
- Training Costs: Google's Gemini Ultra cost nearly $192 million to train, among the most expensive U.S. models, while China's DeepSeek claims a disputed figure of $6 million.
Another critical factor is the role of regulation. Is AI regulation necessary, and if so, what kind?
The Power Is Yours! Comparing the Role of Regulation
The EU AI Act entered into force in August 2024, began taking effect this August, and should reach full applicability by August 2026. It is a risk-based legal framework prioritizing the protection of fundamental human rights and safety, imposing obligations proportional to the level of risk an AI system poses: it prohibits "unacceptable risk" practices (e.g., social scoring, manipulative techniques) and imposes strict requirements on high-risk systems (pre-market conformity assessments, robust risk management systems, human oversight, transparency, and post-market monitoring). General-purpose AI models are categorized by risk (normal or systemic). The Act is being enforced by the newly created EU AI Office, with significant penalties for non-compliance, ranging from €7.5 million or 1.5% of worldwide annual turnover to €35 million or 7% of worldwide annual turnover.

The Japan AI Act approach, in contrast, is innovation-focused, promoting R&D and adoption through regulatory sandboxes. Japan's Act is narrower than other AI laws because it homes in on generative AI and "agile governance," making it a living, breathing document that will continue to evolve with rapid advancements in the technology. It emphasizes alignment with local and international norms and references global initiatives like the Hiroshima AI Process, encouraging responsible governance through multi-stakeholder collaboration and adherence to international standards. The Act, enacted in May 2025, has no risk categories or outright bans and relies on voluntary compliance, soft law, and industry-specific regulation.
The U.S. AI Action Plan prioritizes accelerating innovation, strengthening AI infrastructure, and leading globally on AI diplomacy and security. At its core, the Action Plan emphasizes deregulation, private-sector leadership, and expedited environmental regulatory processes and procurement standards. The priorities are to promote U.S. AI standards globally, engage in international AI diplomacy, and strengthen export control strategies for transformative AI technologies, while keeping a careful eye on actions by China, a key adversary, that could hinder U.S. growth.
The Path Forward – Some Recommendations
- Foster public trust through AI leadership. According to a KPMG study of more than 48,000 people in 47 countries, about 66% of respondents use AI regularly, and 80% believe the use of AI will lead to benefits. Yet only 46% of people trust AI systems, and fully 70% believe AI regulation is needed. A complete lack of regulation would create a wild-west problem, but heavy regulation could chill innovation. The U.S. could adopt an approach that incentivizes companies to build in necessary safety guardrails, address complaints quickly, keep error rates low, and prioritize accuracy.
- Innovation-first with third-party accountability. The U.S. should ensure compliance programs equip industry with baseline requirements and guardrails that keep companies accountable while preserving high-reward opportunities to innovate, focusing on accountability for high-risk AI systems and continuing to enforce against misuse of AI in high-risk contexts.
- Invest in privacy-protective approaches. AI can significantly improve data management: making consent preferences more efficient and streamlined, enhancing cyber threat detection, and improving identity management and deepfake detection. Techniques like differential privacy, distributed and federated analysis and learning, encrypted computation, synthetic data, and fully homomorphic encryption can transform data sets to make them both more secure and more useful. Focus should be placed on demonstrating privacy-protective practices in the underlying data and data governance prior to its use in AI models.
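To make one of these techniques concrete, here is a minimal sketch of differential privacy using the classic Laplace mechanism, written with only the Python standard library. The dataset, the age-threshold query, and the epsilon values are illustrative assumptions, not part of any regulation or product discussed above.

```python
import math
import random

def dp_count(values, predicate, epsilon=1.0):
    """Differentially private count via the Laplace mechanism.

    A count query has sensitivity 1 (adding or removing one record
    changes the result by at most 1), so adding Laplace noise with
    scale 1/epsilon yields epsilon-differential privacy.
    """
    true_count = sum(1 for v in values if predicate(v))
    # Inverse-CDF sample from Laplace(0, 1/epsilon)
    u = random.random() - 0.5
    noise = -(1.0 / epsilon) * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))
    return true_count + noise

# Hypothetical dataset: report how many people are 40 or older
# without revealing the exact count for any individual's inclusion.
ages = [23, 35, 41, 29, 52, 61, 38, 45]
noisy_over_40 = dp_count(ages, lambda a: a >= 40, epsilon=0.5)
```

Smaller epsilon values add more noise (stronger privacy, less accuracy), which is exactly the innovation-versus-protection trade-off the regulatory debate keeps circling.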
AI is not a monolith. It cannot replace humans, but it can complement us in pursuing our goals as effectively as possible. As we continue to navigate the complexities of AI, it is crucial for our society to consider the different international models shaping how AI balances innovation with regulation, while being clear-eyed that proper incentives must be in place to ensure that AI serves the best interests of society.
Originally published in Swiss Cognitive