Industry Self-Regulation: A Path Forward for Governing Artificial Intelligence?

Dec 21, 2023 by Eric D. Reicin, President & CEO, BBB National Programs

I have written previously about the many ways that independent industry self-regulation can address some of our most vexing societal problems by encouraging businesses to act responsibly, always with an eye on enhancing consumer trust. Today, I want to discuss how industry self-regulation can be a key differentiator that helps businesses and nonprofits perform at their best, especially when engaging with emerging technologies such as generative artificial intelligence (AI).

Indeed, industry self-regulation, with its speed and agility when adapting to market changes and consumer needs, was designed for these major transformational moments, such as the current rise of AI.

With the recent release of the Biden Administration’s Executive Order (EO) on Safe, Secure, and Trustworthy Artificial Intelligence, it is clear that the management of risks associated with AI is fast becoming a priority, with new guidance and rules likely over the coming years. In addition, the Office of Management and Budget (OMB) released its direction to federal agencies on how to implement the EO. This directive could provide clues as to how private industry will be regulated down the road.

But the question is, as a private sector leader, what is your road map today?


Attributes of Industry Self-Regulation

Through industry self-regulation, businesses can come together to develop an appropriate road map for their industry, one that is attuned to market realities and reflects the accumulated judgment and experience of players representing a broad cross-section of views.

How do you ensure that such an approach will work? Well-constructed industry self-regulation consists of key attributes: 

  • The ability to tailor work done in other industries to a new business category. Government regulation tends to paint with a broader brush.
  • The opportunity to achieve compliance at levels equal to or greater than government regulation. The threat of referral to the government for noncompliance is also an effective mechanism.
  • A transparent and objective process for rule-setting and accountability mechanisms. This allows the consumer to judge the system’s integrity, increasing public confidence in the process.


For example, in her speech at the 2014 Self-Regulation Conference, Commissioner Maureen K. Ohlhausen said: “Successful self-regulatory initiatives share several features: clear requirements; widespread industry participation; active monitoring programs; effective enforcement mechanisms; procedures to resolve conflicts; a transparent process; responsiveness to a changing market and to consumers; and sufficient independence from direct control by industry.” 


Considerations for Organizations Today

Despite these strengths, the promise of industry self-regulation has yet to be realized across many industries, including generative AI. That can change, and here is how organizations can begin thinking about industry self-regulation:

  • Do Not Wait: The time is now to work within your industry or as part of a cross-industry incubator effort and take steps towards developing realistic guidelines for AI.
  • Consider Existing Rules: There is no need to start from scratch. Consider what rules from around the world make sense for your industry and use those as models to lead in AI governance. 
  • Be Truthful to Claims of AI Governance: Existing rules on truth-in-advertising apply.


A variety of global efforts focused on AI risks and opportunities have coalesced since the dawn of the ChatGPT era. In other parts of the world, such as the European Union and China, rules and regulations have been put in place to address the perceived risks of rapid AI implementation.


Looking Ahead 

Early AI legislative and regulatory efforts in the U.S. are found mostly in the states. According to a report from the Software Alliance, “State legislators introduced more AI-related bills–191–this year than in the previous two years combined, a 440% increase in the number of AI-related bills introduced in 2022.”

A 2021 paper by Michael Cusumano of MIT, Annabelle Gawer of the University of Surrey, and David Yoffie of Harvard Business School entitled “Can Self-Regulation Save Digital Platforms?” delves into the history of industry self-regulation with the goal of sharing lessons on whether and how self-regulation can thrive in the digital age. While the piece was authored before ChatGPT arrived, it is prescient in its description of the challenges and opportunities that arise when trying to regulate – or self-regulate – emerging technologies.

The authors underscore that self-regulation by an industry group works best when the sector consists of a small, relatively homogeneous number of interconnected or interdependent actors who face shared challenges. They conclude, “. . .the research suggests that a combination of self-regulation and credible threats of government regulation may yield the best results.”

As one LegalTech article (registration required) notes, “. . .some industry experts are pushing back on imposing legally binding obligations on AI. Instead, they see an ‘all-hands-on-deck’ method that also relies on ‘soft law’ principles—think guidelines, self-certification, and industry standards—as the way to holistically regulate the fast-evolving AI realm.”

Industry self-regulation through agreed-upon voluntary standards and guidelines should be part of the mix from the very start, especially when it comes to AI.

Originally published in Forbes.