Industry Self-Regulation: A Path Forward for Governing Artificial Intelligence?

Dec 21, 2023 by Eric D. Reicin, President & CEO, BBB National Programs

I have written previously about the multiplicity of ways that independent industry self-regulation can address some of our most vexing societal problems by encouraging businesses to act responsibly, always with an eye on enhancing consumer trust. Today, I want to discuss how industry self-regulation can be a key differentiator for businesses and nonprofits to perform at their most effective level, especially when it comes to engaging with emerging technologies such as generative artificial intelligence (AI).

Indeed, industry self-regulation, with its speed and agility when adapting to market changes and consumer needs, was designed for these major transformational moments, such as the current rise of AI.

With the recent release of the Biden Administration’s Executive Order (EO) on Safe, Secure, and Trustworthy Artificial Intelligence, it is clear that managing the risks associated with AI is fast becoming a priority, with new guidance and rules likely over the coming years. In addition, the Office of Management and Budget (OMB) released its direction to federal agencies on how to implement the EO. I predict that this directive will provide clues as to how private industry is regulated down the road.

But the question is, as a private sector leader, what is your road map today?


Attributes of Industry Self-Regulation

Through industry self-regulation, businesses can come together to develop an appropriate road map for their industry, one that is attuned to market realities and reflects the accumulated judgment and experience of players representing a broad cross-section of views.

How do you ensure that such an approach will work? Well-constructed industry self-regulation consists of key attributes: 

  • The ability to tailor work done in other industries to a new business category. Government regulation tends to paint with a broader brush.
  • The opportunity to achieve compliance at levels equal to or greater than government regulation. The threat of referral to the government for noncompliance also is an effective mechanism.
  • A transparent and objective process for rule-setting and accountability mechanisms. This allows the consumer to judge the system’s integrity, increasing public confidence in the process.


For example, in her speech at the 2014 Self-Regulation Conference, Commissioner Maureen K. Ohlhausen said: “Successful self-regulatory initiatives share several features: clear requirements; widespread industry participation; active monitoring programs; effective enforcement mechanisms; procedures to resolve conflicts; a transparent process; responsiveness to a changing market and to consumers; and sufficient independence from direct control by industry.” 


Considerations for Organizations Today

Despite such successful examples of industry self-regulation, its promise has yet to be realized across many industries, including generative AI. That can change, and here is how organizations can begin thinking about industry self-regulation:

  • Do Not Wait: The time is now to work within your industry or as part of a cross-industry incubator effort and take steps towards developing realistic guidelines for AI.
  • Consider Existing Rules: There is no need to start from scratch. Consider what rules from around the world make sense for your industry and use those as models to lead in AI governance. 
  • Be Truthful to Claims of AI Governance: Existing rules on truth-in-advertising apply.


A variety of global efforts focused on AI risks and opportunities have coalesced since the dawn of the ChatGPT era. In other parts of the world – such as the European Union and China – rules and regulations have been put in place to address the perceived risks of rapid AI implementation.


Looking Ahead 

Early AI legislative and regulatory efforts in the U.S. are found mostly in the states. According to a report from BSA | The Software Alliance, “State legislators introduced more AI-related bills – 191 – this year than in the previous two years combined, a 440% increase in the number of AI-related bills introduced in 2022.”

A 2021 paper by Michael Cusumano of MIT, Annabelle Gawer of the University of Surrey, and David Yoffie of Harvard Business School entitled “Can Self-Regulation Save Digital Platforms?” delves into the history of industry self-regulation with the goal of sharing lessons on whether and how self-regulation can thrive in the digital age. While the piece was authored before ChatGPT arrived, it is prescient in its description of the challenges and opportunities that arise when trying to regulate – or self-regulate – emerging technologies.

The authors underscore that self-regulation by an industry group works best when the sector consists of a small, relatively homogeneous number of interconnected or interdependent actors who face shared challenges. They conclude, “. . .the research suggests that a combination of self-regulation and credible threats of government regulation may yield the best results.”

As a LegalTech article (registration required) notes, “. . . some industry experts are pushing back on imposing legally binding obligations on AI. Instead, they see an ‘all-hands-on-deck’ method that also relies on ‘soft law’ principles – think guidelines, self-certification, and industry standards – as the way to holistically regulate the fast-evolving AI realm.”

Industry self-regulation through agreed-upon voluntary standards and guidelines should be part of the mix from the very start, especially when it comes to AI.

Originally published in Forbes.
