Spilling the Tea on AI Accountability: An Analysis of NTIA Stakeholder Comments

Aug 4, 2023 by Divya Sridhar, Director, Privacy Initiatives, BBB National Programs, and Sander McComiskey, Intern, Privacy Initiatives, BBB National Programs

Friday afternoons are generally not a time for significant news to break. But when leading executives at seven major U.S. artificial intelligence companies recently met with President Joe Biden at the White House to commit to a shared voluntary framework setting new standards for privacy, security, and accountability in the development and deployment of powerful AI, it was a big deal.

Gathering for more than just a photo op, the companies agreed to meaningful, proactive protective measures and stronger practices intended to manage the immediate risks of their products, including independent testing of their systems, sharing information with government officials and civil society, and watermarking AI-generated content to combat disinformation.

This voluntary commitment represents an important step in the journey to mitigating AI-associated risk and reaffirms that industry self-regulation is a viable solution for such a dynamic and nascent space.

While it may appear nonbinding at face value, the commitment, as a representation in commerce, is also enforceable by the Federal Trade Commission and by similar state authorities that enforce laws against unfair and deceptive acts.

This newsworthy commitment from Amazon, Anthropic, Google, Inflection, Meta, Microsoft, and OpenAI was the most recent addition to an AI regulatory ecosystem shaped by the existing principles of stakeholder involvement, voluntary governance, and independent accountability. In April, the National Telecommunications and Information Administration (NTIA) issued a request for comment to gather stakeholder feedback on AI accountability measures and policies, input that will inform a forthcoming report on AI accountability policy and the AI assurance regime.

Nearly 200 organizations, including BBB National Programs, and thousands of individuals responded to the NTIA solicitation. Our response focused on two key aspects: first, an "ideal checklist" of characteristics companies should incorporate into a certification or accountability mechanism; second, best practices gleaned from third-party privacy accountability programs with a longstanding history of trust and commitment to the marketplace.

To understand areas of consensus in the public discourse, including perspectives similar to and distinct from our own, we recently pulled a sample of responses representing industry, consumer, and civil society perspectives on AI accountability, then analyzed and distilled that information to shed light on what stakeholders value for the future of AI regulation. Here is what we found:


Overwhelming Consensus...

  • Give us regulatory consistency! Rather than an international regulatory patchwork, participants strongly favored cooperation with U.S. allies and trade partners to achieve some degree of consistency in the legal treatment of AI products, though disagreement remains over the proper division and devolution of regulatory power.
  • Self-regulation is preferable to government standard-setting. Participants emphasized that any effort by public regulatory bodies to dictate best practices and technical standards would quickly become outdated, limiting growth and stalling the development and implementation of more effective safety and privacy measures. Self-regulatory bodies have a greater capacity for dynamic, flexible regulation than their government counterparts.
  • We <3 the NIST RMF. Industry groups extolled the cooperative, stakeholder-led process the National Institute of Standards and Technology used to draft its AI Risk Management Framework (the NIST RMF). Civil society organizations hailed the guidelines as a promising first step and recommended that further efforts build upon this framework. Additionally, nearly all groups expressed general support for the protections against discrimination included in the Blueprint for an AI Bill of Rights, while differing on various specifics.
  • Narrow overly broad definitions of AI that could chill innovation. All groups agreed that forthcoming AI regulation should define AI narrowly enough to exclude low-risk uses that have long been understood not to require regulatory scrutiny.
  • Build regulatory expertise and upskill on AI. All commenters agreed that the federal agencies responsible for AI regulation should prioritize the acquisition of talent and the development of expertise within their ranks.
  • Create the National Artificial Intelligence Research Resource (NAIRR). The NAIRR would leverage the research prowess of the federal government to support the ethical development of AI. This initiative predictably received overwhelming support.
  • Provide clarity on the distribution of accountability along the AI value chain. Though commenters disagreed on the specifics, almost all agreed that regulators must provide clear guidance detailing which responsibilities fall to each of the many participants in the chain that develops, markets, deploys, and uses AI.
  • Enact a comprehensive federal privacy law. The consensus holds that while such a law is not essential to the creation of an AI regulatory regime, it would bring clarity to the many interactions with personal data at every step in the AI value chain.
  • NTIA should spearhead taxonomy-based research to shape standards. Commenters propose that NTIA create a taxonomy of AI systems for future use by regulators and courts, hold workshops for the development of legal and technical standards governing AI, and prepare to leverage technical expertise to advise AI regulators from agencies across the government.
  • Create a national registry of high-risk AI systems. Such a database would allow developers of high-risk systems to disclose internal audit results and security processes to regulators, and it would help ensure that audit findings are acted upon when appropriate. The registry would also help third-party auditors identify audit targets.


But Some Disagreement...

  • Introduce and streamline new industry standards, assessments, audits, and related practices. Commenters agreed that impact assessments are a useful tool for ascertaining relevant risks through a process industry already knows from the requirements of state consumer privacy laws. Industry groups are generally resistant to “mandates” even for tools they deem helpful, usually preferring voluntary approaches, but regulatory efforts to require impact assessments would likely meet little to no pushback.
  • Utilize internal and external audits as a means to demonstrate AI accountability. These tools can verify technical functionality, ease legal compliance, and guard against civil rights violations and associated harms. Disagreement arose over whether such measures should be mandatory or voluntary and whether required audits should be internal or external.
  • Strengthen the NIST RMF. All groups concurred that future regulation should be built on the foundation of the NIST framework but disagreed over whether the government should use soft law to encourage adoption (e.g., leverage government procurement through a self-attestation system) or give the framework teeth through binding law or regulation.
  • License high-risk AI systems. Some civil society groups supported broad licensing for high-risk models, but others warned that such a requirement could create huge barriers to entry, diminishing competition and handing existing players enormous market power. These groups recommended that licensing be pursued only in specific cases, such as the procurement and use of military weapons. Notably, many leaders in generative AI development also supported a federal licensing regime, consistent with the predictions of the groups worried about its competitive effects.
  • Create an AI regulatory infrastructure. All commenters agreed that regulation should be the product of some combination of central AI regulators and federal agencies with existing jurisdictional authority (FTC, DOJ, CFPB, EEOC, etc.). Some groups endorsed the creation of a powerful central AI regulator with a peripheral role for existing agencies within their jurisdictions, while others argued that existing agencies are best positioned to regulate such a vast technology with seemingly infinite uses and can do so within existing law. An emergent middle path would alter jurisdictional powers where necessary so that existing agencies form the front line against misuse of AI, give one existing agency an expanded role as a general AI regulator and failsafe, and expand NIST’s capacity to advise, provide technical expertise, and develop standards.


The productive discussion visible across the responses to the NTIA's request for comment is emblematic of the collaborative process that has shaped U.S. AI development to date and points the way toward opportunities for regulation. Together with last week's announcement at the White House, these comments lend credence to the hope that further efforts will build upon this cooperation to minimize the risks emerging from AI while protecting and encouraging its significant promise.

This article was originally published by IAPP.
