Creating a Uniform Approach to AI Accountability

Jul 10, 2023 by Divya Sridhar, Ph.D., Director, Privacy Initiatives, BBB National Programs

As innovators in the U.S. roll out new technologies, tools, and data-driven automated systems that incorporate generative artificial intelligence (AI) and machine learning, the federal government has called for heightened standards to protect against AI's potential harms, including bias, discrimination, and misinformation. 

The White House objectives in building a national AI strategy are clear: President Biden's May 2023 fact sheet notes the importance of additional federal research on AI, public stakeholder input on best practices, and understanding AI's sector-specific impacts. Of note, the National Telecommunications and Information Administration (NTIA), which serves as the telecom advisor to the President, has taken particular interest in understanding AI from an accountability and governance perspective.

The NTIA's recent request for comment focuses on a distinct yet underexamined angle in the public discourse: the role of soft law mechanisms in supplementing the enforcement objectives of federal agencies. These accountability mechanisms – including compliance certifications, audits, and third-party verification practices – can serve as a policy solution to further the goals of the White House's national strategy on AI. 

Though it has received far less attention in the AI governance debate, accountability has been baked into several federal and state consumer privacy protection efforts. For example, Senate Majority Leader Chuck Schumer's newly published Safe Innovation Framework for AI Policy seeks to encourage AI innovation while advancing security, accountability, foundations, and explainability. Previous drafts of U.S. federal privacy legislation, such as the American Data Privacy and Protection Act (ADPPA, or H.R. 8152), include references to "technical accountability programs" and underscore the role of industry accountability programs in the data privacy ecosystem, which underpins algorithmic decision making and AI. 

Tennessee's recently enacted consumer privacy law establishes that a controller or processor may leverage a voluntary privacy program and its respective privacy policy as an affirmative defense to a cause of action – so long as the program aligns with the NIST Risk Management Framework and meets appropriate criteria as defined in the bill. The law cites the Asia-Pacific Economic Cooperation (APEC) Cross-Border Privacy Rules (CBPR) system as an example of an appropriate certification mechanism to uphold accountability for privacy programs. These references to accountability mechanisms in consumer privacy law reflect expectations that are likely to carry over into the AI accountability field as it continues to grow.

State attorneys general have also signaled their interest in closely monitoring privacy and algorithms, both through cautionary statements and through clear enforcement actions.

Yet despite these laudable goals, federal and state governments have limited bandwidth to scrutinize existing practices, verify them, and enforce where there are gaps. The NTIA's call for comment can thus shine an important light on what AI governance can look like, whether through existing models or new third-party independent accountability mechanisms.

Across sectors and technologies, independent accountability mechanisms provide a means to increase transparency in the marketplace, coalesce best practices, and build consumer trust. Trusted third parties have a role to play in bringing marketplace players together—capitalizing on the cutting-edge AI governance work being done across organizations—while showcasing, in a verifiable and transparent manner, those organizations that embrace best practices. 

Building AI Accountability 

In BBB National Programs’ response to the NTIA request for information on AI accountability, we focused on two key aspects. The first is an “ideal checklist” of characteristics that companies should incorporate into a certification or accountability mechanism. The second, to be covered in a future article, focuses on best practices gleaned from third-party privacy accountability programs that have a longstanding history, trust, and commitment to the marketplace.

The “Ideal Checklist” 

AI accountability mechanisms such as certifications, audits, and assessments are essential to ensuring that AI systems are developed and deployed in a responsible and trustworthy manner. When such mechanisms are properly structured—with incentives aligned and quality assured—they provide independent, objective verification of the claims, compliance, and quality of AI systems, and they enhance the transparency and accountability of AI actors.

When fully mature, an effective and accountable independent certification mechanism will demonstrate the following characteristics: 

  • Consistent Standards. Encouraging commitments to verifiable standards brings consistency to the marketplace, building trust by demonstrating baseline conformity with best practices. 
  • Transparency. Mechanisms that require public commitment to standards along with other transparent markers (such as verifiable trust marks, annual reports, or consumer complaint processes) help to incentivize businesses to adopt best practices.
  • Defined Areas of Responsibility. Independent certifications often provide markers that assist businesses in reviewing commercially relevant compliance obligations. Mutual recognition of lines of responsibility can reduce friction in the marketplace that may otherwise require complex negotiations.
  • Oversight and Independent Review. Independent accountability mechanisms often retain the authority not only to hold participants to their promises but also to refer non-compliant behavior to relevant regulatory bodies, such as the Federal Trade Commission.
  • Regulatory Recognition. Research suggests that the private sector values both formal and informal recognition of industry standards and codes, coupled with independent accountability, to demonstrate leadership and uphold good practices in the marketplace.
  • Layers of Accountability. Holding independent reviewers to rigorous criteria of transparency, confidentiality, and impartiality is a necessary precondition of any effective accountability structure.

In the next piece, we will discuss a second approach to AI accountability: drawing lessons from the related field of privacy law, where accountability programs flourish. As a recognized leader in self-regulation and accountability programs for privacy, BBB National Programs believes those in nascent fields like AI can learn from longstanding programs such as the Children's Advertising Review Unit (CARU) COPPA Safe Harbor Program, the Cross-Border Privacy Rules certification program, and the EU Privacy Shield Program (soon to be the EU-U.S. Data Privacy Framework certification).
