Creating a Uniform Approach to AI Accountability

Jul 10, 2023 by Divya Sridhar, Ph.D., Director, Privacy Initiatives, BBB National Programs

As innovators in the U.S. roll out new technologies, tools, and data-driven automated systems that incorporate generative artificial intelligence (AI) and machine learning, the federal government has called for heightened standards to protect against AI’s potential harms, including bias, discrimination, and misinformation.

The White House objectives in building a national AI strategy are clear: President Biden’s May 2023 fact sheet notes the importance of additional federal research on AI, public stakeholder input on best practices, and sector-specific assessments of AI’s impact. Of note, the National Telecommunications and Information Administration (NTIA), which serves as the telecommunications advisor to the President, has taken particular interest in understanding AI from an accountability and governance perspective.

The NTIA’s recent request for comment focuses on a distinct and less-documented angle in the public discourse: the role of soft law mechanisms in supplementing the enforcement objectives of federal agencies. These accountability mechanisms – including compliance certifications, audits, and third-party verification practices – can serve as a policy solution to further the goals of the White House’s national strategy on AI.

While accountability has received far less attention in the discussion of AI governance, the topic has been baked into several federal and state consumer privacy protection activities. For example, Senate Majority Leader Chuck Schumer’s newly published Safe Innovation Framework for AI Policy seeks to encourage AI innovation while advancing security, accountability, foundations, and explainability. Previous drafts of U.S. federal privacy legislation, such as the American Data Privacy and Protection Act (ADPPA, or H.R. 8152), include references to “technical accountability programs” and underscore the role of industry accountability programs in the data privacy ecosystem, which underpins algorithmic decision making and AI.

Tennessee’s recently enacted consumer privacy law establishes that a controller or processor may leverage a voluntary privacy program and its respective privacy policy as an affirmative defense to a cause of action – so long as the program is aligned to the NIST Risk Management Framework and meets appropriate criteria as defined in the bill. The law cites the Asia-Pacific Economic Cooperation (APEC) Cross-Border Privacy Rules (CBPR) system as an example of an appropriate certification mechanism to uphold accountability for privacy programs. The reference to accountability mechanisms in the consumer privacy field reflects expectations that are likely to carry over into the AI accountability field as it continues to grow.

State attorneys general have also signaled their interest in closely monitoring privacy and algorithms, both in cautionary statements and in clear enforcement actions.

Yet, despite these laudable goals, the fact remains that federal and state governments have limited bandwidth to scrutinize existing practices, verify them, and enforce where there are gaps. The NTIA’s call for comment can thus shine an important light on what AI governance can look like, whether through existing models or new third-party independent accountability mechanisms.

Across sectors and technologies, independent accountability mechanisms provide a means to increase transparency in the marketplace, coalesce best practices, and build consumer trust. Trusted third parties have a role to play to bring marketplace players together—capitalizing on the cutting-edge AI governance work being done across organizations—while showcasing those organizations that embrace best practices, in a verifiable and transparent manner. 


Building AI Accountability 

In BBB National Programs’ response to the NTIA request for comment on AI accountability, we focused on two key aspects. The first is an “ideal checklist” of characteristics that companies should incorporate into a certification or accountability mechanism. The second, to be covered in a future article, focuses on best practices gleaned from third-party privacy accountability programs with a longstanding history of trust and commitment to the marketplace.

The “Ideal Checklist” 

AI accountability mechanisms such as certifications, audits, and assessments are essential elements of the landscape to ensure that AI systems are developed and deployed in a responsible and trustworthy manner. When such systems are properly structured—with incentives aligned and quality assured—they serve the purpose of providing independent and objective verification of the claims, compliance, and quality of AI systems, as well as enhancing the transparency and accountability of AI actors.

When fully mature, an effective and accountable independent certification mechanism will demonstrate the following characteristics: 

  • Consistent Standards. Encouraging commitments to verifiable standards brings consistency to the marketplace, building trust by demonstrating baseline conformity with best practices. 
  • Transparency. Mechanisms that require public commitment to standards along with other transparent markers (such as verifiable trust marks, annual reports, or consumer complaint processes) help to incentivize businesses to adopt best practices.
  • Defined Areas of Responsibility. Independent certifications often provide markers that assist businesses in reviewing commercially relevant compliance obligations. Mutual recognition of lines of responsibility can reduce friction in the marketplace that may otherwise require complex negotiations.
  • Oversight and Independent Review. Independent accountability mechanisms often retain the authority not only to hold participants to their promises but also to refer non-compliant behavior to relevant regulatory bodies, such as the Federal Trade Commission.
  • Regulatory Recognition. Research suggests that the private sector values both formal and informal recognition of industry standards and codes, coupled with independent accountability, to demonstrate leadership and uphold good practices in the marketplace.
  • Layers of Accountability. Holding independent reviewers to rigorous criteria of transparency, confidentiality, and impartiality is a necessary precondition of any effective accountability structure.


In the next piece, we will discuss a second approach to AI accountability: lessons learned from the related area of privacy law, where accountability programs flourish. As a recognized leader in self-regulation and accountability programs for privacy, BBB National Programs believes that those in nascent fields like AI can learn from longstanding programs such as the Children’s Advertising Review Unit (CARU) COPPA Safe Harbor Program, the Cross Border Privacy Rules certification program, and the EU Privacy Shield Program (soon to be the EU-U.S. Data Privacy Framework certification).
