Creating a Uniform Approach to AI Accountability

Jul 10, 2023 by Divya Sridhar, Ph.D., Director, Privacy Initiatives, BBB National Programs

As innovators in the U.S. roll out new technologies, tools, and data-driven automated systems that incorporate generative artificial intelligence (AI) and machine learning, the federal government has called for heightened standards to protect against AI’s potential harms, including bias, discrimination, and misinformation.

The White House objectives in building a national AI strategy are clear: President Biden’s May 2023 fact sheet notes the importance of additional federal research on AI, public stakeholder input on best practices, and sector-specific assessments of AI’s impacts. Of note, the National Telecommunications and Information Administration (NTIA), which serves as the telecom advisor to the President, has taken particular interest in understanding AI from an accountability and governance perspective.

The NTIA’s recent request for comment focuses on this distinct and yet lesser documented angle in the public discourse: the role of soft law mechanisms to supplement the enforcement objectives of federal agencies. These accountability mechanisms – including compliance certifications, audits, and third-party verification practices – can serve as a policy solution to further the goals of the White House’s national strategy on AI. 

While it has received far less attention in the discussion of AI governance, accountability has been baked into several federal and state consumer privacy protection efforts. For example, Senate Majority Leader Chuck Schumer’s newly published Safe Innovation Framework for AI Policy seeks to encourage AI innovation while advancing security, accountability, foundations, and explainability. Previous drafts of U.S. federal privacy legislation, such as the American Data Privacy and Protection Act (ADPPA, or H.R. 8152), included references to “technical accountability programs” and to the role of industry accountability programs in the data privacy ecosystem, which underpins algorithmic decision-making and AI.

Tennessee’s recently enacted consumer privacy law establishes that a controller or processor may leverage a voluntary privacy program and its respective privacy policy as an affirmative defense to a cause of action, so long as the program aligns with the NIST Risk Management Framework and meets the criteria defined in the bill. The law cites the Asia-Pacific Economic Cooperation (APEC) Cross-Border Privacy Rules (CBPR) system as an example of an appropriate certification mechanism to uphold accountability for privacy programs. These references to accountability mechanisms in consumer privacy law reflect expectations that are likely to carry over into the AI accountability field as it continues to grow.

State attorneys general have also signaled their interest in closely monitoring privacy and algorithms, both through cautionary statements and through clear enforcement actions.

Yet despite these laudable goals, federal and state governments have limited bandwidth to scrutinize existing practices, verify them, and enforce where there are gaps. The NTIA’s call for comment can therefore shine an important light on what AI governance can look like, whether through existing models or through new third-party independent accountability mechanisms.

Across sectors and technologies, independent accountability mechanisms provide a means to increase transparency in the marketplace, coalesce best practices, and build consumer trust. Trusted third parties have a role to play to bring marketplace players together—capitalizing on the cutting-edge AI governance work being done across organizations—while showcasing those organizations that embrace best practices, in a verifiable and transparent manner. 


Building AI Accountability 

In BBB National Programs’ response to the NTIA request for information on AI accountability, we focused on two key aspects. The first is an “ideal checklist” of characteristics that companies should incorporate into a certification or accountability mechanism. The second, to be covered in a future article, focuses on best practices gleaned from third-party privacy accountability programs that have a longstanding history, trust, and commitment to the marketplace.

The “Ideal Checklist” 

AI accountability mechanisms such as certifications, audits, and assessments are essential elements of the landscape to ensure that AI systems are developed and deployed in a responsible and trustworthy manner. When such systems are properly structured—with incentives aligned and quality assured—they serve the purpose of providing independent and objective verification of the claims, compliance, and quality of AI systems, as well as enhancing the transparency and accountability of AI actors.

When fully mature, an effective and accountable independent certification mechanism will demonstrate the following characteristics: 

  • Consistent Standards. Encouraging commitments to verifiable standards brings consistency to the marketplace, building trust by demonstrating baseline conformity with best practices. 
  • Transparency. Mechanisms that require public commitment to standards along with other transparent markers (such as verifiable trust marks, annual reports, or consumer complaint processes) help to incentivize businesses to adopt best practices.
  • Defined Areas of Responsibility. Independent certifications often provide markers that assist businesses in reviewing commercially relevant compliance obligations. Mutual recognition of lines of responsibility can reduce friction in the marketplace that may otherwise require complex negotiations.
  • Oversight and Independent Review. Independent accountability mechanisms often retain the authority not only to hold participants to their promises but also to refer non-compliant behavior to relevant regulatory bodies, such as the Federal Trade Commission.
  • Regulatory Recognition. Research suggests that the private sector values both formal and informal recognition of industry standards and codes, coupled with independent accountability, to demonstrate leadership and uphold good practices in the marketplace.
  • Layers of Accountability. Holding independent reviewers to rigorous criteria of transparency, confidentiality, and impartiality is a necessary precondition of any effective accountability structure.


In the next piece, we will discuss a second approach to AI accountability: lessons learned from the related field of privacy law, where accountability programs flourish. As a recognized leader in self-regulation and accountability programs for privacy, BBB National Programs believes those in nascent fields like AI can learn from longstanding programs such as the Children’s Advertising Review Unit (CARU) COPPA Safe Harbor Program, the Cross Border Privacy Rules certification program, and the EU Privacy Shield Program (soon to be the EU-U.S. Data Privacy Framework certification).
