Creating a Uniform Approach to AI Accountability

Jul 10, 2023 by Divya Sridhar, Ph.D., Director, Privacy Initiatives, BBB National Programs

As innovators in the U.S. roll out new technologies, tools, and data-driven automated systems that incorporate generative artificial intelligence (AI) and machine learning, the federal government has also called for heightened standards to protect against AI’s potential harms, including bias, discrimination, and misinformation. 

The White House objectives in building a national AI strategy are clear: President Biden’s May 2023 fact sheet notes the importance of additional federal research on AI, of public stakeholder input on best practices, and of understanding AI’s sector-specific impacts. Of note, the National Telecommunications and Information Administration (NTIA), which serves as the telecommunications advisor to the President, has taken particular interest in understanding AI from an accountability and governance perspective.

The NTIA’s recent request for comment focuses on a distinct yet lesser-documented angle in the public discourse: the role of soft law mechanisms in supplementing the enforcement objectives of federal agencies. These accountability mechanisms – including compliance certifications, audits, and third-party verification practices – can serve as a policy solution that furthers the goals of the White House’s national strategy on AI. 

Though it has received far less attention in the discussion of AI governance, accountability has been baked into several federal and state consumer privacy protection efforts. For example, Senate Majority Leader Chuck Schumer’s newly published Safe Innovation Framework for AI Policy seeks to encourage AI innovation while advancing security, accountability, foundations, and explainability. Previous drafts of U.S. federal privacy legislation, such as the American Data Privacy and Protection Act (ADPPA, or H.R. 8152), reference “technical accountability programs” and underscore the role of industry accountability programs in the data privacy ecosystem that underpins algorithmic decision making and AI. 

Tennessee’s recently enacted consumer privacy law establishes that a controller or processor may leverage a voluntary privacy program and its respective privacy policy as an affirmative defense to a cause of action – so long as the program aligns with the NIST Risk Management Framework and meets the criteria defined in the statute. The law cites the Asia-Pacific Economic Cooperation (APEC) Cross Border Privacy Rules (CBPR) system as an example of an appropriate certification mechanism for upholding accountability in privacy programs. These references to accountability mechanisms in consumer privacy law reflect expectations that are likely to carry over into the AI accountability field as it continues to grow.

State attorneys general have also signaled their interest in closely monitoring privacy and algorithms, both through cautionary statements and through enforcement actions.

Yet despite these laudable goals, federal and state governments have limited bandwidth to scrutinize existing practices, verify them, and enforce where gaps exist. The NTIA’s call for comment can therefore shine an important light on what AI governance could look like, whether through existing models or through new, independent third-party accountability mechanisms.

Across sectors and technologies, independent accountability mechanisms provide a means to increase transparency in the marketplace, coalesce best practices, and build consumer trust. Trusted third parties have a role to play in bringing marketplace players together, capitalizing on the cutting-edge AI governance work being done across organizations, while showcasing the organizations that embrace best practices in a verifiable and transparent manner. 

 

Building AI Accountability 

In BBB National Programs’ response to the NTIA request for information on AI accountability, we focused on two key aspects. The first is an “ideal checklist” of characteristics that companies should incorporate into a certification or accountability mechanism. The second, to be covered in a future article, focuses on best practices gleaned from third-party privacy accountability programs that have a longstanding history, trust, and commitment to the marketplace.

The “Ideal Checklist” 

AI accountability mechanisms such as certifications, audits, and assessments are essential to ensuring that AI systems are developed and deployed in a responsible and trustworthy manner. When such mechanisms are properly structured – with incentives aligned and quality assured – they provide independent, objective verification of the claims, compliance, and quality of AI systems and enhance the transparency and accountability of AI actors.

When fully mature, an effective and accountable independent certification mechanism will demonstrate the following characteristics: 

  • Consistent Standards. Encouraging commitments to verifiable standards brings consistency to the marketplace, building trust by demonstrating baseline conformity with best practices. 
  • Transparency. Mechanisms that require public commitment to standards along with other transparent markers (such as verifiable trust marks, annual reports, or consumer complaint processes) help to incentivize businesses to adopt best practices.
  • Defined Areas of Responsibility. Independent certifications often provide markers that assist businesses in reviewing commercially relevant compliance obligations. Mutual recognition of lines of responsibility can reduce friction in the marketplace that may otherwise require complex negotiations.
  • Oversight and Independent Review. Independent accountability mechanisms often retain the authority not only to hold participants to their promises but also to refer non-compliant behavior to relevant regulatory bodies, such as the Federal Trade Commission.
  • Regulatory Recognition. Research suggests that the private sector values both formal and informal recognition of industry standards and codes, coupled with independent accountability, to demonstrate leadership and uphold good practices in the marketplace.
  • Layers of Accountability. Holding independent reviewers to rigorous criteria of transparency, confidentiality, and impartiality is a necessary precondition of any effective accountability structure.

 

In the next piece, we will discuss a second approach to AI accountability: reviewing lessons learned from the related field of privacy law, where accountability programs flourish. As a recognized leader in self-regulation and accountability programs for privacy, BBB National Programs believes those in nascent fields like AI can learn from longstanding programs such as the Children’s Advertising Review Unit (CARU) COPPA Safe Harbor Program, the Cross Border Privacy Rules certification program, and the EU Privacy Shield Program (soon to be the EU-U.S. Data Privacy Framework certification).
