Colorado’s Second Potential Gold Rush: Regulating AI

May 28, 2024 by Divya Sridhar, Vice President, Global Privacy Division and Privacy Initiatives Operations, BBB National Programs

This month, Colorado Gov. Jared Polis signed into law the Colorado Artificial Intelligence Act, Senate Bill 205, first-of-its-kind comprehensive legislation regulating the use of artificial intelligence (AI) in “high-risk” systems, potentially leaving other states and nations in the dust in this new gold rush.

This Act, which goes into effect in 2026, puts the United States ahead of many other privacy-first world leaders and jurisdictions in writing the rules for limiting algorithmic discrimination when AI, in particular high-risk AI, is leveraged by companies.

In the law, high-risk AI is defined as “any AI system that, when deployed, makes or is a substantial factor in making a consequential decision.” The definition also lists technologies that fall outside its scope, including calculators, spreadsheets, firewalls, and anti-virus and anti-malware systems. The law focuses on ensuring risk management programs are in place when high-risk AI is deployed, and it suggests that companies can position themselves to comply by building those programs on the federal NIST AI Risk Management Framework.

To insiders who have been closely tracking legal and regulatory privacy changes at the state level, Colorado’s AI Act comes as no surprise. Over the last two years, Colorado has been planting the seeds on AI through many provisions in its multi-year stakeholder process to finalize its data privacy regulations. Those regulations included tailored expectations and considerations for covered entities: the businesses and nonprofit organizations to whom the law applies.

For example, companies must meet heightened requirements under the Colorado Privacy Act and its regulations if they engage in “solely automated processing,” “human involved automated processing,” “human reviewed processing,” or “profiling” of customers, as these actions concern the use of data in automated systems. Colorado’s privacy rules diverge from others (e.g., the California Privacy Rights Act) because of these unique terms and expectations.

The Colorado Privacy Act, the state’s comprehensive consumer privacy law, took effect last year with a wide range of data privacy regulations that add obligations for businesses, including data minimization, purpose limitation, data retention, and consent prior to processing sensitive data.


Colorado AI Act vs. Similar Privacy Proposals

When it comes to enforcement, the Colorado AI Act has some “teeth.” If a covered entity deploying high-risk AI finds that the deployment has caused algorithmic discrimination, the deployer must notify the Colorado Attorney General (AG) and the affected consumers without unreasonable delay, and no later than 90 days after discovery. The AG may also request disclosure of the risk management policy developed by the entity leveraging the high-risk AI. Section 6-1-1706 establishes that violations constitute unfair trade practices for bad actors who do not cure violations and take good-faith measures toward corrective action.

Colorado’s AI Act takes a similar but more narrowly tailored enforcement approach to state privacy proposals such as Tennessee’s Information Protection Act by including an affirmative-defense measure. Tennessee’s law establishes that a controller or processor may invoke a voluntary privacy program and its privacy policy as an affirmative defense to a cause of action, so long as the program is aligned to the NIST Privacy Framework and meets the appropriate criteria defined in the bill.

But Tennessee’s law goes further by also citing the Asia Pacific Economic Cooperation’s Cross Border Privacy Rules (CBPR) system (now referenced as the Global CBPR system), a co-regulatory mechanism that involves government and third-party oversight of cross border data transfers, as an example of an appropriate certification mechanism to uphold accountability for privacy programs.

Colorado’s AI Act clearly differs from the recent trend in state laws toward stricter enforcement of privacy violations in that it does not include a private right of action. For example, Vermont’s new Data Privacy Act, Maryland’s new Kids Code, and last year’s My Health My Data Act in Washington all permit private rights of action, allowing individual citizens to bring lawsuits against companies that allegedly violate the law. Colorado’s AI Act also differs from federal privacy proposals like the recently introduced American Privacy Rights Act (APRA), which includes both a private right of action and independent accountability to streamline enforcement.

While the AI Act is the first law of its kind here in the United States, the law leaves room for further regulations to add clarity and detail on enforceability. Other states currently working on AI legislation may want to consider a more robust soft law approach that enhances the role of independent accountability and mirrors existing frameworks in federal sectoral laws and state privacy laws. 

Such an approach would add language to the regulations (or the original statute) ensuring that covered entities engage credible third parties, with a track record of providing independent accountability, to confirm the validity of the privacy and risk management programs used to carry out the risk assessments the AI law requires. That way, the attorney general or other privacy/AI enforcer would not be left to enforce the Act alone; the regulator could instead rely on an independent third party to review the actions a company is taking and assess its efforts to deliver on its AI compliance promises.

Our role in the privacy and AI ecosystem includes:

  1. Convening stakeholders to build recommendations in emerging fields, with tangible deliverables, including the AI In Hiring & Recruiting Principles and Protocols.
  2. Developing and updating guidelines for children’s advertising, privacy, the metaverse, and AI through the Children’s Advertising Review Unit (CARU), in alignment with the Federal Trade Commission.
  3. Operating as an Accountability Agent partner to the U.S. Department of Commerce to certify good actors to the global baseline standards through our Global Privacy Division programs.


As covered entities explore the ramifications of AI laws on their business models, the laws and regulations will likely need further updates: amendments where they fell short or left gaps upon enactment, and revisions in areas, such as definitions and enforcement, where they were originally vague, misleading, or unclear.

Soft law plays an important role in helping convene stakeholders that can shape this very dialogue about the past, present, and future of privacy and AI policy, laws, and regulations. If interested in learning more about the soft law convenings held on various topics, from children’s privacy and AI to emerging technology, contact us at GlobalPrivacy@bbbnp.org.
