Colorado’s Second Potential Gold Rush: Regulating AI

May 28, 2024 by Divya Sridhar, Vice President, Global Privacy Division and Privacy Initiatives Operations, BBB National Programs

This month, Colorado Gov. Jared Polis signed into law the Colorado Artificial Intelligence Act, Senate Bill 205, first-of-its-kind comprehensive legislation regarding the use of artificial intelligence (AI) in “high-risk” systems, potentially leaving other states and nations in the dust during this new gold rush. 

This Act, which takes effect in 2026, puts the United States ahead of many other privacy-first world leaders and jurisdictions in writing the rules for limiting algorithmic discrimination when AI, in particular high-risk AI, is leveraged by companies. 

In the law, high-risk AI is defined as “any AI system that, when deployed, makes or is a substantial factor in making a consequential decision.” The definition expressly excludes a list of technologies that do not qualify as high-risk AI, including calculators, spreadsheets, firewalls, and anti-virus and anti-malware systems. The law focuses on ensuring that risk management programs are in place when high-risk AI is deployed, and it suggests that companies can position themselves to comply by building their risk management programs around the federal NIST AI Risk Management Framework.

To insiders who have been closely tracking legal and regulatory privacy changes at the state level, Colorado’s AI Act comes as no surprise. Over the last two years, Colorado has been planting the seeds on AI through many provisions in its multi-year stakeholder process to finalize its data privacy regulations. Those regulations included tailored expectations and considerations for covered entities: the businesses and nonprofit organizations to whom the law applies. 

For example, under the Colorado Privacy Act and its respective regulations, companies must meet heightened requirements if they are leveraging “solely automated processing,” “human involved automated processing,” or “human reviewed processing,” or are “profiling” customers, as these actions concern the use of data in automated systems. These unique terms and expectations set Colorado’s privacy rules apart from others, such as the California Privacy Rights Act. 

Colorado’s comprehensive consumer privacy law, the Colorado Privacy Act, took effect last year with a wide range of data privacy regulations that add obligations for businesses, including data minimization, purpose limitation, data retention, and consent required prior to processing of sensitive data.


Colorado AI Act vs. Similar Privacy Proposals

When it comes to enforcement, the Colorado AI Act includes some “teeth.” If a covered entity deploying high-risk AI finds that such deployment has caused algorithmic discrimination, the deployer must notify the Colorado Attorney General (AG) and the affected consumers as soon as possible, and no later than 90 days after discovery. The AG may also request disclosure of the risk management policy developed by the entity leveraging the high-risk AI. Section 6-1-1706 establishes that violations constitute unfair trade practices for bad actors who choose not to cure violations and take good faith measures toward corrective action.

Colorado’s AI Act takes a similar but more narrowly tailored enforcement approach to state privacy proposals such as Tennessee’s Information Protection Act by including an affirmative defense measure. Tennessee’s law establishes that a controller or processor can leverage a voluntary privacy program and its respective privacy policy as an affirmative defense to a cause of action so long as the program is aligned to the NIST Risk Management Framework and meets appropriate criteria as defined in the bill. 

But Tennessee’s law goes further by also citing the Asia Pacific Economic Cooperation’s Cross Border Privacy Rules (CBPR) system (now referenced as the Global CBPR system), a co-regulatory mechanism that involves government and third-party oversight of cross border data transfers, as an example of an appropriate certification mechanism to uphold accountability for privacy programs.

Colorado’s AI Act clearly differs from the recent state-law trend toward stricter enforceability of privacy law violations by not including a private right of action. For example, Vermont’s new Data Privacy Act, Maryland’s new Kids Code, and last year’s My Health My Data Act in Washington all permit private rights of action, allowing individual citizens to bring lawsuits against companies that are allegedly violating the law. Colorado’s AI Act also differs from federal privacy proposals like the recently introduced American Privacy Rights Act (APRA), which includes both a private right of action and independent accountability to streamline enforcement. 

While the AI Act is the first law of its kind here in the United States, the law leaves room for further regulations to add clarity and detail on enforceability. Other states currently working on AI legislation may want to consider a more robust soft law approach that enhances the role of independent accountability and mirrors existing frameworks in federal sectoral laws and state privacy laws. 

Such an approach would add language to the regulations (or the original privacy statute) ensuring that covered entities engage credible third parties, with a track record of providing independent accountability, to confirm the validity of the privacy and risk management program used to carry out the risk assessments the AI law requires. That way, the attorney general or other privacy/AI enforcer is not left to enforce the Act alone; instead, the regulator can depend on an independent third party’s review of the actions a company is taking to assess whether it is carrying out its promises toward AI compliance.

Our role in the privacy and AI ecosystem includes:

  1. Convening stakeholders to build recommendations in emerging fields, with tangible deliverables, including the AI In Hiring & Recruiting Principles and Protocols.
  2. Developing and updating guidelines for children’s advertising, privacy, the metaverse, and AI through the Children’s Advertising Review Unit (CARU), in alignment with the Federal Trade Commission.
  3. Operating as an Accountability Agent partner to the U.S. Department of Commerce to certify good actors to the global baseline standards through our Global Privacy Division programs.


As covered entities explore the ramifications of AI laws on their business models, it is expected that these laws and regulations will need further amendment where they fell short or left gaps upon enactment, or updating in areas, such as definitions and enforcement, where they were originally vague, misleading, or unclear. 

Soft law plays an important role in helping convene stakeholders that can shape this very dialogue about the past, present, and future of privacy and AI policy, laws, and regulations. If interested in learning more about the soft law convenings held on various topics, from children’s privacy and AI to emerging technology, contact us at
