Center for Industry Self-Regulation

BBB National Programs’ Center for Industry Self-Regulation (CISR), a 501(c)(3) non-profit, was created to harness the historic power of self-regulation, also called soft law, in the United States to empower business accountability. CISR is dedicated to education and research that support responsible business leaders as they develop fair, future-proof best practices, and to educating the general public on the conditions necessary for industry self-regulation.

Harnessing the Power of Self-Regulation to Empower Business Accountability

For Funders

Our research explores how to solve collective challenges in the business community, drawing on decades of experience operating independent self-regulatory and co-regulatory programs.

For Business

Learn about the challenges facing your industry to help identify opportunities for new best practices that will enhance the trust and respect of consumers, partners, and regulators.

In the Incubator

TeenAge Privacy Program (TAPP)

The TAPP Incubator project has designed safeguards for the personal data of teens, building a bridge between privacy protections for children and those for adults that can serve as a global model. The TAPP Roadmap is an operational framework designed to help companies develop digital products and services that consider and respond to the heightened risks and potential harms facing teenage consumers, and to ensure that businesses collect and manage teen data responsibly.

Get the Roadmap

AI in Hiring and Recruiting

In the recruiting and hiring process, where algorithms increasingly aid human decision making, how can we combine important technological innovation with a proactive approach to employment law regulations and future-proof standards? The AI Incubator project has developed the Principles and Protocols for Trustworthy AI in Recruiting and Hiring, a global baseline standard for the use of AI applications in recruitment and hiring that provides practical, actionable guidance for employers and vendors seeking to leverage AI technology responsibly and equitably.

Learn More

Emerging Areas of Interest

Connected Vehicles: As cars become smarter and more interconnected, do the rules of the road need to change? How do we anticipate the new normal of safety, security, and data protection, while ensuring that businesses remain on a level playing field and consumers are heard?

The Metaverse: The rules of the road for the metaverse, which is being hailed as the next big technological revolution, are still being written. How can we ensure consumers are protected while encouraging innovation as businesses explore this next digital frontier?

Get Involved

Research

CISR focuses on research that addresses industry-wide challenges to develop fair, future-proof best practices.

Blogs

Developing Principles and Protocols for Recruiting and Hiring with AI

Aug 25, 2023, 09:12 AM by Eric D. Reicin, President & CEO, BBB National Programs
Employing AI in the recruiting and hiring process voluntarily, under the auspices of independent industry self-regulation, is often far preferable to being forced to do so under a regime of top-down government regulation.

Since I first wrote about the use of artificial intelligence tools in the recruiting and hiring process in 2021, AI adoption across all business functions has spiked dramatically. Generative AI, in particular, has taken center stage, even raising existential questions in the minds of its creators.

Now, more than ever, business and nonprofit leaders need a local, state, federal, and international roadmap to follow for the use of AI – not just to play defense and reduce their risk of legal liability but also to play offense and to inculcate the responsible use of AI and machine learning in their organizations. Trustworthy AI is quickly becoming essential in many corporate and nonprofit functions, and its use in recruiting and hiring should be beyond reproach.

No doubt, the landscape on this issue is always evolving, and the U.S. Equal Employment Opportunity Commission’s (EEOC) recent action on AI is noteworthy. In May 2023, the EEOC issued new technical guidance demonstrating how Title VII – which prohibits employers from discriminating based on “race, color, religion, sex (including pregnancy, sexual orientation, and gender identity), or national origin” – applies to companies using AI-based decision-making tools in employment decisions.

This EEOC guidance follows the agency’s 2021 Initiative on Artificial Intelligence and Algorithmic Fairness, which aims to ensure technologies like AI are not discriminatory, and it reaffirms my long-held belief that employers bear ultimate accountability for the responsible use of these tools. Even though AI tools may be developed by outside vendors, their responsible use is squarely in the hands of the companies doing the recruiting and hiring. To put a further legal fine point on it, companies are responsible for the work of the vendor under an agency liability theory.

As a business or nonprofit leader, it is important for you to understand why and how you are going to use an AI tool for recruiting and hiring. Beyond the threshold query of “Does it work?” consider asking yourself these foundational questions:

  • How much input does this tool have on the hiring decision?
  • Is a candidate made aware that an AI or machine learning filtering tool is being used during the consideration process?
  • Does the AI-enabled tool allow for flexibility when it comes to disability or other factors under relevant law? For instance, can the job applicant expect ADA-compliant recruiting conditions, such as an appropriate reasonable accommodation under law?
  • What is the role of human oversight?
  • Since privacy is often of heightened concern for a job seeker, does the tool respect and follow relevant privacy laws and regulations, such as rapidly evolving state privacy laws and municipal rules on AI and employment? For example, New York City’s Local Law 144, as reported by Bloomberg, “prohibits employers from using automated tools to screen candidates unless the software has undergone an independent review to check for bias against protected groups.” A sketch of the math behind such a review follows this list.
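
To make that last bullet concrete, here is a minimal sketch of the selection-rate and impact-ratio math an independent bias review might run. The group labels, data, and function names are illustrative assumptions, and the 0.8 cutoff is the EEOC’s long-standing four-fifths rule of thumb for spotting possible adverse impact, not a threshold set by Local Law 144 itself.

    from collections import defaultdict

    def selection_rates(records):
        """records: iterable of (group, selected) pairs, where `group` is a
        demographic category label and `selected` is a boolean outcome."""
        totals, chosen = defaultdict(int), defaultdict(int)
        for group, selected in records:
            totals[group] += 1
            if selected:
                chosen[group] += 1
        return {g: chosen[g] / totals[g] for g in totals}

    def impact_ratios(records):
        """Impact ratio: a group's selection rate divided by the highest
        group's selection rate, the formulation used in Local Law 144 audits."""
        rates = selection_rates(records)
        best = max(rates.values())
        return {g: rate / best for g, rate in rates.items()}

    # Hypothetical screening outcomes: (group, advanced past the AI screen?)
    data = ([("A", True)] * 40 + [("A", False)] * 60
            + [("B", True)] * 28 + [("B", False)] * 72)
    for group, ratio in impact_ratios(data).items():
        flag = "  <- below the four-fifths benchmark" if ratio < 0.8 else ""
        print(f"group {group}: impact ratio {ratio:.2f}{flag}")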

As Axios reported, “A team of Stanford researchers is warning that leading AI models are woefully non-compliant with responsible AI standards, as represented by the European Union's Artificial Intelligence Act.” This proposed law is currently in late draft form but, by many accounts, is on a fast-moving train to adoption. In addition, the European Commission considers AI systems used in employment and worker management a “high-risk” area, which raises the stakes for global corporations and nonprofits to get things right.

Closer to home, I expect that the federal government will soon focus more on the use of AI tools in managing employee job performance, as well as in the consumer surveillance arena. The General Counsel of the National Labor Relations Board, for example, takes the position that certain AI technology tools that employers use to manage employee job performance may be problematic. Similarly, the White House Office of Science and Technology Policy is gathering data on the subject to formulate Administration policy, and the Federal Trade Commission is considering proposed rules regarding how companies monitor consumer behavior.

With the above in mind, a fair question to ask now is: When developments are so rapid, is it ever possible to stay completely in sync?

Certainly, it is challenging for quick innovation in any industry to keep pace with regulatory requirements or even soft law. This is especially true in areas like recruiting and hiring.

Last month, my organization’s foundation arm released the self-regulatory Principles and Protocols for Trustworthy AI in Recruiting and Hiring, which offer practical guidance for businesses leveraging AI equitably and responsibly. Some of those objectives include:

  • Whatever systems you use for recruiting and hiring, review them frequently for validity and reliability, asking yourself: Are they reducing and managing harmful bias?
  • Facilitate company-wide practices around accountability, compliance, and transparency.
  • Strive for recruiting and hiring tools that are not only safe and secure but also easy to explain and interpret.
  • Consider whether to use an independent certification or audit to improve AI accountability; one record-keeping sketch that supports such audits follows this list.
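
As one illustration of what that accountability objective can look like in practice, below is a hypothetical sketch of a per-decision audit record. The field names and the JSON-lines logging approach are illustrative assumptions, not drawn from the published Principles and Protocols; the underlying point is that an independent certification or audit is far easier when every AI-assisted screening decision leaves a reviewable trail.

    from dataclasses import dataclass, field, asdict
    from datetime import datetime, timezone
    from typing import Optional
    import json

    @dataclass
    class ScreeningDecisionRecord:
        """One auditable row per AI-assisted screening decision."""
        candidate_id: str           # internal identifier, not raw PII
        model_version: str          # exact tool or model build that scored it
        score: float                # raw model output
        recommendation: str         # e.g. "advance", "reject", "human_review"
        candidate_notified: bool    # was the use of AI disclosed to the candidate?
        human_reviewer: Optional[str] = None  # who signed off, if anyone
        human_override: bool = False          # did a person change the outcome?
        timestamp: str = field(
            default_factory=lambda: datetime.now(timezone.utc).isoformat())

    def log_decision(record: ScreeningDecisionRecord, sink) -> None:
        # One JSON line per decision lets a later audit reconstruct who (or
        # what) decided, with which model version, and what the candidate saw.
        sink.write(json.dumps(asdict(record)) + "\n")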

Finally, as you employ AI in the recruiting and hiring process at your organization, I hope you agree that doing so voluntarily, under the auspices of independent industry self-regulation, is often far preferable to being forced to do so under a regime of top-down government regulation. Indeed, it is within the broad umbrella of technological innovation that independent industry self-regulation can thrive, enhancing consumer trust in business and nonprofit organizations of all shapes and sizes.

Originally published in Forbes.

News

Press Release

Justin Connor Named Executive Director for The Center for Industry Self-Regulation, a Foundation Created by BBB National Programs

McLean, VA – May 17, 2022 – Recognizing a timely opportunity to promote and grow the next generation of independent industry self-regulation programs, The Center for Industry Self-Regulation today named Justin Connor as its inaugural Executive Director. The announcement was made by Eric D. Reicin, President...

Read the Press Release