Developing Principles and Protocols for Recruiting and Hiring with AI
Aug 25, 2023 by Eric D. Reicin, President & CEO, BBB National Programs
Since I first wrote about the use of artificial intelligence tools in the recruiting and hiring process in 2021, the adoption of AI across business functions has spiked dramatically. Generative AI, in particular, has taken center stage, even raising existential questions in the minds of its creators.
Now, more than ever, business and nonprofit leaders need a local, state, federal, and international roadmap to follow for the use of AI – not just to play defense and reduce their risk of legal liability but also to play offense and to inculcate the responsible use of AI and machine learning in their organizations. Trustworthy AI is quickly becoming essential in many corporate and nonprofit functions, and its use in recruiting and hiring should be beyond reproach.
No doubt, the landscape on this issue is always evolving, and the U.S. Equal Employment Opportunity Commission’s (EEOC) recent action on AI is noteworthy. In May 2023, the EEOC issued new technical guidance demonstrating how Title VII – which prohibits employers from discrimination based on “race, color, religion, sex (including pregnancy, sexual orientation, and gender identity), or national origin” – applies to companies using AI-based decision-making tools around employment.
This EEOC guidance follows the agency’s 2021 Initiative on Artificial Intelligence and Algorithmic Fairness, which aims to ensure that technologies like AI are not discriminatory, and it reaffirms my long-held belief that employers bear the ultimate accountability for the responsible use of these tools. Even though AI tools may be developed by outside vendors, their responsible use is squarely in the hands of the companies doing the recruiting and hiring. To put a finer legal point on it, companies are responsible for the work of the vendor under an agency liability theory.
As a business or nonprofit leader, it is important for you to understand why and how you are going to use an AI tool for recruiting and hiring. Beyond the threshold query of “Does it work?” consider asking yourself these foundational questions:
- How much influence does this tool have on the hiring decision?
- Is a candidate made aware that an AI or machine-learning filtering technology tool is being used during the consideration process?
- Does the AI-enabled tool allow for flexibility when it comes to disability or other factors under relevant law? For instance, can the job applicant expect ADA-compliant recruiting conditions such as an appropriate reasonable accommodation under law?
- What is the role of human oversight?
- Since privacy is often of heightened concern for a job seeker, does the tool respect and follow relevant privacy laws and regulations such as rapidly evolving state privacy laws and municipal rules on AI and employment? For example, New York City’s Local Law 144, as reported by Bloomberg, “prohibits employers from using automated tools to screen candidates unless the software has undergone an independent review to check for bias against protected groups.”
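The independent bias review that Local Law 144 contemplates typically turns on comparing selection rates across demographic groups, and the EEOC has long pointed to the "four-fifths rule" as a rough screen for adverse impact. As a minimal illustrative sketch only (the group labels, counts, and function names here are hypothetical, and a real audit involves far more than this arithmetic), such a check might look like:

```python
# Sketch of an adverse-impact ("four-fifths rule") check on aggregate
# screening outcomes. Group labels and counts below are illustrative.

def selection_rates(outcomes):
    """Map each group to its selection rate: selected / total applicants."""
    return {g: sel / total for g, (sel, total) in outcomes.items()}

def impact_ratios(outcomes):
    """Each group's selection rate divided by the highest group's rate."""
    rates = selection_rates(outcomes)
    top = max(rates.values())
    return {g: r / top for g, r in rates.items()}

def flag_adverse_impact(outcomes, threshold=0.8):
    """Flag groups whose impact ratio falls below the four-fifths threshold."""
    return {g: ratio for g, ratio in impact_ratios(outcomes).items()
            if ratio < threshold}

# Hypothetical aggregate data: (selected, total applicants) per group.
outcomes = {
    "group_a": (48, 100),   # 48% selection rate
    "group_b": (30, 100),   # 30% selection rate
}

print(flag_adverse_impact(outcomes))  # {'group_b': 0.625} — 0.30/0.48 < 0.8
```

A result below the 0.8 threshold is not itself proof of discrimination; it is a signal that the tool and its inputs warrant closer human review.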
As Axios reported, “A team of Stanford researchers is warning that leading AI models are woefully non-compliant with responsible AI standards, as represented by the European Union's Artificial Intelligence Act.” This proposed law is currently in a late draft form but, by many accounts, is on a fast-moving train to adoption. In addition, the European Commission considers AI systems in employment and worker management a “high-risk” area, which raises the stakes for global corporations and nonprofits to get things right.
Closer to home, I expect that the federal government will soon focus a bit more on the use of AI and tools in managing employee job performance as well as in the consumer surveillance arena. The General Counsel of the National Labor Relations Board, for example, takes the position that certain AI technology tools that employers use to manage employee job performance may be problematic. Similarly, the White House Office of Science and Technology Policy is gathering data on the subject to formulate Administration policy, and the Federal Trade Commission is considering proposed rules regarding how companies monitor consumer behavior.
With the above in mind, a fair question to ask now is: When developments are so rapid, is it ever possible to stay completely in sync?
Certainly, it is challenging for fast-moving innovators in any industry to stay in step with regulatory requirements, or even with soft law. This is especially true in high-stakes functions such as recruiting and hiring.
Last month, my organization’s foundation arm released self-regulatory Principles and Protocols for Trustworthy AI in Recruiting and Hiring, which offers practical guidance for businesses on leveraging AI equitably and responsibly. Some of those objectives include:
- Whatever systems you use for recruiting and hiring, review them frequently for validity and reliability, asking yourself: Are they reducing and managing harmful bias?
- Facilitate company-wide practices around accountability, compliance, and transparency.
- Strive for recruiting and hiring tools that are not only safe and secure but are also easy to explain and interpret.
- Consider whether to use an independent certification or audit to improve AI accountability.
Finally, as you employ AI in the recruiting and hiring process at your organization, I hope you agree that doing so voluntarily, under the auspices of independent industry self-regulation, is often far preferable to being forced to do so under a regime of top-down government regulation. Indeed, it is within the broad umbrella of technological innovation that independent industry self-regulation can thrive, enhancing consumer trust in business and nonprofit organizations of all shapes and sizes.
Originally published in Forbes.