AI Can Be A Force For Good In Recruiting And Hiring New Employees

Dec 15, 2021 by Eric D. Reicin, President & CEO, BBB National Programs

It is one of the biggest conundrums of our time: businesses posting record numbers of available jobs and not being able to fill them. As with most intractable problems, there are multiple forces at play, with one involving the role of technology. Kathryn Dill at the Wall Street Journal recently wrote: “Companies are desperate to hire, and yet some workers still can’t seem to find jobs. Here may be one reason why: The software that sorts through applicants deletes millions of people from consideration.”

This sorting software uses artificial intelligence (AI), a technology widely known but not so widely understood. The use of AI and machine learning in various employment processes is advancing rapidly. New products and services are entering the market at an explosive pace. These new technologies promise dramatic efficiencies and added value while pledging a healthy return on investment.

A challenge with rapid innovation in any industry is the ability of legal and regulatory requirements to keep pace. In the recruiting and hiring process, where AI aids human decision-making and offers welcome relief from managing a deluge of data, company leaders are asking themselves: How can we combine important technological innovation with a proactive approach to employment law requirements?

The need for this approach is not merely a box-checking exercise. A Harvard Business School study found that 88% of employers believe qualified applicants are being filtered out by screening software. And beyond missing out on good candidates, using this type of software also exposes companies to potential legal trouble in the form of discrimination lawsuits. The Federal Trade Commission (FTC) noted that this “apparently ‘neutral’ technology can produce troubling outcomes — including discrimination by race or other legally protected classes.”

A Brookings Institution report on auditing employment algorithms for discrimination offered the following assessment: “Speech recognition models have demonstrated clear biases against African Americans and potential problems across dialectical and regional variations of speech. Commercial AI facial analysis, aside from being largely pseudoscientific, has shown clear disparities across skin color and is highly concerning for people with disabilities.”

Scrutiny over these systems is on the rise, and I am not surprised.

While the use of AI for recruitment and hiring has long been on the radar of federal regulators, the issue is gaining steam. In December 2020, 10 U.S. Senators sent a letter to the then-EEOC Chair raising concerns about the design, use, and effects of hiring technologies and asking for information about the EEOC’s authority and capacity to conduct the necessary oversight and research on this topic. Then, in January 2021, President Biden elevated Commissioner Charlotte Burrows to EEOC Chair.

The EEOC has acknowledged that the most relevant guidance document — Uniform Guidelines on Employee Selection Procedures — is over 40 years old and needs a refresh, something that EEOC Commissioner Keith Sonderling made clear recently: “As a public servant I am committed to ensuring that AI helps eliminate rather than exacerbate discrimination in the workplace, and as an EEOC Commissioner I am committed to providing clarity for those who have long been asking.”

Recently, Burrows announced that the EEOC would be launching an initiative on AI and algorithmic fairness, which will include listening sessions, research, the collection of “promising practices” and, ultimately, some form of technical assistance. It is unclear whether the new initiative will include an update to the Uniform Guidelines.

Meanwhile, some states are entering the fray, using data privacy laws, discrimination laws, and blanket prohibitions on traditional hiring practices (e.g., Ban-the-Box laws, salary history bans) to limit discriminatory impact. The results are mixed, but at a minimum, they create an unwieldy array of regulations that employers are required to follow, with commentators suggesting that more are on the horizon.


What Employers Can Do

Studies show that 99% of Fortune 500 companies rely on talent-sifting software, and 55% of human resource leaders in the U.S. use predictive algorithms to support hiring. But not every company has the resources to vet and re-vet the AI hiring systems they use. Here are some steps companies can take when it comes to using AI for hiring:

  • Apply existing law to algorithmic decision-making. Jenny Yang, director of the U.S. Department of Labor’s Office of Federal Contract Compliance Programs (OFCCP) and former EEOC Chair, explained while she was in the private sector that even though algorithmic models “do not fit neatly within our existing laws,” there is still room to apply current law to these practices. Yang says, “employers need to ensure that both the criteria for selection and the performance measures are both fair and job-related.”
  • Develop and modify the inputs fed into your hiring programs and algorithms. Are these inputs job-related? Do they promote or impede diversity objectives? Do the data outputs follow robust privacy and data governance practices? What standards does your organization follow to ensure the algorithms come as close as possible to bias neutrality?
  • Look for ways to strengthen your accountability structure. This could include auditing automated tools on a regular basis, either with in-house resources or a third party (a minimal example of such a check is sketched after this list). What accountability steps do you take when purchasing AI applications? What human oversight is necessary?
  • As a matter of transparency and fairness, consider what applicants are told about the use of AI. You may want to notify applicants that AI will be used to analyze their application materials or interviews and evaluate their candidacy.
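
One concrete, widely used audit heuristic comes from the Uniform Guidelines on Employee Selection Procedures mentioned above: the “four-fifths rule,” under which a selection rate for any protected group that falls below 80% of the rate for the group with the highest rate is generally regarded as evidence of adverse impact. The Python sketch below shows what a minimal periodic check of a screening tool’s outcomes might look like; the group labels and applicant counts are hypothetical, for illustration only, and any real audit would require legal and statistical review.

```python
# A minimal sketch of a periodic adverse-impact check based on the
# "four-fifths rule" from the Uniform Guidelines on Employee Selection
# Procedures. All group names and counts below are hypothetical.

def selection_rates(outcomes):
    """Map each group to its selection rate (advanced / screened)."""
    return {group: advanced / screened
            for group, (advanced, screened) in outcomes.items()}

def four_fifths_check(outcomes, threshold=0.8):
    """Flag groups whose rate falls below 80% of the highest group's rate."""
    rates = selection_rates(outcomes)
    best = max(rates.values())
    return {group: (rate, rate / best >= threshold)
            for group, rate in rates.items()}

# Hypothetical screening-tool results: group -> (advanced, screened)
outcomes = {
    "group_a": (48, 120),   # 40.0% advance rate
    "group_b": (27, 110),   # ~24.5% advance rate, ~61% of group_a's rate
}

for group, (rate, passes) in four_fifths_check(outcomes).items():
    status = "ok" if passes else "POTENTIAL ADVERSE IMPACT: investigate"
    print(f"{group}: selection rate {rate:.1%} -> {status}")
```

A failing ratio is not proof of discrimination, but it is exactly the kind of signal a regular in-house or third-party audit should surface for human review.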


At the end of the day, I do not think the best solution for business is going to be top-down government regulation. That is why the hiring space, particularly the use of AI for recruiting and hiring, is ripe for industry self-regulation. When business experiences incredible technological innovation that presents significant challenges, independent industry self-regulation can thrive, protecting consumers and enhancing their trust in business. Independent industry self-regulation of AI for recruiting and hiring can make it a force for good — for job candidates, HR executives, and legal and compliance professionals in businesses and nonprofit organizations of all shapes and sizes.

Originally published on Forbes.
