AI Can Be A Force For Good In Recruiting And Hiring New Employees

Dec 15, 2021 by Eric D. Reicin, President & CEO, BBB National Programs

It is one of the biggest conundrums of our time: businesses posting record numbers of available jobs and not being able to fill them. As with most intractable problems, there are multiple forces at play, with one involving the role of technology. Kathryn Dill at the Wall Street Journal recently wrote: “Companies are desperate to hire, and yet some workers still can’t seem to find jobs. Here may be one reason why: The software that sorts through applicants deletes millions of people from consideration.”

This sorting software uses artificial intelligence (AI), a technology widely known but not so widely understood. The use of AI and machine learning in various employment processes is advancing rapidly. New products and services are entering the market at an explosive pace. These new technologies promise dramatic efficiencies and added value while pledging a healthy return on investment.

A challenge that accompanies rapid innovation in any industry is the ability of legal and regulatory requirements to keep pace. In the recruiting and hiring process, where AI aids human decision-making and offers welcome relief from managing a deluge of data, company leaders are asking themselves: How can we combine important technological innovation with a proactive approach to employment law requirements?

The need for this approach is not merely a box-checking exercise. A Harvard Business School study found that 88% of employers believe qualified applicants are being filtered out by screening software. And beyond missing out on good candidates, using this type of software also exposes companies to potential legal trouble in the form of discrimination lawsuits. The Federal Trade Commission (FTC) noted that this "apparently ‘neutral’ technology can produce troubling outcomes — including discrimination by race or other legally protected classes.”

A Brookings Institution report on auditing employment algorithms for discrimination offered the following assessment: “Speech recognition models have demonstrated clear biases against African Americans and potential problems across dialectical and regional variations of speech. Commercial AI facial analysis, aside from being largely pseudoscientific, has shown clear disparities across skin color and is highly concerning for people with disabilities.”

Scrutiny of these systems is on the rise, and I am not surprised.

While the use of AI for recruitment and hiring has long been on the radar of federal regulators, the issue is gaining steam. In December 2020, 10 U.S. Senators sent a letter to the then-EEOC Chair about the design, use, and effects of hiring technologies, asking for information about the EEOC’s authority and capacity to conduct the necessary oversight and research on this topic. Then, in January 2021, President Biden elevated Commissioner Charlotte Burrows to EEOC Chair.

The EEOC has acknowledged that the most relevant guidance document — Uniform Guidelines on Employee Selection Procedures — is over 40 years old and needs a refresh, something that EEOC Commissioner Keith Sonderling made clear recently: “As a public servant I am committed to ensuring that AI helps eliminate rather than exacerbate discrimination in the workplace, and as an EEOC Commissioner I am committed to providing clarity for those who have long been asking.”

Recently, Burrows announced that the EEOC would be launching an initiative on AI and algorithmic fairness, which will include listening sessions, research, the collection of “promising practices” and, ultimately, some form of technical assistance. It is unclear whether the new initiative will include an update to the Uniform Guidelines.

Meanwhile, some states are entering the fray, using data privacy laws, discrimination laws, and blanket prohibitions on traditional hiring practices (e.g., Ban-the-Box and salary history bans) to limit discriminatory impact. The results are mixed, but at a minimum they create an unwieldy array of regulations that employers are required to follow, with commentators suggesting that more is on the horizon.

What Employers Can Do

Studies show that 99% of Fortune 500 companies rely on talent-sifting software, and 55% of human resources leaders in the U.S. use predictive algorithms to support hiring. But not every company has the resources to vet and re-vet the AI hiring systems it uses. Here are some steps companies can take when using AI for hiring:

  • Apply existing law to algorithmic decision-making. Jenny Yang, director of the U.S. Department of Labor’s Office of Federal Contract Compliance Programs (OFCCP) and former EEOC Chair, explained while she was in the private sector that even though algorithmic models “do not fit neatly within our existing laws,” there is still room to apply current law to these practices. Yang says, “employers need to ensure that both the criteria for selection and the performance measures are both fair and job-related.”
  • Develop and modify the inputs fed into your hiring programs and algorithms. Are these inputs job-related? Do they promote or impede diversity objectives? Are the data outputs subject to robust privacy and data governance practices? What standards does your organization follow to ensure the algorithms approach bias neutrality?
  • Look for ways to strengthen your accountability structure. This could include auditing automated tools on a regular basis, either with in-house resources or a third party (a minimal example of one such audit check is sketched after this list). What accountability steps do you take when purchasing AI applications? What human oversight is necessary?
  • As a matter of transparency and fairness, consider what is told to applicants about the use of AI. You may want to notify applicants that AI will be used to analyze their application materials or interviews and evaluate their candidacy.
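
To make the auditing step above concrete, here is a minimal sketch, in Python, of one common screening check: the "four-fifths rule" adverse-impact test associated with the Uniform Guidelines on Employee Selection Procedures discussed earlier. Everything in it, including the group labels, counts, and function names, is a hypothetical illustration rather than a real vendor API, and a ratio below 0.8 is a rule-of-thumb flag for closer review, not a legal determination.

```python
# A minimal sketch of an adverse-impact ("four-fifths rule") audit.
# All group labels, counts, and function names are hypothetical illustrations.
from collections import Counter

def selection_rates(outcomes):
    """Compute the selection rate per group from (group, selected) pairs,
    where `selected` indicates the screening tool advanced the applicant."""
    applicants = Counter(group for group, _ in outcomes)
    advanced = Counter(group for group, selected in outcomes if selected)
    return {g: advanced[g] / applicants[g] for g in applicants}

def adverse_impact_ratios(rates):
    """Compare each group's selection rate to the highest-rate group.
    Under the four-fifths rule of thumb, a ratio below 0.8 flags
    potential adverse impact that merits closer review."""
    benchmark = max(rates.values())
    if benchmark == 0:  # no one was advanced; nothing meaningful to compare
        return {g: 0.0 for g in rates}
    return {g: rate / benchmark for g, rate in rates.items()}

if __name__ == "__main__":
    # Hypothetical screening outcomes: (demographic group, advanced?)
    outcomes = (
        [("group_a", True)] * 48 + [("group_a", False)] * 52 +
        [("group_b", True)] * 30 + [("group_b", False)] * 70
    )
    rates = selection_rates(outcomes)
    for group, ratio in adverse_impact_ratios(rates).items():
        flag = "REVIEW" if ratio < 0.8 else "ok"
        print(f"{group}: selection rate {rates[group]:.2f}, ratio {ratio:.2f} [{flag}]")
```

Even a check this simple helps make an accountability structure tangible: run it against every new model version, keep the output alongside the vendor's own validation documentation, and escalate any flagged result for human and legal review.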

At the end of the day, I do not think the best solution for business is going to be top-down government regulation. That is why the hiring space, particularly the use of AI for recruiting and hiring, is ripe for industry self-regulation. When business experiences rapid technological innovation that presents significant challenges, independent industry self-regulation can thrive, protecting consumers and enhancing their trust in business. Independent industry self-regulation of AI for recruiting and hiring can make it a force for good — for job candidates, HR executives, and legal and compliance professionals in businesses and nonprofit organizations of all shapes and sizes.

Originally published on Forbes.
