AI Can Be A Force For Good In Recruiting And Hiring New Employees

Dec 15, 2021 by Eric D. Reicin, President & CEO, BBB National Programs

It is one of the biggest conundrums of our time: businesses posting record numbers of available jobs and not being able to fill them. As with most intractable problems, there are multiple forces at play, with one involving the role of technology. Kathryn Dill at the Wall Street Journal recently wrote: “Companies are desperate to hire, and yet some workers still can’t seem to find jobs. Here may be one reason why: The software that sorts through applicants deletes millions of people from consideration.”

This sorting software uses artificial intelligence (AI), a technology widely known but not so widely understood. The use of AI and machine learning in various employment processes is advancing rapidly. New products and services are entering the market at an explosive pace. These new technologies promise dramatic efficiencies and added value while pledging a healthy return on investment.

A challenge with rapid innovation in any industry is that legal and regulatory requirements struggle to keep pace. In the recruiting and hiring process, where AI aids human decision-making and offers welcome relief from managing a deluge of data, company leaders are asking themselves: How can we combine important technological innovation with a proactive approach to employment law requirements?

The need for this approach is not merely a box-checking exercise. A Harvard Business School study found that 88% of employers believe qualified applicants were filtered out by the screening software. And beyond missing out on good candidates, using this type of software also exposes companies to potential legal trouble in the form of discrimination lawsuits. The Federal Trade Commission (FTC) noted that this “apparently ‘neutral’ technology can produce troubling outcomes — including discrimination by race or other legally protected classes.”

A Brookings Institution report on auditing employment algorithms for discrimination offered the following assessment: “Speech recognition models have demonstrated clear biases against African Americans and potential problems across dialectical and regional variations of speech. Commercial AI facial analysis, aside from being largely pseudoscientific, has shown clear disparities across skin color and is highly concerning for people with disabilities.”

Scrutiny over these systems is on the rise, and I am not surprised.

While the use of AI for recruitment and hiring has long been on the radar of federal regulators, the issue is gaining steam. In December 2020, 10 U.S. Senators sent a letter to the then-EEOC Chair about the design, use, and effects of hiring technologies, asking for information about the EEOC’s authority and capacity to conduct the necessary oversight and research on this topic. Then, in January 2021, President Biden elevated Commissioner Charlotte Burrows to EEOC Chair.

The EEOC has acknowledged that the most relevant guidance document — Uniform Guidelines on Employee Selection Procedures — is over 40 years old and needs a refresh, something that EEOC Commissioner Keith Sonderling made clear recently: “As a public servant I am committed to ensuring that AI helps eliminate rather than exacerbate discrimination in the workplace, and as an EEOC Commissioner I am committed to providing clarity for those who have long been asking.”

Recently, Burrows announced that the EEOC would be launching an initiative on AI and algorithmic fairness, which will include listening sessions, research, the collection of “promising practices” and, ultimately, some form of technical assistance. It is unclear whether the new initiative will include an update to the Uniform Guidelines.

Meanwhile, some states are entering the fray, using data privacy law, discrimination law, and prohibitions on traditional hiring practices (e.g., Ban-the-Box laws and salary history bans) to limit discriminatory impact. The results are mixed, but at a minimum these efforts create an unwieldy patchwork of regulations that employers are required to follow, with commentators suggesting that more is on the horizon.

 

What Employers Can Do

Studies show that 99% of Fortune 500 companies rely on the aid of talent-sifting software, and 55% of human resource leaders in the U.S. use predictive algorithms to support hiring. But not every company has the resources to vet and re-vet the AI hiring systems they use. Here are some steps companies can take when it comes to using AI for hiring:

  • Apply existing law to algorithmic decision-making. Jenny Yang, director of the U.S. Department of Labor’s Office of Federal Contract Compliance Programs (OFCCP) and a former EEOC Chair, explained while she was in the private sector that even though algorithmic models “do not fit neatly within our existing laws,” there is still room to apply current law to these practices. As Yang puts it, “employers need to ensure that both the criteria for selection and the performance measures are both fair and job-related.”
  • Develop and modify the inputs fed into your hiring programs and algorithms. Are these inputs job-related? Do they promote or impede diversity objectives? Do the data outputs follow robust privacy and data governance? What standards does your organization follow to ensure the algorithms are nearing bias neutral?
  • Look for ways to strengthen your accountability structure. This could include auditing automated tools on a regular basis, either with in-house resources or a third party. What accountability steps do you take when purchasing AI applications? What human oversight is necessary?
  • As a matter of transparency and fairness, consider what is told to applicants about the use of AI. You may want to notify applicants that AI will be used to analyze their application materials or interviews and evaluate their candidacy.
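For the auditing step above, one long-standing heuristic comes directly from the Uniform Guidelines discussed earlier: the “four-fifths rule,” under which a group’s selection rate below 80% of the highest group’s rate is generally treated as evidence of adverse impact. The sketch below shows the arithmetic of that check in Python; the group labels and applicant counts are hypothetical, for illustration only.

```python
from collections import Counter

def adverse_impact_ratios(candidates):
    """Compute each group's selection rate relative to the
    highest-rate group, per the "four-fifths" (80%) rule heuristic
    from the Uniform Guidelines on Employee Selection Procedures.

    candidates: iterable of (group_label, was_selected) pairs.
    """
    applied = Counter(group for group, _ in candidates)
    selected = Counter(group for group, hired in candidates if hired)
    # Selection rate per group: hires divided by applicants.
    rates = {g: selected[g] / applied[g] for g in applied}
    top = max(rates.values())
    # A ratio below 0.8 is commonly treated as evidence of adverse impact.
    return {g: rate / top for g, rate in rates.items()}

# Hypothetical applicant data: (group label, was_selected).
data = ([("A", True)] * 48 + [("A", False)] * 52 +
        [("B", True)] * 30 + [("B", False)] * 70)
print(adverse_impact_ratios(data))  # group B: 0.30/0.48 = 0.625 < 0.8
```

In this hypothetical, group B’s selection rate (30%) is only 62.5% of group A’s (48%), tripping the four-fifths threshold. A real audit would go further, with statistical significance testing and a job-relatedness analysis, but even this simple ratio is a useful first-pass screen for automated tools.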

 

At the end of the day, I do not think the best solution for business is going to be top-down government regulation. That is why the hiring space, particularly the use of AI for recruiting and hiring, is ripe for industry self-regulation. When a business sector undergoes rapid technological innovation that presents significant challenges, independent industry self-regulation can thrive, protecting consumers and enhancing their trust in business. Independent industry self-regulation of AI for recruiting and hiring can make it a force for good — for job candidates, HR executives, and legal and compliance professionals in businesses and nonprofit organizations of all shapes and sizes.

Originally published on Forbes.
