AI Can Be A Force For Good In Recruiting And Hiring New Employees

Dec 15, 2021 by Eric D. Reicin, President & CEO, BBB National Programs

It is one of the biggest conundrums of our time: businesses are posting record numbers of available jobs, yet they cannot fill them. As with most intractable problems, multiple forces are at play, one of which involves the role of technology. Kathryn Dill at the Wall Street Journal recently wrote: “Companies are desperate to hire, and yet some workers still can’t seem to find jobs. Here may be one reason why: The software that sorts through applicants deletes millions of people from consideration.”

This sorting software uses artificial intelligence (AI), a technology widely known but not so widely understood. The use of AI and machine learning in various employment processes is advancing rapidly. New products and services are entering the market at an explosive pace. These new technologies promise dramatic efficiencies and added value while pledging a healthy return on investment.

A challenge for rapid innovation in any industry is the ability of legal and regulatory requirements to keep pace. In the recruiting and hiring process, where AI aids human decision-making and offers welcome relief in managing a deluge of data, company leaders are asking themselves: How can we combine important technological innovation with a proactive approach to employment law requirements?

The need for this approach is not merely a box-checking exercise. A Harvard Business School study found that 88% of employers believe qualified applicants are being filtered out by screening software. And beyond missing out on good candidates, using this type of software also exposes companies to potential legal trouble in the form of discrimination lawsuits. The Federal Trade Commission (FTC) noted that this “apparently ‘neutral’ technology can produce troubling outcomes — including discrimination by race or other legally protected classes.”

A Brookings Institution report on auditing employment algorithms for discrimination offered the following assessment: “Speech recognition models have demonstrated clear biases against African Americans and potential problems across dialectical and regional variations of speech. Commercial AI facial analysis, aside from being largely pseudoscientific, has shown clear disparities across skin color and is highly concerning for people with disabilities.”

Scrutiny of these systems is on the rise, and I am not surprised.

While the use of AI for recruitment and hiring has long been on the radar of federal regulators, the issue is gaining steam. In December 2020, 10 U.S. Senators sent a letter to the then-EEOC Chair about the design, use, and effects of hiring technologies, asking for information about the EEOC’s authority and capacity to conduct the necessary oversight and research on this topic. Then, in January 2021, President Biden elevated Commissioner Charlotte Burrows to EEOC Chair.

The EEOC has acknowledged that the most relevant guidance document — Uniform Guidelines on Employee Selection Procedures — is over 40 years old and needs a refresh, something that EEOC Commissioner Keith Sonderling made clear recently: “As a public servant I am committed to ensuring that AI helps eliminate rather than exacerbate discrimination in the workplace, and as an EEOC Commissioner I am committed to providing clarity for those who have long been asking.”

Recently, Burrows announced that the EEOC will launch an initiative on AI and algorithmic fairness, which will include listening sessions, research, the collection of “promising practices,” and, ultimately, some form of technical assistance. It is unclear whether the new initiative will include an update to the Uniform Guidelines.

Meanwhile, some states are entering the fray, using data privacy laws, anti-discrimination statutes, and blanket prohibitions on traditional hiring practices (e.g., Ban-the-Box laws, salary history bans) to limit discriminatory impact. The results are mixed, but at a minimum they create an unwieldy array of regulations that employers are required to follow, and commentators suggest that more is on the horizon.


What Employers Can Do

Studies show that 99% of Fortune 500 companies rely on talent-sifting software, and that 55% of human resource leaders in the U.S. use predictive algorithms to support hiring. But not every company has the resources to vet and re-vet the AI hiring systems it uses. Here are steps companies can take, beyond full-scale vetting, when using AI for hiring:

  • Apply existing law to algorithmic decision-making. Jenny Yang, director of the U.S. Department of Labor’s Office of Federal Contract Compliance Programs (OFCCP) and a former EEOC Chair, explained while she was in the private sector that even though algorithmic models “do not fit neatly within our existing laws,” there is still room to apply current law to these practices. As Yang put it, “employers need to ensure that both the criteria for selection and the performance measures are fair and job-related.”
  • Develop and modify the inputs fed into your hiring programs and algorithms. Are these inputs job-related? Do they promote or impede diversity objectives? Are the data outputs subject to robust privacy and data governance? What standards does your organization follow to ensure the algorithms approach bias neutrality?
  • Look for ways to strengthen your accountability structure. This could include regularly auditing automated tools, either with in-house resources or a third party (a minimal example of one such audit check follows this list). What accountability steps do you take when purchasing AI applications? What human oversight is necessary?
  • As a matter of transparency and fairness, consider what applicants are told about the use of AI. You may want to notify applicants that AI will be used to analyze their application materials or interviews and to evaluate their candidacy.
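
To make the auditing step concrete: one widely used first-pass check comes from the Uniform Guidelines discussed above, the so-called four-fifths rule, under which a group whose selection rate falls below 80% of the highest group’s rate is examined for adverse impact. The Python sketch below is a minimal illustration of that check, assuming you have per-applicant screening results; the data and group labels are made up for illustration, and a flagged ratio is a signal for closer review, not a legal conclusion.

    from collections import Counter

    def selection_rates(outcomes):
        """Selection rate (selected / applicants) for each group.
        `outcomes` is an iterable of (group, was_selected) pairs,
        e.g. the per-applicant results of a resume-screening tool."""
        applicants, selected = Counter(), Counter()
        for group, was_selected in outcomes:
            applicants[group] += 1
            selected[group] += was_selected  # bool counts as 0 or 1
        return {g: selected[g] / applicants[g] for g in applicants}

    def four_fifths_check(rates, threshold=0.8):
        """Flag any group whose selection rate falls below `threshold`
        times the highest group's rate -- the classic adverse-impact screen."""
        top = max(rates.values())
        return {g: {"impact_ratio": r / top, "flagged": r / top < threshold}
                for g, r in rates.items()}

    # Illustrative outcomes: 48 of 100 group_a applicants pass the screen,
    # but only 30 of 100 group_b applicants do.
    outcomes = ([("group_a", True)] * 48 + [("group_a", False)] * 52
                + [("group_b", True)] * 30 + [("group_b", False)] * 70)
    print(four_fifths_check(selection_rates(outcomes)))
    # group_b's impact ratio is 0.30 / 0.48 = 0.625, below 0.8, so it is flagged.

A real audit would pair a ratio like this with statistical significance testing and run on a regular cadence across versions of a screening tool, but even a simple, repeatable check gives HR and compliance teams a shared signal to act on.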


At the end of the day, I do not think the best solution for business is going to be top-down government regulation. That is why the hiring space, particularly the use of AI for recruiting and hiring, is ripe for industry self-regulation. When a business sector experiences rapid technological innovation that presents significant challenges, independent industry self-regulation can thrive, protecting consumers and enhancing their trust in business. Independent industry self-regulation of AI for recruiting and hiring can make the technology a force for good — for job candidates, HR executives, and legal and compliance professionals in businesses and nonprofit organizations of all shapes and sizes.

Originally published on Forbes.
