AI Can Be A Force For Good In Recruiting And Hiring New Employees

Dec 15, 2021 by Eric D. Reicin, President & CEO, BBB National Programs

It is one of the biggest conundrums of our time: businesses posting record numbers of available jobs and not being able to fill them. As with most intractable problems, there are multiple forces at play, with one involving the role of technology. Kathryn Dill at the Wall Street Journal recently wrote: “Companies are desperate to hire, and yet some workers still can’t seem to find jobs. Here may be one reason why: The software that sorts through applicants deletes millions of people from consideration.”

This sorting software uses artificial intelligence (AI), a technology widely known but not so widely understood. The use of AI and machine learning in various employment processes is advancing rapidly. New products and services are entering the market at an explosive pace. These new technologies promise dramatic efficiencies and added value while pledging a healthy return on investment.

A challenge for rapid innovation in any industry is the ability for legal and regulatory requirements to keep pace. In the recruiting and hiring process, where AI provides aid to human decision-making and a welcome relief to managing a deluge of data, company leaders are asking themselves: How can we combine important technological innovation with a proactive approach to employment law requirements?

The need for this approach is not merely a box-checking exercise. A Harvard Business School study found that 88% of employers believe qualified applicants were filtered out by the screening software. And beyond missing out on good candidates, using this type of software also exposes companies to potential legal trouble in the form of discrimination lawsuits. The Federal Trade Commission (FTC) noted that this "apparently ‘neutral’ technology can produce troubling outcomes — including discrimination by race or other legally protected classes.”

A Brookings Institution report on auditing employment algorithms for discrimination offered the following assessment: “Speech recognition models have demonstrated clear biases against African Americans and potential problems across dialectical and regional variations of speech. Commercial AI facial analysis, aside from being largely pseudoscientific, has shown clear disparities across skin color and is highly concerning for people with disabilities.”

Scrutiny over these systems is on the rise, and I am not surprised.

While the use of AI for recruitment and hiring has long been on the radar of federal regulators, the issue is gaining steam. In December 2020, 10 U.S. Senators sent a letter to the then-EEOC Chair about the design, use, and effects of hiring technologies, asking for information about the EEOC’s authority and capacity to conduct the necessary oversight and research on this topic. Then, in January 2021, President Biden elevated Commissioner Charlotte Burrows to EEOC Chair.

The EEOC has acknowledged that the most relevant guidance document — Uniform Guidelines on Employee Selection Procedures — is over 40 years old and needs a refresh, something that EEOC Commissioner Keith Sonderling made clear recently: “As a public servant I am committed to ensuring that AI helps eliminate rather than exacerbate discrimination in the workplace, and as an EEOC Commissioner I am committed to providing clarity for those who have long been asking.”

Recently, Burrows announced that the EEOC will launch an initiative on AI and algorithmic fairness, which will include listening sessions, research, the collection of “promising practices” and, ultimately, some form of technical assistance. It is unclear whether the new initiative will include an update to the Uniform Guidelines.

Meanwhile, some states are entering the fray, using data privacy law, discrimination law, and blanket prohibitions on traditional hiring practices (e.g., Ban-the-Box and salary-history bans) to limit discriminatory impact. The results are mixed, but at a minimum they create an unwieldy array of regulations that employers must follow, with commentators suggesting that more is on the horizon.


What Employers Can Do

Studies show that 99% of Fortune 500 companies rely on the aid of talent-sifting software, and 55% of human resource leaders in the U.S. use predictive algorithms to support hiring. But not every company has the resources to vet and re-vet the AI hiring systems they use. Here are some steps companies can take when using AI for hiring:

  • Apply existing law to algorithmic decision-making. Jenny Yang, director of the U.S. Department of Labor OFCCP and former EEOC Chair, explained while she was in the private sector that even though algorithmic models “do not fit neatly within our existing laws,” there is still room to apply current law to these practices. Yang says, “employers need to ensure that both the criteria for selection and the performance measures are fair and job-related.”
  • Develop and modify the inputs fed into your hiring programs and algorithms. Are these inputs job-related? Do they promote or impede diversity objectives? Do the data outputs follow robust privacy and data governance? What standards does your organization follow to ensure the algorithms are nearing bias neutral?
  • Look for ways to strengthen your accountability structure. This could include auditing automated tools on a regular basis, either with in-house resources or a third party. What accountability steps do you take when purchasing AI applications? What human oversight is necessary?
  • As a matter of transparency and fairness, consider what is told to applicants about the use of AI. You may want to notify applicants that AI will be used to analyze their application materials or interviews and evaluate their candidacy.
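The regular auditing suggested above can start with a simple disparate-impact check. One common benchmark is the "four-fifths rule" from the Uniform Guidelines on Employee Selection Procedures: a selection rate for any group that is less than 80% of the highest group's rate is generally regarded as evidence of adverse impact. The sketch below illustrates the arithmetic only; the group names and numbers are hypothetical, and a real audit would also involve statistical significance testing and legal review.

```python
# Illustrative adverse impact check based on the "four-fifths rule."
# A group is flagged when its selection rate is below 80% of the
# highest group's selection rate.

def selection_rates(outcomes):
    """outcomes maps group name -> (applicants, selected)."""
    return {g: sel / apps for g, (apps, sel) in outcomes.items()}

def adverse_impact(outcomes, threshold=0.8):
    """Return groups whose rate ratio to the top group falls below threshold."""
    rates = selection_rates(outcomes)
    highest = max(rates.values())
    return {g: round(r / highest, 2) for g, r in rates.items()
            if r / highest < threshold}

# Hypothetical screening data from an automated resume filter.
outcomes = {
    "group_a": (200, 60),   # 30% selected
    "group_b": (150, 30),   # 20% selected
}

print(adverse_impact(outcomes))  # group_b: 0.20 / 0.30 ≈ 0.67, below 0.8
```

Running this flags group_b with a ratio of roughly 0.67, well under the four-fifths benchmark, which in a real audit would prompt a closer look at the screening criteria producing that disparity.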


At the end of the day, I do not think the best solution for business will be top-down government regulation. That is why the hiring space, particularly the use of AI for recruiting and hiring, is ripe for industry self-regulation. When business undergoes rapid technological innovation that presents significant challenges, independent industry self-regulation can thrive, protecting consumers and enhancing their trust in business. Independent industry self-regulation of AI for recruiting and hiring can make it a force for good: for job candidates, HR executives, and legal and compliance professionals in businesses and nonprofit organizations of all shapes and sizes.

Originally published on Forbes.
