AI And Machine Learning In The Workplace: Preparing For 2023

Dec 26, 2022 by Eric D. Reicin, President & CEO, BBB National Programs

In recent years, government scrutiny of artificial intelligence (AI) tools used in recruiting and hiring has increased. Since I wrote about this topic last year, there has been significant activity within several federal government agencies regarding the use of AI and machine learning in the employment context.

A better understanding of these actions can help business leaders reduce their risk of legal liability and better understand how to use AI and machine learning responsibly in their organizations.

The Equal Employment Opportunity Commission (EEOC) has been particularly active through its initiative on AI and algorithmic fairness and its joint HIRE initiative with the U.S. Department of Labor. In May 2022, the EEOC released technical guidance on the potential Americans with Disabilities Act (ADA) implications of these tools. The guidance offers employers recommended guardrails for using AI technologies in their hiring and workforce management systems.

The Federal Trade Commission (FTC) also has AI used for recruiting and hiring on its radar. In September 2022, speaking at the annual conference for a unit of my organization, BBB National Programs’ National Advertising Division, FTC Commissioner Alvaro Bedoya said: “Some say that our unfairness authority does not reach discrimination... Congress did not define Section 5 on the basis of subject matter. Rather, Congress defined unfairness to block any conduct that substantially injures consumers, that is not reasonably avoidable, and that is not offset by a countervailing benefit.”

In October, the White House Office of Science and Technology Policy released a “Blueprint for an AI Bill of Rights,” a nonbinding road map for government agencies as they create new regulations or opine on existing ones. The framework is “a set of five principles and associated practices to help guide the design, use, and deployment of automated systems to protect the rights of the American public in the age of AI.” Since the release of the blueprint, several federal government agencies, such as the U.S. Department of Labor, have begun to release further guidance taking it into account.

More recently, National Labor Relations Board General Counsel Jennifer A. Abruzzo has weighed in, opining that extensive electronic monitoring and algorithmic management of employees can interfere with the exercise of employee rights under Section 7 of the National Labor Relations Act (NLRA) when it significantly impairs or negates employees' ability to engage in NLRA-protected activity and to keep that activity confidential from their employer, if they so choose.

State and local jurisdictions are also setting rules. For example, New York City's bias audit law, which is scheduled to take effect in April 2023, will require employers to conduct an independent bias audit before using AI hiring tools.


Recommendations For Business Leaders

1. Ask questions before adopting a tool.

Employers are on the hook when they choose to use a vendor: under relevant law, they must ensure the machine learning tools they select are used in a nondiscriminatory way. Business leaders should ask vendors for validation studies, ask questions to understand how the technology will be applied, and confirm that any testing the vendor performs is related to the jobs being filled.

If you are going to purchase a tool, it's crucial to understand how you will use it: Is it a screening tool, a tool merely to enhance the experience of the candidate and hiring manager, or a productivity tool for the workplace? How much weight does the tool carry in the decision: is it one factor, or the only factor, in screening resumes and candidates? What notice do candidates receive, what audit provisions apply, and what appropriate accommodation procedures are hardwired into the tool? And what is the role of human oversight, whether by the vendor or by your HR or legal department?

Additionally, consider what problems you are trying to solve. Will the machine learning solution solve that problem in a way that limits legal liability? Is it complementary to your human capital approach?

2. Go back to the basics.

What did your federally required affirmative action plan say you would do? Perhaps it committed you to attending minority job fairs, expanding the candidate pool, and writing job descriptions that are nondiscriminatory and focused on essential functions. How do you take those old-school technologies and activities and bring them up to date?

3. Be transparent about the accommodations process.

Make clear in written instructions how applicants and employees can request reasonable accommodations under federal and state law.

4. Rigorously self-test.

Conduct rigorous self-testing of hiring assessments before and after deployment, along with near-continual audits for disparities once the tools are in use.
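To make the idea of a disparity audit concrete, here is a minimal sketch of one common self-test: comparing selection rates across demographic groups against the EEOC's well-known "four-fifths" benchmark for adverse impact. The column names, sample data, and 0.8 threshold are illustrative assumptions, not a prescribed methodology; a real audit should follow your counsel's and auditor's approach.

```python
# Illustrative disparity self-test: compare selection rates across groups and
# flag any group whose rate falls below 80% of the highest group's rate.
# "group" and "selected" are hypothetical column names for applicant-flow data.
import pandas as pd


def selection_rate_audit(df: pd.DataFrame,
                         group_col: str = "group",
                         outcome_col: str = "selected",
                         threshold: float = 0.8) -> pd.DataFrame:
    """Return per-group selection rates and their ratio to the highest-rate group."""
    rates = df.groupby(group_col)[outcome_col].mean().rename("selection_rate")
    result = rates.to_frame()
    result["impact_ratio"] = result["selection_rate"] / result["selection_rate"].max()
    result["flag"] = result["impact_ratio"] < threshold  # potential adverse impact
    return result.sort_values("impact_ratio")


if __name__ == "__main__":
    # Hypothetical outcomes: 1 = advanced past the AI screen, 0 = screened out.
    applicants = pd.DataFrame({
        "group":    ["A"] * 100 + ["B"] * 100,
        "selected": [1] * 60 + [0] * 40 + [1] * 42 + [0] * 58,
    })
    print(selection_rate_audit(applicants))
```

In this hypothetical data, group B's selection rate (42%) is only 70% of group A's (60%), so it is flagged for further review. A statistical flag like this is a starting point for investigation, not a legal conclusion.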

I believe that employers overwhelmingly want to comply with the law. But these are complicated and evolving technologies, so a one-size-fits-all approach might not work for you. While there are a number of pitfalls to avoid, AI and machine learning technology, used correctly, can allow organizations to harness the power of data, make sound decisions efficiently, and reduce sourcing and recruiting time in the current environment. These tools can also help present the same selection procedure or interview questions to each candidate, reduce the implicit bias that comes with subjective decision-making, and expand the pool of applicants in a manner that helps companies meet their diversity goals.

Originally published in Forbes
