As Congress considers nationwide data privacy legislation, federal agencies roll up their sleeves on AI guidance
2022 PRINDBRF 0527
By Lauren Daming, Esq., Greensfelder, Hemker & Gale PC
Practitioner Insights Commentaries
November 23, 2022
(November 23, 2022) - Attorney Lauren Daming of Greensfelder, Hemker & Gale PC discusses employers' use of artificial intelligence and the lack of uniform regulation, individual agency guidance, and the prospects of pending federal legislation.
Artificial intelligence is increasingly used to automate decision-making for companies, especially in employment decisions. But the efficiency and cost savings the technology promises come with risks, including the lack of a uniform federal regulatory framework.
This uncertainty is causing employers to look to agency enforcement authorities and state legislation for guidance.

Will a federal data privacy law light the way?

The currently pending American Data Privacy and Protection Act (ADPPA) would be the country's first comprehensive federal data protection statute on the books.1 As written, it would dictate specific requirements related to the use of algorithmic decision-making.
While the ADPPA awaits a hearing before the full House, the country is left with a fractured collection of state and federal laws that largely regulate by sector or through fairly narrow applications. The ADPPA would standardize the national approach to data privacy, preempting most of the comprehensive consumer privacy laws already enacted by early-adopter states. But even if the ADPPA is enacted, its exemption of "employee data" would leave a huge swath of companies' most sensitive data unregulated.
In its current form, the ADPPA would not preempt state civil rights laws, state laws related to "the privacy rights or other protections of employees" and "employee information," or state laws that "solely address facial recognition or facial recognition technologies." This leaves plenty of room for state and local lawmakers to regulate AI in the employment sphere.
And many already have: In Illinois, the Artificial Intelligence Video Interview Act2 requires employers to notify applicants whether and how AI is used to analyze video-recorded interviews and obtain their consent in advance. Employers who use AI-based video interview analysis must also report certain demographic information to the state.
Maryland prohibits the use of facial recognition technology during the hiring process unless the applicant consents. Beginning in 2023, New York City employers that use automated decision-making tools to evaluate job applicants or employees must subject the technology to a "bias audit."
In the last year, federal agencies have been flexing their muscles on this topic, demonstrating that even in the absence of a comprehensive federal privacy statute that applies to employee data, they have a means of addressing AI within their own spheres under existing statutes.

What is artificial intelligence?

AI is a broad term describing techniques by which machines mimic human intelligence. AI systems typically combine large data sets with intelligent processing algorithms to learn from patterns and features in the data. The term is often used interchangeably with "automated decision-making" or "algorithmic decision-making."

How is artificial intelligence used in the workplace?

Many common applications of AI technology in the workplace relate to employee data. Chatbots may help job applicants complete their applications or answer employee questions regarding benefits enrollment. AI programs can also be used to screen candidate applications or public sources and identify promising job candidates. Video interviews can be processed with an AI algorithm that evaluates candidates. Some AI programs are even intended to monitor or track employee performance and mood.

How do we know what the rules are?

With or without federal data privacy legislation, it's likely that scrutiny of employee data privacy practices will be left to state or local authorities — or to federal agencies enforcing existing law. While far from comprehensive or uniform, federal agencies are demonstrating that even as technology evolves, their guiding statutes are flexible enough to address these advances.

Federal Trade Commission (FTC)

The Federal Trade Commission, which monitors companies for unfair or deceptive business practices, released informal guidance3 in April 2021 advising companies to be aware of potential bias when using algorithmic decision-making software.4 The FTC plans to issue rules aimed at preventing discrimination in automated decision-making.5 And unlike many agencies, it has actually pursued enforcement actions over the alleged unlawful use of AI and algorithmic tools in hiring.

The Equal Employment Opportunity Commission (EEOC) and Department of Justice (DOJ)

In May 2022, the EEOC issued guidance advising employers to ensure that any hiring tools based on algorithms or artificial intelligence do not negatively impact applicants with disabilities.6 The DOJ issued companion guidance on the same day directed toward state and local employers.7
The guidance clarified that employers must provide reasonable accommodations to applicants who may be affected by automated decision-making tools due to their disabilities. The discussion clearly signaled that employers are responsible for vetting potential bias in AI-based hiring tools — even if the software is provided by a vendor.

The National Labor Relations Board (NLRB)

In October 2022, the NLRB General Counsel published a memo addressing workplace surveillance, "algorithmic-management tools," and other technologies that could interfere with workers' ability to exercise their rights to engage in protected, concerted activity.8 The memo warned that such technologies could unlawfully measure or base decisions upon activity protected by the NLRA. It proposed a balancing test that would pit an employer's business interest in using the technology against employee rights.

State consumer privacy laws

A recent development in privacy is the passage of comprehensive data privacy laws in five states: California, Colorado, Connecticut, Utah, and Virginia. These statutes extend certain rights to consumers and impose obligations on businesses regarding consumer data.
However, apart from California, each state statute exempts employee information from its scope. In the absence of federal privacy legislation, it's likely that we will see more state legislation in this area, although it's unclear whether future adopters of such laws will extend their requirements to personnel data.

Authentic intelligence for moving forward with AI

Whether or not the ADPPA (or a similar law) is enacted, federal agencies are stepping up their scrutiny of AI in employment decisions. For now, the threat of agency enforcement and a patchwork of state laws are filling the gap.
The EEOC appears to be taking a hard line on AI when it comes to disability discrimination, signaling that companies could be liable for the conduct of their vendors. The NLRB GC has proposed a fairly strict standard for evaluating whether AI will infringe on NLRA-protected activity. And the FTC is actively pursuing companies for alleged unfair use of AI and automated decision-making in the employment context.
Employers can look to these sources when considering how to incorporate AI into employee decision-making. While existing federal law is not specifically targeted at AI use in the employment context, the agencies responsible for enforcing employment statutes are positioning themselves to occupy this space. After all, they are not regulating the technology itself but only the effect that the technology may have on workers.
Notes
1 http://bit.ly/3teNm4b
2 http://bit.ly/3TYbcw0
3 http://bit.ly/3a75A1e
4 http://bit.ly/3a75A1e
5 http://bit.ly/3Gyh6ky
6 http://bit.ly/3xqzGpP
7 http://bit.ly/3AZXq5F
8 http://bit.ly/3gp2fyj
By Lauren Daming, Esq., Greensfelder, Hemker & Gale PC
Lauren Daming is an employment and labor attorney and certified information privacy professional at Greensfelder, Hemker & Gale PC. She represents clients facing issues related to discrimination, leave and accommodations, wage-and-hour laws, employee benefits and other matters. She is based in St. Louis and can be reached at [email protected].
End of Document. © 2024 Thomson Reuters. No claim to original U.S. Government Works.