Incorporating AI into today's risk management processes
2022 PRINDBRF 0231
By David A. Steiger, Esq., and Stratton Horres, Esq., Wilson Elser Moskowitz Edelman & Dicker LLP
Practitioner Insights Commentaries
May 26, 2022
(May 26, 2022) - Attorney David A. Steiger and Stratton Horres, a partner at Wilson Elser Moskowitz Edelman & Dicker LLP, discuss the growing use of artificial intelligence in corporate legal departments, along with AI's risks and possible steps to mitigate them.
In a recent Thomson Reuters survey of 207 in-house attorneys, 39 percent of respondents said artificial intelligence (AI) will be mainstream in corporate legal departments within 10 years, and an additional 21 percent believe this will occur within the next five years.
If the majority of in-house counsel already perceive AI as a game-changer over the next decade, more and more companies will have to consider how to adapt this technology to existing processes. McKinsey & Company reports that nearly 80 percent of executives at companies deploying AI say they are already seeing moderate value from it.
Various sources suggest that by 2030, AI could deliver $13 to $15.7 trillion in additional annual global economic output. But as Irfan Saif and Beena Ammanath (both affiliated with Deloitte) have argued in a piece published by MIT Technology Review, AI deployment now is "less about technological hurdles and more about human-side challenges: ethics, governance, and human values."
This article examines some of the most obvious issues that organizations will face in preparing for the broad adoption of artificial intelligence into their operations. And, for those skeptical about embracing AI, it argues that much of the same behind-the-scenes work will be necessary in any event.

Where to begin?

Avi Gesser, co-chair of Data Strategy & Security at Debevoise & Plimpton LLP, a leading law firm handling reputational, regulatory, operational and governance risks related to artificial intelligence, provides a well-thought-out game plan for organizations looking to ready themselves for AI:
• Create an inventory of AI risks.
• Rate the risks that contemplated AI systems pose so the ratings align with real-world impact (a simple inventory sketch follows this list).
• Implement risk mitigation measures.
• Establish a practical framework for assessing and testing AI models or external data.
• Create an AI governance process and define key stakeholders across the organization.
• Continually review and update the model risk framework.
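To make the first two steps concrete, the following is a minimal sketch, in Python, of what an AI risk inventory with a simple likelihood-times-impact rating might look like. The class, field names and example entries are illustrative assumptions, not drawn from Gesser's framework.

```python
from dataclasses import dataclass, field

@dataclass
class AIRiskEntry:
    system: str              # which AI system or model the risk belongs to
    risk: str                # plain-language description of the risk
    likelihood: int          # 1 (rare) through 5 (near certain)
    impact: int              # 1 (minor) through 5 (severe)
    mitigations: list[str] = field(default_factory=list)

    @property
    def score(self) -> int:
        # Simple likelihood-times-impact rating used to rank real-world risk.
        return self.likelihood * self.impact

inventory = [
    AIRiskEntry("resume screener", "biased training data skews shortlists", 4, 5,
                ["bias audit", "human review of rejections"]),
    AIRiskEntry("support chatbot", "personal data shared without consent", 2, 4,
                ["data minimization", "opt-out flow"]),
]

# Rank the inventory so the highest-rated risks get mitigation attention first.
for entry in sorted(inventory, key=lambda e: e.score, reverse=True):
    print(f"{entry.system}: {entry.risk} (score {entry.score})")
```

Even a lightweight registry like this gives the governance process a single place to record, rate and revisit AI risks as models and regulations change.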

AI risk overview

The full complement of risks associated with AI implementation at a given legal department will vary by industry, market reach and a variety of other factors. Multiple white papers and articles, however, have identified risks that apply broadly to most organizations. Some of the most-cited concerns include biased programming or data leading to incorrect, inequitable or discriminatory results; lack of transparency and the resulting difficulty of assigning legal responsibility for harms caused by AI; issues related to continuous-learning algorithms; and erosion of personal privacy protections.
In a recent Harvard Business Review article, François Candelon, Rodolphe Charme di Carlo, Midas De Bondt and Theodoros Evgeniou (Candelon et al.) highlight the fundamental concern that if the data used to train an AI model is biased, the AI will acquire, and may even amplify, that bias.
As Grant Little, president of ClaimsTech.org, relates, you might ask an algorithm to analyze football game outcomes. It may conclude that teams with darker jerseys are more likely to win, when all it is picking up on is home-field advantage. It sees correlation, not causation; humans can factor for coincidence. Should a biased model affect products or services in a significant way, it could expose companies to crippling amounts of liability.
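Little's football example can be reproduced in a few lines. The sketch below uses synthetic data, an assumption made purely for illustration rather than Little's actual analysis: wins depend only on home-field advantage, yet a model trained solely on jersey color still assigns that feature a positive weight because the two happen to be correlated.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5_000
home = rng.integers(0, 2, n)                                  # true causal factor: playing at home
dark_jersey = np.where(rng.random(n) < 0.8, home, 1 - home)   # proxy that merely correlates with home
win = (rng.random(n) < 0.4 + 0.2 * home).astype(int)          # outcome driven by home field only

# Train on the proxy alone: the model still finds "signal" in jersey color.
model = LogisticRegression().fit(dark_jersey.reshape(-1, 1), win)
print(model.coef_)   # positive weight, despite jersey color having no causal effect
```

The model is not wrong about the correlation; it is simply blind to the confounder, which is exactly the kind of error a human reviewer is needed to catch.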
This is an issue not only if the model produces flatly incorrect results but also if it produces inappropriately discriminatory outcomes. Dr. Chris Russell of the Alan Turing Institute stated in a Wired piece that "one of the major challenges in making algorithms fair lies in deciding what fairness actually means."
The choice to use AI involves more than a question of risk tolerance, and is not as simple as it might otherwise appear. As Saif and Ammanath explain, if an algorithm makes or affects decisions with direct consequences on people's lives, it may be better to avoid AI in that context or at least subordinate it to human judgment.
But ignoring AI output also could backfire, since AI might be used to control for unconscious human bias by helping to reveal it. In any event, Gesser notes that companies must consider what bias complaints are likely to come out of a given AI model, and how they will respond when facing bias allegations.
Little notes that one way to monitor bias is to use assisted AI. In this model, when the confidence score for a task drops below a set threshold, the model automatically kicks the decision out to a human. An example is a machine scanning handwritten documents and attempting to distinguish letters: it will be biased toward the handwriting samples in its training set.
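A minimal sketch of the assisted-AI pattern Little describes might look like the following. The threshold value, function names and the scikit-learn-style `predict_proba` interface are assumptions for illustration, not a prescribed implementation.

```python
# Route low-confidence predictions to a human rather than deciding automatically.
CONFIDENCE_THRESHOLD = 0.90   # illustrative cutoff; tune per task and risk tolerance

def route_prediction(item, model):
    probabilities = model.predict_proba([item])[0]   # assumes an sklearn-style classifier
    label = int(probabilities.argmax())
    confidence = float(probabilities.max())
    if confidence >= CONFIDENCE_THRESHOLD:
        return {"decision": label, "source": "model", "confidence": confidence}
    # Below the threshold: defer to a reviewer and log the case for audit.
    return {"decision": None, "source": "human_review", "confidence": confidence}
```

Routing borderline cases to people also creates a record of hard examples that can later feed audits and retraining.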
Transparency becomes an issue whenever AI use creates possible negative outcomes. In a November 2019 policy briefing, The Royal Society noted that some AI models can be too complicated for even expert users to fully understand, prompting "calls for some form of AI explainability (a machine learning model and its output can be explained in a way that 'makes sense' to a human being at an acceptable level), as part of efforts to embed ethical principles into the design and deployment of AI-enabled systems."
The briefing also notes that explainability needs may vary, depending, for instance, on whether unacceptable results carry significant consequences.
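One widely used explainability technique is permutation importance, which estimates how much a model relies on each feature by shuffling that feature and measuring the drop in performance. The sketch below, using scikit-learn and a public toy dataset purely for illustration, shows one way to produce such an explanation; it is not drawn from The Royal Society briefing.

```python
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Placeholder dataset and model standing in for a production system.
X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Shuffle each feature in turn and measure how much held-out accuracy degrades.
result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)
top_features = sorted(zip(X.columns, result.importances_mean), key=lambda t: -t[1])[:5]
for name, score in top_features:
    print(f"{name}: {score:.3f}")   # the features this model leans on most
```

Output like this does not make a complex model simple, but it gives a reviewer a human-readable account of what is driving its decisions.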
One of the most highly touted potential uses of machine learning is the promise of models that engage in continuous learning. However, as Candelon et al. point out, inputs that generated one outcome yesterday might register a different one today, depending on how the algorithm has changed due to data received in the interim. As such, they advocate the monitoring of "autonomous AI processes and assessment of the legal, financial, reputational and physical risk" posed by evolving AI.
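One simple way to monitor an evolving model, assuming prediction scores are logged over time, is to compare the recent score distribution against a fixed baseline and flag statistically significant drift. The sketch below uses a two-sample Kolmogorov-Smirnov test from SciPy on placeholder data; the data sources and significance threshold are illustrative assumptions.

```python
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(1)
baseline_scores = rng.beta(2, 5, 10_000)   # model scores captured at deployment (placeholder)
recent_scores = rng.beta(3, 5, 10_000)     # scores from the most recent period (placeholder)

# Compare the two distributions; a small p-value signals the model's behavior has shifted.
statistic, p_value = ks_2samp(baseline_scores, recent_scores)
if p_value < 0.01:
    print(f"Distribution shift detected (KS={statistic:.3f}); trigger model review.")
```

Automated checks like this do not replace the legal and reputational assessment Candelon et al. call for, but they tell the organization when that assessment needs to be rerun.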
As to privacy, Gesser asks: Is sensitive personal data being shared with third parties without proper notice or consent? Is personal data that was collected for one purpose now being used for another purpose without making sure that all privacy obligations are being satisfied? Saif and Ammanath maintain that customers should be given appropriate control over their data, including the ability to opt in or out of having it shared and procedures to voice their concerns.

Risk assessment

In a McKinsey Quarterly piece, Benjamin Cheatham, Kia Javanmardian and Hamid Samandari (Cheatham et al.) reason that few leaders have had the opportunity to understand "the full scope of societal, organizational, and individual risks" AI presents, "or to develop a working knowledge of their associated drivers, from the data fed into AI systems to the operation of algorithmic models and the interactions between humans and machines."
Therefore, they suggest companies use a structured identification approach to pinpoint the most critical risks. As Gesser puts it, a high-risk AI scenario arises when AI is used for critical infrastructure or core business functions where its failure would endanger the company's financial stability or cause substantial reputational harm.

Risk mitigation

Cheatham et al. assert that avoiding or mitigating unintended consequences requires pattern-recognition skills, necessitating an entirely new level of enterprise-wide effort in identifying and controlling for all key AI risks.
They argue that how controls develop depends on factors such as algorithm complexity, data requirements, the nature of human-to-machine and machine-to-machine interaction, the potential for malignant exploitation and the extent to which AI is embedded within business processes. As Gesser observes, more than business oversight is needed; often millions are invested in AI, while little or nothing is spent on compliance and regulatory assessment.

Establishing a practical framework for assessing and testing AI models/external data

In assessing and testing AI models, beyond ensuring the model actually works, Little asks what your AI is actually doing. What are you automating? Are you substituting for human judgment? Because, as Little points out, the more discretion you give a model, the more room there is for the technology to err.
In an interview for this article, Little further highlighted a fundamental truth: you always need to carefully evaluate your data inputs, because all data has biases. The method of collection at the time the data was gathered affects bias as well, Little noted. Using personal health as an example, one must ask whether sets of test results were gathered by professionals or by patients at home.
Additionally, has the collection process changed over time, and is the organization collecting current data different from the one that collected the training data? Further, Gesser asks, have risks related to external providers of models or data been fully considered, including through vendor diligence, oversight, insurance and contractual provisions?
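A basic version of the data-provenance check Little and Gesser describe can be automated: compare how a collection-related field is distributed in the training data versus newly arriving vendor data and flag large shifts. The column names, placeholder frames and 10-point threshold below are illustrative assumptions.

```python
import pandas as pd

# Placeholder frames; in practice these would be loaded from the training set and the vendor feed.
training = pd.DataFrame({"collection_method": ["clinic"] * 80 + ["home_kit"] * 20})
incoming = pd.DataFrame({"collection_method": ["clinic"] * 40 + ["home_kit"] * 60})

train_mix = training["collection_method"].value_counts(normalize=True)
incoming_mix = incoming["collection_method"].value_counts(normalize=True)

# Flag categories whose share has moved by more than 10 percentage points.
shift = train_mix.subtract(incoming_mix, fill_value=0).abs()
print(shift[shift > 0.10])
```

A shift like the one above (home-collected results jumping from 20 percent to 60 percent of the data) is exactly the kind of change that should prompt questions to the vendor before the data is trusted.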

Creating an AI governance process and defining key stakeholders across the organization

As Cheatham et al. point out, making real progress in AI risk mitigation demands a multidisciplinary approach involving experts "in areas ranging from legal and risk to IT, security, and analytics; and managers who can ensure vigilance on the front lines."
Saif and Ammanath argue companies will need new processes and tools, including system audits, documentation and data protocols (for traceability) and diversity awareness training. They cite Google, Microsoft and BMW as examples of companies that are developing formal AI policies with "commitments to safety, fairness, diversity and privacy."
For his part, Little notes that technology naturally steers you to a more consistent answer. So, you must evaluate and audit models to validate that they are doing what you want them to do, consistent with the best human judgment.

Continual review and update of the model risk framework

As in many other aspects of business, monitoring AI-driven analytics must be an ongoing effort, rather than a one-and-done activity. Little relates that AI practitioners commonly say that setting up a model is the easy part — the hardest thing is to refine a model and keep it current. Cheatham et al. correctly point out that stakeholders need to review their risk program regularly to stay on top of "industry shifts, emerging case law, evolving consumer expectations, and rapidly changing technology."

Evolving regulations: Are we heading toward disparate impact?

As Gesser notes, unfair discrimination is a focal point of new AI regulations, not only in direct terms but also regarding "proxy discrimination," where ZIP Codes or education levels, for instance, might be used indirectly to discriminate against protected classes.
Gesser points out that how proxy discrimination is defined is contested, and it is unclear whether regulators will require that it be intentional to be prohibited. This suggests a relationship to a larger debate going on in discrimination law generally, as to whether there must be proof of intentional discrimination or merely of disparate impact.
Still, Gesser concludes there is an emerging de facto approach requiring certain companies to demonstrate up front that models used for hiring, lending or insurance underwriting do not discriminate.
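One way a company might screen a hiring or underwriting model for disparate impact up front is a selection-rate comparison in the spirit of the EEOC's "four-fifths" rule of thumb. The sketch below, on made-up data with illustrative group labels, shows the basic computation; it is a simplified screen, not a legal standard of proof.

```python
import pandas as pd

# Made-up outcomes for two illustrative applicant groups.
outcomes = pd.DataFrame({
    "group":    ["A"] * 100 + ["B"] * 100,
    "selected": [1] * 60 + [0] * 40 + [1] * 40 + [0] * 60,
})

rates = outcomes.groupby("group")["selected"].mean()   # selection rate per group
impact_ratio = rates.min() / rates.max()

print(rates)
print(f"Impact ratio: {impact_ratio:.2f}")
if impact_ratio < 0.8:   # the four-fifths rule of thumb
    print("Potential disparate impact: flag the model for review before deployment.")
```

Running this kind of check before deployment, and documenting the result, is the sort of up-front demonstration Gesser anticipates regulators will increasingly expect.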

Pushing ahead despite regulatory uncertainty

Little argues that regulators have no way of keeping up with the pace of developments, citing as evidence a recent viral video of befuddled San Francisco police officers pulling over a driverless vehicle operated by a self-driving car service.
From Gesser's point of view, regulators are not out to stifle innovation in any way — their job is to encourage responsible AI use, and they ultimately will adapt their framework to models that work. Little concurs, offering that in an uncertain environment you just want to be in the position to be able to defend the ethics of what you are doing.

Conclusion

Sophisticated AI tools that offer next-level benefits require an equally sophisticated approach to incorporating them into the legal department of even a small or medium-size business. Some might mistakenly think that they should just sit on the sidelines until the technology and regulations fully develop rather than engage in the hard work of the adoption process now. However, as regulators shift their focus in response to AI, that shift may well affect how companies are regulated whether they use AI or not.
The reality is that the same processes that will help identify and mitigate AI-related risks would be needed in any event for parallel risks existing outside the AI space, including broader discrimination concerns, cybersecurity and vendor management issues.
As regulators look ever more closely at outcomes and disparate treatment, they are bound to start holding companies to a higher standard regardless of their AI use, and without AI in their toolbox, regulated companies may have little defense available.
Further, even if regulators don't explicitly demand AI integration, customers and stakeholders will eventually demand a level of accountability and performance that only AI can deliver. One way or another, it is time for legal departments to up their game and embrace the future before they are left behind.
*Steiger and Horres tapped into a cadre of experts in the field of Artificial Intelligence to research this article, including writings by Irfan Saif, AI Co-leader and Principal, Deloitte & Touche LLP and Beena Ammanath, Executive Director of the Deloitte AI Institute; François Candelon, Rodolphe Charme di Carlo and Midas De Bondt of the Boston Consulting Group and Theodoros Evgeniou, professor at French business school INSEAD; Dr. Chris Russell, Group Leader in Safe and Ethical AI at the Alan Turing Institute; Benjamin Cheatham, Kia Javanmardian and Hamid Samandari of McKinsey & Company. Interviews were conducted with Avi Gesser, Partner and Co-chair of Data Strategy & Security at Debevoise & Plimpton LLP, and Grant Little, Co-founder and President at ClaimsTech.org.
David A. Steiger is a licensed attorney and author with extensive complex litigation management experience, both as outside and in-house counsel. A former adjunct faculty member of DePaul University's School for New Learning and the Maurer School of Law at Indiana University Bloomington, he is the author of two ABA Publishing books, "The Globalized Lawyer" and "Transactions Without Borders." Steiger is based in the Los Angeles area and can be reached at [email protected]. Stratton Horres is a partner at Wilson Elser Moskowitz Edelman & Dicker LLP in its complex tort and general casualty practice. He focuses on crisis management and catastrophic high-exposure cases. He is based in Dallas and can be reached at [email protected].
End of Document. © 2024 Thomson Reuters. No claim to original U.S. Government Works.