Legal practice and risk management will never be the same: ChatGPT marks the turning point for AI adoption
2023 PRINDBRF 0076
By David A. Steiger, Esq., and Stratton Horres, Esq., Wilson Elser Moskowitz Edelman & Dicker LLP
Practitioner Insights Commentaries
February 13, 2023
(February 13, 2023) - Attorney David A. Steiger and Wilson Elser Moskowitz Edelman & Dicker LLP senior counsel Stratton Horres explain why firms and individual attorneys should embrace and leverage current developments in artificial intelligence technology.
Every few decades, new technology emerges that represents the dawning of a new era. Just as Lindbergh's Spirit of St. Louis laid the foundation for transatlantic air travel and the first iPhone paved the way for handheld internet and 24-hour social media access, ChatGPT is today's watershed moment. It has made clear that generative AI, algorithms that generate new outputs based on the data on which they have been trained, will change the lives of everyone in the world — professionally and personally.
Beginning in the waning weeks of 2022, a trickle of articles touting the capabilities of American artificial intelligence research laboratory OpenAI's newest tool became a tsunami. Millions of ordinary people got a chance to experience a taste of what AI could do for them.
And it turned out that in addition to creating term papers and solving complex math equations, it could write dad jokes, help people prepare for their job interviews and even scout out potential new business opportunities. ChatGPT has made generative AI real and accessible to the average consumer in a practical way that nothing before it has. There is no turning back.

Generative AI is still in the capability overhang phase

Marco Nasca, Chief Innovation Officer for Lineal, a legal services company that uses proprietary AI technology to solve problems in litigation and compliance, observes that the tech industry has slowly rolled out AI products to market — never allowing its full potential to be clearly seen, though it was available.
Now OpenAI has crashed down the doors, and the possibilities of what this technology can bring are "intriguing, amazing, and perhaps a bit scary for some."
What Nasca was expounding on is what has been described elsewhere, by authors such as James Vincent at TheVerge.com, as "capability overhang."1 Because this technology has so much inherent potential, people haven't completely grasped what it can do, much less what impact it ultimately will have on our lives.

The battle to monetize generative AI

How this new technology will be monetized is an important question since it is estimated to cost $100,000/day to run the models supporting ChatGPT alone. Generative AI cannot continue to develop unless it generates a reliable revenue stream.
Who will emerge as the leaders in generative AI? We'll soon find out. OpenAI has announced a $20 monthly subscription to ChatGPT. Google, which has no intention of presiding over its own obsolescence, recently announced the impending release of its own conversational AI product, called Bard, which reportedly will be available to the public in the coming weeks.
An AI Space Race between Microsoft via OpenAI and mighty Google could push development in this area into hyperdrive.
Meanwhile, others continue to leverage open-source development. BigScience Research Workshop, an open science project composed of hundreds of researchers around the world, recently released Petals, which lets people daisy-chain their computing power together to run large AI language models that would normally require a high-end GPU or server.
And it's not just Silicon Valley that's in the game. According to a recent Reuters article,2 Chinese search firm Baidu will be releasing a chatbot service in March.

Another vector shaping the direction of generative AI: impending regulation

Like most emerging technologies, AI can be compared to the Wild West right now. Given that AI is currently prone to spit out inaccurate information on occasion, the need for regulation and accountability is clear.
But as Avi Gesser, partner at international law firm Debevoise & Plimpton, points out, the quick adoption of AI may create some regulatory urgency. Gesser notes that it is becoming common for employees of large public and private companies to log on to various generative AI platforms using their personal credentials and then apply the tools' output to their work for their employer.
Some commentators have argued that the EU's AI Act3 might end up having a substantial impact by getting to the regulatory space first. Gesser disagrees, noting that the EU AI Act, as it is being rolled out, won't become effective until 2025. While he sees the emerging EU framework as fairly sensible, it applies to individual products and situations in only a very general, overarching sense.
Gesser sees a different regulatory environment taking shape. He cites the Colorado Division of Insurance's upcoming regulations to prevent carriers from using AI to unfairly discriminate against protected consumer classes.
According to a recent article by H. Michael Byrne and Amanda Kane of McDermott Will & Emery,4 the process in Colorado "will initially focus on life insurance underwriting practices, with the first meeting scheduled for February 7, 2023, and later expand to auto, health, marketing and claims practices."
From Gesser's perspective, sweeping regulation at a national or multinational level will not be quick enough or specific enough; it is more likely that, at least initially, sector-specific state and local regulations will be applied to quickly address emerging issues.
In part, this is because those who perceive they are losing out to AI will demand immediate action from policymakers. In some areas, such as copyright, disputes may be handled by existing law. Essentially what Gesser is predicting is a loose patchwork approach to regulation, in the short term at a minimum.
The problem with a patchwork approach is that there is the potential for unintended consequences: conflicting rules, uncertain compliance. Gesser concedes such a system will likely be "messy."
But regulators do tend to borrow from one another, so don't be surprised if Colorado insurance regulators have an outsized role in the system by virtue of actually getting there first.
Ironically, in the case of regulating AI, human tendencies such as the need to be seen as "doing something" could become very important in mapping the future.

The effects of AI development on the legal profession going forward

There are predictions that range from massive layoffs in the professions to minimal disruption that frees professionals from mind-numbing tedium and paves the way for creative thinking. To what extent either prediction is right is anybody's guess, despite the avalanche of hype currently being generated.
As Kit Mackie, Chief Technology Officer for Lineal and co-founder of AI tool NexLP concedes, 3D printing and Bitcoin each had hype storms surrounding them, with mixed results in terms of their current impact on the marketplace. So what about ChatGPT and similar models? Will they be an algorithmic flash in the pan?
And what can we say about all the things that the current version of ChatGPT can't do? Two Texas lawyers recently reported in a Law360 piece5 that in their judgment, the currently available version of ChatGPT essentially failed in its attempt to write a legal brief that could satisfy attorneys' ethical and professional responsibility obligations.
Fair enough. However, maybe focusing on what generative AI's capabilities are at this moment is somewhat beside the point. After all, aviation technology didn't stand still after Lindbergh's flight.
In a recent interview for this article, Marco Nasca of Lineal noted that "ChatGPT will learn. It is, after all, a learning model." In essence, ChatGPT and similar models are akin to junior attorneys. They may make mistakes as they develop professionally, and will surely need to be supervised. And like those rookies, the AI models will learn from mistakes and generate better work product.
Logically, this suggests that eventually AI will be called upon to do a lot of the tedious, monotonous work that first-year associates do now. So then, what will junior lawyers do? How does that impact Big Law's business model and attorney training?
One assumes the practice and business of law will adapt. In part, that will have to begin with legal education. There are hopeful signs that law schools are beginning to see the need to teach the use of emerging technology to law students. Nasca points to Chicago-Kent College of Law at Illinois Institute of Technology in Chicago as an example.
Lineal's Mackie compares ChatGPT to Tesla — a new product that proves a technology is possible and that it is the future.
But as to those who fear the robots are coming for their jobs, he notes that it is important to remember AI is not creative — it can help you see and understand things you couldn't see by yourself. It is up to a human user to be creative and tie what the tool can provide to the user's purposes. Mackie concludes, "AI can tell you what is — and you take it from there."

Will AI break big law?

The AmLaw 100 use their sheer size, substantial resources and sophistication to make the argument that they can handle bet-the-company litigation and critical transaction work for the world's largest concerns, and that they are worth the resulting premium fee structure that their model produces. Will AI break Big Law by allowing small firms to harness massive computer power to negate Big Law's traditional resources advantage?
As Nasca posits, could the real threat to Big Law perhaps be a 40-something partner who creates a boutique spinoff from the white shoe firm, one that can credibly handle the most complex work at a rate structure that doesn't have to support massive Big Law overhead?
As Nasca also notes, however, after e-Discovery companies created competition for Big Law in document review work, some Big Law firms created in-house practice groups to take them on, and those groups have proven profitable.

How long will adoption take?

It often takes a period of years for technological advances to go mainstream. As Mackie notes, e-Discovery in 2023 is being handled in roughly the same way as it was in 2010. Cloud computing, around since the mid-2000s, is only now seeing wider commercial acceptance.
What might hold up AI adoption in the litigation and risk management space? Mackie offers two possibilities: either sufficient computing capacity hasn't yet been built, or lawyers and their clients are simply stubborn and risk-averse. After all, attorneys are geared to looking backward at precedent more than looking forward toward innovation.
Still, once a product captures the imagination of the general public, as ChatGPT has, it has a habit of breaking through resistance. Kids watching the original Star Trek in the 1960s grew up and invented workable cellular phones to do much the same thing as the crew's communicators. Picasso famously said, "Everything you can imagine is real."

The debate isn't about what firms will do anymore

For the past year, the authors have been arguing that attorneys and risk managers need to understand what AI can do and why they need to change their business models to take advantage of what it offers before their competition does.
But as we enter 2023, if the explosive popularity of ChatGPT is any indication, the conversation has moved on. It isn't about what your business is going to do to avoid becoming a dinosaur — it's now about what you as an individual are going to do to leverage this technology.
Fail and you will find yourself as useless as someone who can't effectively create a pivot table in Excel, or load an app onto a smartphone. Your career growth and livelihood are going to depend on it, no matter what your profession.
As has been quoted multiple times on LinkedIn feeds, you won't be replaced by AI — you will be replaced by someone using AI. At the beginning of 2023, that surely seems likely. AI isn't a theoretical topic for futurists anymore. The future is here and now. What was that line from The Shawshank Redemption? "Get busy living, or get busy dying."
Notes
1 James Vincent, "ChatGPT proves AI is finally mainstream — and things are only going to get weirder," The Verge, Dec. 8, 2022, http://bit.ly/3RL4biT.
2 "China's Baidu to launch ChatGPT-style bot in March," Reuters, http://bit.ly/3x3KPMi.
3 The Artificial Intelligence Act, http://bit.ly/3JMFBvY.
4 H. Michael Byrne and Amanda Kane, "Insurance Regulators Continue Big Data Scrutiny," Lexology, http://bit.ly/40Doseb.
5 Furness and Mallick, "Evaluating the Legal Ethics of a ChatGPT-Authored Motion," Law360, http://bit.ly/3jJsTTW.
David A. Steiger is a licensed attorney and author with extensive complex litigation management experience, both as outside and in-house counsel. He is the author of two ABA Publishing books, "The Globalized Lawyer" and "Transactions Without Borders." He is based in the Los Angeles area and can be reached at [email protected]. Stratton Horres is senior counsel at Wilson Elser Moskowitz Edelman & Dicker LLP in its complex tort and general casualty practice. He focuses on crisis management and catastrophic high-exposure cases and is co-chair of the firm's national trial team. He is based in the firm's Dallas office and can be reached at [email protected].
End of Document. © 2024 Thomson Reuters. No claim to original U.S. Government Works.