Ride the Lightning
Cybersecurity and Future of Law Practice Blog
by Sharon D. Nelson Esq., President of Sensei Enterprises, Inc.
Law Firms Worry About Hallucinating AI
February 23, 2023
Wired published a sobering article on February 22 about the use of generative AI tools in law firms. David Wakeling, the head of London-based law firm Allen & Overy’s markets innovation group, first came across law-focused generative AI tool Harvey in September 2022. He approached OpenAI, the system’s developer, to conduct a small experiment. A handful of his firm’s lawyers would use the system to answer simple questions about the law, draft documents, and take first passes at messages to clients.
The trial was small at first but grew. Approximately 3,500 workers across the company’s 43 offices ended up using the tool, asking it around 40,000 queries. The law firm entered into a partnership to use the AI tool more widely across the company, though Wakeling declined to say how much the agreement was worth. According to Harvey, one in four of Allen & Overy’s lawyers now uses the AI platform every day, with 80 percent using it once a month or more. Other large law firms are starting to adopt the platform too, the company says.
“I think it is the beginning of a paradigm shift,” says Wakeling. “I think this technology is very suitable for the legal industry.”
The technology, which uses large datasets to learn to generate pictures or text that appear natural, could be a good fit for the legal industry, which relies heavily on standardized documents and precedents.
“Legal applications such as contract, conveyancing, or license generation are actually a relatively safe area in which to employ ChatGPT and its cousins,” says Lilian Edwards, professor of law, innovation, and society at Newcastle University. “Automated legal document generation has been a growth area for decades, even in rule-based tech days, because law firms can draw on large amounts of highly standardized templates and precedent banks to scaffold document generation, making the results far more predictable than with most free text outputs.”
But there’s a small (maybe not so small) problem with the current generation of generative AI. Most significant is its tendency to confidently make things up, or “hallucinate.” That is problematic enough in search, but in the law, the difference between success and failure can be both serious and costly.
Via email, Gabriel Pereyra, Harvey’s founder and CEO, said that the AI has a number of systems in place to prevent and detect hallucinations. “Our systems are fine-tuned for legal use cases on massive legal datasets which greatly reduces hallucinations compared to existing systems,” he says.
In spite of that, Harvey has gotten things wrong, says Wakeling—which is why Allen & Overy has a careful risk management program around the technology.
“We’ve got to provide the highest level of professional services,” Wakeling says. “We can’t have hallucinations contaminating legal advice.” Users who log in to Allen & Overy’s Harvey portal are confronted by a list of rules for using the tool. The most important, to Wakeling’s mind? “You must validate everything coming out of the system. You have to check everything.”
Wakeling has been particularly impressed with Harvey’s prowess at translation. It’s strong at mainstream law, but struggles on specific niches, where it’s more prone to hallucination. “We know the limits, and people have been extremely well informed on the risk of hallucination,” he says. “Within the firm, we’ve gone to great lengths with a big training program.”
Other lawyers who spoke to WIRED were cautiously optimistic about the use of AI in their practice.
“It is certainly very interesting and definitely indicative of some of the fantastic innovation that is taking place within the legal industry,” says Sian Ashton, client transformation partner at law firm TLT. “However, this is definitely a tool in its infancy and I wonder if it is really doing much more than provide precedent documents which are already available in the business or from subscription services.”
AI is likely to remain used for entry-level work, says Daniel Sereduick, a data protection lawyer based in Paris, France. “Legal document drafting can be a very labor-intensive task that AI seems to be able to grasp quite well. Contracts, policies, and other legal documents tend to be normative, so AI’s capabilities in gathering and synthesizing information can do a lot of heavy lifting.”
But, as Allen & Overy has found, the output from an AI platform is going to need careful review, he says. “Part of practicing law is about understanding your client’s particular circumstances, so the output will rarely be optimal.”
Sereduick says that while the outputs from legal AI will need careful monitoring, the inputs could be equally challenging to manage. “Data submitted into an AI may become part of the data model and/or training data, and this would very likely violate the confidentiality obligations to clients and individuals’ data protection and privacy rights,” he says.
This is particularly an issue in Europe, where the use of this kind of AI might breach the principles of the European Union’s General Data Protection Regulation (GDPR), which governs how much data about individuals can be collected and processed by companies.
“Can you lawfully use a piece of software built on that foundation [of mass data scraping]? In my opinion, this is an open question,” says data protection expert Robert Bateman.
Law firms would probably need a firm legal basis under the GDPR to feed any personal data about clients they control into a generative AI tool like Harvey, and contracts in place covering the processing of that data by third parties operating the AI tools, Bateman says.
Wakeling says that Allen & Overy is not using personal data for its deployment of Harvey, and wouldn’t do so unless it could be convinced that any data would be ring-fenced and protected from any other use. Deciding when that requirement had been met would be a matter for the company’s information security department. “We are being extremely careful about client data,” Wakeling says. “At the moment we’re using it as a non-personal data, non-client data system to save time on research or drafting or preparing a plan for slides—that kind of stuff.”
International law is already toughening up when it comes to feeding generative AI tools with personal data. Across Europe, the EU’s AI Act is looking to more stringently regulate the use of artificial intelligence. In early February, Italy’s Data Protection Agency stepped in to prevent generative AI chatbot Replika from using the personal data of its users.
But Wakeling believes that Allen & Overy can make use of AI while keeping client data safe and secure—all the while improving the way the company works. “It’s going to make some real material difference to productivity and efficiency,” he says. Small tasks that would otherwise take valuable minutes out of a lawyer’s day can now be outsourced to AI. “If you aggregate that over the 3,500 lawyers who have got access to it now, that’s a lot,” he says. “Even if it’s not complete disruption, it’s impressive.”
Though I admire all that Harvey can do, I am relatively confident that hallucinating generative AI will cause disaster(s) somewhere along the way.
Sharon D. Nelson, Esq., President, Sensei Enterprises, Inc.
3975 University Drive, Suite 225, Fairfax, VA 22030
Phone: 703-359-0700
Digital Forensics/Cybersecurity/Information Technology