The High Court of England and Wales warned lawyers on Friday that they could face criminal prosecution for presenting judges with false material generated by artificial intelligence, after a series of cases cited made-up quotes and rulings that did not exist.
In a rare intervention, one of the country’s most senior judges said that existing guidance to lawyers had proved “insufficient to address the misuse of artificial intelligence” and that further steps were urgently needed.
The ruling by Victoria Sharp, president of the King’s Bench Division of the High Court, and a second judge, Jeremy Johnson, detailed two recent cases in which fake material was used in written legal arguments that were presented in court.
In one case, a claimant and his lawyer admitted that A.I. tools had generated “inaccurate and fictitious” material in a lawsuit against two banks that was dismissed last month. In the other case, which ended in April, a lawyer for a man suing his local council said she could not explain where a series of nonexistent cases in the arguments had come from.
Judge Sharp drew the two examples together using rarely exercised powers that were designed to enable the “court to regulate its own procedures and to enforce duties that lawyers owe.”
“There are serious implications for the administration of justice and public confidence in the justice system if artificial intelligence is misused,” she wrote, warning that lawyers could be convicted of a criminal offense or barred from practicing for using false A.I.-generated material.
The ruling warned that A.I. tools “such as ChatGPT are not capable of conducting reliable legal research” and that their apparently “coherent and plausible responses may turn out to be entirely incorrect.” It added: “The responses may make confident assertions that are simply untrue. They may cite sources that do not exist.”
The judges’ warning underscores longstanding concerns among technology researchers about the propensity of A.I. chatbots to make things up in unpredictable ways, even as their use spreads rapidly.
Since late 2023, a Silicon Valley company called Vectara has tracked how often chatbots veer from the truth by asking them to perform a straightforward task that is readily verified: summarize specific news articles. Even then, chatbots persistently invent information. The leading systems hallucinate 0.7 percent to 2.2 percent of the time, and others invent information at significantly higher rates, according to the company.
But when people ask chatbots to generate large amounts of text from scratch — as many lawyers have done — hallucination rates are even higher. OpenAI, the company behind ChatGPT, recently said that its latest technologies hallucinated 51 percent to 79 percent of the time when asked general questions. Its previous technology hallucinated 44 percent of the time on these questions.
In the British case that was dismissed last month, a man sought millions in damages for alleged breaches of a financing agreement by two banks. In witness statements and correspondence put to the court by the claimant and his lawyer, 45 citations were given, 18 of which referred to cases that did not exist, the court found.
Where the rulings did exist, they did not contain the quotations used, did not support the legal propositions for which they were cited or were not relevant, the court added.
After the false information was discovered, the claimant said he had generated the citations “using publicly available artificial intelligence tools, legal search engines and online sources.” He admitted he had a misplaced “confidence in the authenticity of the material.”
His lawyer said he had relied on his client’s research and had not independently verified the information. He apologized and referred himself to a professional regulator.
The second case began in May 2023, when a man who had been evicted from his London home, and who was unwell, requested emergency accommodation from the local authority but was refused.
The man became homeless, and his lawyers began legal action against the council, accusing it of failing to follow the required laws. They filed documents listing five past cases, complete with names and official-looking citations, as examples to support their arguments. But when the opposing legal team looked them up, it found they did not exist.
A judge who first considered the case last month suspected that A.I. was responsible for the nonexistent cases. Further suspicions were raised by the fact that American spellings were used in the documents — a grave sin in British legal filings — and, the court ruling said, the “somewhat formulaic style of the prose.”
The lawyer responsible denied using A.I. tools or misleading the court, but admitted that she had put false material into another case, heard in April, which also cited cases that did not exist.
She told the High Court that she “may have carried out searches on Google or Safari” that incorporated A.I.-generated summaries of the results, but was not able to provide evidence of such searches or identify any source for the named cases on the internet. The lawyer was referred to an official regulator.
Judge Sharp said that the decision was “not a precedent” and warned that lawyers who provided false information faced “severe sanctions” including potential criminal prosecution for perverting the course of justice or contempt of court.
The ruling on Friday listed other cases in which A.I. tools had misinterpreted laws, created fake quotes from rulings or invented nonexistent cases for official citations in California, Minnesota and Texas, as well as in Australia, Canada and New Zealand.
Judge Sharp said that A.I. was a “powerful technology” and had legitimate uses, but warned that it brought “risks as well as opportunities.”
Cade Metz contributed reporting from San Francisco.