High Court Issues Rare Warning on AI Misuse in Legal Proceedings
In an unprecedented ruling, the High Court of England and Wales has issued a strong warning to legal professionals about the risks of using artificial intelligence in legal proceedings. The court made clear that lawyers who submit fabricated legal material generated by AI tools could face criminal prosecution or be disbarred.
The intervention came from Dame Victoria Sharp, President of the King’s Bench Division, sitting with Mr Justice Jeremy Johnson, following two troubling cases in which AI-generated submissions included entirely fictitious legal citations and quotes.
Cases That Triggered the Court’s Action
One of the cases involved a claimant seeking millions in damages from two banks, based on an alleged breach of a financing agreement. The court discovered that out of 45 citations submitted by the claimant and his lawyer, 18 referenced cases that simply didn’t exist. The remaining citations misquoted genuine rulings, drew legally inaccurate conclusions from them, or pointed to irrelevant material.
The claimant later admitted to using AI-powered legal tools and online resources to compile his evidence, expressing misplaced confidence in their accuracy. His lawyer, in turn, said he had relied on his client’s research and failed to verify the citations independently. He issued an apology and referred himself to the relevant professional body.
The second incident concerned a legal challenge against a local council over emergency housing. Lawyers representing a man who became homeless after eviction cited five past legal cases in their filing — none of which existed. Opposing counsel quickly discovered the fabrications.
Clues pointing to AI involvement included the use of American spelling — a red flag in British legal documents — and the “formulaic style of the prose”, as described in the court’s ruling. While the lawyer denied direct use of AI tools, she acknowledged submitting similar false citations in an earlier case. She was unable to produce any valid sources and has been referred to a regulatory authority.
Judiciary Tightens Expectations on AI Use
In their ruling, the judges invoked seldom-used powers to reinforce the court’s right to regulate its procedures and enforce ethical standards among practitioners. Sharp emphasized the gravity of the issue:
"There are serious implications for the administration of justice and public confidence in the justice system if artificial intelligence is misused." — Dame Victoria Sharp, President of the King’s Bench Division
The judgment also cautioned that AI tools like ChatGPT cannot perform reliable legal research. Despite often producing coherent-sounding responses, they are known to confidently assert false information, cite nonexistent sources, and distort the law.
Global Concerns Around AI ‘Hallucinations’
The UK ruling reflects a broader global concern about AI’s tendency to fabricate information — a phenomenon often referred to as “hallucination”.
Silicon Valley-based Vectara has monitored how often AI systems deviate from factual content by assigning them tasks such as summarizing verified news articles. Even under those conditions, leading AI systems hallucinate between 0.7% and 2.2% of the time. When asked to generate large amounts of content from scratch — a common legal use case — the rates skyrocket.
OpenAI, the developer of ChatGPT, recently disclosed that its latest models hallucinate between 51% and 79% of the time when asked general questions. The company’s prior-generation systems hallucinated at a rate of 44% for similar prompts.
Wider Implications for the Legal Profession
The High Court’s warning is not legally binding as a precedent, but its implications are stark. Lawyers who submit fictitious material created by AI may face severe disciplinary actions, including criminal charges for perverting the course of justice or contempt of court.
"This decision is not a precedent. But legal professionals should be aware that the misuse of AI in legal proceedings will be met with serious consequences." — Dame Victoria Sharp, President of the King’s Bench Division
The court also cited examples from abroad, referencing similar misuse of AI in jurisdictions such as California, Minnesota, Texas, Australia, Canada, and New Zealand — a reminder that the risks are global and growing.
AI in Law: Powerful but Risky
While acknowledging that AI has its place in legal research and document review, the ruling stressed the importance of human oversight. AI is a powerful tool — but one that demands caution.
As the legal profession increasingly integrates advanced technologies into daily practice, this ruling serves as a critical marker. Accuracy, accountability, and ethical responsibility remain non-negotiable — regardless of how advanced the tools become.