
A UK High Court judge has warned that lawyers are misusing artificial intelligence in court. Justice Victoria Sharp said AI-generated fake cases had been cited in legal proceedings, threatening public trust in the justice system. In one instance, a lawyer cited 18 non-existent cases in a £90 million lawsuit; in another, five fake cases appeared in a tenant dispute. The court referred both lawyers to their regulatory bodies. Sharp emphasized that presenting unverified AI output as genuine could amount to contempt of court or even perverting the course of justice, a crime punishable by life in prison.
Court Flags Fake AI Cases in Major UK Lawsuits
Two recent cases in the UK have exposed lawyers submitting fabricated, AI-generated legal precedents in court. Justice Victoria Sharp and Justice Jeremy Johnson investigated after lower courts raised alarms. One case involved a £90 million lawsuit against Qatar National Bank, in which lawyer Abid Hussain submitted 18 fictitious legal cases. His client, Hamad Al-Haroun, admitted to using publicly available AI tools but took sole responsibility. Sharp called it absurd that a lawyer would rely on a client for legal research rather than the reverse. In a separate housing dispute, barrister Sarah Forey cited five false cases in support of a tenant’s claim against the London Borough of Haringey.
Forey denied using AI, but the court found her explanation unconvincing. Both lawyers were referred to their regulatory bodies. The judges warned that submitting false material as genuine evidence could lead to criminal charges. Sharp stated, “AI is a powerful tool, but without proper oversight, it poses serious risks to justice.” The court stopped short of immediate punishment but signaled that misuse of generative AI could amount to contempt of court or perverting the course of justice, offences carrying severe penalties.
Judicial Response Reflects Global AI Concerns in Courtrooms
The UK ruling reflects growing global concern about the unchecked use of generative AI in legal settings. As more lawyers adopt AI tools such as ChatGPT or legal-specific chatbots, the risk of generating convincing yet entirely false citations grows. Sharp’s ruling stressed that the legal profession must pair AI with accountability and human oversight to maintain trust. AI misuse in court isn’t new: in the US, a New York lawyer was fined in 2023 for submitting fictitious ChatGPT-generated case law. The UK’s latest warning follows suit, emphasizing that legal professionals cannot offload responsibility onto machines.
What is particularly alarming is the ease with which AI can fabricate plausible legal precedents. Justice Sharp noted that tools producing realistic-sounding legal text risk misleading judges if left unchecked. She advocated a clear regulatory framework that aligns AI use with ethical and professional standards. Sharp concluded that, while AI offers real promise in speeding up legal research and document preparation, its deployment must be accompanied by rigorous fact-checking and accountability mechanisms. Without these, there is a risk of systemic erosion of the rule of law and, in extreme cases, criminal charges for those who knowingly submit false AI-generated content to the courts.
AI in Law Needs Regulation, Not Blind Adoption
Justice Sharp’s warning underscores the urgent need for ethical guidelines and oversight in how AI is used in legal practice. While AI can assist lawyers, it cannot replace professional diligence. Citing fake legal precedents not only undermines the court’s integrity but could also lead to criminal consequences. As judicial systems worldwide adopt AI tools, the UK’s example highlights a critical lesson: without clear regulatory safeguards, even helpful technologies can compromise justice. Sharp urged the legal community to approach AI with caution, responsibility, and a commitment to factual accuracy if public trust is to be maintained in the AI era.