The Curious Case of the Cases That Were Not
Our Assistant Editor, Giovanni D’Avola, explains the problem with unreliable citations, whether generated by AI or otherwise.

Since the first release of ChatGPT and the entry of generative AI into the public consciousness in 2022, it has been apparent that that technology, and AI more generally, potentially lends itself to many applications in the legal sector as a tool to assist and augment the work of human lawyers.
Legal research has been identified as an area in which the use of AI could deliver significant benefits. However, it did not take long for reports to emerge from the United States (the US legal profession having been an early adopter of the new technology) of instances of fictitious, AI-hallucinated case citations being relied upon in court filings.
Fast forward a few months and the first instances (or at least the first known instances) of fake citations being relied upon in court proceedings in England and Wales have recently come to light. It should be noted that in neither of the cases mentioned below has there been an admission, or a finding on the balance of probabilities, that the fake cases were the result of the use of some form of generative AI tool: the court focused on effect, not cause.
In the case of R (Ayinde) v Haringey London Borough Council [2025] EWHC 1040 (Admin); [2025] WLR(D) 249 the claimant brought judicial review proceedings against the local authority concerning a housing matter. After the authority breached court orders by failing to file an acknowledgment of service, statement of facts or grounds of defence on time, it was barred from further participation in the proceedings. Consequently, the court ordered that the claimant be provided with interim accommodation and later ruled that he was entitled to recover a substantial portion of his legal costs. It subsequently became apparent that five of the authorities cited and relied upon in the claimant’s statement of facts and grounds for judicial review (which had been settled by counsel, a second-six pupil), including purported decisions of the High Court and Court of Appeal, did not in fact exist. Attempts to brush off the non-existent authorities as “minor citation errors” and “cosmetic errors” were given short shrift by Ritchie J. Having decried the “appalling professional misbehaviour of the claimant’s solicitors and barrister” and stated in no uncertain terms that producing submissions based on fake authorities amounted to misleading the court, the judge made a wasted costs order against the claimant’s legal representatives. Matters did not end there. Both the barrister involved and her instructing solicitors have been referred to their respective regulatory bodies, and the case was referred to the Divisional Court under the Hamid jurisdiction for consideration of whether proceedings for contempt of court should be initiated.
On 6 June 2025 the Divisional Court (Dame Victoria Sharp P and Johnson J) handed down a judgment ([2025] EWHC 1383 (Admin)) in which it concluded that, although the threshold for contempt proceedings had been met in the case of the barrister involved, in the particular circumstances of the case (adumbrated at para 69) proceedings for contempt would not be issued. However, the court went on to stress that that decision was not to be treated as a precedent and warned that lawyers who did not comply with their professional obligations when putting material before the court risked severe sanction. In respect of the solicitors involved, the court found that the threshold for contempt proceedings was not met. Importantly, the court had this to say, at para 9:
“There are serious implications for the administration of justice and public confidence in the justice system if artificial intelligence is misused. In those circumstances, practical and effective measures must now be taken by those within the legal profession with individual leadership responsibilities (such as heads of chambers and managing partners) and by those with the responsibility for regulating the provision of legal services. Those measures must ensure that every individual currently providing legal services within this jurisdiction (whenever and wherever they were qualified to do so) understands and complies with their professional and ethical obligations and their duties to the court if using artificial intelligence. For the future, in Hamid hearings such as these, the profession can expect the court to inquire whether those leadership responsibilities have been fulfilled.”
In Bandla v Solicitors Regulation Authority [2025] EWHC 1167 (Admin); [2025] WLR(D) 268 the appellant was a former solicitor seeking to appeal against a decision of the Solicitors Regulation Authority (“SRA”) to strike his name from the roll of solicitors. In support of his appeal, the appellant purported to cite a large number of case authorities. However, investigations by the SRA revealed that some 25 of those cases did not in fact exist. The appellant denied that the fake citations arose as a result of the improper use of AI, contending that he had simply carried out a standard Google search for case authority supporting the propositions which he wished to put forward in advance of his appeal, but accepted that he had not “double-verified” the results of those searches. Unsurprisingly, Fordham J took a dim view of the citation of fake authorities to the court and exercised the discretion of the court to strike out the appellant’s grounds of appeal as an abuse of process on that basis. In addition, costs were awarded against the appellant on the indemnity basis.
In the light of those cases, which may be isolated incidents or the tip of an as yet unidentified iceberg (although the examples set out in the Appendix to the judgment of the Divisional Court in the Ayinde case may point towards the latter), there can be no doubt that seeking to rely upon fake authorities (whether hallucinated by a generative AI tool or otherwise) has potentially severe consequences for the litigants and practitioners involved. But matters do not end there, because the citation of fake authorities also has wide-ranging implications both for the integrity of the justice system and for the orderly development of the common law.
In regard to the former, the negative effects of the purported reliance upon fake authorities were elegantly explained by Judge Castel in the American case of Mata v. Avianca, Inc., 22-cv-1461 (S.D.N.Y. 22 Jun 2023) (a case involving the admitted use of ChatGPT by lawyers):
“Many harms flow from the submission of fake [authorities]. The opposing party wastes time and money in exposing the deception. The Court’s time is taken from other important endeavors. The client may be deprived of arguments based on authentic judicial precedents. There is potential harm to the reputation of judges and courts whose names are falsely invoked as authors of the bogus [judgments] and to the reputation of a party attributed with fictional conduct. It promotes cynicism about the legal profession and the…judicial system. And a future litigant may be tempted to defy a judicial ruling by disingenuously claiming doubt about its authenticity.”
In regard to the latter, Lord Bingham of Cornhill in Kay v Lambeth London Borough Council [2006] UKHL 10; [2006] 2 AC 465 referred to the doctrine of judicial precedent as the “cornerstone of our legal system” and “an indispensable foundation upon which to decide what is the law and its application to individual cases”. The integrity of the system of precedent is protected and promoted by the identification and publication of those cases which materially advance the development of the common law in law reports. It is beyond doubt that purported reliance upon non-existent cases in court proceedings poses challenges to the development of the common law.
By way of example, when a case is selected for publication in a series of law reports published by ICLR, all citations of authorities contained in the judgment, cited in argument or referred to in the parties’ skeleton arguments are rigorously checked for accuracy by (i) the law reporter preparing the report (a qualified barrister or solicitor) and (ii) a team of desk editors before being included in the published report. So, if you are citing an ICLR law report, you can be confident not only that you are complying with the terms of the Practice Direction (Citation of Authorities) [2012] 1 WLR 780, but also that any authority referred to therein is genuine and that you are relying on the most authoritative source of case law.
Difficulties potentially arise where fake case citations are not identified and find their way into a judgment which, although not reported in a series of law reports, is one of the thousands of unreported judgments published via The National Archives each year. What is to stop that judgment, and the fake cases contained therein, from being relied upon as precedent for a particular proposition in a subsequent case, with a metastatic effect on the integrity of the common law? Making publicly available a skeleton argument containing references to fake cases, as part of the drive for open justice and transparency, poses similar dangers.
Even where the fake citations are identified as such within the judgment, issues remain: a fake citation is structurally indistinguishable from a genuine one, so search and retrieval tools, whether AI-augmented or not, might extract and compile it as though it were any other case citation.
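To illustrate the point, here is a minimal sketch in Python, assuming a simplified regular expression for neutral citations and an invented example case (the pattern and the “Smith v Jones” citation are illustrative assumptions, not the workings of any real citator or retrieval tool). A pattern-based extractor matches a fabricated citation just as readily as a genuine one:

```python
import re

# A simplified pattern for High Court neutral citations such as
# "[2025] EWHC 1040 (Admin)". The pattern is an illustrative assumption,
# not that used by any real citator or retrieval tool.
NEUTRAL_CITATION = re.compile(r"\[\d{4}\] EWHC \d+ \([A-Za-z]+\)")

text = (
    "See R (Ayinde) v Haringey London Borough Council [2025] EWHC 1040 (Admin) "
    "and the entirely invented Smith v Jones [2024] EWHC 9999 (Admin)."
)

# Both the genuine citation and the invented one match equally well,
# because the pattern validates form, not existence:
for match in NEUTRAL_CITATION.finditer(text):
    print(match.group())
# prints:
#   [2025] EWHC 1040 (Admin)
#   [2024] EWHC 9999 (Admin)
```

Only verification against an authoritative source, such as the law reports themselves, can establish that a citation refers to a decision that actually exists.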
There can be no doubt that generative AI has the potential to be a powerful tool in assisting the work of lawyers. But it must be remembered that with great power comes great responsibility. Ultimately, the responsibility remains on the shoulders of:
- Individual lawyers to check the output of any AI tool used in their workflows, so that only verified and genuine material is placed before the courts in accordance with their professional obligations under the Legal Services Act 2007, the applicable codes of conduct of their approved regulators, and the terms of the Practice Direction on the citation of authorities. Indeed, as the Divisional Court observed in para 7 of its judgment in Ayinde, practitioners who seek to rely upon artificial intelligence to conduct legal research are under a professional duty to check the accuracy of that research against authoritative sources before using it in the course of their professional work. The court went on to identify the official law reports produced by ICLR as one such authoritative source.
- Those responsible for the education, regulation and supervision of lawyers to ensure that individuals practising law are fully aware of their responsibilities in this area and of the risks that the inappropriate use of AI poses to the integrity both of the legal profession and of the common law.
In conclusion, to borrow again from the US in the Standing Order Regarding Use of Artificial Intelligence, issued by Magistrate Judge David L. Horan in Willis v US Bank National Association (15 May 2025, US District Court, Northern District of Texas, Dallas Division): “the use of artificial intelligence must be accompanied by the application of actual intelligence in its execution.” In this regard ICLR remains the intelligent choice for authoritative legal research.
Giovanni D’Avola,
Assistant Editor of the Weekly Law Reports
Featured image: an approximation of what an elephant might look like, from a medieval ms in the Sloane collection, which is in the public domain, reproduced with thanks.