The AI Accountability Reckoning: Why Lawyers Cannot Delegate Professional Responsibility to Algorithms
In this guest post, Jean Gan reflects on the question: “When AI gets it wrong, who is accountable?”
The legal profession stands at an inflection point. As artificial intelligence tools become embedded in legal workflows, from research and document drafting to contract review and litigation strategy, a fundamental question emerges: Who is accountable when AI gets it wrong?
The answer, as courts and bar associations worldwide are making abundantly clear, is unequivocal: the lawyer.
This accountability reckoning signals a critical moment for the legal profession. Professional responsibility cannot be outsourced to algorithms, no matter how sophisticated the technology. The courts, regulatory bodies, and bar associations are enforcing this principle with unprecedented rigour, and the consequences are substantial.
The Growing Wave of AI Sanctions
In the past two years, courts across the United States, UK, and Ireland have issued sanctions against lawyers who submitted AI-generated fabrications as genuine legal authority. The cases share a troubling pattern: fabricated case citations, non-existent judicial opinions, and invented quotes presented as binding precedent.
A striking measure of the scale of the problem comes from the work of legal researcher Damien Charlotin, who maintains a publicly accessible and continuously updated database tracking judicial decisions involving hallucinated case law, false quotations or other fabricated facts produced by AI tools. As of 2 October 2025, the database recorded roughly 380 such decisions. Just two months later, the number had climbed to 684. The database can be filtered by jurisdiction, legal role and type of error, revealing not only how widespread the problem has become but also how quickly it is accelerating.
Significant US Decisions on Fake Authorities
Mata v. Avianca, Inc.
The most infamous case emerged in 2023. Attorneys Peter LoDuca and Steven Schwartz used ChatGPT to generate legal research in opposition to Avianca’s motion to dismiss. The problem: the cases they cited did not exist. Judge P. Kevin Castel found the conduct inexcusable, and the law firm paid a $5,000 penalty jointly and severally with the individual attorneys. More significantly, the court directed the attorneys to provide copies of the sanctions order and hearing transcript not only to their client but to every judge falsely identified as having authored the fabricated opinions: a public reprimand on a national scale.
Frankie Johnson v. Jefferson S. Dunn
The fallout intensified in July 2025. Butler Snow LLP, an Am Law 200 firm, faced far more severe consequences. In a 51-page sanctions order, Judge Anna Manasco of the Northern District of Alabama sanctioned three attorneys for filing motions containing fabricated legal authority generated by AI. The Judge declared that such misconduct “demands substantially greater accountability than the reprimands and modest fines that have become common.”
The sanctions included public reprimands requiring the attorneys to notify the presiding judge and opposing counsel in every pending case in which they are counsel of record. The court referred the matter to the Alabama State Bar’s General Counsel for possible disciplinary proceedings. Critically, the firm itself avoided sanctions because it had established an AI committee, clear policies, and verification protocols that the culpable attorneys violated, demonstrating that institutional governance structures can mitigate the consequences of individual misconduct.
The Ayinde Decision and Fake Citations in England and Wales
It is helpful to acknowledge the leading domestic authority. In Ayinde, the High Court addressed the use of AI-generated case references that contained fabricated material. The judgment confirmed that presenting fictitious authorities is a serious breach of professional obligations and that responsibility rests entirely with the legal representative, regardless of whether the error arose from reliance on an AI tool.
The Ethical Framework: What the Rules Require
United States: Regulatory and Ethical Guidance on Lawyer Use of AI
The American Bar Association Model Rules
The American Bar Association’s Formal Opinion 512, issued in July 2024, established the foundational ethical framework for lawyers using generative AI. Under Model Rule 1.1, lawyers must possess the legal knowledge, skill, thoroughness, and preparation reasonably necessary for the representation.
The ABA clarifies that competence now requires lawyers to develop a reasonable understanding of the capabilities and limitations of the specific AI technology they employ. This does not demand expertise in machine learning or neural networks. Rather, it requires knowing what questions a tool can reliably answer, what biases might exist in its training data, and when independent verification is essential.
The competence standard leaves no room for ignorance of AI’s limitations or of the ethical guidance governing its use. Formal Opinion 512 states that uncritical reliance on content created by a generative AI tool can result in inaccurate legal advice or misleading representations to courts and third parties. A lawyer’s reliance on, or submission of, a tool’s output without appropriate verification may therefore breach the duty of competent representation.
Confidentiality Obligations in the AI Age
Model Rule 1.6 imposes strict confidentiality duties that become particularly critical when using AI tools. Lawyers must assess whether AI applications expose confidential information to third parties, whether such information could be incorporated into training datasets, or whether it might inadvertently appear in future outputs.
Singapore’s draft guidance on generative AI echoes this approach, emphasising data classification protocols, the avoidance of free-to-use tools for confidential matters, and the need for enterprise-level solutions to include contractual protections against training use.
Transparency and Client Communication
Model Rule 1.4 requires lawyers to consult with clients about the means of accomplishing their objectives. This includes discussing AI use where it may materially affect representation, cost, or decision-making. Many firms are now incorporating AI usage disclosures directly into engagement letters to set expectations on when AI will be used, what data will be processed, and how confidentiality will be protected.
Supervisory Responsibilities and Firm-Wide Governance
Model Rules 5.1 and 5.3 place responsibility on supervising attorneys to ensure compliance with professional conduct rules when AI systems are used. This includes establishing internal AI policies, providing staff training on ethical and security considerations, and supervising external AI service providers.
The Butler Snow case demonstrates the value of institutional governance. The firm avoided sanctions because it had implemented an AI committee, formal policies, and verification protocols that the individual attorneys failed to follow. This shows how structured governance can mitigate organisational exposure when individual lawyers misuse AI tools.
England and Wales: Professional and Judicial Guidance on AI Use
The Law Society of England and Wales
Across England and Wales, a growing suite of domestic guidance now addresses the risks, duties and expectations surrounding the use of generative AI in legal practice. This reflects a wider recognition that misuse of AI in litigation and advisory work has already produced real consequences, particularly when fabricated authorities or inaccurate outputs have been placed before the courts.
The Law Society’s publication Generative AI: the essentials provides a structured overview of both opportunities and risks associated with generative AI tools. It explains the core technology, the regulatory landscape, and the professional duties that remain unchanged even where AI is used to support legal work. The guide gives detailed treatment to confidentiality, data protection, intellectual property, cyber security and the responsibility to review and verify AI-generated content. It also highlights the regulatory developments following the Ayinde and Al Haroun cases, identifying accuracy, fact checking and the duty to the court as central pillars for any solicitor who interacts with AI systems. This is supported by practical checklists aimed particularly at smaller firms and in-house teams to help them assess use cases, mitigate risk and maintain full professional oversight.
The Bar Council
The Bar Council’s updated guidance of 26 November 2025 speaks directly to the professional duties of barristers in light of recent High Court cases involving fabricated authorities and misleading pleadings. It stresses that LLMs are predictive systems, not research tools, and warns of hallucinations, bias, confidentiality risks and cyber vulnerabilities. Responsibility for accuracy remains entirely with the barrister. Verification of citations and legal propositions is described as mandatory, supported by references to Ayinde and the wasted costs and disciplinary risks of failing to do so. Practitioners are also reminded that authoritative resources in the Inns of Court libraries remain fully accessible and should anchor all research.
This reflects the wider direction of UK guidance. Following several troubling cases involving fake AI-generated citations, Barbara Mills KC noted that recent judgments highlight the serious risks AI misuse poses to public confidence in justice. The consistent judicial message is simple: poor work cannot be defended by saying the AI suggested it.
Judicial Office Guidance
Judicial expectations have also been clarified. The Judicial Office updated its Artificial Intelligence (AI) Guidance for Judicial Office Holders on 31 October 2025, setting out in detail how judges and judicial staff should understand, evaluate and manage AI tools in the preparation of judgments or administrative work. The guidance stresses that public AI chatbots do not operate from authoritative sources and must not be relied upon for legal research. It warns that hallucinations can involve invented cases, incorrect statutory propositions or misleading factual narratives.
Judges are reminded that they retain personal responsibility for all material produced in their name, and that any information taken from AI tools must be independently verified. The guidance also recognises that litigants in person increasingly rely on AI chatbots and advises judges to make appropriate inquiries where submissions appear inaccurate or machine-generated. It also addresses emerging risks such as deepfakes and the use of concealed “white text,” reinforcing the need for vigilance and scrutiny.
A Consolidated Domestic Position
Taken together, these sources illustrate a coherent domestic position. AI may offer efficiency and support in some tasks, but it cannot replace independent legal judgement or the duty to ensure accuracy. The profession is expected to understand how these systems work, maintain strict control over inputs and outputs, safeguard confidentiality, and ensure that any generated content is carefully reviewed against authoritative legal materials. The clear message across all guidance is that responsibility for legal work remains with the practitioner, and that errors arising from uncritical reliance on AI will be treated as serious professional misconduct.
Building an Accountability Culture
Law firms are responding to this landscape by institutionalising AI governance. Eighty percent of Am Law 100 firms have now established AI governance boards, moving from experimental adoption to enterprise-wide transformation.
Practical Governance Elements
Clear verification protocols: Firms are creating layered review requirements. AI-assisted research memos must be checked against primary sources. AI-generated contract clauses must be validated against precedent. AI-drafted briefs must be reviewed for hallucinations and accuracy before filing.
AI tool cards: Documentation of each system’s capabilities, limitations, and appropriate use cases builds competency and ensures consistent application across the firm.
Proactive client disclosure: Firms that communicate upfront about AI use, including safeguards for confidentiality and verification procedures, build client trust and reduce malpractice exposure.
Bias audits: Regular reviews to identify whether certain demographics, jurisdictions, or legal arguments are underrepresented or mischaracterised in a system’s outputs.
The Regulatory Landscape Is Tightening Globally
United States
Beyond ABA Formal Opinion 512, individual state bars have issued their own guidance. The Florida Bar’s Opinion 24-1 (January 2024) confirms that lawyers may use AI provided they protect client confidentiality, provide accurate and competent services, avoid improper billing practices, and comply with advertising restrictions. The New York City Bar Association’s Formal Opinion 2024-5 addresses similar considerations.
Singapore
Singapore has adopted a principles-based, innovation-driven approach. The Law Society of Singapore’s 4R Decision Framework, published in May 2025, provides structured guidance based on four considerations: Repetition, Risk, Regulation, and Reviewability. The framework encourages AI use for repetitive, low-risk tasks whilst cautioning against deployment in high-stakes matters involving nuanced legal interpretation or confidential information.
Singapore’s draft Guide for Using Generative AI in the Legal Sector outlines three core principles: professional ethics, confidentiality, and transparency, emphasising a “lawyer-in-the-loop” approach in which the lawyer retains ultimate responsibility for all work product.
European Union
The EU Artificial Intelligence Act represents the world’s first comprehensive AI legislation, introducing a risk-based classification system. Key provisions for legal professionals include:
AI literacy requirements effective from 2 February 2025, requiring organisations to ensure staff have sufficient understanding of AI systems tailored to their roles
Prohibited practices enforceable since February 2025, including biometric categorisation based on sensitive characteristics and manipulative systems
High-risk system obligations phasing in from 2 August 2026, requiring transparency, human oversight, and risk management for AI systems used in legal services and justice administration
Extraterritorial scope mirroring GDPR, meaning any lawyer serving EU clients may fall under its purview
The August 2025 milestone saw the operationalisation of governance provisions and financial penalties of up to €35 million or 7% of global turnover for non-compliance.
Israel and Japan
The Israeli Bar Association’s National Ethics Committee published its Ethical Guidelines On Using AI in May 2024, emphasising that lawyers must critically review and verify AI-generated content before use, avoid inputting personal or sensitive client information, and confirm facts with external references.
Japan’s AI Guidelines for Businesses create a non-binding “soft law” framework grounded in human-centric principles, calling for executive-level responsibility and governance embedded into organisational structures.
The Path Forward
The legal profession’s AI transformation is inevitable. The question is whether that transformation will be responsible and accountable.
What courts and regulators are signalling is that the centuries-old framework of professional responsibility (competence, diligence, confidentiality, and supervision) remains the lodestar. AI changes how these duties are discharged, not whether they apply.
As the UK Bar Council articulates: “The best-placed barristers will be those who make the efforts to understand these systems so that they can be used with control and integrity.”
Lawyers who thrive in this environment will be those who embrace AI as a powerful tool for efficiency and insight whilst maintaining unwavering accountability for every output that bears their name. The algorithm may draft. The lawyer must answer. The technology may assist. The professional responsibility is yours alone.