28 fake citations: How AI failed major law firm in court
Sullivan & Cromwell admitted AI hallucinations produced fictitious case citations in a federal bankruptcy court filing
Sullivan & Cromwell isn't a firm that makes careless mistakes, or at least, it wasn't supposed to be. The 145-year-old Wall Street institution has now admitted to a federal bankruptcy court that an April filing contained AI-generated fabrications, including case citations that don't exist.
The errors appeared in an April 9 motion filed in the US Bankruptcy Court for the Southern District of New York, where the firm represents liquidators from the British Virgin Islands pursuing claims against Prince Group and its owner, Chen Zhi.
US authorities allege Chen directed large-scale scam operations across Southeast Asia targeting victims worldwide; he was detained in Cambodia earlier this year and later repatriated to China.
Opposing counsel at Boies Schiller Flexner identified the problems first. Their review found at least 28 erroneous citations, including quotes attributed to the court itself that do not exist, case law that was mischaracterised, and at least one citation that referenced a different decision in an entirely different circuit. Language attributed to the US Bankruptcy Code, they said, could not be located anywhere in the statute.
Sullivan & Cromwell withdrew the original motion and submitted a corrected version, with restructuring head Andrew Dietderich writing directly to Judge Martin Glenn to acknowledge the firm's AI use policies had not been followed.
The law firm’s internal rules require lawyers to complete two training modules before gaining access to generative AI tools, with completion tracked and verified. The training, the firm noted in its court letter, specifically flags the risk of hallucinated citations and instructs lawyers to "trust nothing and verify everything".
That guidance was not followed in the drafting of the April filing. The firm said it could not determine which lawyers were responsible. Its internal review also surfaced minor errors in other filing drafts, which it attributed to ordinary human mistakes rather than AI.
The gap between policy and practice is exactly what courts have warned about: the firm's own training materials anticipated this scenario, and it happened anyway.
Sullivan & Cromwell's disclosure lands in a legal landscape already strained by AI-related missteps. US courts have sanctioned lawyers for submitting AI-fabricated citations in their filings, and an Australian lawyer lost their practising certificate last year over similar errors.
Law schools have started mandating instruction on generative AI, and senior judges have issued direct warnings that misuse threatens the integrity of proceedings. Some courts are simultaneously piloting their own AI systems to manage caseload pressure, creating an environment where the technology is advancing faster than the professional norms designed to govern it.
Defendants in the Prince Group case asked the court to adjourn a scheduled hearing and hold a status conference, arguing the late correction was prejudicial after they had already submitted objections based on the flawed original filing.
