NEW YORK: Judges around the world are grappling with an unexpected new problem: legal filings drafted with the help of artificial intelligence that cite cases that don’t exist.
These flawed documents are riddled with “hallucinations,” fabricated facts or case law invented by AI tools, and their rise is becoming an urgent concern for courts, lawyers, and professionals across industries experimenting with the technology at work.
Damien Charlotin, a French data scientist and lawyer who studies the intersection of technology and law, has cataloged nearly 500 such filings globally over the past six months. Many of them were submitted in U.S. courts by self-represented litigants, but some came from practicing attorneys and even major companies. “Even the more sophisticated player can have an issue with this,” said Charlotin, a senior research fellow at HEC Paris. “AI can be a boon. It’s wonderful, but also there are these pitfalls.”
Charlotin’s database tracks cases where judges determined that lawyers or litigants had relied on generative AI systems that produced nonexistent citations, inaccurate summaries, or misleading arguments. In most instances, judges issued warnings, but in some cases they imposed fines or sanctions. In one notable example, a federal judge in Colorado ruled that a lawyer for MyPillow Inc. had submitted a brief containing nearly 30 defective citations in a defamation case involving the company’s founder, Mike Lindell.
The court filings reflect a larger trend: the growing use of AI tools at work, often without clear guidelines or sufficient oversight. Across professions, people are turning to chatbots and digital assistants to speed up research, draft documents, or brainstorm ideas. But as the legal world is learning, AI’s confident tone can mask serious errors, and in high-stakes situations those mistakes can have real consequences.
Maria Flynn, CEO of the nonprofit Jobs for the Future, said AI should be treated like a junior assistant, not a replacement for human judgment. “Think about AI as augmenting your workflow,” she said. Flynn often uses her organization’s in-house AI system to help prepare for meetings or develop discussion questions from articles she shares with her team. “Some of the questions it proposed weren’t the right context for our organization,” she said. “I was able to give it feedback, and it came back with better, more thoughtful questions.”
Still, Flynn cautions that AI’s output can be unreliable. When she asked the system to compile a list of her organization’s past projects, it mixed up completed work with pending proposals. “In that case, our AI tool wasn’t able to identify the difference between something that had been proposed and something that had been completed,” she said. She caught the error only because she knew the background. “If you’re new in an organization, ask coworkers if the results look accurate,” she suggested.
Legal experts say the most significant risk is complacency. “People assume because it sounds so plausible that it’s right,” said Justin Daniels, an Atlanta-based attorney at Baker Donelson. “Having to go back and check all the cites, or reread the contract an AI summarized, that’s inconvenient and time-consuming. But as much as you think the AI can substitute for that, it can’t.”
Another emerging concern is privacy. Many AI systems collect or store user inputs to improve future responses, meaning sensitive data can inadvertently be exposed. Flynn warned against uploading confidential material such as client information, proprietary data, or unreleased financials into public AI platforms. “It doesn’t discern whether something is public or private,” she said. “Once you’ve entered that information, it could resurface in someone else’s results.”
Even simple workplace applications of AI can carry legal risks. Many employees use AI-powered note-taking tools that record meetings or generate summaries, but in some jurisdictions, recording without consent is illegal. “Before using AI notetakers, pause and consider whether the conversation should remain privileged and confidential,” said Danielle Kays, a Chicago-based partner at Fisher Phillips. She recommends checking with legal or human resources departments before deploying such tools in sensitive discussions, such as performance reviews or legal investigations.
Despite these risks, experts emphasize that avoiding AI entirely is not the answer. As more organizations integrate the technology into daily operations, understanding its strengths and weaknesses is becoming a critical skill. “The largest potential pitfall in learning to use AI is not learning to use it at all,” Flynn said. “We’re all going to need to become fluent in AI, and taking the early steps of building your familiarity, your literacy, your comfort with the tool is going to be critically important.”