A new Sixth Circuit decision on AI hallucinations turns on a quieter failure mode than the one most lawyers have learned to spot. United States v. Farris, decided April 3, 2026, did not involve fabricated case citations. The cases the brief relied on were real, and they were relevant to the issues at stake. The hallucinations lived inside those citations: invented quotations, misstated holdings, and overreaching descriptions of what the cases stood for. The brief was produced with Thomson Reuters CoCounsel, a legal research tool. Farris is a more demanding test of what diligence looks like when the citation is correct but the content is not.
The hallucination that looks like real research
Most coverage of AI hallucinations in court filings has focused on the headline failure. A model invents a case name, generates a plausible reporter and page citation, and an attorney files the brief without checking. The signal that something is wrong is the citation itself, which collapses on contact with a citator. Many firms have built verification habits around catching exactly that.
Farris involves a different pattern. The cases cited in the filing were genuine cases that bore on the legal questions at issue. The brief did not invent the cases. What the brief did invent was substantive content attributed to those cases. According to the court, the tool produced quotations that did not appear in the cited cases, or in any other case the court could locate. It described holdings the cited cases never reached. It extended the legal propositions the cases were said to stand for beyond what the cases actually held.
Each of these is a familiar kind of error in other contexts. Lawyers regularly argue interpretations and extensions of case law, and the line between aggressive advocacy and a misstatement of authority is a question Rule 11 has long policed. What is new in Farris is the source. When the misstatement comes from an AI tool and the brief presents it as a direct quotation, the failure looks less like advocacy and more like a representation of fact that cannot be backed up.
Reliance and Rule 11 reasonableness
Farris also presses a question that has been hovering over the hallucination cases. What does Rule 11 reasonableness look like when the lawyer is relying on a tool positioned as authoritative? The brief in Farris was not generated with a consumer chat product. It was produced through CoCounsel, the Thomson Reuters research system built for legal work, and lawyers extend more trust to that kind of tool than they would to a general-purpose model. That higher trust shapes how carefully the output gets checked.
Lawyers still carry the duty to verify. What shifts is the cognitive baseline. A lawyer who would not sign a brief drafted by a consumer chat tool without character-by-character review may approach a brief drafted by a vetted research system with less skepticism. Understanding the limits of any tool in use, including whether it is capable of producing quotations that do not exist in the source, becomes part of competent practice.
Rule 11 measures diligence by reasonableness, and reasonableness is shaped by what the profession knows about the technology. The hallucination decisions so far have treated AI-generated inaccuracies as something close to strict liability: the lawyer answers for whatever an AI tool puts in the brief. As the profession develops a more granular understanding of what different AI systems can and cannot do, the reasonableness analysis is likely to follow. A system that quietly hallucinates inside otherwise valid citations calls for a different kind of diligence than one that openly invents case names.
How auditable systems change diligence
Catching the Farris pattern is not just a matter of reading more carefully. Reading the underlying cases catches misstated holdings and over-extended legal propositions. A hallucinated quotation is harder for even a careful reader to catch, because the quoted text is internally coherent and looks at a glance like something a competent court would write. Catching it requires a character-by-character match against the underlying case.
That kind of match is a job suited to a computer rather than a lawyer's eye. Systems with direct access to the underlying case law can confirm that a string presented as a quotation actually appears in the cited case, and produce an audit trail showing where each quoted passage came from. The verification becomes a deterministic check rather than a probabilistic judgment. The lawyer in the loop still has to be the lawyer, but the surface area of the verification work changes. Reviewing a system-generated audit of matched quotations is a different task than sweeping the brief by hand.
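To make the shape of that check concrete, here is a minimal sketch of deterministic quote verification. It is not how CoCounsel or any particular product works: the bracketed citation format, the case_texts lookup, and the verify_quotes helper are all invented for illustration, standing in for a real brief parser and a real case-law database.

```python
import re
from dataclasses import dataclass

@dataclass
class QuoteCheck:
    quote: str
    cited_case: str
    verified: bool

def normalize(text: str) -> str:
    """Collapse whitespace and smart quotes so harmless formatting
    differences do not defeat an exact-match comparison."""
    text = text.replace("\u201c", '"').replace("\u201d", '"')
    text = text.replace("\u2018", "'").replace("\u2019", "'")
    return " ".join(text.split())

def verify_quotes(brief_text: str, case_texts: dict[str, str]) -> list[QuoteCheck]:
    """Check each quoted passage in a brief against the full text of the
    case it is attributed to. Assumes a toy citation format in which a
    quotation is followed by a sentinel like [CITE: Smith v. Jones];
    a real parser would handle actual Bluebook citations."""
    pattern = re.compile(r'"([^"]+)"\s*\[CITE:\s*([^\]]+)\]')
    results = []
    for match in pattern.finditer(brief_text):
        quote, case_name = match.group(1), match.group(2).strip()
        source = case_texts.get(case_name, "")
        # Deterministic check: the quoted string either appears
        # verbatim in the source text or it does not.
        results.append(QuoteCheck(quote, case_name,
                                  normalize(quote) in normalize(source)))
    return results

# A fabricated quote fails the check even though the cited case is real.
brief = ('The court held that "waiver must be knowing and voluntary" '
         '[CITE: Smith v. Jones] and that "mere silence is consent" '
         '[CITE: Smith v. Jones].')
cases = {"Smith v. Jones": "We hold that any waiver must be knowing and "
                           "voluntary to bind the parties."}
for check in verify_quotes(brief, cases):
    flag = "verified" if check.verified else "NOT FOUND: verify by hand"
    print(f'{check.cited_case}: "{check.quote}" -> {flag}')
```

The output is the point: every quoted passage either matches its source exactly or lands on a short list the lawyer inspects by hand, which is the audit trail described above.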
None of this delivers a perfect brief. Lawyers err. AI tools make mistakes. Professional review still matters. What changes is what counts as reasonable practice. Where the system has surfaced an audit trail, the lawyer's diligence may look different than it would with a system that offers no such guardrails. The eventual answer to what counts as reasonable diligence under Rule 11 will likely turn, at least in part, on those structural distinctions among AI tools.
Farris is one decision in a still-forming doctrine. What it usefully clarifies is that hallucination is not a single problem with a single shape. The kind Farris addresses, fabricated content inside genuine citations, is harder to spot by reading than the fake-citation failure lawyers have already learned to look for.
For practicing attorneys, the upshot is to know the architecture of the AI tools in use, not just their marketing. As the technology becomes more differentiated, the reasonable-diligence question will likely grow more specific. How much help a system gives the lawyer in seeing what to check is one of the questions the next round of cases is likely to take up.
