Client Alert: Federal Court Rules That AI-Generated Documents Are Not Protected by Attorney-Client Privilege — What Banks, Fintechs, and Nonbank Lenders Need to Know
A First-of-Its-Kind Ruling Highlights the Risks of Using Consumer AI Tools for Sensitive Legal and Business Matters
On February 10, 2026, Judge Jed S. Rakoff of the U.S. District Court for the Southern District of New York ruled that documents a criminal defendant generated using Claude, a publicly available generative AI platform — and potentially the information the defendant entered into Claude — are not protected by attorney-client privilege or the work product doctrine.
The ruling in United States v. Heppner, 2026 U.S. Dist. LEXIS 32697 (S.D.N.Y. Feb. 17, 2026), is believed to be the first of its kind in the country, and its reasoning carries significant implications for any business that uses — or is considering using — AI tools in connection with legal matters, regulatory inquiries, or internal investigations. It may also have implications for other legal protections that require companies to take reasonable measures to maintain the confidentiality of information, such as trade secret protection.
In light of this ruling, companies should approach generative AI tools with caution, including by reviewing the privacy policies that govern them, before exposing their most sensitive information to such tools. Companies and individuals who wish to use AI tools to facilitate interactions with their attorneys should also consult those attorneys before doing so.
What Happened
Bradley Heppner, a financial services executive, was indicted on securities and wire fraud charges in October 2025. After learning he was the target of a government investigation and retaining defense counsel, Heppner used the consumer version of Claude to research legal questions, organize information related to his defense, and generate written reports outlining potential defense strategies to discuss with his attorneys. Some of the information he fed into the AI tool had originally been obtained from his attorneys, and he did not discuss this use of the tool with them beforehand.
When the FBI executed a search warrant at Heppner's home in connection with his arrest, agents seized documents generated through the AI tool. Heppner's attorneys argued the documents were protected by attorney-client privilege, but the court ultimately decided that they were not and permitted the government to use them in its case against Heppner.
Why the Court Rejected Privilege
In general, the attorney-client privilege protects confidential communications between a client and their attorney where the purpose of the communication is to seek or receive legal advice. The court reasoned that the privilege did not apply to Heppner's use of Claude for several reasons, including:
The communication was not with an attorney. Claude is not a licensed attorney and cannot form an attorney-client relationship with a user. Any communication with the AI tool was therefore not between a client and their attorney.
The communication was not confidential. For attorney-client privilege to attach, the parties must intend for the communications to be confidential. This is why exposing communications with an attorney to a third party can destroy privilege. In this case, Anthropic, the company behind the AI tool, expressly reserves the right to use data entered into and produced by Claude to train its models and to disclose such data to third parties. As such, the court concluded that Heppner had “no reasonable expectation of confidentiality” with respect to his use of Claude.
Not for the purpose of obtaining legal advice. Although Heppner's lawyers argued that he communicated with Claude for the “express purpose of talking to counsel,” the court focused on the fact that Heppner used Claude of his own volition and not at his attorney’s direction. Claude, like most major generative AI models, expressly disclaims providing legal advice.
Sending AI outputs to a lawyer after the fact does not establish privilege. The court was clear that documents that are not privileged at the time they are created do not become privileged simply because they are later transmitted to an attorney.
Why This Matters
Many companies routinely use generative AI to perform research, compose emails and documents, organize information, and record and summarize internal and external communications. Some also permit their employees to freely use public or non-enterprise versions of AI tools in their day-to-day work. Many of these tools (including some enterprise versions) employ privacy policies and practices similar to Claude's, suggesting that information fed to or obtained from a wide variety of AI models could be treated as a non-confidential disclosure to a third party. While the court in Heppner applied this rationale to reject an assertion of attorney-client privilege, the reasoning could also have ramifications in other contexts that condition legal protection on confidentiality — for example, trade secrets and other sensitive business information subject to statutory or regulatory protections apart from attorney-client privilege.
It is important to note that this is a single district court ruling and is not binding on courts in other jurisdictions. Indeed, another court in the Eastern District of Michigan reached the opposite conclusion in a civil case around the same time as Judge Rakoff's ruling, finding that a litigant had properly asserted work product protection over materials generated using ChatGPT. Warner v. Gilbarco, Inc., No. 2:24-cv-12333, at 11 (E.D. Mich. Feb. 10, 2026).
However, this simply illustrates that the law in this area is developing rapidly and that AI use should be approached with caution when sensitive information is involved.
Practical Takeaways
In light of this ruling, companies should consider the following steps:
Review and update AI governance frameworks and usage policies. Companies should establish or enhance AI governance frameworks where third-party models are used, and consider whether policy changes should be made with respect to employee use of AI tools. These policies should make clear that AI platforms that have not been vetted and specifically sanctioned by the company should not be used, or should be used only for limited, approved purposes.
Evaluate your AI tools and their terms of service. Not all AI platforms are created equal from a confidentiality standpoint. Consumer and individual-tier AI subscriptions generally permit very broad data collection and use by the provider, as well as disclosure to third parties (including the government) by default. Companies considering AI for any work that touches sensitive or legal matters should ensure they are using tools with appropriate privacy protections, data collection and use restrictions, and other safeguards.
Train employees on the limits of AI confidentiality. Many employees may not realize that their AI conversations are not private and may be seen as equivalent to sharing information with an unaffiliated third party with no obligation to maintain confidentiality.
Involve legal counsel before using AI for legal or regulatory work. The court's reasoning suggests that operating under an attorney’s direction in the use of AI tools is a key factor in preserving privilege. To the extent employees need to use AI in connection with legal or regulatory matters, that use should be done at the direction and under the supervision of legal counsel.
The central lesson of United States v. Heppner is not that third-party generative AI and legal privilege are inherently incompatible. Rather, it illustrates that third-party AI tools are like any other vendor-provided technology: they should be evaluated carefully for risk and fitness for purpose, and their use should be subject to appropriate oversight and internal controls. With the right policies, training, and technology choices, companies can continue to capture the operational benefits of AI while protecting the confidentiality of their most sensitive communications.
If you have any questions about the Heppner ruling and its implications, or your company’s use of AI tools, please reach out to Chris Napier or Shelby Schwartz.
About the Authors
Chris Napier is a Partner at Mitchell Sandler. His practice focuses on providing regulatory counseling, strategic advice and representation during government enforcement matters, including matters involving commercial, consumer and alternative credit products; money transmission and payments; deposit issues; and partnerships between fintech companies, depository institutions, and lenders.
Shelby Schwartz is Counsel at Mitchell Sandler. Her practice focuses on financial regulatory and compliance matters, with a concentration on deposit accounts, financial data privacy, and state lending laws. She advises a wide variety of financial services providers, from banks to financial technology companies. Shelby has successfully assisted clients in responding to regulatory inquiries and enforcement matters, including those brought by the Consumer Financial Protection Bureau, the Department of Justice, and various state regulators. She regularly assists clients in assessing their deposit account fee structures and deposit account agreements, analyzing data breach obligations, developing privacy policies, and developing financial products and services within appropriate regulatory models.