The Hidden Legal Risk in Generative AI: Court Rules Communications With AI Platform Are Not Protected From Disclosure
The United States District Court for the Southern District of New York recently became the first court in the nation to address whether a person's communications about legal issues with a generative AI platform are protected from disclosure in litigation. The court found that they are not.
In United States of America v. Bradley Heppner, 25-cr-00503-JSR, a district court judge ruled that a criminal defendant's written exchanges with the generative AI platform Claude were not protected by attorney-client privilege or the work product doctrine, even though Mr. Heppner had communicated with Claude after receiving a grand jury subpoena and later showed those communications to his lawyer.
This ruling has significant implications for parties to legal proceedings and should change how businesses and executives think about using public AI tools in connection with legal matters.
What happened?
After Mr. Heppner received a grand jury subpoena, he used Claude to generate written reports outlining a potential defense strategy and later showed those documents to his lawyer. Mr. Heppner argued that the reports should be privileged (so that he would not have to turn them over in discovery) because they were created in anticipation of his indictment, they were based on information learned from his attorney, and they were shared with his attorney.
The court disagreed.
The court explained that the attorney-client privilege requires a communication between a client and their attorney that is intended to be confidential and is for the purpose of obtaining legal advice. The court then held that the attorney-client privilege did not apply to Mr. Heppner's communications with Claude for several reasons:
- Not an attorney. An AI program is not a legal professional and so there can be no attorney-client relationship. The attorney-client privilege protects a "trusting human relationship" with a licensed professional, not written exchanges with software.
- Not confidential. Exchanges with Claude were not confidential because the AI platform's privacy policy allowed the company to collect, retain, use, and disclose user data, including in connection with litigation. Additionally, there is no reasonable expectation of confidentiality in communications that a person has voluntarily shared with such a platform.
- Not for legal advice. Although the court found this issue to be a closer call, it ultimately concluded that Mr. Heppner did not intend to use Claude for legal advice because he chose to interact with Claude on his own, rather than at the direction of his attorney, and because Claude itself disclaims providing legal advice. The court left open the question of whether use of AI at the direction of counsel might qualify, noting that if counsel had directed Mr. Heppner to use Claude, then Claude "might arguably be said to have functioned in a manner akin to a highly trained professional who may act as a lawyer's agent within the protection of the attorney-client privilege."
- Sharing results with counsel insufficient. The court also expressly rejected Mr. Heppner's argument that the documents became privileged when he shared them with his lawyer. Non-privileged communications do not acquire protection merely because they are transferred to an attorney.
Finally, the court concluded that the communications were not protected by the work product doctrine. The work product doctrine protects materials prepared by or at the direction of an attorney in anticipation of litigation or for trial. Again, the court found that Mr. Heppner's exchanges with Claude were not protected because Mr. Heppner was acting on his own when he created the AI documents and the documents did not disclose his attorney's strategy.
How can individuals and businesses mitigate their risk?
Even if an individual or business is not currently involved in litigation, this ruling is a reminder that communications with AI may be discoverable later, so communicating with AI about legal matters carries risk. Individuals and businesses that use public AI tools for legal matters should take steps to mitigate that risk:
- Don't input attorney communications or other privileged information into public AI platforms since communications with public AI tools about legal matters may not be privileged.
- Don't use AI notetakers in conversations related to legal matters. Even if the purpose of the conversation is to obtain legal advice, a privacy policy like Claude's might lead a court to conclude that there was no reasonable expectation of confidentiality and that the notes are not protected from disclosure.
- Review and establish internal policies restricting the use of public AI for legal matters and train key personnel accordingly.
- Review cloud and AI practices. The court here noted that cloud-based software is not "intrinsically privileged." While ordinary business practices do not automatically destroy privilege, the confidentiality protections a given tool provides can affect whether privilege survives. Businesses should evaluate data practices, vendor terms, and internal handling procedures.
- Remember that once privilege is waived, it cannot be recovered.
Public AI tools may be powerful, but they are not confidential legal advisors. Consult counsel before using AI in connection with any legal issue.
If you would like help drafting internal AI policies, updating engagement letters, or evaluating your company's risk exposure, our team is available to help.
