We get it. When something legal comes up, or you have a concept that raises legal questions — an upcoming capital raise, a new venture concept, a contract question, a regulatory concern, a dispute with a business partner, or a deal you are trying to close — the temptation to open ChatGPT, Claude, Gemini, or another AI chatbot and start typing is real. You think: this will save me time and money, give me a head start on the legal process, and cut down my legal fees. These tools are fast, accessible, and often remarkably capable. But when you are working with an attorney, using a consumer or public AI platform to research or strategize about your legal matter could cause serious, and potentially irreversible, harm to your matter – whether it involves active litigation, a government inquiry, or what appears to be an entirely routine business transaction or entity formation. Here is why.
The Attorney-Client Privilege Exists to Protect You
One of the most valuable protections you have when you hire a lawyer is the attorney-client privilege. It means that what you tell your attorney (and what they tell you) — your fears, your strategy, the facts you are not sure about, the documents you share — stays between you both. It cannot be handed over to the other side in a dispute. It cannot be compelled by a court or a government investigation. It belongs to you. Forever.
The privilege goes both ways. It covers communications (whether by document, phone, email, or otherwise) from you to your lawyer, and from your lawyer to you.
But that protection only works if the communication stays between you and your attorney. The moment you bring a third party into the conversation, the privilege can evaporate.
A 2026 Federal Court Case Changed the Landscape
In February 2026, a federal judge in the Southern District of New York issued a landmark ruling in United States v. Heppner that every individual, executive, business owner and entrepreneur working with legal counsel should understand.
The defendant in that case, after receiving a grand jury subpoena and retaining legal counsel, used a publicly available version of an AI platform to draft documents outlining his potential defense strategies. He later shared those documents with his attorney. His legal team argued the documents were privileged. The government disagreed. The court sided with the government. All 31 of those AI chats, recovered during the execution of an FBI search warrant, were produced to prosecutors and introduced as evidence against the defendant.
Why? Three reasons, all of which apply equally to civil and transactional matters:
- The AI is not your attorney. Attorney-client privilege protects communications between a client and their licensed attorney. An AI platform is not an attorney, does not owe you any fiduciary duty, and cannot form an attorney-client relationship with you. Full stop.
- Consumer AI platforms are not confidential. When you type into a free or consumer-facing AI tool, you are agreeing to that company’s terms of service and privacy policy. In most cases, that policy expressly permits the platform to collect your inputs, store your outputs, use your conversations to train its model, and share your data with third parties — including, in some cases, government authorities. The court found that by using such a platform, the defendant had voluntarily disclosed his information to a third party. That voluntary disclosure destroyed any expectation of confidentiality.
- You were acting on your own, not at your attorney’s direction. Even if an argument could be made that certain AI-assisted work might be protected under the work product doctrine — which shields materials prepared by an attorney in anticipation of litigation — that protection requires that the work be done by counsel, or at the direction of counsel. The defendant in Heppner acted entirely on his own initiative. His attorneys had not asked him to do it, had not directed it, and had not supervised it. That independence was fatal to any privilege claim.
But I Was Just Doing Some Background Research…
This is the most common response we hear. And it is the most dangerous assumption.
The privilege risk does not depend on whether you thought the use was informal, preliminary, or unimportant. It does not disappear because you were “just curious” or “just trying to understand the issue.” And critically, it applies even before litigation has started — in fact, the risk is highest in the period before a dispute becomes formal, when you are still gathering your thoughts and figuring out your position.
Once you have typed confidential facts about your business or your legal matter into a consumer AI platform, you cannot un-type them. And, once you’ve destroyed the attorney-client privilege over a communication, it cannot be restored.
The Risk Is Just as Real in Transactional Matters
The risk in deal or formation work may in fact be more acute than in obvious litigation scenarios, for a counterintuitive reason: transactional clients are far less likely to perceive any adversarial stakes. In a lawsuit or government investigation, clients tend to understand that their words can be used against them. In a deal context, the dynamic can feel entirely collaborative — at least until it is not. AI-generated content created during entity formation or deal negotiations carries the same vulnerability as anything generated in a litigation setting. A buyer’s AI-generated entity formation and compliance plan or due diligence notes could be used in post-closing litigation to establish knowledge of a condition it now claims was undisclosed. A seller’s AI prompts about disclosure obligations could be used to establish scienter in a fraud or securities matter. Content created in what felt like a casual, exploratory chatbot conversation during a transaction can become significant adverse evidence once a dispute arises.
Consider two formation scenarios. First, a founder uses a consumer AI platform during the formation of a cooperative to research governance and operational structure. The chatbot output flags a potential structural deficiency – for example, that the proposed membership voting mechanism may not satisfy state cooperative statute requirements. The founder notes the concern, adjusts course, but never raises it with counsel. Years later, a member brings a claim that the cooperative was operated non-compliantly due to a design defect in its foundational documents. Those AI chat logs, surfaced in discovery, establish that the founder had actual awareness of the structural risk at formation – this is the kind of knowledge evidence that transforms a design dispute into a willful noncompliance claim.
Second, a cooperative organizer uses a free AI tool to explore whether the entity will qualify for a state or federal tax exemption. The chatbot identifies ambiguities in the qualification criteria and notes that the proposed structure presents meaningful exemption risk. The organizer proceeds without addressing those ambiguities with counsel. When the tax exemption is later challenged or denied, the AI-generated analysis – obtained outside the attorney-client relationship and stored on a third-party platform – is discoverable. It can be used to establish that the organizer knew the exemption was uncertain and proceeded anyway, foreclosing any argument of good-faith reliance and potentially exposing the organizer and the cooperative to penalties for a known, unmitigated risk.
The transactional context creates real exposure. Consider what deal participants routinely do without thinking twice: a client inputs term sheet positions or negotiating strategy into a consumer chatbot to pressure-test their approach; a seller uses a free AI tool to analyze disclosure obligations under a purchase agreement; an executive pastes a confidential due diligence summary into a chatbot for a quick synthesis; a buyer’s representative asks an AI platform to evaluate representations and warranties exposure. In each of those instances, the Heppner framework applies without modification. The AI platform is not a lawyer. No attorney-client relationship exists. The platform’s terms of service destroy any reasonable expectation of confidentiality. And if the transaction later becomes the subject of a dispute, a regulatory inquiry, or post-closing litigation, those AI-generated documents are fair game in discovery. Even if litigation never materializes, simply knowing that this evidence exists and is discoverable creates costly exposure.
Most clients associate privilege and confidentiality risks with litigation or government investigations. The same principles apply with equal force to transactional matters — contract negotiations, mergers and acquisitions, due diligence, regulatory compliance reviews, and business restructurings. ABA Formal Opinion 512, issued in July 2024, confirms that the confidentiality obligations governing the attorney-client relationship are not limited to adversarial proceedings. The risk analysis the Opinion requires is explicitly fact-driven and dependent on the client, the matter, the task, and the specific tool used — language that encompasses every type of legal representation, including the most routine business transaction.
What About Enterprise AI Tools?
Some of our clients, and our team, use enterprise-grade AI platforms — tools with contractual confidentiality protections, no-training commitments, and stronger data security standards. These tools do offer meaningfully better protections than a free consumer chatbot. But the identity of who is using the tool, and under whose direction, matters as much as the tool itself – and that distinction is legally significant.
When your attorney uses an enterprise AI platform as part of their legal work on your matter, that use occurs within the attorney-client relationship. The attorney owes you fiduciary duties, is bound by professional conduct rules, and is operating under the obligation of confidentiality. The AI tool, in that context, functions analogously to other work product tools – a research database, a drafting assistant, a document review platform – used at counsel’s direction and within counsel’s control. Courts have historically recognized that attorneys may use third-party tools and services without automatically waiving privilege, provided the disclosure is necessary to the legal representation and the attorney takes reasonable steps to preserve confidentiality. Enterprise AI used by counsel, under that framework, has a credible – though still developing – argument for protection.
When you use the same enterprise tool independently, that framework does not apply. You are not an attorney. You do not owe yourself fiduciary duties in the legal sense. Your use of the tool, however sophisticated, is not an act of legal representation – it is a client acting unilaterally outside the scope of the attorney-client relationship. The privilege analysis reverts to the same core question the Heppner court asked: was this communication made in confidence to an attorney, for the purpose of obtaining legal advice, and kept confidential? Independent client use of any AI tool – enterprise or otherwise – cannot satisfy that test.
The critical variable, then, is not just the tool. It is the direction. AI-assisted work that your attorney initiates, supervises, and controls as part of a coordinated legal strategy has the strongest available argument for protection. AI-assisted work that you initiate on your own – even using a premium, enterprise-grade platform – has no such argument. No court has yet held that enterprise AI use is categorically privileged, and the law continues to develop. Until it does, the safest posture is clear: if AI is going to be used in connection with your matter, that use should originate with, and be directed by, your counsel.
What Should You Do Instead?
If you have a question about your legal matter, call us or send us an email. That is what we are here for.
If you want to use an AI tool in connection with your matter — for any reason — contact us first. We can advise you on which tools, if any, are appropriate for your specific situation and how to use them in a way that preserves your protections. Further, under an attorney’s direction, the use of certain AI tools may be protected.
We are not anti-technology. Our firm actively uses and evaluates enterprise-grade AI tools in our own practice, and we do so under strict protocols designed to protect your confidential information. We want our clients to benefit from these tools too, where appropriate. But “appropriate” requires coordination, and it requires care.
The Bottom Line
Consumer AI platforms are extraordinary tools for many things. Researching your pending legal matter is not one of them – and that is as true of an active deal as it is of active litigation. A casual conversation with a chatbot about your business dispute, regulatory question, or contract concern could hand your adversary — or the government — a window into your strategy that no amount of good lawyering can close. The same applies to transactional AI use: an offhand prompt about a negotiating position, a disclosure obligation, or a valuation assumption during a deal could resurface in post-closing litigation as evidence of your knowledge or intent at the time the transaction was signed.
When in doubt, reach out to us before you type.
This blog post is intended for general informational purposes and does not constitute legal advice. If you have questions about your specific matter, please contact our office directly.