AI, eDiscovery, and Privilege: The Train Tracks That Are Now Colliding

May 11, 2026 | Artificial Intelligence

The Privilege Problem: How AI is Reformulating eDiscovery’s Most Sensitive Issue

AI has arrived in the legal technology space, bringing with it the prospect of cost cutting in discovery, accelerated document production timelines, and pattern recognition across large data sets that would take humans months to accomplish, if they could at all. But alongside these efficiencies comes a familiar challenge at an unfamiliar scale: reliably identifying and protecting privileged materials.

The attorney-client privilege serves to foster candid and open communication between attorneys and their clients. The work product doctrine serves a related but distinct function: protecting the integrity of the adversarial process by shielding attorneys’ mental impressions and litigation preparation from discovery. In the world of paper documents, privilege review was painstaking but predictable. eDiscovery changed the scale of that challenge: a single corporate litigation matter can now involve millions of emails, Teams messages, and cloud-stored files, each requiring the same privilege determinations that once applied to banker’s boxes, binders, and gaylords filled with documents. AI introduces yet another layer of complexity, not because it changes the underlying doctrine, but because it creates new categories of potentially privileged material that existing review workflows were not designed to handle.

The Promise of AI-Assisted Privilege Review

Modern eDiscovery platforms use machine learning models trained on privilege determinations to flag potentially protected documents for human review. Technology Assisted Review (TAR) and predictive coding tools can dramatically reduce the volume of documents requiring attorney eyes, cutting costs and compressing timelines that once stretched on for months. In large scale matters – think mass tort litigation – these tools have become a practical necessity.

Case Law Developments and Lessons to Be Learned

AI’s role in litigation is not limited to making review more efficient. Attorneys and parties are increasingly using AI tools themselves – to draft filings, prepare for depositions, analyze case strategy, and synthesize research. Those interactions generate their own trail of prompts and outputs, and courts are now grappling with whether those materials are discoverable.

As AI use in litigation becomes routine, the privilege questions it raises grow more urgent – and the technology is evolving faster than the legal framework that governs it.

Four federal decisions from early 2026 have begun to sketch out a judicial framework for how privilege and the work product doctrine apply to AI-generated materials. The picture is clearer than it was six months ago, but it remains far from settled.

In United States v. Heppner (S.D.N.Y. Feb. 17, 2026), a criminal defendant facing fraud charges created roughly 31 documents by interacting with a publicly available AI platform without any involvement from his attorney. His counsel later claimed these materials reflected defense strategy and were protected by both attorney-client privilege and the work product doctrine. The Court rejected both arguments. On privilege, there was no attorney-client relationship, no reasonable expectation of confidentiality, and the platform’s own terms disclaimed any ability to provide legal advice. On work product, the materials were prepared on the defendant’s own initiative, not at counsel’s direction, and did not reflect counsel’s mental impressions or litigation strategy at the time they were created.

Just one week before Heppner, the Eastern District of Michigan came to a different result on a different set of facts in Warner v. Gilbarco, Inc. Here, a pro se plaintiff in an employment discrimination case had used a generative AI tool to prepare litigation-related materials, and the defendants moved to compel production of everything related to her AI use. The Court denied the motion, reasoning that AI platforms are “tools, not persons” and that disclosure to them does not constitute disclosure to an adversary. Furthermore, compelling the production of those materials would force disclosure of the plaintiff’s internal mental impressions, effectively nullifying work product protection in virtually every modern drafting environment.

The most comprehensive of these rulings came in March 2026, when a magistrate judge in the District of Colorado issued her decision in Morgan v. V2X, Inc. This case, another employment matter with a pro se plaintiff, added a dimension that neither Heppner nor Warner addressed: the confidentiality risks of feeding discovery materials into AI platforms that may use inputs for model training. The court confirmed that AI-assisted litigation materials prepared using public AI tools are protected as mental impressions and litigation preparation materials under Rule 26(b)(3), and that using a consumer AI platform does not automatically waive that protection. Critically, the court required that any AI tool used to process confidential information be subject to key contractual safeguards. These include prohibitions on using inputs for model training, restrictions on onward disclosure, and the ability to delete the data on request.

In Jeffries v. Harcros Chemicals, Inc., the United States District Court for the District of Kansas granted a motion to amend the protective order at issue to restrict the use of “open or publicly accessible AI tools not only for confidential information but also for all discovery materials, including specifically information that may not be confidential.” The Court followed a similar line as the court in Morgan, distinguishing between publicly accessible AI systems, which may retain and use submitted data to train models, and more secure or closed systems that operate in a far more controlled environment. The major conclusion here is that publicly accessible AI tools present unique risks, including the practical inability to remove or delete data once it has been fed to the model.

These 2026 decisions build on an earlier ruling that remains foundational. In Tremblay v. OpenAI, Inc. (N.D. Cal. Aug. 8, 2024), the court held that when counsel crafts AI prompts in connection with litigation, both the prompts and the resulting outputs constitute opinion work product because the prompts reflect counsel’s mental impressions and opinions. Opinion work product is virtually undiscoverable absent a showing that counsel’s mental impressions are directly at issue and the need for the material is compelling. Tremblay established a clear doctrinal baseline: the closer an attorney is to the creation of AI prompts, the stronger the argument that the resulting materials are protected. Subsequent decisions have reinforced this framework. In Concord Music Group, Inc. v. Anthropic PBC (N.D. Cal. May 23, 2025), the court followed Tremblay in treating attorney-crafted investigation prompts as opinion work product, while finding narrowly tailored waiver where the publishers placed specific prompts at issue through witness testimony.

What these cases show is that it is imperative to address AI use explicitly at the start of discovery. Morgan and Jeffries have made it abundantly clear that protective orders are the preferred method for managing AI-related risks in discovery. Parties will now need to address these issues with opposing counsel in protective order negotiations before discovery commences. This may prove cumbersome in mass tort litigation, where coordination and judicial intervention may be key to preventing chaos and inconsistency. Parties should be prepared to speak to the data protection capabilities of their eDiscovery vendors, including whether the platform deletes not only the submitted data but also any derivative data, whether any inputs are retained for model training, and what security certifications the vendor holds.

The eDiscovery vendor landscape is evolving rapidly, and new AI-enabled platforms are entering the market regularly. These developments underscore the importance of establishing solid information governance frameworks early in case management, including clear policies on which AI tools may be used and under what conditions.

These recent judicial developments highlight how courts will incorporate assumptions about user behavior and system design into future analyses of how AI disrupts settled notions of confidentiality, privilege, and work product. Legal doctrine in this area will now be driven not only by the rapid evolution of the technology, but also by parallel developments in how attorneys (and pro se litigants) use and think about it.

Practical Guidance for Practitioners

These rulings have immediate operational implications. Most organizations’ litigation hold notices do not address AI interactions, a gap that is increasingly difficult to defend. After Heppner, Warner, and Morgan, AI prompts and outputs clearly fall within the scope of potentially relevant ESI that must be preserved once litigation is reasonably anticipated. Defense counsel advising corporate clients should be incorporating AI-specific language into hold notices now, including identification of which platforms employees are using, whether those platforms retain user data, and whether AI outputs have been copied into other documents that existing holds may or may not capture.

Vendor selection demands a similar level of attention. The Morgan safeguards should be treated as baseline requirements for any AI tool that will touch discovery materials. Established eDiscovery platforms have generally addressed these concerns through existing data processing agreements, but as AI-assisted review features become more prevalent, the open-versus-closed distinction that Morgan and Jeffries draw will become increasingly relevant to platform selection.

Proportionality under Rule 26(b)(1) will be equally critical. The Concord Music court’s rejection of a request for all undisclosed prompts and outputs provides a useful guideline. Waiver must be narrowly tailored, and discovery requests must be tied to specific facts at issue in the case. When opposing counsel seeks broad production of AI-related materials, defense practitioners should be prepared to challenge the scope of such requests.

The case law surveyed here confirms that courts are not creating new doctrine to address AI; they are applying existing privilege and work product frameworks to a new category of ESI. But the practical implications are significant. Organizations that fail to account for AI-generated materials in their preservation protocols, vendor agreements, and protective order negotiations risk forfeiting protections that the doctrine would otherwise provide. The law will continue to develop, and future decisions will inevitably refine the boundaries of what is protected and what is discoverable. What practitioners can control now is how they prepare – by building AI-specific considerations into their litigation workflows before an adversary or court forces the issue.
