Educational Notice: This article provides educational analysis of how attorney-client privilege may apply when AI tools are used in legal practice. It is not legal advice and does not create an attorney-client relationship. Consult qualified legal counsel for guidance on specific situations.
Introduction: When Technology Meets Confidentiality
Imagine this: A corporate attorney needs to quickly analyze a complex merger agreement. She copies confidential financial data, trade secrets, and privileged communications into ChatGPT for a summary. Within seconds, she has her answer. But she may have also exposed her client’s most sensitive information to OpenAI’s servers, potentially violated attorney-client privilege, and created a data breach that could cost her firm millions.
This scenario isn’t hypothetical—it’s happening in law firms across the country right now. As of 2024, 79% of lawyers reportedly use AI in their practice, yet only 10% of firms have policies to guide its use: a perfect storm of innovation colliding with one of law’s most sacred duties, protecting client confidentiality.
The central question facing legal professionals today is both urgent and unsettled: Does attorney-client privilege survive when AI enters the conversation? As artificial intelligence tools like ChatGPT, Harvey AI, and Casetext become integral to legal workflows, lawyers and clients alike are navigating uncharted territory where centuries-old privilege doctrine meets cutting-edge technology.
This issue matters profoundly. Attorney-client privilege forms the bedrock of effective legal representation, enabling clients to speak freely without fear that their words will be used against them. When AI platforms enter the equation—potentially storing, analyzing, or even training on confidential communications—the traditional boundaries of privilege face unprecedented challenges. Recent warnings from OpenAI’s own CEO, who stated that ChatGPT conversations lack legal privilege or confidentiality, have only heightened these concerns.
Understanding Attorney-Client Privilege: The Foundation
Before examining how AI affects privilege, we must understand what privilege actually protects. Attorney-client privilege is one of the oldest and most respected protections in Anglo-American law, designed to encourage full and frank communication between lawyers and their clients.
The Four Essential Elements
For attorney-client privilege to attach, four elements must be present:
- A communication made between privileged persons (attorney or client, or their necessary agents)
- In confidence with the expectation of privacy
- For the purpose of obtaining or providing legal advice
- By a client who has asserted and not waived the privilege
When all four elements are satisfied, the communication is protected from disclosure in legal proceedings. However, if any element fails—particularly confidentiality—the privilege can be lost entirely.
Who Is Covered?
The privilege protects communications between lawyers admitted to practice and their clients. Importantly, it extends to necessary agents of the attorney or client—such as paralegals, legal assistants, accountants working under counsel’s direction, and interpreters—when their involvement is essential to the legal representation. This extension, established in cases like United States v. Kovel, becomes critical when considering whether AI tools qualify as such agents.
Privilege vs. Confidentiality: A Critical Distinction
Attorney-client privilege is distinct from—though related to—the broader duty of confidentiality. Privilege is narrower, protecting only confidential communications made for legal advice. The duty of confidentiality, governed by rules like ABA Model Rule 1.6, is broader and prohibits lawyers from revealing any information relating to representation, regardless of source. Both protections are implicated when lawyers use AI tools.
How Privilege Can Be Lost
Privilege is precious but fragile. It can be waived or lost through:
- Voluntary disclosure to third parties not necessary to the legal representation
- Failure to maintain confidentiality through reasonable security measures
- Placing privileged communications at issue in litigation (subject matter waiver)
- Sharing information for non-legal purposes, such as business decisions unrelated to legal advice
This last point is particularly relevant to AI: courts have consistently held that disclosing privileged information to third parties destroys the privilege unless those third parties are essential agents of the attorney-client relationship.
Where AI Enters the Legal Process
Artificial intelligence has permeated nearly every aspect of legal practice. Understanding where and how AI is being used helps clarify where privilege risks emerge.
Legal Research and Case Analysis
Tools like Westlaw’s AI-Assisted Research, LexisNexis+ AI, and Harvey AI help lawyers research case law, statutes, and regulations more efficiently. These platforms can analyze thousands of cases in seconds, identify relevant precedents, and synthesize complex legal doctrines. Harvey AI, for instance, has been adopted by major firms like Allen & Overy and can be trained on a firm’s proprietary document library for customized research.
However, the research quality varies significantly. A Stanford study found that leading legal AI research tools hallucinate—generate false or nonexistent case citations—between 17% and 33% of the time, underscoring the need for careful human review.
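Given those error rates, one practical safeguard is to extract every citation-like string from an AI draft into a checklist for manual confirmation. Below is a minimal sketch in Python; the regex is a rough illustration of the idea and will miss some citation formats, so it supplements rather than replaces careful human reading.

```python
# A minimal sketch: pull citation-like strings out of AI-generated text
# so a human can confirm each one in Westlaw or Lexis. The regex is a
# rough illustration and will not catch every citation format.
import re

# Matches patterns like "United States v. Kovel, 296 F.2d 918 (2d Cir. 1961)"
CITATION_RE = re.compile(
    r"[A-Z][A-Za-z.'\- ]+\sv\.\s[A-Za-z.'\- ]+,\s\d+\s[A-Za-z0-9. ]+?\s\d+\s\([^)]+\)"
)

def citations_to_verify(ai_output: str) -> list[str]:
    """Return every citation-like string found, for manual verification."""
    return CITATION_RE.findall(ai_output)

# Example:
# citations_to_verify("See United States v. Kovel, 296 F.2d 918 (2d Cir. 1961).")
# -> ['United States v. Kovel, 296 F.2d 918 (2d Cir. 1961)']
```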
Document Drafting and Review
AI assists in drafting contracts, pleadings, briefs, and legal memoranda. Lawyers input case-specific facts and receive generated drafts that can include suggested clauses, risk warnings, and boilerplate language based on jurisdictional requirements. Tools like Casetext’s CoCounsel and Harvey AI can review hundreds of thousands of contracts for due diligence, identifying missing provisions and compliance issues far faster than manual review.
E-Discovery and Document Analysis
In litigation, AI-powered e-discovery platforms analyze millions of emails, documents, and communications to identify relevant evidence. These systems use machine learning to flag privileged communications, detect patterns, and categorize documents—processes that would be prohibitively expensive if done entirely by human reviewers.
Client Intake and Legal Chatbots
Some firms deploy AI chatbots for initial client screening and intake. These systems collect basic information, assess potential claims, and route clients to appropriate attorneys. However, this raises immediate privilege questions: are communications with an AI intake bot protected in the same way as conversations with a law firm receptionist or paralegal?
The Critical Distinction: Human-in-the-Loop vs. Autonomous AI
A fundamental distinction exists between AI tools that operate under direct attorney supervision (human-in-the-loop) and those that function autonomously. When a lawyer uses AI to generate a draft but then reviews, modifies, and exercises professional judgment over the output, the lawyer remains responsible for the work product. This supervision arguably preserves the attorney’s role as the provider of legal advice.
Conversely, when clients or even lawyers rely on AI-generated advice without meaningful review or when AI systems directly advise clients, the fundamental attorney-client relationship may be absent. This distinction becomes crucial in determining whether privilege attaches.
The Core Legal Question: Does Privilege Still Apply?
The use of AI in legal practice presents several distinct scenarios, each with different privilege implications. Let’s examine the key situations lawyers and clients face.
Scenario 1: Lawyer Uses AI Internally for Legal Work
The Situation: An attorney uses Harvey AI or a similar legal-specific platform to research case law, draft a contract, or analyze documents for a client matter. The lawyer reviews and modifies the AI output before using it in representation.
Privilege Analysis: This scenario presents the strongest case for maintaining privilege, but critical factors determine the outcome:
- Platform Security: Enterprise-grade legal AI platforms with robust confidentiality agreements, secure data processing, and commitments not to use client data for model training are more defensible than consumer tools.
- Lawyer Supervision: The attorney must exercise independent professional judgment, reviewing and verifying AI outputs rather than blindly accepting them.
- Contractual Protections: Business Associate Agreements or similar contracts that preserve confidentiality and limit data retention strengthen privilege claims.
Courts have recognized that using third-party technology doesn’t automatically waive privilege if reasonable precautions are taken. Just as storing documents with a cloud service provider doesn’t waive privilege when encryption and contractual protections exist, using secure legal AI platforms under similar safeguards may preserve privilege.
Scenario 2: Client Communicates Directly with AI Tools
The Situation: A client uses ChatGPT or another public AI chatbot to get legal advice, describe their case, or ask questions about their legal issues—without involving their attorney.
Privilege Analysis: This scenario presents the gravest privilege risks. Multiple factors doom privilege protection:
- No Attorney-Client Relationship: AI systems are not licensed attorneys. They cannot form attorney-client relationships or owe fiduciary duties. Communications with AI lack the fundamental relationship privilege requires.
- Third-Party Disclosure: Inputting information into ChatGPT or similar platforms constitutes disclosure to a third party (the AI provider). OpenAI’s CEO explicitly stated that ChatGPT conversations are not protected by privilege or confidentiality.
- Data Retention and Use: Consumer AI platforms may retain user inputs for model training, quality improvement, or other purposes. This data could be legally discoverable through subpoena.
Legal scholars and early guidance have been clear: direct client communications with AI systems receive no privilege protection. As one recent analysis stated, privilege attaches to relationships, not functions abstracted from human anchors. The AI is neither a person nor a professional subject to bar discipline or ethical rules.
Scenario 3: Third-Party AI Vendors Process Legal Data
The Situation: A law firm uses an AI-powered e-discovery platform or document review service where a third-party vendor’s AI processes privileged communications and work product.
Privilege Analysis: This scenario mirrors traditional outsourcing arrangements that courts have addressed. Key considerations include:
- Agent Status: If the vendor acts as the lawyer’s agent—performing tasks the lawyer would otherwise do—and operates under strict confidentiality obligations, privilege may be preserved.
- Necessity: The vendor’s involvement must be necessary or beneficial to the legal representation, not merely convenient.
- Control and Security: The firm must maintain control over the data and ensure the vendor implements appropriate security measures and confidentiality protocols.
Scenario 4: Cloud-Based vs. On-Premises AI Systems
The deployment model significantly affects privilege analysis. Public cloud AI tools pose heightened risks because data may be stored on external servers, potentially accessible to the provider’s employees for quality control or compliance monitoring. These systems often reserve rights to use inputs for training or improvement.
Private, on-premises AI systems—or secure cloud deployments with strict data isolation and contractual guarantees—offer stronger privilege protection. These systems typically include encryption, access controls, and commitments not to access or use client data beyond providing the service.
However, even private systems don’t guarantee privilege protection. The fundamental question remains: is the AI functioning as a necessary agent in the attorney-client relationship, and are communications maintained in confidence?
The Current Legal Landscape: Where We Stand Today
The law governing AI and attorney-client privilege is developing rapidly. While binding court precedent remains limited, regulatory guidance and early judicial decisions are beginning to establish a framework.
ABA Formal Opinion 512: The Current Paradigm
On July 29, 2024, the American Bar Association issued Formal Opinion 512, its first comprehensive guidance on generative AI in legal practice. This opinion doesn’t create new privilege doctrine but applies existing ethical rules to AI use. Key takeaways include:
- Competence Requirement (Model Rule 1.1): Lawyers must understand AI tools’ capabilities and limitations. This includes knowing how the tool stores and processes data, what privacy protections exist, and whether inputs are used for training.
- Confidentiality Obligations (Model Rule 1.6): Lawyers must take reasonable measures to protect client information when using AI. This may require using enterprise versions with enhanced privacy protections or avoiding certain platforms entirely.
- Client Communication (Model Rule 1.4): Depending on the circumstances, lawyers may need to inform clients about AI use in their matters and obtain informed consent, particularly when confidentiality concerns exist.
- Independent Verification: Lawyers cannot rely uncritically on AI outputs. The degree of verification needed depends on the task—using AI for idea generation requires less review than using it for legal research or document drafting.
Opinion 512 emphasizes that GAI tools are a rapidly moving target. What’s considered reasonable practice today may evolve as technology and legal standards develop.
State Bar Guidance and Variations
Multiple state bars have issued their own guidance, creating a patchwork of approaches:
- California, Florida, New York, Pennsylvania, and New Jersey have all issued ethics opinions addressing AI use, with common themes around competence, confidentiality, and supervision.
- Colorado’s Artificial Intelligence Act took effect February 1, 2026, creating specific compliance requirements for AI use in certain high-risk contexts including legal services.
- The UK has seen regulatory developments, with the Solicitors Regulation Authority approving the first AI-driven law firm in May 2025, signaling growing acceptance of AI in legal practice when properly governed.
Early Court Decisions: Work Product and AI Prompts
While direct privilege cases are rare, courts have begun addressing related questions about work product protection for AI-assisted legal work. The landmark decision came in Tremblay v. OpenAI, Inc. (N.D. Cal. Aug. 8, 2024).
In Tremblay, copyright plaintiffs used ChatGPT to test whether it reproduced their works. When OpenAI sought discovery of all the plaintiffs’ ChatGPT prompts and outputs—including negative results that didn’t support the claims—U.S. District Judge Araceli Martínez-Olguín held that the prompts constituted opinion work product. The court reasoned that the prompts were queries crafted by counsel containing mental impressions and opinions about how to interrogate ChatGPT.
Key holdings from Tremblay:
- Strategically crafted AI prompts reflecting attorney thinking can constitute opinion work product entitled to strong protection
- Using some AI outputs doesn’t automatically waive protection for all related research and prompts
- Courts will distinguish between AI use for bare factual generation versus strategic legal analysis
Subsequent cases like Concord Music Group v. Anthropic (N.D. Cal. May 23, 2025) have reinforced these principles, confirming that prompts and related settings can constitute work product when they reflect attorney strategy.
What Remains Unsettled
Despite emerging guidance, fundamental questions remain unanswered:
- Third-Party Doctrine Application: No court has definitively ruled on whether using consumer AI platforms constitutes privilege-waiving disclosure to third parties in all circumstances.
- Agent Status: The legal test for when AI providers qualify as necessary agents hasn’t been fully developed.
- Reasonable Security Standard: What constitutes ‘reasonable’ confidentiality measures for AI use continues to evolve with technology.
- Client Misuse: How to address privilege when clients independently use AI without lawyer involvement remains unclear.
Courts will likely address these issues case-by-case as disputes arise, making predictability challenging.
Key Risks Lawyers and Clients Should Know
Understanding the specific risks AI poses to privilege protection helps lawyers and clients make informed decisions about technology use.
Risk 1: Inadvertent Privilege Waiver
The most immediate risk is unintentional waiver through disclosure to third parties. When lawyers or clients input privileged information into public AI platforms like standard ChatGPT, they may be disclosing that information to OpenAI—a third party outside the attorney-client relationship. Courts have consistently held that voluntary disclosure to unnecessary third parties destroys privilege.
Real-World Example: A lawyer copies an internal litigation strategy memo into ChatGPT to improve the writing. Even if chat history is disabled, OpenAI’s terms reserve rights to review conversations for abuse monitoring. If opposing counsel discovers this through discovery, they could argue the memo is no longer privileged.
Risk 2: Data Retention and Reuse
Many AI tools—particularly consumer-grade platforms—retain user inputs for purposes beyond mere service provision. Common practices include:
- Model Training: Using conversations to improve AI models unless explicitly disabled
- Quality Improvement: Human reviewers reading conversations to assess quality
- Compliance Monitoring: Reviewing inputs to detect abuse or prohibited content
Even if data isn’t actively used for training, its existence on external servers creates discoverability risk. A subpoena could potentially compel production of stored conversations, as OpenAI’s CEO acknowledged when he noted that platforms could be legally required to produce communications during litigation.
Risk 3: AI Hallucinations and Misinformation
While not directly a privilege issue, AI hallucinations create professional liability risks that interact with privilege concerns. AI systems can generate plausible but entirely fabricated case citations, misstate legal principles, or create confident-sounding but incorrect legal analysis.
When lawyers uncritically rely on AI outputs containing hallucinations—particularly in court filings—they face sanctions and malpractice liability. The resulting disciplinary proceedings could expose otherwise privileged communications about how the lawyer used AI, what verification steps were (or weren’t) taken, and what communications occurred with the client about AI-generated errors.
Risk 4: Compliance with Professional Conduct Rules
Using AI irresponsibly can violate multiple ethical obligations:
- Competence (Rule 1.1): Failing to understand AI limitations or verify outputs
- Confidentiality (Rule 1.6): Using platforms without adequate data protection
- Communication (Rule 1.4): Failing to inform clients about AI use when material to representation
- Candor to Tribunal (Rules 3.1, 3.3): Submitting AI-generated content with false citations or misleading information
Disciplinary proceedings resulting from these violations could require lawyers to disclose privileged information about their AI use, client communications, and decision-making processes—creating a troubling intersection between privilege and professional accountability.
Risk 5: Client-Initiated AI Use
An often-overlooked risk occurs when clients use AI tools independently. A client might input details about their case into ChatGPT, share privileged communications, or seek legal advice from AI—all without realizing they’re potentially waiving privilege.
This risk is particularly acute because clients may view AI interactions as private, similar to journaling or thinking aloud. However, courts are unlikely to extend privilege to these communications given the absence of an attorney-client relationship.
Best Practices to Preserve Attorney-Client Privilege
While the legal landscape remains unsettled, lawyers and clients can take concrete steps to minimize privilege risks when using AI tools.
1. Conduct Rigorous Vendor Due Diligence
Before using any AI tool with client data, lawyers should thoroughly evaluate:
- Data Handling Practices: How is data stored, processed, and retained? Is it used for model training?
- Security Measures: What encryption, access controls, and security protocols are in place?
- Privacy Policies: Do terms of service permit human review of inputs? Are there exceptions for law enforcement or litigation?
- Compliance Standards: Is the vendor SOC 2 compliant? Do they understand legal-industry requirements?
2. Require Comprehensive Confidentiality Agreements
Enterprise AI contracts should explicitly address privilege protection:
- No Model Training: Contractual guarantee that client inputs won’t train public models
- Data Isolation: Client data remains segregated and inaccessible to other users
- Limited Access: Only authorized personnel can access data, and only for defined purposes
- Deletion Rights: Ability to delete data upon request
- Audit Rights: Ability to verify compliance with security and confidentiality commitments
3. Prioritize Private or On-Premises AI Deployments
When handling highly sensitive matters, consider:
- On-Premises Solutions: AI models deployed on firm servers with no external data transmission (see the sketch after this list)
- Private Cloud Instances: Dedicated cloud deployments with strong data isolation
- Legal-Specific Platforms: Tools like Harvey AI, CoCounsel, or Westlaw AI designed for legal confidentiality requirements
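To make the on-premises option concrete, here is a minimal sketch of a document-summary request sent to a locally hosted model. It assumes an Ollama server running on a firm-controlled machine; the endpoint, model name, and prompt are illustrative placeholders, not a recommendation. The point is architectural: the client document travels only to the firm’s own server, never to an outside provider.

```python
# A minimal sketch of querying a locally hosted model so client data
# never leaves firm infrastructure. Assumes an Ollama server on a
# firm-controlled machine; endpoint, model name, and prompt are
# illustrative placeholders.
import json
import urllib.request

LOCAL_ENDPOINT = "http://localhost:11434/api/generate"  # firm server only

def summarize_locally(document_text: str) -> str:
    """Send a summarization prompt to the on-premises model and return its reply."""
    payload = {
        "model": "llama3",  # any model installed on the firm's server
        "prompt": f"Summarize the key obligations in this agreement:\n\n{document_text}",
        "stream": False,    # ask for a single JSON response
    }
    req = urllib.request.Request(
        LOCAL_ENDPOINT,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]
```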
4. Provide Clear Client Disclosures
Transparency with clients is both ethically required and practically wise:
- Inform About AI Use: Explain when and how AI will be used in the representation
- Obtain Consent: Get informed consent when AI use involves confidentiality risks
- Set Expectations: Clarify that lawyer supervision and verification remain essential
- Warn Against Independent Use: Advise clients not to input case information into consumer AI platforms
5. Maintain Robust Lawyer Supervision
Never treat AI as a substitute for professional judgment:
- Verify All Outputs: Check case citations, validate legal analysis, confirm factual accuracy
- Exercise Independent Judgment: Make strategic decisions based on legal expertise, not AI suggestions
- Document Supervision: Keep records of how AI was used and what verification occurred (a minimal logging sketch follows this list)
- Frame Prompts Strategically: As the Tremblay case showed, thoughtfully crafted prompts reflecting legal strategy may receive work product protection
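To make the documentation habit concrete, the sketch below shows an append-only audit log for AI-assisted tasks. The field names and JSONL format are illustrative assumptions, not a prescribed standard; the value lies in having a contemporaneous record of what the AI did and who verified it.

```python
# A minimal sketch of an append-only audit log for AI-assisted work.
# The field names and the JSONL file are illustrative assumptions,
# not a prescribed standard.
import json
from datetime import datetime, timezone
from pathlib import Path

LOG_PATH = Path("ai_use_log.jsonl")  # one JSON record per line

def log_ai_use(matter_id: str, tool: str, task: str,
               reviewed_by: str, verification_notes: str) -> None:
    """Append one record of an AI-assisted task and the human review it received."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "matter_id": matter_id,            # internal matter number
        "tool": tool,                      # which vetted platform was used
        "task": task,                      # what the AI was asked to do
        "reviewed_by": reviewed_by,        # attorney who verified the output
        "verification_notes": verification_notes,  # e.g., citations confirmed, edits made
    }
    with LOG_PATH.open("a", encoding="utf-8") as f:
        f.write(json.dumps(record) + "\n")
```

A one-line call after each AI-assisted task leaves a trail that can answer later questions about whether supervision actually occurred.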
6. Implement Firm-Wide AI Governance Policies
Law firms should develop comprehensive AI use policies covering:
- Approved Tools: Whitelist of vetted AI platforms authorized for client work (illustrated in the sketch after this list)
- Prohibited Uses: Clear prohibitions on using consumer AI tools for privileged information
- Training Requirements: Mandatory education on AI capabilities, limitations, and ethical obligations
- Incident Response: Procedures for addressing potential privilege breaches or AI errors
- Regular Review: Periodic reassessment as technology and law evolve
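As a simple illustration, an approved-tools policy can be enforced in software rather than left to memory. The sketch below uses hypothetical tool identifiers; a real firm would load the list from managed configuration and pair it with training, since technical controls only backstop professional judgment.

```python
# A hedged sketch of enforcing an approved-tools list in software.
# The identifiers below are hypothetical placeholders; a real policy
# would live in firm-managed configuration, not hard-coded values.
APPROVED_FOR_CLIENT_DATA = {
    "harvey-enterprise",   # hypothetical ID for a vetted enterprise platform
    "cocounsel-firm",
    "onprem-llama",
}

def require_approved_tool(tool_id: str, contains_client_data: bool) -> None:
    """Block client data from reaching any platform not on the approved list."""
    if contains_client_data and tool_id not in APPROVED_FOR_CLIENT_DATA:
        raise PermissionError(
            f"{tool_id!r} is not approved for client data under firm policy."
        )
```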
Future Outlook: How Courts May Treat AI-Assisted Advice
While current law provides limited guidance, emerging trends suggest how privilege doctrine may evolve as AI becomes ubiquitous in legal practice.
Likely Judicial Approaches
Functional Equivalence Standard: Courts will likely develop a test asking whether an AI tool serves as the functional equivalent of a traditional legal assistant. If an AI platform operates under lawyer supervision, maintains strict confidentiality, and serves as a necessary tool for legal representation—similar to paralegals or expert consultants—courts may extend privilege protection.
Reasonable Security Standard: Expect courts to adopt a reasonableness test for confidentiality measures. Just as cloud storage doesn’t automatically waive privilege if reasonable security exists, AI use with appropriate safeguards may be deemed acceptable. The standard will evolve as technology improves and best practices emerge.
Categorical Distinctions: Courts will likely distinguish sharply between enterprise AI platforms designed for legal use (which may preserve privilege) and consumer platforms explicitly lacking confidentiality protections (which won’t). Platform-specific analysis rather than blanket rules seems probable.
Regulatory Trends to Watch
Several regulatory developments may shape AI privilege law:
- Federal AI Legislation: While comprehensive federal AI regulation seems unlikely in the near term, sector-specific rules for legal services could emerge, potentially establishing privilege standards.
- State Law Patchwork: More states will follow Colorado in enacting AI-specific laws, creating compliance challenges for multi-jurisdictional practice.
- Professional Conduct Updates: Bar associations will likely update Model Rules to explicitly address AI, providing clearer guidance on competence, confidentiality, and disclosure requirements.
- International Harmonization: As AI use becomes global, pressure for consistent international standards regarding legal AI and privilege may increase, particularly in cross-border transactions and litigation.
The Normalization Factor
As AI tools become standard in legal practice—much like word processors, legal research databases, and email—courts may view their use as ordinary and expected rather than exceptional. This normalization could work both ways:
Positive: Courts may treat supervised AI use as no different from using Westlaw or email, preserving privilege when reasonable precautions exist.
Negative: Widespread adoption might lead courts to impose stricter standards, requiring lawyers to demonstrate extraordinary care precisely because AI has become commonplace and risks are well-known.
No Independent ‘AI Privilege’
Legal scholars have been clear: there will be no separate ‘AI privilege’ creating confidentiality for direct communications with AI systems. Harvard’s Journal of Law & Technology published a comprehensive analysis arguing that extending privilege to AI communications would be premature, unworkable, and inconsistent with privilege doctrine’s historical foundations.
Traditional privileges rest on fiduciary duties, licensure, and accountability structures completely absent from AI interactions. AI systems are neither persons nor professionals; they owe no duties, face no discipline, and can’t form the relationships privilege doctrine protects. Any privilege protection for AI use will continue flowing from the traditional attorney-client relationship, not from standalone AI interactions.
Clear Takeaways & Practical Guidance
What We Know
- Traditional privilege rules still apply. AI doesn’t create new privileges or exceptions. Existing doctrine governs—communications must remain confidential and within the attorney-client relationship.
- Consumer AI platforms are extremely risky. Public tools like standard ChatGPT provide no privilege protection and likely constitute waiver through third-party disclosure.
- Enterprise platforms offer better protection but no guarantees. Legal-specific AI tools with robust confidentiality agreements and security measures are defensible, though not risk-free.
- Lawyer supervision is essential. AI must function as a tool under attorney direction, not as an independent advice provider.
- Direct client-AI communications lack privilege. Clients using ChatGPT for legal advice create no privileged relationship and may waive existing privileges.
What Remains Unclear
- Precise boundaries of permissible AI use with privilege protection
- Standards for “reasonable” confidentiality measures in AI contexts
- How courts will treat specific platforms and vendor arrangements
- Whether work product protection for AI prompts will extend beyond litigation contexts
- How to address inadvertent client AI use without lawyer knowledge
Who Should Exercise Caution and Why
Lawyers
Exercise extreme caution with AI use. Understand that even inadvertent privilege waiver can be catastrophic for clients and carry devastating professional consequences. The duty of confidentiality isn’t optional or flexible, and its exceptions are narrow. When in doubt, use more secure tools, obtain client consent, and verify outputs meticulously.
Clients
Never input case details, strategy discussions, or privileged communications into consumer AI platforms. What seems like private reflection could become discoverable evidence. Always consult your lawyer before using AI tools in connection with legal matters.
Startups and Legal Tech Companies
Design products with privilege protection as a core feature, not an afterthought. Provide transparent documentation about data handling, offer robust confidentiality controls, and help legal professionals use your tools responsibly. The market increasingly demands provable security and compliance.
Compliance Teams and General Counsel
Develop clear AI governance policies before problems arise. Vet vendors thoroughly, negotiate protective contract terms, monitor compliance, and train teams on proper use. The gap between the 79% of lawyers using AI and the 10% of firms with policies governing it must close.
Quick Reference: What to Do Next
Immediate Actions:
- Review your current AI tools and assess privilege risks
- Stop using consumer AI platforms for any client-related work
- Educate clients about AI privilege risks and prohibit independent use
- Document your AI use practices and supervision procedures
Short-Term Steps (30-90 days):
- Evaluate and select vetted, secure legal AI platforms
- Negotiate comprehensive confidentiality agreements with AI vendors
- Draft and implement firm AI use policies
- Provide mandatory training on AI ethics and privilege protection
Ongoing Practices:
- Monitor legal and regulatory developments
- Periodically reassess vendor security and compliance
- Update policies as technology and standards evolve
- Maintain detailed records of AI use and verification steps
Frequently Asked Questions
- Can I use ChatGPT to help draft legal documents if I don’t include client names?
Simply removing names is insufficient. Modern AI systems and data analytics can re-identify individuals from contextual information, and supposedly anonymized data often isn’t truly anonymous. Many legal matters contain unique fact patterns that could identify clients even without names. Moreover, you’re still potentially exposing confidential legal strategies, privileged information, and sensitive details. The ABA’s position is clear: without explicit client consent and robust security guarantees from the AI provider, using consumer AI tools for client matters is extremely risky and likely violates ethical obligations.
- What’s the difference between using Harvey AI versus ChatGPT for legal work?
Harvey AI is an enterprise legal platform specifically designed for law firms with security features, confidentiality agreements, and legal-specific training. It typically includes data isolation, no model training on client inputs, and compliance with legal industry standards. ChatGPT (free or standard versions) is a consumer product with terms of service that may permit data retention, human review for quality control, and use of inputs for model improvement. While ChatGPT Enterprise offers enhanced privacy, general ChatGPT lacks the contractual protections and security infrastructure necessary for handling privileged legal communications.
- If my client uses AI to prepare information for me, does that waive privilege?
It depends on what they share and with whom. If your client inputs privileged information into a public AI platform like ChatGPT, they may inadvertently waive privilege by disclosing it to a third party. The critical question is whether their use of AI constitutes a confidential communication or an unnecessary disclosure. To minimize risks, explicitly instruct clients not to use consumer AI tools when preparing information related to their case. If they want to use AI, guide them to approved tools or have them work directly with you.
- Are my AI prompts protected as work product?
Potentially, yes—but only if they reflect your mental impressions and legal strategy. The Tremblay v. OpenAI decision established that carefully crafted AI prompts containing counsel’s opinions about how to interrogate AI for legal purposes may constitute opinion work product. However, simple factual queries or routine requests likely wouldn’t qualify. To maximize protection, frame your prompts to clearly reflect strategic legal thinking, keep records of your prompts, and avoid treating AI as a mere fact generator. Remember that using some AI outputs in litigation doesn’t automatically waive protection for all related research and prompts.
- Do I need to tell my clients I’m using AI?
It depends on the circumstances. ABA Formal Opinion 512 suggests that disclosure and consent may be required when AI use involves confidentiality risks, when client data will be shared with third parties, or when the AI use is material to the representation. Best practice is to disclose AI use proactively, explain what safeguards are in place, and obtain informed consent—particularly for sensitive matters. This transparency builds trust, ensures compliance with ethical rules, and protects you if questions arise later. Many firms now include AI use disclosures in their engagement letters.
- What if my AI tool gets subpoenaed or hacked?
This is a real risk, particularly with consumer platforms. OpenAI’s CEO acknowledged that stored conversations could be legally compelled in serious cases like criminal investigations. If privileged communications exist on AI servers, they may be discoverable unless protected by strong contractual confidentiality provisions and the court recognizes privilege. A data breach exposing privileged information could waive privilege and create massive malpractice liability. This is why using enterprise platforms with robust security, encryption, data isolation, and incident response capabilities is critical. Your vendor selection and contract terms are your primary defenses.
- How do I verify that my AI provider is actually keeping data confidential?
Demand transparency and verification rights. Your contracts should include audit provisions allowing you to verify compliance with security commitments. Look for SOC 2 Type II compliance reports, which demonstrate independent assessment of security controls. Request detailed information about data storage, encryption, access controls, and retention practices. Ask whether data is segregated or commingled, who can access it, and for what purposes. Reputable legal AI vendors understand these concerns and should readily provide documentation. If a vendor is evasive or unwilling to demonstrate compliance, that’s a red flag.
Conclusion: Navigating the Intersection of Innovation and Protection
The collision between artificial intelligence and attorney-client privilege represents one of the most consequential challenges facing the legal profession. As AI tools become indispensable for competitive legal practice, lawyers must harness their power without sacrificing the confidentiality that forms the bedrock of effective representation.
The current legal landscape offers both clarity and uncertainty. We know that traditional privilege doctrine still governs, that consumer AI platforms create severe risks, and that lawyer supervision remains non-negotiable. We know that no independent ‘AI privilege’ will emerge to protect direct client-AI communications. And we know that the duty of confidentiality isn’t diminished by technological advancement—if anything, it’s heightened.
Yet much remains unsettled. Courts are only beginning to address how privilege applies in AI contexts. Regulatory frameworks are emerging but incomplete. Best practices continue evolving as technology advances and experience accumulates. This uncertainty demands that lawyers approach AI use with both enthusiasm for its potential and caution about its risks.
The path forward requires balancing innovation with responsibility. Lawyers must become educated consumers of AI technology—understanding not just what tools can do, but how they work, where data flows, and what risks emerge. Firms must invest in secure, legal-specific platforms rather than relying on convenient but dangerous consumer tools. Clients must be educated about privilege risks and guided to use AI appropriately. And the profession must continue developing the ethical frameworks, contractual protections, and oversight mechanisms necessary to preserve privilege in an AI-augmented world.
Success isn’t about avoiding AI—that ship has sailed, and abstention would leave clients underserved. Success is about using AI thoughtfully, transparently, and securely. It’s about reading vendor contracts carefully, asking hard questions about data handling, maintaining robust human supervision, and never sacrificing client confidentiality for convenience.
Attorney-client privilege has survived technological disruption before—from typewriters to fax machines, email to cloud storage. Each innovation initially created privilege concerns that the profession ultimately addressed through careful practice, appropriate safeguards, and evolving legal standards. AI presents unique challenges given its complexity and data-intensive nature, but the fundamental principles remain constant: preserve confidentiality, maintain the attorney’s professional judgment, and protect the relationship that privilege serves.
As we navigate this transformation, one certainty emerges: lawyers who take privilege protection seriously, who invest in secure tools and robust governance, and who maintain vigilant supervision will be best positioned to leverage AI’s benefits while honoring their paramount obligation to clients. The technology will continue advancing at breathtaking speed. Our commitment to confidentiality must remain unwavering.
—
About This Analysis
This article synthesizes current legal authority, regulatory guidance, and emerging judicial decisions as of February 2026. Given the rapid pace of AI development and legal evolution, readers should verify current law and consult qualified counsel for guidance on specific situations. This analysis is educational only and does not constitute legal advice.
