
Can Law Firms Use ChatGPT? Attorney-Client Privilege and AI Risk

By Tom Hermstad · HD Tech


Quick Answers: ChatGPT and Law Firm Risk

Can law firms use ChatGPT?

Yes, with major caveats. Standard ChatGPT (the free consumer version and ChatGPT Plus) sends your inputs to OpenAI's servers, where they may be used to train future models. Entering client names, case facts, or privileged communications into standard ChatGPT is an ethics violation in most jurisdictions. Law firms can use AI safely with enterprise tools that have proper data processing agreements and do not train on your data.

Does using ChatGPT violate attorney-client privilege?

It can, and the case law is actively developing. Attorney-client privilege applies to confidential communications between lawyer and client made for the purpose of obtaining legal advice. When an attorney enters privileged client information into a third-party AI system, even for research, they may be "disclosing" that information to a third party, which can waive privilege. Courts have already split on the question: Warner v. Gilbarco treated ChatGPT outputs as protected attorney work product, while United States v. Heppner (February 2026, Judge Rakoff, S.D.N.Y.) held that conversations with AI tools are not protected by attorney-client privilege. Both decisions are covered in more detail below. Several state bars have issued ethics opinions warning that attorneys must assess whether AI tools meet confidentiality obligations before use.

What AI tools are safe for law firms?

Bar-compliant AI tools for law firms include: Microsoft Copilot for Microsoft 365 (within a properly configured tenant; does not train on your data), Harvey AI (built for legal work, with enterprise data agreements), Westlaw AI and Lexis+ AI (legal research with privilege protections), and ChatGPT Enterprise (conversations are not used for training, but verify the contract terms before adopting it). General-purpose ChatGPT Free and Plus should not be used for any client-specific work.

What do bar associations say about AI?

As of 2026, over 20 state bars have issued formal ethics opinions or guidance on attorney AI use, led by ABA Formal Opinion 512 (July 29, 2024) — the ABA's first comprehensive generative AI ethics guidance, which applies the Model Rules on competence, informed consent, confidentiality, and fees to AI use. The consensus: attorneys have a duty of competence (Model Rule 1.1) that includes understanding AI tools they use, a duty of confidentiality (Model Rule 1.6) that requires ensuring client data is protected, and a duty of supervision (Model Rule 5.3) requiring that AI outputs be reviewed for accuracy before use with clients.

The Real Risk: It Is Not About the Chatbot, It Is About the Data

The ChatGPT risk for law firms is not that the AI gives bad legal advice — it is that your confidential client data leaves your control the moment you hit Enter. Standard ChatGPT (Free, Plus, and even some API configurations) may use conversation data to improve OpenAI's models. Even where OpenAI states they do not train on API data, the terms change and the risk remains: your client's privileged information is now on a third-party server.

This is not theoretical. In 2023, Samsung engineers accidentally leaked proprietary code via ChatGPT — the information was retained by OpenAI and potentially exposed. Law firms face the same risk, with far higher stakes: a single disclosure of client confidences can result in bar complaints, malpractice claims, and disqualification from cases.

The Three Duties at Risk

Model Rule | Duty | AI Risk
Rule 1.1 | Competence | Using AI without understanding how it works, what it retains, or when it hallucinates
Rule 1.6 | Confidentiality | Sending client data to a third-party AI platform without proper data agreements
Rule 5.3 | Supervision | Failing to review AI-generated legal research or documents before use

The Case Law Is Splitting: Warner, Heppner, and the Privilege Question

Two recent federal decisions have reached different conclusions on whether AI interactions are privileged, and firm leaders should know both. In Warner v. Gilbarco, the court held that ChatGPT outputs were protected as attorney work product, reasoning that generative AI programs are "tools, not persons" and that the plaintiff had not disclosed materials to an adversary. In United States v. Heppner (February 2026), Judge Rakoff of the Southern District of New York ruled that conversations with AI tools are not protected by attorney-client privilege — the first ruling of its kind and now the leading case directly on point.

The practical takeaway: do not rely on a privilege assumption when using any third-party AI tool. Courts are actively splitting, and the facts of each matter — what was entered, which tool was used, whether outputs were later disclosed — will drive the outcome. Until the doctrine settles, treat AI inputs as if they could be discoverable.

What "Safe" AI Actually Looks Like for a Law Firm

Safe AI use for law firms requires three things: (1) a Data Processing Agreement (DPA) confirming the vendor does not use your data for training; (2) data residency in a compliant environment (U.S. only, or your jurisdiction's requirements); and (3) a written firm AI acceptable use policy that all attorneys and staff sign.

Microsoft 365 Copilot with a proper enterprise license meets all three — your data stays within your Microsoft tenant, Microsoft has a published DPA, and it does not train foundation models on your firm's data. Harvey AI, built specifically for legal work, also meets these standards and adds legal-specific training data that reduces hallucination risk on case law questions.

The Hallucination Problem Is a Malpractice Problem

In 2023, attorneys in Mata v. Avianca submitted a brief citing six ChatGPT-fabricated cases. On June 22, 2023, the court issued a $5,000 sanction against the two lawyers and their firm (Levidow, Levidow & Oberman) collectively — a single total sanction, not per person — and the incident became a cautionary tale cited by bar ethics committees nationwide. AI hallucinations are not a curiosity — for attorneys, they are a malpractice and sanctions exposure. Every AI-generated legal citation must be verified in Westlaw or Lexis before filing.
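One lightweight guardrail before filing is to automate the step of listing every citation that has to be checked. The sketch below is a hypothetical, simplified example (the regex pattern, file name, and workflow are ours, not a standard tool): it pulls reporter-style citations out of a draft and prints a manual-verification checklist. It does not validate anything; an attorney still has to confirm each case in Westlaw or Lexis.

```python
import re

# Rough pattern for reporter-style citations such as "598 U.S. 617" or "34 F.4th 1196".
# Illustrative only: real citation formats vary widely (state reporters, pin cites,
# parallel citations), so misses are expected and the whole draft still needs review.
CITATION_PATTERN = re.compile(
    r"\b\d{1,4}\s+(?:U\.S\.|S\. ?Ct\.|F\.(?:2d|3d|4th)?|F\. ?Supp\.(?: 2d| 3d)?)\s+\d{1,4}\b"
)

def citation_checklist(draft_text: str) -> list[str]:
    """Return each detected citation once, in order, as a manual-verification checklist."""
    seen: list[str] = []
    for match in CITATION_PATTERN.finditer(draft_text):
        cite = match.group(0)
        if cite not in seen:
            seen.append(cite)
    return seen

if __name__ == "__main__":
    with open("draft_brief.txt", encoding="utf-8") as f:  # hypothetical draft file name
        draft = f.read()
    for cite in citation_checklist(draft):
        print(f"[ ] Verify in Westlaw/Lexis: {cite}")
```

Even with a checklist like this, the control that matters is the human step: someone opens each case and confirms it exists and actually says what the brief claims it says.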

Tom Hermstad, President of HD Tech: "Law firms are our most cautious clients — rightfully so. The conversation we have is not 'should you use AI' but 'how do you use it in a way that protects your clients and your license.' The answer exists. It just requires the right tools and a policy your staff will actually follow."

California Bar Guidance (2023, Updated 2025)

The California State Bar's generative AI guidance — originally issued on November 16, 2023 and updated in 2025 — states that California attorneys must: perform reasonable due diligence on AI tools before use with client matters; disclose AI use to clients when it may affect fees or work product (if required by engagement agreement); and maintain competence in understanding AI limitations including bias and hallucination risks. Similar guidance has been issued by New York, Florida, and Texas bars.

Frequently Asked Questions

Is ChatGPT Enterprise safe for law firm use?

ChatGPT Enterprise does not use your conversations for model training and offers stronger data controls than standard ChatGPT. However, your data still goes to OpenAI's infrastructure. Most bar ethics advisors recommend using legal-specific tools (Harvey, Westlaw AI, Lexis+ AI) or Microsoft Copilot within your existing Microsoft tenant for matters involving client confidences.

Do we have to tell clients when we use AI?

Disclosure requirements vary by jurisdiction and engagement agreement. As of 2026, most bars do not require automatic disclosure, but best practice is to address AI use in your engagement letter, particularly around billing (if AI reduces time significantly, clients may expect fee adjustments) and work product review procedures.

What should a law firm AI policy include?

A law firm AI policy should specify: approved tools and prohibited tools; what data categories may never be entered into any AI system (client names, case facts, privileged communications); required review steps before using AI output in any filing or client communication; who approves new AI tool adoption; and annual training requirements for all staff.
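To make a policy like this enforceable rather than aspirational, some firms express the approved-tool list and prohibited data categories in a machine-readable form that IT can wire into onboarding checklists or data-loss-prevention rules. The sketch below is hypothetical; every tool name and category label is a placeholder a firm would replace with its own policy terms.

```python
# Hypothetical policy definition; all tool names and category labels below are
# placeholders that a firm would replace with its own approved tools and terms.
AI_POLICY = {
    "approved_tools": {"microsoft_365_copilot", "harvey_ai", "westlaw_ai", "lexis_plus_ai"},
    "prohibited_tools": {"chatgpt_free", "chatgpt_plus"},
    "never_enter": {"client_names", "case_facts", "privileged_communications"},
    "review_required_before_use": True,
}

def check_request(tool: str, data_categories: set[str]) -> tuple[bool, str]:
    """Return (allowed, reason) for a proposed AI use under the firm policy."""
    if tool in AI_POLICY["prohibited_tools"]:
        return False, f"{tool} is prohibited for any client-related work."
    if tool not in AI_POLICY["approved_tools"]:
        return False, f"{tool} is not approved; route it through the tool-approval process."
    blocked = data_categories & AI_POLICY["never_enter"]
    if blocked:
        return False, "These categories may never be entered: " + ", ".join(sorted(blocked)) + "."
    return True, "Permitted, subject to attorney review of all output before use."

# Example: a staff member proposes using an approved tool on text containing case facts.
print(check_request("microsoft_365_copilot", {"case_facts"}))
# -> (False, 'These categories may never be entered: case_facts.')
```

The point is not the code itself; it is that "approved tools" and "never enter" are defined specifically enough to be checked, which is also what makes the policy auditable during annual training.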

What if a paralegal or other staff member uses ChatGPT?

Under Rule 5.3, attorneys are responsible for supervising all non-attorney staff. If a paralegal uses ChatGPT for client-related work without proper controls, the supervising attorney may be responsible for the resulting ethics violation. Your AI policy must cover all staff, not just attorneys.

How do we verify what an AI vendor does with our data?

Ask the vendor directly for their DPA and review it for: (a) no training on your data, (b) data deletion procedures, (c) breach notification timelines, and (d) subprocessor lists. If a vendor cannot produce a DPA, do not use their tool for any client-related work.

Protect Your Practice — Get an AI Risk Assessment

HD Tech helps Orange County law firms evaluate their current AI tool usage, identify ethics risks, and implement a compliant AI stack with a written acceptable use policy. Free, no-pressure assessment — plain English, not legalese.

Schedule Your Free IT and AI Risk Review or call 877-540-1684.

Serving Law Firms Across Orange County and Los Angeles

HD Tech provides IT compliance and AI risk assessments to law firms in Irvine, Santa Ana, Newport Beach, Anaheim, Orange, Fullerton, Long Beach, and throughout Southern California.

ChatGPT · Law Firms · Attorney-Client Privilege · AI Risk · Legal Technology · Bar Compliance

Tom Hermstad

President & CMO, HD Tech

Tom Hermstad has led HD Tech since 1995, building one of Southern California's most trusted managed IT and cybersecurity firms. He specializes in helping Orange County businesses eliminate IT headaches and stay ahead of evolving cyber threats — in plain English.

Need Help With Your IT?

Get a free, no-pressure IT health check. We'll show you exactly where you're exposed — in plain English.
