Abstract
Generative AI can now draft pleadings, summarize discovery, and generate litigation checklists in seconds. That reality has reignited a recurring question: will AI replace lawyers? This essay argues that AI will replace many discrete legal tasks, but it is unlikely to replace lawyers wholesale—especially in litigation—because the core value of counsel lies in strategy, judgment, credibility, ethics, and the human management of conflict. Using the proverb “he who represents himself has a fool for a client” as a starting point, the essay explains why AI does not cure the fundamental dangers of self-representation, highlights emerging AI-related pitfalls (including sanctions for fabricated citations), and offers a practical, client-centered middle path: limited scope representation supported by responsible AI-enabled collaboration.
Introduction: The Question Behind the Question
When people ask whether AI will replace lawyers, they usually mean something more specific: “Will a client be able to get the same outcome without paying for counsel?” For a growing number of litigants—especially those priced out of full-service representation—AI appears to promise exactly that. But courtroom outcomes do not turn solely on who can generate the cleanest prose. They turn on procedural discipline, evidentiary foundations, credibility, strategic tradeoffs, and—unavoidably—the psychology of conflict.
Those are precisely the areas where the oldest proverb in the book still does work.
I. “He Who Represents Himself Has a Fool for a Client”: What the Saying Really Means
The proverb “a person who represents himself has a fool for a client” is often (mis)attributed to Abraham Lincoln, but it appears in print well before Lincoln’s presidency. Scholars of quotation history trace early forms to the early 1800s, including an 1814 reference associated with Henry Kett. [1] [2]
This line is not best understood as an insult to people who choose (or are forced) to proceed pro se. It is a warning about role conflict. The “client” role is personal and emotionally invested. The “lawyer” role must be strategic, procedural, and objective. When the same person occupies both roles, even a smart and sincere litigant can become vulnerable to a predictable set of litigation errors: overconfidence, tunnel vision, and the inability to evaluate risk like an outsider.
Generative AI does not erase this dynamic. It can provide words; it cannot supply objectivity.
II. Pro Se Is a Right (Sometimes) — But It Is Not a Litigation Advantage
American law recognizes self-representation in multiple contexts. In federal court, statute provides that parties may “plead and conduct their own cases personally or by counsel.” [3] In criminal prosecutions, the Supreme Court has recognized a constitutional right of self-representation where the defendant knowingly and intelligently waives counsel. [4]
Courts also acknowledge the practical reality that many self-represented litigants lack legal training. Pro se filings may be construed liberally in some contexts. [5] But this is not a license to ignore procedural requirements. The Supreme Court has warned that it has “never suggested that procedural rules in ordinary civil litigation should be interpreted so as to excuse mistakes by those who proceed without counsel.” [6]
The core problem is structural: litigation is not just about being right; it is about proving that you are right in a system governed by rules. AI can help a person understand those rules at a surface level, but it cannot guarantee compliance—especially when the litigant does not know which facts matter, which deadlines are jurisdictional, or which omissions are fatal.
III. Why AI Doesn’t Solve the Core Intangibles of Litigation
AI is strongest where legal work is text-intensive, repetitive, and pattern-based. Litigation, however, is only partly a writing exercise. The most valuable “intangibles” that competent counsel bring to a case are not reducible to prompts.
1. Emotional distance and reality testing. Litigation pressures people into extreme positions. A lawyer’s job includes telling a client hard truths: what the record will show, how a judge may react, what a jury will likely hear, and what a settlement number actually means.
2. Strategic judgment under uncertainty. The key decisions are rarely “write a motion” versus “don’t write a motion.” They are choices among imperfect options: move now or develop the record; attack credibility or stay narrow; consent to discovery limits or fight and risk sanctions.
3. Credibility and courtroom execution. A well-written brief can be undermined by a single careless misstatement at oral argument, an exhibit admitted without foundation, or a witness who collapses on cross.
4. Institutional knowledge and professional relationships. Experienced counsel understand local rules, courtroom customs, and judge-specific preferences that rarely appear in published opinions. Knowing judges is not about improper influence; it is about knowing how to litigate efficiently in a particular forum. Likewise, lawyers routinely consult colleagues, co-counsel, and specialists to stress-test strategy—an ecosystem AI cannot replicate.
In short, AI may help create a draft. Counsel must still decide whether that draft should exist at all.
IV. AI Creates New Pitfalls — and Courts Are Already Sanctioning Them
The most dangerous feature of generative AI is not that it makes mistakes—humans do too—but that it makes mistakes confidently. That matters in law because the system assumes that citations and quotations are real.
Recent cases illustrate how quickly AI errors can become sanctionable misconduct:
• In Mata v. Avianca, a federal court imposed sanctions after attorneys filed papers containing multiple nonexistent judicial opinions and fabricated quotations generated through AI-assisted research and drafting. [7]
• In Park v. Kim, the Second Circuit referred an attorney to its Grievance Panel after counsel cited a nonexistent case obtained through ChatGPT. [8]
• In Smith v. Farwell, a Massachusetts Superior Court judge sanctioned counsel for filing briefs containing fictitious case citations generated by unidentified AI systems and filed without adequate verification. [9]
• In Gauthier v. Goodyear Tire & Rubber Co., the Eastern District of Texas sanctioned an attorney after fabricated citations and quotations appeared in a filing linked to AI tool usage. [10]
These matters share a theme: AI does not dilute the lawyer’s duty of competence, diligence, and candor. It intensifies it.
For pro se litigants, the risk is different but no less real: AI may encourage overconfidence, leading a person to file something that “sounds legal” but fails on procedure, evidence, or jurisdiction.
V. Rules and Ethics Are Catching Up: Disclosure Orders and Professional Guidance
Courts are responding not by banning AI, but by reallocating accountability back onto humans.
Some judges have issued standing orders requiring disclosure and/or certification when generative AI is used in filings. For example, Judge Baylson (E.D. Pa.) requires parties to disclose AI use and certify that every citation to law or the record has been verified as accurate. [11] Judge Fuentes (N.D. Ill.) has required disclosure of the specific generative AI tool used for drafting and/or research. [12] Judge Starr (N.D. Tex.) famously required a certificate attesting either that no portion of a filing was drafted by generative AI or that any AI-drafted language was checked for accuracy by a human. [13]
Ethics authorities are also providing a framework. The American Bar Association’s Formal Opinion 512 addresses lawyers’ ethical obligations when using generative AI tools, emphasizing competence, confidentiality, communication, supervision, and reasonable fees. [14] Model Rule 1.1’s commentary explicitly connects competence to the “benefits and risks associated with relevant technology.” [15] And confidentiality obligations remain a bright line: lawyers must protect information relating to representation. [16] In New York, ethics guidance likewise discusses how these duties apply to generative AI. [17] The message is consistent: AI is permitted, but professional judgment cannot be delegated.
VI. What Clients Should Ask: “Do You Use AI — and How?”
A practical implication of this new landscape is that clients should feel empowered to ask whether counsel uses AI—and, more importantly, whether counsel uses it responsibly.
Not all “AI” is the same. Beyond general-purpose chatbots, there are legal-focused tools designed to work with authoritative legal content and verification workflows (for example, platforms marketed by major legal publishers). [18] [19]
Client-friendly questions include:
• Confidentiality: What information do you input into AI tools? Do you anonymize client facts?
• Verification: What is your process to confirm every citation, quote, and record reference?
• Court rules: Does the assigned judge require disclosure or certification of AI use?
• Billing: How do AI efficiencies affect the fee? Do clients benefit from time saved?
• Strategy ownership: Who makes the strategic decisions—the client, the lawyer, or the tool?
Properly used, AI can improve service: faster iterations on drafts, better organization of timelines, and clearer client communication. But the attorney remains responsible for strategy, ethics, and filings. That responsibility cannot be outsourced.
VII. Limited Scope Representation: A Middle Path Between “Full Retainer” and Pro Se
One reason AI feels “disruptive” is that it arrives in a market where many people cannot afford full-service counsel. The real access-to-justice question is not whether AI replaces lawyers, but whether AI can help people avoid the false binary of “pay for everything” or “go it alone.”
Limited scope representation—sometimes called “unbundled” legal services—can provide that middle path. ABA Model Rule 1.2(c) allows lawyers to limit the scope of representation when reasonable and when the client gives informed consent. [20]
Limited scope assistance can be especially valuable in litigation “pressure points,” such as:
• drafting or reviewing a complaint, answer, or key motion;
• preparing a client for testimony or deposition;
• negotiating and drafting a settlement stipulation;
• appearing at a critical hearing.
On Gilmer Legal’s site, readers can learn more here:
• Limited Scope Representation
• Flat Fee / Unbundled Legal Services
When clients use AI to organize facts, build chronologies, and identify questions, limited scope representation becomes even more practical: the client can do structured preparation, and counsel can focus on strategy, risk management, and courtroom execution. But limited scope must be clearly defined in writing so the client understands what counsel will—and will not—do.
Practice Pointers: Avoiding the Most Common AI-Enabled Missteps
• Treat AI output as a draft, not an authority. Verify every case citation, quote, and procedural rule against primary sources.
• Do not upload confidential client information into public AI tools without an informed, documented confidentiality strategy.
• Use AI to generate checklists and issue-spotting—then confirm local rules, standing orders, and judge preferences separately.
• Beware overproduction: AI may encourage litigants to file too much, too often, and too emotionally. Strategy is restraint as much as action.
• If you are pro se, consider using counsel for limited scope coaching before a key hearing or filing; a short consult can prevent a fatal error.
Conclusion: AI Will Replace Tasks, Not the Lawyer’s Core Role
Generative AI will change how legal work is performed. It will compress drafting time, accelerate research, and empower clients to participate more actively in their cases. But litigation remains a human system that demands judgment, credibility, and ethical accountability.
That is why the “fool for a client” warning survives the technological moment. AI does not eliminate the hardest part of self-representation: the inability to be objective about your own dispute. The future is not “AI versus lawyers.” It is AI-enhanced practice—where the best outcomes come from responsible tools, transparent processes, and counsel who remain accountable for strategy and results.
Disclaimer
This article is for general informational purposes only and does not constitute legal advice. Reading this article does not create an attorney-client relationship. If you need advice about your specific situation, consult a qualified attorney in your jurisdiction.
References (Sources Cited)
[1] Quote Investigator, “A Man Who Is His Own Lawyer Has a Fool for a Client” (July 30, 2019). https://quoteinvestigator.com/2019/07/30/lawyer/
[2] The Phrase Finder, “A man who is his own lawyer has a fool for a client” (discussing Henry Kett, 1814). https://www.phrases.org.uk/meanings/a-man-who-is-his-own-lawyer-has-a-fool-for-a-client.html
[3] 28 U.S.C. § 1654, Appearance personally or by counsel (Cornell Legal Information Institute). https://www.law.cornell.edu/uscode/text/28/1654
[4] Faretta v. California, 422 U.S. 806 (1975) (Justia Supreme Court Center). https://supreme.justia.com/cases/federal/us/422/806/
[5] Haines v. Kerner, 404 U.S. 519 (1972) (Cornell LII). https://www.law.cornell.edu/supremecourt/text/404/519
[6] McNeil v. United States, 508 U.S. 106 (1993) (Cornell LII). https://www.law.cornell.edu/supct/html/92-6033.ZO.html
[7] Mata v. Avianca, Inc., Opinion and Order on Sanctions, No. 22-cv-1461 (S.D.N.Y. June 22, 2023) (Doc. 54) (Justia). https://law.justia.com/cases/federal/district-courts/new-york/nysdce/1:2022cv01461/575368/54/
[8] Park v. Kim, No. 22-2057 (2d Cir. Jan. 30, 2024) (Justia). https://law.justia.com/cases/federal/appellate-courts/ca2/22-2057/22-2057-2024-01-30.html
[9] Smith v. Farwell, Order Imposing Sanctions (Mass. Super. Ct. Feb. 12, 2024) (PDF). https://masslawyersweekly.com/files/2024/02/12-007-24.pdf
[10] Gauthier v. Goodyear Tire & Rubber Co., Memorandum and Order (E.D. Tex. Nov. 25, 2024) (PDF via Courthouse News). https://www.courthousenews.com/wp-content/uploads/2024/11/attorney-sanctioned-for-using-ai-hallucinations.pdf
[11] Standing Order – In Re: Artificial Intelligence (“AI”) in Cases Assigned to Judge Baylson (E.D. Pa. June 6, 2023) (PDF). https://www.paed.uscourts.gov/sites/paed/files/documents/procedures/Standing%20Order%20Re%20Artificial%20Intelligence%206.6.pdf
[12] Standing Order for Civil Cases Before Judge Fuentes (N.D. Ill., revised June 21, 2023) (PDF). https://www.ilnd.uscourts.gov/_assets/_documents/_forms/_judges/Fuentes/Standing%20Order%20For%20Civil%20Cases%20Before%20Judge%20Fuentes%20revision%206-21-23.pdf
[13] Judge Brantley Starr (N.D. Tex.), Mandatory Certification Regarding Generative Artificial Intelligence (certificate form, DOC). https://www.txnd.uscourts.gov/sites/default/files/documents/CertReStarrJSR.doc
[14] American Bar Association, news release on Formal Opinion 512 (July 29, 2024). https://www.americanbar.org/news/abanews/aba-news-archives/2024/07/aba-issues-first-ethics-guidance-ai-tools/
[15] ABA Model Rule 1.1, Comment [8] (technology competence). https://www.americanbar.org/groups/professional_responsibility/publications/model_rules_of_professional_conduct/rule_1_1_competence/comment_on_rule_1_1/
[16] ABA Model Rule 1.6 (confidentiality of information). https://www.americanbar.org/groups/professional_responsibility/publications/model_rules_of_professional_conduct/rule_1_6_confidentiality_of_information/
[17] New York City Bar Association, Formal Opinion 2024-5: Generative AI in the Practice of Law (Aug. 7, 2024). https://www.nycbar.org/reports/formal-opinion-2024-5-generative-ai-in-the-practice-of-law/
[18] Thomson Reuters, CoCounsel Legal (product overview). https://legal.thomsonreuters.com/en/products/cocounsel-legal
[19] LexisNexis, Lexis+ AI / Lexis+ with Protégé (product overview). https://www.lexisnexis.com/en-us/products/lexis-plus-ai.page
[20] ABA Model Rule 1.2(c) (limited scope representation). https://www.americanbar.org/groups/professional_responsibility/publications/model_rules_of_professional_conduct/rule_1_2_scope_of_representation_allocation_of_authority_between_client_lawyer/
