Logic vs. Common Sense: Why This Matters in Court

AI can be brilliant at producing a clean-sounding argument. It can also be wildly confident while being quietly wrong. That’s not a contradiction—it’s the central issue.

Think of it like this: AI has “logic” (it can follow patterns and build a chain of reasons). But court is full of “common sense” problems—context, timing, credibility, tone, and the human reality that a case is not a math equation.

That gap matters most when you’re self-represented. If you’re going pro se because you don’t qualify for a free lawyer but you also can’t afford full private representation, you’re not alone—and you’re not foolish. You’re navigating the justice gap. The question is how to do it without letting an AI tool steer you into avoidable mistakes.

1) AI Can “Reason,” But It Doesn’t Live in the Real World

Modern large language models are trained to predict language. They’re not people, and they don’t have lived experience. That matters because “common sense” is often about what the world is like, not just what text about the world looks like. For a thoughtful overview of why we should stop treating language models like people, see MIT’s CBMM summary of an MIT Technology Review piece.

Researchers also note that these systems can be overconfident and struggle with metacognition (knowing when they might be wrong), which can look like “missing common sense.” One example, from a very different high-stakes context, is a 2025 study in Scientific Reports (Nature Portfolio) discussing how a lack of metacognition and common sense can create risk when people over-rely on LLM outputs.

And the research community is actively cataloging and benchmarking reasoning failures, because the problem isn’t just “accuracy”: it’s unpredictable failure modes. For example: Large Language Model Reasoning Failures (arXiv, 2026).

2) In Court, “Common Sense” Is Mostly Procedure + People

Here’s the part people learn the hard way: court is not primarily a debate about who is morally right. It’s a structured process for proving facts under rules.

“Court sense” is the unglamorous mix of (1) procedure, (2) evidence, (3) credibility, and (4) timing. It’s knowing that a great argument delivered at the wrong time, in the wrong format, with the wrong exhibit foundation, can still lose.

AI can help you draft. But drafting is only one slice of litigation. The parts that decide outcomes are often the parts that don’t fit neatly into a prompt.

  • Deadlines, service rules, and local filing requirements (the stuff that can end a case before the judge ever reaches “the merits”).
  • Evidence foundations: what you can actually prove, with admissible documents and competent testimony.
  • Tone and credibility: how you present your story without sounding evasive, exaggerated, or combative.
  • Negotiation leverage: knowing when a settlement offer is a real doorway vs. a trap.
  • Judicial expectations: every court has norms; experienced counsel knows what matters in that courtroom.
  • Emotional regulation: staying calm enough to think strategically when the other side pushes your buttons.

3) The Pro Se + AI Trap: A Perfectly Formatted Brief That Plays the Wrong Game

AI’s superpower is that it can make almost anything sound polished. That’s also the risk: a document can look “court-ready” while missing what the court actually needs. For self-represented litigants, that can create a false sense of security—especially if the other side has counsel who knows the rules and the judge’s preferences.

Here are a few common “logic without common sense” traps I see when people use AI for court paperwork:

  • “The argument is right, so I’m good.” In reality, you often win or lose on procedure: service, admissibility, preservation, deadlines, and whether you asked the court for the right relief.
  • “The AI cited cases, so it must be researched.” Courts have already sanctioned lawyers for filing AI-hallucinated case citations (fake cases). See, for example, Mata v. Avianca (S.D.N.Y. 2023).
  • “I just need to tell my story.” Courts also need organized facts, dates, and proof—presented in a way that matches the burden of proof and the legal elements.
  • “More pages = more persuasive.” Judges often prefer fewer pages, clearer issues, and properly supported exhibits. AI can over-produce volume that dilutes your best points.
  • “The judge will understand what I meant.” Judges can only rule on what’s properly requested and supported. AI doesn’t reliably know what your local court will accept as a proper request.

If you want to see what “AI confidence” looks like in real litigation consequences, read the sanctions opinion here: Mata v. Avianca (Justia).

4) Courts May Be Patient With Pro Se Litigants—But They Don’t Rewrite the Rules

Courts often say they will construe pro se filings liberally—meaning they read them with some flexibility. A common citation is Haines v. Kerner (U.S. Supreme Court, 1972). That helps, but it is not a free pass.

Courts also emphasize that procedural rules still apply. For example, the Supreme Court has stated that even pro se litigants must follow procedural rules like everyone else. See McNeil v. United States (U.S. Supreme Court, 1993).

This is where “court sense” matters most: you can be treated respectfully, heard patiently, and still lose on an avoidable technical issue.

5) The Justice Gap Is Real (and It’s a Big Reason People Turn to AI)

Many people go pro se for a simple reason: they don’t qualify for free legal help, but full representation feels financially out of reach. That’s an access-to-justice problem, not a character flaw.

The Legal Services Corporation’s 2022 Justice Gap findings are sobering: low-income Americans received inadequate or no legal help for 92% of substantial civil legal problems, and cost is a major barrier. You can read the executive summary here: LSC – The Justice Gap Report (Executive Summary).

AI tools are appealing because they feel like “something.” But if AI becomes a substitute for strategy and procedure, rather than just a drafting aid, it can widen the gap by encouraging people to file more paperwork without improving their odds.

6) The Middle Path: Limited Scope Representation (Unbundled Services)

If your budget can’t handle full-scope representation, you often don’t have to choose between “all in” and “all alone.” Limited scope representation lets you hire a lawyer for the parts of the case where experience and judgment matter most—while you handle the rest.

On my site, I explain how limited scope and payment-plan options can help working New Yorkers get meaningful support without a traditional large retainer. Start here: Limited Scope Representation (Gilmer Legal).

If you want concrete examples of what unbundled services can look like—document drafting/review, hearing prep, a single court appearance, targeted negotiation—see: Flat Fee / Limited Scope Attorney Brooklyn (Unbundled Legal Services).

Limited scope works especially well when you’re using AI for organization and first drafts—but you want a human lawyer to pressure-test the strategy, tighten the proof, and keep you out of procedural trouble.

Related reading (example): if your issue involves custody changes, you may find this practical guide helpful: How to File a Custody Modification in New York (Gilmer Legal).

7) New Service: Legal Coaching for AI-Assisted Litigants (Starting at $750)

Some clients want to use AI to help draft, organize, and understand their case—but they also want a real lawyer to keep them from stepping on procedural landmines.

That’s where legal coaching fits. Legal coaching is not “full representation,” and it is not a promise of any outcome. It is targeted, attorney-led support designed for people who are doing significant parts of the case themselves but need help with court-related strategy and execution.

Legal coaching fees start at $750. Coaching can include (depending on your situation and what’s appropriate):

  • Reviewing and tightening AI-generated drafts so they match the relief you’re actually requesting.
  • Building a clean, court-friendly timeline and exhibit list (what you have, what you need, and what’s missing).
  • Preparing you for a hearing: what questions are likely, what evidence matters, and how to stay focused when emotions run hot.
  • Negotiation coaching: how to make offers that move the case, and how to spot “bad deal” language before you sign it.
  • Reality-checking the plan: what’s strong, what’s weak, and what you should stop spending energy on.

When you’re ready to talk about limited scope representation or coaching, the simplest next step is to reach out here: Contact Gilmer Legal.

8) If You Hire a Lawyer, Ask This: “How Do You Use AI—and How Do You Verify It?”

AI can make good lawyers faster—but it can also make sloppy work look polished. Clients should feel comfortable asking how their lawyer uses AI tools (if at all), what safeguards exist, and who is responsible for accuracy. Spoiler: the lawyer is always responsible.

Ethics guidance is catching up quickly. The ABA issued Formal Opinion 512 on July 29, 2024, addressing duties like competence, confidentiality, communication, supervision, and fees when using generative AI. You can read about it here: ABA news release on Formal Opinion 512, and the PDF is here: ABA Formal Opinion 512 (PDF).

New York lawyers should also pay attention to local ethics guidance. For example, the New York City Bar issued Formal Opinion 2024-5 on generative AI in legal practice: NYC Bar Formal Opinion 2024-5.

9) “Common Sense” Also Means Verifying: Courts Are Sanctioning AI Hallucinations

A simple “common sense” rule for court filings is: don’t cite what you haven’t checked. AI can fabricate citations, quotes, and even “facts” that sound right. That’s why courts have started issuing sharp warnings (and sanctions) when filings contain hallucinations.

A recent example is the Fifth Circuit’s February 18, 2026 sanctions order in Fletcher v. Experian Information Solutions, Inc., No. 25-20086 (5th Cir.), where the court sanctioned counsel after concluding that AI was used to draft a substantial portion of a brief and the lawyer failed to verify quotations, citations, and assertions.

Courts have also referred lawyers to disciplinary processes for citing non-existent cases tied to ChatGPT use. One notable example is Park v. Kim (2d Cir. 2024).

The point for pro se litigants is not “don’t use AI.” It’s: if you use AI, treat it like a first draft machine—not a truth machine.

10) How to Use AI Without Getting Played

If you’re self-represented (by choice or necessity), AI can still be useful—if you treat it like a tool belt, not a substitute for judgment. Here are practical, court-safe ways to use it:

  1. Use AI to organize, not to invent: have it help you build a dated timeline, summarize messages, or create an exhibit index. (You still verify everything.)
  2. Ask AI for questions, not answers: “What questions should I be prepared to answer at a hearing?” is often safer than “What motion should I file?”
  3. Never trust AI citations without checking: if it gives you a case or a quote, find it in an official source and read it yourself.
  4. Keep confidential details out of public AI: don’t paste sensitive client facts, medical details, or privileged communications into a public chatbot. See the ABA’s discussion of confidentiality and safeguards in Formal Opinion 512.
  5. Learn your court’s self-help resources: New York courts have DIY forms and help centers designed for self-represented litigants.

Helpful starting points (New York): NY Courts CourtHelp, Family Court DIY Forms, and the Court Navigator Program (non-lawyer navigators who can help with non-legal support in certain settings).

Bottom Line: AI Can Help You Write—But It Can’t Give You Court Sense

AI is good at generating words. Litigation is about outcomes. Outcomes depend on procedure, proof, credibility, timing, and strategy—things that often require human judgment and experience.

If you’re going pro se because you’re stuck between “no free lawyer” and “can’t afford a full retainer,” you deserve more options than a blank form and a prayer.

Limited scope representation can give you targeted attorney support where it counts most. And if you want to use AI but need a real-world guide, legal coaching (starting at $750) can help you turn drafts into a court-ready plan.

Explore limited scope options here: Limited Scope Representation and Flat Fee / Unbundled Services.

Ready to talk about your situation? Contact Gilmer Legal.

Disclaimer

This post is for general informational purposes only and is not legal advice. Reading it does not create an attorney-client relationship. No attorney-client relationship exists unless and until a written agreement is signed. For more on these terms, see: Gilmer Legal Disclaimer.

About the Author

George M. Gilmer, Esq., a Brooklyn-based attorney, leads the Gilmer Law Firm, PLLC, specializing in family and matrimonial law, ACS cases, immigration, bankruptcy, and criminal law. With over 20 years of legal experience, including arguing cases before high-profile judges like Supreme Court Justice Sonia Sotomayor, George is known for his approachable demeanor and commitment to justice. His firm emphasizes affordable, quality legal services, fostering a culture of integrity and compassion, particularly for civil rights and the LGBTQ community.