Date: October 8th, 2025 9:53 AM
Author: ipods
These people ditched lawyers for ChatGPT in court
From pickleball disputes to eviction cases, litigants are using ChatGPT to fight their court battles.
Lynn White was out of options.
Behind on payments for her mobile home in Long Beach, California, and facing an eviction notice, she had no money for a lawyer. So she accepted a court-appointed attorney and lost.
But White wanted to appeal. So, she decided to consult ChatGPT. Having previously used AI to generate videos for her small music production business, she thought it might be able to help in the legal arena, too.
“It was like having God up there responding to my questions,” White said after using the chatbot and an AI-powered search engine called Perplexity to help represent herself in court.
White regularly provided ChatGPT with documents and detailed information about her case. The chatbot helped White identify potential errors in a judge’s procedural decisions, chart out possible courses of action, research applicable laws and draft responses to the court.
Several months of litigation later, White managed to get the eviction overturned and avoid roughly $55,000 in penalties and more than $18,000 in overdue rent.
“I can’t overemphasize the usefulness of AI in my case,” White said. “I never, ever, ever, ever could have won this appeal without AI.”
White originally used the free version of ChatGPT before upgrading to its $20-a-month premium service; she also used Perplexity Pro, which costs $20 a month.
With a slew of generative AI tools available to anyone with an internet connection, a rising number of litigants are using AI to assist in their legal cases. Many are eschewing lawyers altogether, representing themselves in court with AI as their primary guide.
To understand the role of AI in self-represented, or pro se, litigation, NBC News spoke with more than 15 sources, including pro se litigants, legal professionals, nonprofit leaders, AI startups and legal research companies.
In interviews, lawyers and litigants described AI as producing mixed results in court. Some say AI tools are helping them navigate mazes of court procedure and simplify legal jargon. Others have faced steep penalties for submitting court filings with inaccurate or nonexistent information. Regardless of the consequences, legal practitioners and experts say generative AI tools appear to be encouraging certain people to seek legal guidance from algorithms instead of human lawyers.
“I’ve seen more and more pro se litigants in the last year than I have in probably my entire career,” said Meagan Holmes, a paralegal at the Phoenix-based law firm Thorpe Shwer, who attributes the rise to AI chatbots.
Some leading AI companies discourage customers from using their services for legal purposes, while others have made little to no mention of the possibility. Google's terms of service state: “Don’t rely on the Services for medical, legal, financial, or other professional advice. Any content regarding those topics is provided for informational purposes only and is not a substitute for advice from a qualified professional.” The acceptable use policy for xAI warns users not to use its models to operate “in a regulated industry or region without complying with those regulations.”
Yet when asked legal questions, most chatbots will respond and provide legal advice without any warnings beyond ever-present, fine-print disclaimers noting that generative AI answers may not be accurate.
In a statement, Perplexity spokesperson Jesse Dwyer told NBC News: “We don’t claim to be 100% accurate, but we do claim to be the only company who works on it relentlessly.”
“The consumer use of AI in some of the most challenging aspects of the legal ecosystem is proof of how necessary that reliability and accuracy is,” Dwyer wrote.
OpenAI, Anthropic, Google DeepMind and xAI did not respond to requests for comment.
Staci Dennett is among the many litigants now representing herself using AI. Dennett owns a home fitness business in New Mexico and describes herself as an early adopter of AI. So when she was served with a court summons about an unpaid debt, she turned to ChatGPT for advice on how to respond.
“I’m not normally fighting lawsuits on ChatGPT, but I do use it a lot in my business and in my life,” she said.
The bot provided templates and drafted arguments for her responses, with Dennett regularly asking it to find potential errors in her logic. “I would tell ChatGPT to pretend it was a Harvard Law professor and to rip my arguments apart,” she said. “Rip it apart until I got an A-plus on the assignment.”
Dennett ultimately negotiated a settlement and saved over $2,000 compared to her original debt, a partial victory she attributes largely to AI.
She said the opposing lawyers complimented her on her knowledge of the law and court proceedings as the case was wrapping up, writing to her in an email: “If the law is something you’re interested in as a profession, you could certainly do the job.”
The price of AI legal advice
Even as some litigants have found success in small-claims disputes, legal professionals who spoke to NBC News say AI-drafted court documents are often littered with inaccuracies and faulty reasoning.
Holmes said litigants “will use a case that ChatGPT gave them, and when I go to look it up, it does not exist. Most of the time, we get them dismissed for failure to state an actual claim, because a lot of times it’s just kind of, not to be rude, but nonsense.”

AI models often generate information that is false or misleading but presented as fact, a phenomenon known as “hallucination.” Chatbots are trained on vast datasets to predict the most likely response to a query but sometimes encounter gaps in their knowledge. In these cases, the model may attempt to fill in the missing pieces with its best approximation, which can result in inaccurate or fabricated details.
For litigants, AI hallucinations can lead to pricey penalties. Jack Owoc, a colorful Florida-based energy drink mogul who lost a false advertising case to the tune of $311 million and is now representing himself, was recently sanctioned for filing a court motion with 11 AI-hallucinated citations referencing court cases that do not exist.
Owoc admitted he had used generative AI to draft the document due to his limited finances. He was ordered to complete 10 hours of community service and is now required to disclose whether he uses AI in all future filings in the case.
“Just like a law firm would have to check the work of a junior associate, so do you have to check the work generated by AI,” Owoc said via email.
Holmes and other legal professionals say there are common telltale signs of careless AI use, such as citations to nonexistent case law, filler language that was left in, and ChatGPT-style emoji or formatting that looks nothing like a typical legal document.
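None of those tells requires legal training to spot, and the most mechanical ones can be scripted. As a rough illustration, a short Python sketch, with patterns that are purely illustrative assumptions rather than anything the firms quoted actually use, might flag leftover chatbot filler and emoji in a draft filing:

    import re
    import sys

    # Illustrative "tells"; real signs of careless AI use vary case to case.
    FILLER_PATTERNS = [
        r"\[insert [^\]]+\]",          # leftover template placeholders
        r"as an ai language model",    # chatbot boilerplate pasted verbatim
        r"i hope this helps",          # chat-style sign-off
    ]
    EMOJI = re.compile("[\U0001F300-\U0001FAFF\u2600-\u27BF]")

    def flag_tells(text: str) -> list[str]:
        """Return human-readable warnings for common signs of careless AI use."""
        warnings = [f"filler language matched: {p!r}"
                    for p in FILLER_PATTERNS
                    if re.search(p, text, flags=re.IGNORECASE)]
        if EMOJI.search(text):
            warnings.append("emoji found (atypical in a legal filing)")
        return warnings

    if __name__ == "__main__":
        with open(sys.argv[1], encoding="utf-8") as f:
            draft = f.read()
        for warning in flag_tells(draft):
            print("WARNING:", warning)

A check like this catches only surface-level sloppiness; it says nothing about whether the cited cases actually exist, which is the harder problem.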
Damien Charlotin, a legal researcher and data scientist, has organized a public database tracking legal decisions in cases where litigants were caught using AI in court. He’s documented 282 such cases in the U.S. and more than 130 from other countries dating back to 2023.
“It really started to accelerate around the spring of 2025,” Charlotin said.
And the database is far from exhaustive. It only tracks cases where the established or alleged use of AI was directly addressed by the court, and Charlotin said most of its entries are referred to him by lawyers or other researchers. He noted that there are generally three types of AI hallucinations:
“You got the fabricated case law, and that’s quite easy to spot, because the case does not exist. Then you got the false quotations from existing case law. That’s also rather easy to spot because you do control-F,” Charlotin said. “And then there is misrepresented case law. That’s much harder, because you’re citing something that exists but you’re totally misrepresenting it.”
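Charlotin’s first two categories lend themselves to exactly the kind of mechanical check he describes. A minimal sketch of the “control-F” test in Python, assuming you have already downloaded the full text of the cited opinion to a local file (the file name and quotation here are hypothetical):

    def _normalize(s: str) -> str:
        # Court opinions extracted from PDFs often have erratic line breaks,
        # so collapse whitespace and case before comparing.
        return " ".join(s.split()).lower()

    def quote_appears_verbatim(quote: str, opinion_text: str) -> bool:
        """The control-F check: does the quoted passage appear word for word?"""
        return _normalize(quote) in _normalize(opinion_text)

    with open("opinion.txt", encoding="utf-8") as f:   # hypothetical download
        opinion = f.read()

    if not quote_appears_verbatim("the standard of review is de novo", opinion):
        print("Quotation not found verbatim; verify before filing.")

His third category, misrepresented case law, resists this kind of automation: the text exists, so only someone who reads the opinion can tell whether it stands for what the filing claims.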
Earl Takefman has experienced AI’s hallucinatory tendencies firsthand. He is currently representing himself in several cases in Florida regarding a pickleball business deal gone awry and started using AI to help him in court last year.
“It never for a second even crossed my mind that ChatGPT would totally make up cases, and unfortunately, I found out the hard way,” Takefman told NBC News.
Takefman realized his mistake when the opposing counsel pointed out a hallucinated case in one of Takefman’s filings. “I went back to ChatGPT and told it that it really f----d me over,” he said. “It apologized.”
A judge admonished Takefman for citing the same nonexistent case — an imaginary one from 1995 called Hernandez v. Gilbert — in two separate filings, among other missteps, according to court documents.
Embarrassed about the oversight, Takefman resolved to be more careful. “So I said, ‘OK, I know how to get around it. I’m going to ask ChatGPT to give me actual quotations from the court case I want to reference. Surely they would never make up an actual quotation.’ And it turns out they were making that up too!”
“I certainly did not intend to mislead the court,” Takefman said. “They take it very, very seriously and don’t let you off the hook because you’re a pro se litigant.”
In late August, the court ordered Takefman to explain why he should not be sanctioned over his mistakes. It accepted his apology and declined to impose sanctions.

The experience has not soured Takefman on using AI in his court dealings.
“Now, I check between different applications, so I’ll take what Grok gives me and give it to ChatGPT and see if it agrees — that all the cases are real, that there aren’t any hallucinations, and that the cases actually mean what the AI thinks they mean,” Takefman said.
“Then, I put all of the cases into Google to do one last check to make sure that the cases are real. That way, I can actually say to a judge that I checked the case and it exists,” Takefman said.
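That last step can also be made systematic. A minimal sketch in Python, under the assumption that you maintain a list of case names you have personally confirmed to exist (the regex is a rough heuristic for “Party v. Party” names, not a real citation parser, and will produce both misses and false matches):

    import re

    # Rough heuristic for "Party v. Party" case names; real citation formats
    # vary widely, so treat this as a first pass only.
    CASE_NAME = re.compile(
        r"\b[A-Z][\w'.-]*(?: [A-Z][\w'.-]*)* v\. [A-Z][\w'.-]*(?: [A-Z][\w'.-]*)*"
    )

    def unverified_citations(draft: str, verified: set[str]) -> list[str]:
        """Return case names in the draft that are not on the verified list."""
        return sorted(set(CASE_NAME.findall(draft)) - verified)

    # Hypothetical usage: 'Hernandez v. Gilbert' is flagged because it has
    # not been independently confirmed (and, per the court, does not exist).
    verified_cases = {"Marbury v. Madison"}
    draft_text = "This follows from Marbury v. Madison and from Hernandez v. Gilbert, among others."
    for case in unverified_citations(draft_text, verified_cases):
        print("Not yet verified:", case)

A hit on this list is not proof that a citation is fake, only that it has not yet cleared the kind of Google-and-read-it check Takefman now applies to everything he files.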
So far, the majority of AI hallucinations in Charlotin’s database come from pro se litigants, but many have also come from lawyers themselves.
Earlier this month, a California court ordered an attorney to pay a $10,000 fine for filing a state court appeal in which 21 of the 23 quotes from cited cases were hallucinated by ChatGPT. It appears to be the largest-ever fine issued over AI fabrications, according to CalMatters.
“I can understand more easily how someone without a lawyer, and maybe who feels like they don’t have the money to access an attorney, would be tempted to rely on one of these tools,” said Robert Freund, an attorney who regularly contributes to Charlotin’s database. “What I can’t understand is an attorney betraying the most fundamental parts of our responsibilities to our clients … and making these arguments that are based on total fabrication.”
Freund, who runs a law firm in Los Angeles, said the influx of AI hallucinations wastes both the court’s and the opposing party’s time by forcing them to use up resources identifying factual inaccuracies. Even after a judge admonishes someone caught filing AI slop, sometimes the same plaintiff continues to flood the court with AI-generated filings “filled with junk.”
Matthew Garces, a registered nurse in New Mexico who’s a strong proponent of using AI to represent himself in legal matters, is currently involved in 28 federal civil suits, including 10 active appeals and several petitions to the Supreme Court. These cases cover a range of topics, including medical malpractice, housing disputes between Garces and his landlord, and alleged improper judicial conduct toward Garces.
After noting that Garces submitted documents referencing numerous nonexistent cases, a panel of judges from the 5th U.S. Circuit Court of Appeals recently criticized Garces’ prolific filing of new cases, writing that he is “WARNED FOR A SECOND TIME” to avoid any “future frivolous, repetitive, or otherwise abusive filings” or risk increasingly severe penalties.
A magistrate judge in another of Garces’ cases also recommended that he be banned from filing any lawsuits without the express authorization of a more senior judge, and that Garces be classified as “a vexatious litigant.”
Still, Garces told NBC News that “AI provides access to the courthouse doors that money often keeps closed. Managing nearly 30 federal suits on my own would be nearly impossible without AI tools to organize, research and prepare filings.”
Some lawyers remain optimistic
As the use of AI in court grows, some pro bono legal clinics are now trying to teach their self-representing clients to use AI in ways that help rather than harm them — without offering direct legal advice.
“This is the most exciting time to be a lawyer,” said Zoe Dolan, a supervising attorney at Public Counsel, a nonprofit public interest law firm and legal advocacy center in Los Angeles. “The amount of impact that any one advocate can now have is only sort of limited by our imagination and organizational structures.”
Last year, Dolan helped create a class for self-represented litigants in Los Angeles County to learn how to leverage AI in their cases. The class taught participants how to use various prompts to create documents, how to fact-check the AI systems’ outputs and how to use chatbots to verify other chatbots’ work.
Several of the litigants who took the class, including White, have gone on to win their cases while using AI.
Many of the legal professionals who rail against sloppy AI use in court say they are not opposed to the technology itself. In fact, many feel optimistic about AI adoption by lawyers who have the expertise to analyze and verify its outputs.
Andrew Montez, an attorney in Southern California, said that despite his firm “seeing pro se litigants constantly using AI” over the past six months, he himself has found AI tools useful as a starting point for research or brainstorming. He said he never inputs real client names or confidential information, and he checks every citation manually.
While AI cannot substitute for his own legal research and analysis, Montez said, these systems enable lawyers to write better-researched briefs more quickly.
“Going forward in the legal profession, all attorneys will have to use AI in some way or another. Otherwise they will be outgunned,” Montez said. “AI is the great equalizer. Internet research, to a certain extent, made law libraries obsolete. I think AI is really the next frontier.”
As for pro se litigants without legal expertise, Montez said he believes most cases are too complex for AI alone to grasp the full context and produce analysis strong enough to help someone succeed in court. But he noted that he could envision a future in which more people use AI to successfully represent themselves, especially in small claims courts.
White, who avoided eviction this year with the help of ChatGPT and Perplexity, said she views AI as a way to level the playing field. When asked what advice she would give to other pro se litigants, she thought it fitting to craft a reply with ChatGPT.
“AI gave me research support, drafting help and organizational skills that I could not access. I used AI to double as a virtual law clerk,” White said, reading from ChatGPT’s response. Interjecting her own thoughts, she added: “And it wants me to emphasize, which I totally agree with, that ‘it felt like David and Goliath, except my slingshot was AI.’”