
Lawyers Fined For Filing 'Fictitious' Case Brief Generated By ChatGPT

The judge said that there was nothing improper about using AI for assistance in research; however, lawyers must ensure that the facts and figures are accurate
Cover Image Source | Pexels | Hatice Baran

Two lawyers have been fined by a judge after it was found that they had submitted fictitious citations generated by ChatGPT. Steven Schwartz, Peter LoDuca, and the law firm Levidow, Levidow & Oberman have been ordered to pay a $5,000 penalty.

The case involved a man who sued Avianca Airlines over a personal injury. When Avianca's lawyers and the judge could not find the cases quoted in the brief, the law firm admitted its mistake.

Schwartz went on to admit that he had relied on the chatbot, and that it invented six of the cases cited in the brief, none of which existed.

New York District Judge Peter Kevin Castel said that there was nothing improper about using AI for assistance in research; however, lawyers must ensure that the facts and figures are accurate.

Pexels |  Sora Shimazaki

The judge also said that the firm and its lawyers "abandoned their responsibilities when they submitted nonexistent judicial opinions with fake quotes and citations created by the AI ChatGPT."

According to Reuters, the judge found that the lawyers had acted in "bad faith," making "acts of conscious avoidance and false and misleading statements to the court."

The judge reportedly also said that the lawyers "continued to stand by the fake opinions" after they were confronted by the court, as per the outlet.

The firm, however, "respectfully" disagreed with the court, saying in a statement that it had not acted in bad faith and had simply failed to believe that the technology could make up a case.

The false citations first came to light in March, when Avianca's lawyers told the court they could not find some of the cases cited in the brief.


Earlier this month, a Texas judge had ordered lawyers to attest that they would not use ChatGPT or other generative artificial intelligence technology to write legal briefs as the AI tool can invent facts.

A report by UNESCO explored the advantages and disadvantages of AI in judicial systems. It noted that many judicial institutions, including courts and prosecution services, are exploring AI's potential for tasks such as pattern recognition, but that its use also poses challenges, including the risk of biased decisions, questions of accountability, and more.

"Self-learning algorithms, for instance, may be trained by certain data sets (previous decisions, facial images or video databases, etc.) that may contain biased data that can be used by applications for criminal or public safety purposes, leading to biased decisions," it said.


Litigation can be extremely time-consuming, and AI can make collecting information from enormous documents far easier. The technology has immense capacity to accelerate these processes, so AI can be genuinely handy for drafting documents. Needless to say, human fact-checking remains essential.
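To make the fact-checking step concrete, here is a minimal, purely illustrative Python sketch of the workflow the court effectively demanded: every citation in an AI-drafted brief is cross-checked against a list a human has verified in a trusted legal database, and anything unconfirmed is flagged for manual research. The function name and the case names are made up for illustration; they are not real cases or a real legal API.

```python
# Hypothetical sketch: flag AI-drafted citations that a human has not
# yet confirmed in a trusted legal database. All names are invented.

def flag_unverified(ai_citations, verified_citations):
    """Return citations from the AI draft absent from the verified list."""
    verified = {c.strip().lower() for c in verified_citations}
    return [c for c in ai_citations if c.strip().lower() not in verified]

# Citations as they appear in the AI-generated draft (illustrative).
draft = [
    "Smith v. Example Air, 123 F.3d 456",
    "Doe v. Sample Co., 789 F.2d 12",
]

# Citations a human lawyer actually confirmed exist (illustrative).
confirmed = ["Smith v. Example Air, 123 F.3d 456"]

# Anything printed here must be researched by hand before filing.
print(flag_unverified(draft, confirmed))
```

The point of the sketch is the process, not the code: no AI-supplied citation reaches the court until a human has located it independently.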

Pexels | Sora Shimazaki

Many assume that using AI is easy, but a weak grasp of prompt engineering and of the technology's limits can lead to sanctions as high as $5,000, as this case shows.

What Is Prompt Engineering?

As per GeekChamp, prompt engineering is simply the "science of designing" prompts so that an AI produces the best possible result.

Consider an example: if you ask ChatGPT to simply "tell the weather" or "write a story," the result will be a far more generalized version of what you are looking for. Vague prompts like these are simply not effective at steering the AI toward a specific, useful response.

In the court case mentioned above, the fault lay partly in prompts the AI misinterpreted, leading it to fabricate hypothetical cases rather than cite real ones from the past.

AI can be revolutionary in many ways, but not every AI task is as simple as asking Alexa to convert euros to dollars. Andrew Perlman, Dean of Suffolk University Law School, wrote a paper after using ChatGPT. In his article, titled "The Implications of ChatGPT for Legal Services and Society," Perlman said that "the responses generated by ChatGPT were imperfect and at times problematic" and that "the use of an AI tool for legal services raises a number of regulatory and ethical issues."

AI, when used in a complex field like litigation, should be applied in a systematic manner. Like any skill, using AI well must be learned; otherwise, it can lead to negative consequences like the ones described here.