Mistake-filled legal briefs show the limits of relying on AI tools at work

October 30, 2025

NEW YORK (AP) — Judges around the world are dealing with a growing problem: legal briefs that were generated with the help of artificial intelligence and submitted with errors such as citations to cases that don’t exist, according to attorneys and court documents.

The trend serves as a cautionary tale for people who are learning to use AI at work. Many employers want to hire workers who can use the technology to help with tasks such as conducting research and drafting reports. As teachers, accountants and marketing professionals begin engaging with AI chatbots and assistants to generate ideas and improve productivity, they’re also discovering the programs can make mistakes.

A French data scientist and lawyer, Damien Charlotin, has catalogued at least 490 court filings in the past six months that contained “hallucinations,” which are AI responses that contain false or misleading information. The pace is accelerating as more people use AI, he said.

“Even the more sophisticated player can have an issue with this,” Charlotin said. “AI can be a boon. It’s wonderful, but also there are these pitfalls.”

Charlotin, a senior research fellow at HEC Paris, a business school located just outside France’s capital city, created a database to track cases in which a judge ruled that generative AI produced hallucinated content such as fabricated case law and false quotes. The majority of rulings are from U.S. cases in which plaintiffs represented themselves without an attorney, he said. While most judges issued warnings about the errors, some levied fines.


But even high-profile companies have submitted problematic legal documents. A federal judge in Colorado ruled that a lawyer for MyPillow Inc. filed a brief containing nearly 30 defective citations as part of a defamation case against the company and founder Michael Lindell.

The legal profession isn’t the only one wrestling with AI’s foibles. The AI overviews that appear at the top of search results pages frequently contain errors.

And AI tools also raise privacy concerns. Workers in all industries need to be cautious about the information they upload or put into prompts to ensure they’re safeguarding the confidential information of employers and clients.

Legal and workplace experts share their experiences with AI’s mistakes and describe perils to avoid.

Think of AI as an assistant

Don’t trust AI to make big decisions for you. Instead, treat the tool as an intern: assign it tasks, but expect to check its completed work.

“Think about AI as augmenting your workflow,” said Maria Flynn, CEO of Jobs for the Future, a nonprofit focused on workforce development. It can act as an assistant for tasks such as drafting an email or researching a travel itinerary, but don’t think of it as a substitute that can do all of the work, she said.

When preparing for a meeting, Flynn experimented with an in-house AI tool, asking it to suggest discussion questions based on an article she shared with the team.

“Some of the questions it proposed weren’t the right context really for our organization, so I was able to give it some of that feedback … and it came back with five very thoughtful questions,” she said.

Check for accuracy

Flynn also has found problems in the output of the AI tool, which still is in a pilot stage. She once asked it to compile information on work her organization had done in various states. But the AI tool was treating completed work and funding proposals as the same thing.

“In that case, our AI tool was not able to identify the difference between something that had been proposed and something that had been completed,” Flynn said.

Luckily, she had the institutional knowledge to recognize the errors. “If you’re new in an organization, ask coworkers if the results look accurate to them,” Flynn suggested.

While AI can help with brainstorming, relying on it to provide factual information is risky. Take the time to check the accuracy of what AI generates, even if it’s tempting to skip that step.

“People are making an assumption because it sounds so plausible that it’s right, and it’s convenient,” Justin Daniels, an Atlanta-based attorney and shareholder with the law firm Baker Donelson, said. “Having to go back and check all the cites, or when I look at a contract that AI has summarized, I have to go back and read what the contract says, that’s a little inconvenient and time-consuming, but that’s what you have to do. As much as you think the AI can substitute for that, it can’t.”

Be careful with notetakers

It can be tempting to use AI to record and take notes during meetings. Some tools generate useful summaries and outline action steps based on what was said.

But many jurisdictions require the consent of participants prior to recording a conversation. Before using AI to take notes, pause and consider whether the conversation should be kept privileged and confidential, said Danielle Kays, a Chicago-based partner at law firm Fisher Phillips.

Consult with colleagues in the legal or human resources departments before deploying a notetaker in high-risk situations such as investigations, performance reviews or legal strategy discussions, she suggested.

“People are claiming that with use of AI there should be various levels of consent, and that is something that is working its way through the courts,” Kays said. “That is an issue that I would say companies should continue to watch as it is litigated.”

Protecting confidential information

If you’re using free AI tools to draft a memo or marketing campaign, don’t tell it identifying information or corporate secrets. Once you’ve uploaded that information, it’s possible others using the same tool might find it.

That’s because when other people ask an AI tool questions, it will search available information, including details you revealed, as it builds its answer, Flynn said. “It doesn’t discern whether something is public or private,” she added.

Seek schooling

If your employer doesn’t offer AI training, try experimenting with free tools such as ChatGPT or Microsoft Copilot. Some universities and tech companies offer classes that can help you develop your understanding of how AI works and ways it can be useful.

A course that teaches people how to construct the best AI prompts or hands-on courses that provide opportunities to practice are valuable, Flynn said.

Despite potential problems with the tools, learning how they work can be beneficial at a time when they’re ubiquitous.

“The largest potential pitfall in learning to use AI is not learning to use it at all,” Flynn said. “We’re all going to need to become fluent in AI, and taking the early steps of building your familiarity, your literacy, your comfort with the tool is going to be critically important.”

___

Share your stories and questions about workplace wellness at [email protected]. Follow AP’s Be Well coverage, focusing on wellness, fitness, diet and mental health.

The post Mistake-filled legal briefs show the limits of relying on AI tools at work appeared first on Associated Press.
