Large Language Poor Role Model: Lawyer dismissed for using ChatGPT’s false citations

The legal profession is confronting the real-world consequences of AI hallucination as recent graduates face career setbacks from overreliance on chatbots. A case in Utah has highlighted the dangerous intersection of legal practice and AI tools, where fake citations in court filings led to sanctions, firing, and a pointed judicial warning about AI’s limitations. This incident demonstrates how professional standards are evolving in response to AI adoption, with courts and firms establishing new guardrails to protect both the justice system and vulnerable professionals.

The big picture: A recent law school graduate lost his job after including AI-hallucinated legal citations in a court filing, marking the first fake citation case discovered in Utah’s legal system.

  • Judge Mark Kouris ordered sanctions after finding multiple mis-cited cases and at least one completely fictional legal precedent generated by ChatGPT.
  • The incident highlights the growing tension between convenient AI tools and professional responsibility in highly regulated fields like law.

Key details: The law firm claimed the graduate was working as an unlicensed law clerk who failed to disclose his ChatGPT use when drafting the document.

  • Attorneys Douglas Durbano and Richard Bednar faced judicial scrutiny for submitting the filing without proper verification of its accuracy.
  • The law firm had no AI policy in place at the time but quickly established one after the incident.

What the court said: Judge Kouris emphasized that “every attorney has an ongoing duty to review and ensure the accuracy of their court filings.”

  • The court noted that the attorneys “fell short of their gatekeeping responsibilities as members of the Utah State Bar when they submitted a petition that contained fake precedent generated by ChatGPT.”
  • Kouris warned that “the legal profession must be cautious of AI due to its tendency to hallucinate information.”

The consequences: Attorney Bednar was ordered to pay the opposition’s attorneys’ fees and donate $1,000 to “And Justice for All,” a legal aid organization.

  • The law clerk who used ChatGPT was fired despite the absence of formal policies against such AI use.
  • The sanctions were relatively mild because the attorneys quickly accepted responsibility, unlike other lawyers who have denied AI use when caught.

Why this matters: Fake legal citations generate significant harms by wasting court resources, increasing costs for opposing parties, and potentially depriving clients of proper legal representation.

  • The case represents a cautionary tale as professional industries grapple with integrating AI tools while maintaining ethical standards and quality control.

Behind the numbers: The fictional case, "Royer v. Nelson, 2007 UT App 74, 156 P.3d 789," was easily identifiable as fake: when ChatGPT itself was asked for details about the case, it supplied only vague information, a response that should have raised red flags.
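
Red flags like that vague response can also be checked mechanically. The sketch below is illustrative rather than part of the court record: it shows one way a reviewer might test whether a citation resolves to a real opinion using CourtListener's free citation-lookup API. The endpoint path and response fields here are assumptions based on that project's public documentation and should be verified against the current docs before relying on them.

```python
# Minimal sketch: checking whether a citation resolves to a real case via
# CourtListener's citation-lookup API. The endpoint path and the "clusters"
# response field are assumptions; consult the live API docs to confirm.
import requests

def citation_resolves(citation_text: str) -> bool:
    """Return True if the lookup service matches the citation to a real opinion."""
    resp = requests.post(
        "https://www.courtlistener.com/api/rest/v3/citation-lookup/",
        data={"text": citation_text},
        timeout=30,
    )
    resp.raise_for_status()
    # Assumed response shape: one entry per citation found in the text,
    # with "clusters" listing matched opinions (empty if nothing matched).
    return any(entry.get("clusters") for entry in resp.json())

if __name__ == "__main__":
    fake = "Royer v. Nelson, 2007 UT App 74, 156 P.3d 789"
    print(f"Resolvable: {citation_resolves(fake)}")  # expected: False
```

For the fabricated Royer v. Nelson citation, a lookup like this would come back with no matching opinions, exactly the signal the filing's reviewers missed.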

The broader context: This incident reflects growing concerns about students and recent graduates becoming overly dependent on AI tools without understanding their limitations.

  • Law firms are now facing the challenge of educating new hires about responsible AI use in professional contexts where accuracy is paramount.
  • Even legal non-profits acknowledge they are “incorporating AI in their services” while emphasizing that “every attorney has a legal and professional responsibility” to ensure accuracy.
