The legal profession is navigating a period of rapid technological change, with generative Artificial Intelligence (“AI”) tools like ChatGPT emerging as assistants for legal research and writing. Accordingly, any use of AI in legal research and writing must be accompanied by rigorous human oversight. Failing to provide that oversight can result in costly fines, sanctions, and public embarrassment.
Notably, a $60,000 fine was recently imposed on a prominent Chicago law firm and one of its lawyers for submitting an AI-assisted brief that included fabricated legal authority to support its client’s position. This flawed research was the product of “AI hallucinations”: instances in which generative AI produces plausible-looking but fabricated legal materials, such as citations, case names, and quotes, that at first blush appear to support a desired legal position but in reality do not exist. When using AI in legal research and writing, an attorney must always individually verify each citation for accuracy, because hallucinated results look reputable even though they ultimately lack any basis in the law.
In December 2025, an Illinois Court issued sanctions of almost $50,000 against the law firm and $10,000 against the lawyer in connection with the submission of a post-trial motion and brief that included AI hallucinations. The adverse party advised the Court that multiple citations relied upon by the law firm and lawyer to support their client’s position referred to non-existent Illinois Supreme Court cases inaccurately generated by ChatGPT, prompting the adverse party to file an extensive Motion for Sanctions.
In a scathing opinion, the Court characterized the conduct of the law firm and lawyer as a “serious failure,” emphasizing a lawyer’s obligation to present truthful and accurate legal authority to the Court. The Court reasoned that “[t]he Court’s focus here is not the misuse of artificial intelligence to conduct unreliable legal research and drafting. It is the inexcusable submission of false authority and factual arguments to the Court, the subsequent misrepresentations about the extent of the improper conduct, and the failure to take prompt responsibility for errors once discovered … The obligations on officers of the court at issue here precede by centuries the age of electronic research and artificial intelligence.”
Rule 1.1 of the Illinois Rules of Professional Conduct (which has a counterpart rule of professional conduct in every state) provides that attorneys “shall” provide “competent representation,” which requires “the legal knowledge, skill, thoroughness and preparation reasonably necessary for the representation.” The Commentary to Illinois Rule 1.1 requires attorneys to keep abreast of changes in the law and in legal practice, “including the benefits and risks associated with relevant technology.”
Ultimately, the law firm and attorney acknowledged this situation as a “serious lapse in professionalism.” In addition, the law firm implemented new “firm-wide measures to re-educate its attorneys” on its AI policy and “established preventative measures.”
Key Takeaways - Mitigating the Risks:
Responsible use of AI must include the following elements:
- Mandatory human verification of all citations and legal authority generated or suggested by AI;
- Implementation of clear internal policies for every law firm and lawyer on the acceptable uses of AI, including, but not limited to, identification of the tasks that must remain under a lawyer’s direct control;
- Ongoing training and education relating to AI use in a rapidly changing legal landscape; and
- Documentation of verification steps to demonstrate compliance with ethical obligations.
Ultimately, lawyers must maintain high standards of professional integrity and accountability. Submitting a legal brief with false case citations plainly runs afoul of a lawyer’s longstanding professional duties of accuracy, candor, and competence. The sanction against the law firm and lawyer in this Illinois case does not preclude the use of AI in legal research and writing; rather, it signals that Courts are looking more closely at case citations for possible AI hallucinations and that the penalty for submitting false legal authority to the Court can be costly to both the law firm and the lawyer. As such, while AI is a very useful tool and can serve a valuable role in legal research and writing, it does not relieve a lawyer of his or her ethical and fiduciary duties.