Articles & Publications 08.05.25

When AI Goes Rogue, Who Is to Blame? Published in New York Law Journal

In an article published on August 4 in the New York Law Journal, Segal McCambridge Shareholder Daniel DiLizia discusses the latest class action lawsuits and state laws regarding the insurance industry’s use of artificial intelligence (AI). As the insurance industry deploys AI in many ways, from underwriting policies to adjusting and handling claims, the sector is still working out who is accountable when AI goes rogue.

“No matter the AI liability framework that is adopted in the United States, insurers should diligently implement AI risk management and governance frameworks to best position themselves for a potential defense to an AI-based lawsuit,” DiLizia notes. “However, as more class action litigation challenging insurers’ use of AI ensues, and more states grapple with enacting laws governing AI, the answer to the question ‘who is to blame when AI goes rogue’ will become clearer.”

DiLizia offers timely guidance for insurers seeking to limit AI liability, including developing and instituting AI use policies; documenting compliance with relevant laws and regulatory guidance, such as the National Association of Insurance Commissioners’ (NAIC) Model Bulletin on the Use of Artificial Intelligence Systems by Insurers; and retaining AI experts or creating AI teams to oversee AI initiatives. He also recommends AI training and bias testing, adequate human oversight of AI systems and vendors, AI risk protocols for responding when AI malfunctions, and cyber insurance coverage for when AI goes rogue.

“AI cannot stand trial or pay damages (at least not yet),” DiLizia writes. “Legal liability and wrongdoing aside, the blame for AI going rogue rests with those who deploy AI without proper oversight, training, and governance; the coders and developers of AI systems; the providers of data that is entered into the AI system; and, increasingly, with regulators, who have the unenviable task of trying to keep up with technology that evolves as fast as AI.”

To read the full article, click here (subscription required).