Lawyers blame ChatGPT for tricking them into citing bogus case law
Two apologetic lawyers responding to an angry judge blamed ChatGPT Thursday for tricking them into including fictitious legal research in a court filing.

The episode comes amid broader alarm over the technology's risks: hundreds of industry leaders signed a letter in May warning that “mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.”

Judge Castel seemed both baffled and disturbed at the unusual occurrence, and disappointed that the lawyers did not act quickly to correct the bogus legal citations when they were first alerted to the problem by Avianca’s lawyers and the court.

After the judge read aloud portions of one cited case to show how easy it was to discern that it was “gibberish,” LoDuca said: “It never dawned on me that this was a bogus case.” He said the outcome “pains me to no end.”

Ronald Minkoff, an attorney for the law firm, told the judge that the submission “resulted from carelessness, not bad faith” and should not result in sanctions, though he conceded of the lawyer’s reliance on the chatbot: “What he was doing was playing with live ammo.”

Daniel Shin, an adjunct professor and assistant director of research at the Center for Legal and Court Technology at William & Mary Law School, said he introduced the Avianca case during a conference last week that attracted dozens of participants in person and online from state and federal courts in the U.S., including Manhattan federal court.

“We’re talking about the Southern District of New York, the federal district that handles big cases, 9/11 to all the big financial crimes,” Shin said.