
The Price of Trusting AI: Sanctions and Consequences



If you are a fan of technology, particularly artificial intelligence, you are probably aware of the positive impacts it is projected to have across all parts of our lives, personal and professional. But, like any other tool, AI has limitations, and the chance for mistakes remains. At the end of the day, users of tools like ChatGPT need to heed the adage, “buyer beware,” because if you don’t check what an artificial intelligence program produces, it can come back to haunt you. Two personal injury attorneys from New York learned that lesson the hard way.

The Backstory

Steven A. Schwartz, an attorney at the New York personal injury law firm Levidow, Levidow & Oberman, was representing Roberto Mata, who was suing Colombian airline Avianca after a serving cart injured his knee on a 2019 flight. When Avianca asked a judge to dismiss the case, Schwartz and co-counsel Peter LoDuca objected and filed a brief citing a number of past court decisions they said were similar. Schwartz said he used OpenAI’s chatbot, ChatGPT, to “supplement” his own research, and ChatGPT provided a number of similar cases to back up Mata’s claim.

The problem? ChatGPT made up all the cases. They did not exist.

Unfortunately, Schwartz did not do additional research to confirm the AI-generated cases were legitimate, and he was first alerted to the egregious mistake by the airline’s attorneys, who exposed the bogus case law in a March filing.

Schwartz claimed he was “unaware of the possibility that its [ChatGPT’s] content could be false.” He even provided the judge with screenshots of his interaction with the chatbot, in which he asked whether one of the cases was real. ChatGPT responded that it was, and even said the cases could be found in “reputable legal databases,” which was false.

The Consequences

The judge in the case, P. Kevin Castel, was not happy about the mistake and scheduled a hearing to determine how to deal with the mess Schwartz had created. On Thursday, June 22, Castel imposed sanctions on Schwartz, LoDuca, and Levidow, Levidow & Oberman, and ordered them to pay a $5,000 fine.

Castel found the attorneys acted in bad faith and made “acts of conscious avoidance and false and misleading statements to the court.”

The firm said in a statement that its lawyers “respectfully” disagreed that they acted in bad faith. “We made a good faith mistake in failing to believe that a piece of technology could be making up cases out of whole cloth,” the statement said.

Castel wrote in the sanctions order that “Technological advances are commonplace and there is nothing inherently improper about using a reliable artificial intelligence tool for assistance. But existing rules impose a gatekeeping role on attorneys to ensure the accuracy of their filings.”

The Lesson Learned

Schwartz has stated that he has suffered professionally due to the publicity of the case. So why did a lawyer who has been practicing for over 30 years rely so heavily on AI and ChatGPT? He claims he learned about it from his children and from reading numerous articles about artificial intelligence tools being used in professional settings, including law firms.

Users of technologies like ChatGPT need to realize that AI chatbots are language models trained to follow instructions and respond to a user’s prompt. That means if a user asks ChatGPT for information, it will produce something that looks like exactly what they’re looking for, even if it’s not factual. Why can platforms like ChatGPT provide non-factual information? Remember, AI chatbots are a relatively new kind of technology, and there are still many bugs to work out. For example, ChatGPT tells users upfront that it cannot provide information that only became available after September 2021, its training cutoff. Here are some other ways AI chatbots can provide you with wrong information:

Misunderstanding the User’s Intent

AI chatbots use natural language processing (NLP) algorithms to understand a user’s request, but the algorithms are not always perfect. If the request is ambiguous, the chatbot may misinterpret it and provide inaccurate information.

Lack of Adequate “Training”

Because platforms like ChatGPT are “trained” by being fed tremendous amounts of data, they are only as “good” as the data they are trained on. If an AI platform does not have enough data to draw from, it may provide insufficient or incorrect information.

Outdated Information

If the information a chatbot draws on is not regularly updated, its answers may be out of date or simply wrong. This is especially true for platforms that provide news or other time-sensitive information.

Incomplete Information

If a chatbot does not have all the information necessary to provide a complete answer, or if it is asked a question that requires additional context, it may only be able to provide a partial – or perhaps incorrect – answer.

Over-Reliance on Artificial Intelligence

AI is useful, but it’s not perfect. If users rely too heavily on a chatbot’s output without a human in the loop to check it, inaccurate information can go unnoticed.

In Conclusion…

While AI, including tools like ChatGPT, holds great promise in various fields, it is crucial for users to exercise caution and not blindly rely on the information provided. The unfortunate case of the New York personal injury attorneys serves as a reminder that AI has its limitations and can make mistakes. As technological advancements continue, it is important to embrace AI tools while maintaining a critical mindset. Users MUST take responsibility for verifying the information created by AI systems, especially in professional settings like law, medicine, or engineering, where accuracy is crucial. By acknowledging the limitations of AI chatbots and maintaining a cautious approach, we as users can mitigate the risks associated with this still relatively untested technology.