AI Chatbots’ Legal “Hallucinations” Pose Risks for Users, Stanford Study Finds
Popular AI chatbots from OpenAI Inc., Google LLC, and Meta Platforms Inc. have a troubling tendency to “hallucinate” when answering legal questions, according to new research from Stanford University. The study found that these AI models, including OpenAI’s GPT-3.5, Google’s PaLM 2, and Meta’s Llama 2, often provide inaccurate or misleading information when asked about core legal issues.
The researchers posed over 200,000 legal questions and found that the models hallucinate at least 75% of the time when asked about a court’s core ruling. This poses a significant risk for individuals who turn to AI for legal help because they cannot afford a human lawyer; inaccurate answers could lead to serious legal consequences for those users.
While generative AI tools specifically trained for legal use may perform better, the researchers warned that building these tools on general-purpose models could still result in accuracy problems. Daniel Ho, a law professor at Stanford and senior fellow at the school’s Institute for Human-Centered Artificial Intelligence, emphasized the need for caution when deploying AI models in legal settings.
The study also found that the AI models were more likely to make mistakes when asked about case law from lower federal district courts than about cases from the US Supreme Court or the US Courts of Appeals for the Second and Ninth Circuits. The models also exhibited “contra-factual bias,” reinforcing a user’s mistaken premise instead of questioning it.
Chief Justice John Roberts highlighted the potential of AI to increase access to justice for individuals who cannot afford a lawyer. However, the Stanford researchers noted that the accuracy issues with AI models were most pronounced in areas where self-represented litigants would likely be using them, such as searching lower-court cases.
Overall, the study underscores the need for caution when relying on AI chatbots for legal advice and for rigorous testing of these tools’ accuracy and reliability. As AI technology continues to evolve, addressing these limitations will be crucial to its responsible use in the legal field.