Legal Regulators and Courts Must Address Use of AI in Litigation: Master of the Rolls
The use of artificial intelligence (AI) systems like ChatGPT in litigation has raised concerns among legal regulators and the courts in England and Wales. The Master of the Rolls, Sir Geoffrey Vos, has highlighted the need for mechanisms to control the use of generative AI within the legal system.
Speaking at the Law Society of Scotland’s Law and Technology conference, Sir Geoffrey referenced the case of New York lawyer Steven Schwartz, who used ChatGPT to prepare submissions in a personal injury case. Although Schwartz asked the system to confirm the accuracy of the cases it cited, the judge found six of them to be “bogus decisions with bogus quotes and bogus citations”.
The case serves as a warning that lawyers cannot use generative AI to cut corners in legal proceedings. Sir Geoffrey suggested that specialized legal AI tools may prove more reliable, and cautioned against relying on general-purpose tools like ChatGPT for legal work.
He also mentioned Spellbook, a company claiming to have adapted GPT-4 to review and suggest language for contracts and legal documents. While AI tools can assist lawyers with drafting, document review, predicting case outcomes, and settlement negotiations, their output must be carefully checked by legal professionals.
Sir Geoffrey suggested that court rules or a professional code of conduct may be needed to govern the use of large language models like ChatGPT in court documents. He stressed both that programmers should train AI systems to respect the principles of law, and that the humans using them must remain diligent in checking their output.
In conclusion, the Master of the Rolls stressed the need for careful programming and training of AI systems to align with legal standards. As the legal profession navigates the use of AI in litigation, court rules may need to adapt to address the potential risks and challenges posed by these technologies.