Large Legal Fictions: Profiling Legal Hallucinations in Large Language Models
Journal of Legal Analysis (IF 3.0), Pub Date: 2024-06-26, DOI: 10.1093/jla/laae003
Matthew Dahl, Varun Magesh, Mirac Suzgun, Daniel E. Ho

Do large language models (LLMs) know the law? LLMs are increasingly being used to augment legal practice, education, and research, yet their revolutionary potential is threatened by the presence of “hallucinations”—textual output that is not consistent with legal facts. We present the first systematic evidence of these hallucinations in public-facing LLMs, documenting trends across jurisdictions, courts, time periods, and cases. Using OpenAI’s ChatGPT 4 and other public models, we show that LLMs hallucinate at least 58% of the time, struggle to predict their own hallucinations, and often uncritically accept users’ incorrect legal assumptions. We conclude by cautioning against the rapid and unsupervised integration of popular LLMs into legal tasks, and we develop a typology of legal hallucinations to guide future research in this area.

Updated: 2024-06-26