Do large language models “understand” their knowledge?
AIChE Journal (IF 3.5), Pub Date: 2024-11-30, DOI: 10.1002/aic.18661
Venkat Venkatasubramanian

Large language models (LLMs) are often criticized for lacking true “understanding” and the ability to “reason” with their knowledge, being seen merely as autocomplete engines. I suggest that this assessment might be missing a nuanced insight. LLMs do develop a kind of empirical “understanding” that is “geometry”‐like, which is adequate for many applications. However, this “geometric” understanding, built from incomplete and noisy data, makes them unreliable, difficult to generalize, and lacking in inference capabilities and explanations. To overcome these limitations, LLMs should be integrated with an “algebraic” representation of knowledge that includes symbolic AI elements used in expert systems. This integration aims to create large knowledge models (LKMs) grounded in first principles that can reason and explain, mimicking human expert capabilities. Furthermore, we need a conceptual breakthrough, such as the transformation from Newtonian mechanics to statistical mechanics, to create a new science of LLMs.
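The proposed pairing of statistical ("geometric") LLM output with symbolic ("algebraic") knowledge can be pictured with a small sketch. The code below is purely illustrative and is not the author's LKM design: llm_propose is a hypothetical stand-in for any LLM call, and the two rules are invented first-principles constraints that an answer might be required to satisfy. The point is only the division of labor the abstract argues for, with the statistical model proposing and a rule base verifying and explaining.

# Minimal illustrative sketch of an LLM answer checked by expert-system-style rules.
# llm_propose is a hypothetical placeholder; the rules are toy constraints.

from dataclasses import dataclass
from typing import Callable

@dataclass
class Rule:
    name: str                      # human-readable rule identifier
    check: Callable[[dict], bool]  # True if the claim satisfies the rule
    explanation: str               # first-principles justification

def llm_propose(question: str) -> dict:
    """Hypothetical LLM call: returns a structured claim for the question."""
    # A real system would call a language model here; this toy claim is hard-coded.
    return {"question": question, "T_out_K": 350.0, "T_in_K": 300.0, "Q_in_W": 500.0}

# Toy "algebraic" knowledge: symbolic constraints the claim must satisfy.
RULES = [
    Rule("energy_direction",
         lambda c: (c["T_out_K"] > c["T_in_K"]) == (c["Q_in_W"] > 0),
         "Heating a stream (Q_in > 0) must raise its temperature."),
    Rule("absolute_temperature",
         lambda c: c["T_out_K"] > 0 and c["T_in_K"] > 0,
         "Absolute temperatures must be positive."),
]

def verify_and_explain(claim: dict) -> tuple[bool, list[str]]:
    """Check the LLM's claim against every rule and collect explanations."""
    report, ok = [], True
    for rule in RULES:
        passed = rule.check(claim)
        ok = ok and passed
        report.append(f"[{'PASS' if passed else 'FAIL'}] {rule.name}: {rule.explanation}")
    return ok, report

if __name__ == "__main__":
    claim = llm_propose("Outlet temperature of a heated stream?")
    accepted, report = verify_and_explain(claim)
    print("accepted:", accepted)
    print("\n".join(report))

In this sketch the symbolic layer supplies exactly what the abstract says LLMs lack on their own, namely verifiable inference and an explanation trail, while the statistical layer supplies the candidate answer.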

Updated: 2024-11-30