Towards a personalized AI assistant to learn machine learning
Nature Machine Intelligence (IF 18.8), Pub Date: 2024-12-05, DOI: 10.1038/s42256-024-00953-0
Pascal Wallisch, Ibrahim Sheikh

The introduction and rapid public adoption of generative AI tools such as OpenAI’s ChatGPT in late 2022 dramatically affected the educational landscape. Many students, perhaps the majority, now routinely use ChatGPT and similar models for coursework. In the classes we taught at New York University (NYU), over 90% of undergraduate and master’s students reported using large language models (LLMs) in 2023, rising to over 95% in 2024. The widespread use of LLMs, for example in personal assistants, promises to be a tremendous boon for learning. However, they could also prove detrimental.

One concern is that these AI tools could supplant learning. For instance, a generative AI tool could write code or essays for a student, instead of the student learning how to write code or essays themselves. If generative AI tools reduce diligence by decreasing the time students spend with learning materials, the resulting decrements in learning could be substantial [1]. Another critical problem with using LLM-based tools for learning is that they were trained on vast, sometimes unreliable text corpora and respond in a probabilistic manner. This makes ‘hallucinations’ (contradictory, inaccurate or false outputs) inevitable [2]. As students often accept these outputs at face value, probably influenced by the confident tone of LLMs, hallucinations pose a risk to the integrity of learning itself.
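
To make the point about probabilistic responses concrete, the sketch below samples a next token from a temperature-scaled softmax over toy scores. It is a minimal illustration with made-up numbers (the prompt, candidate tokens and logits are all hypothetical, and no real model or API is involved), but it shows why the same prompt can yield different continuations, including plausible but wrong ones.

import numpy as np

rng = np.random.default_rng(0)

def sample_next_token(logits, temperature=1.0):
    # Turn raw scores into a probability distribution and draw one token index.
    scaled = np.asarray(logits, dtype=float) / temperature
    scaled -= scaled.max()  # subtract the max for numerical stability
    probs = np.exp(scaled) / np.exp(scaled).sum()
    return rng.choice(len(probs), p=probs), probs

# Hypothetical next-token candidates after the prompt "The capital of Australia is"
tokens = ["Canberra", "Sydney", "Melbourne"]
logits = [2.0, 1.5, 0.5]  # the correct answer is only slightly preferred

for _ in range(5):
    idx, probs = sample_next_token(logits)
    print(tokens[idx], np.round(probs, 2))

# Even though the correct token is favoured (about 0.55 probability), the
# plausible-but-wrong "Sydney" (about 0.33) is sampled fairly often; raising the
# temperature flattens the distribution and makes such errors more likely.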
