The linguistic dead zone of value-aligned agency, natural and artificial
Philosophical Studies (IF 1.1), Pub Date: 2024-12-04, DOI: 10.1007/s11098-024-02257-w, Travis LaCroix
The value alignment problem for artificial intelligence (AI) asks how we can ensure that the “values”—i.e., objective functions—of artificial systems are aligned with the values of humanity. In this paper, I argue that linguistic communication is a necessary condition for robust value alignment. I discuss the consequences that the truth of this claim would have for research programmes that attempt to ensure value alignment for AI systems—or, more loftily, those programmes that seek to design robustly beneficial or ethical artificial agents.