We need to understand the effect of narratives about generative AI
Nature Human Behaviour (IF 21.4) Pub Date: 2024-10-21, DOI: 10.1038/s41562-024-02026-z
Fabrizio Gilardi, Atoosa Kasirzadeh, Abraham Bernstein, Steffen Staab, Anita Gohdes
Public concerns about the societal effects of generative artificial intelligence (AI) are shaped by narratives that have the potential to influence research priorities and policy agendas. Understanding the origins and dynamics of these narratives is crucial to effectively address the actual impacts of AI and ensure a constructive discourse about its risks and potential.
This shift in media coverage points to the need for a closer examination of the underlying discourse. We currently see four main types of narratives around generative AI:
(1) The ‘existential risk’ narrative contends that existential risks from artificial superintelligence or artificial general intelligence could stem from the next generations of generative AI-type systems. As generative AI systems become more sophisticated, their capabilities could surpass human control, with potentially catastrophic, even existential, consequences. Strong versions of this narrative raise the concern that artificial superintelligence or artificial general intelligence technologies could lead to human extinction3.
(2) The ‘effective accelerationist’ narrative champions the rapid development of AI. Proponents argue that AI’s potential benefits for solving complex global problems far outweigh the risks, and that the existential risks from advanced AI are zero or near zero and can therefore be dismissed4. This narrative is driven by a strong belief in the power of AI progress to bring about substantial positive change.
(3) The ‘real, immediate societal risks’ narrative focuses exclusively on the tangible, immediate societal risks of generative AI. It emphasizes issues such as the creation of deepfake pornography, the unjust distribution of AI capabilities and the growing environmental effects of generative AI, and argues that these present-day concerns are far more pressing and relevant than speculative existential risks. Proponents of this view argue that focusing on distant existential threats distracts us from addressing the real and present dangers of AI5.
(4) The ‘balanced risks’ narrative advocates for an approach to AI risk governance that acknowledges both the existential and the immediate societal risks posed by AI. It encourages finding meaningful connections between these two classes of risks, and suggests that addressing them in tandem can lead to more comprehensive and effective risk mitigation strategies and policies6.