When code isn’t law: rethinking regulation for artificial intelligence
Policy and Society (IF 5.7) Pub Date: 2024-05-29, DOI: 10.1093/polsoc/puae020
Brian Judge, Mark Nitzberg, Stuart Russell

This article examines the challenges of regulating artificial intelligence (AI) systems and proposes an adapted model of regulation suited to AI’s novel features. Unlike past technologies, AI systems built with techniques such as deep learning cannot be directly analyzed, specified, or audited against regulations: their behavior emerges unpredictably from training rather than from intentional design. However, the traditional model of delegating oversight to an expert agency, which has succeeded in high-risk sectors like aviation and nuclear power, should not be wholly discarded. Instead, policymakers must contain the risks posed by today’s opaque models while supporting research into provably safe AI architectures. Drawing lessons from the AI safety literature and past regulatory successes, effective AI governance will likely require consolidated authority, licensing regimes, mandated disclosure of training data and model design, formal verification of system behavior, and the capacity for rapid intervention.

Updated: 2024-05-29