European artificial intelligence “trusted throughout the world”: Risk-based regulation and the fashioning of a competitive common AI market
Regulation & Governance (IF 3.2), Pub Date: 2023-12-11, DOI: 10.1111/rego.12563
Regine Paul

The European Commission has pioneered the coercive regulation of artificial intelligence (AI), including a proposal to ban some applications altogether on moral grounds. Core to its regulatory strategy is a nominally “risk-based” approach with interventions that are proportionate to risk levels. Yet neither standard accounts of risk-based regulation as a rational problem-solving endeavor nor theories of organizational legitimacy-seeking, both prominently discussed in Regulation & Governance, fully explain the Commission's attraction to the risk heuristic. This article responds to this impasse with three contributions. First, it enriches risk-based regulation scholarship—beyond AI—with a firm foundation in constructivist and critical political economy accounts of emerging tech regulation, capturing the performative politics of defining and enacting risk vis-à-vis global economic competitiveness. Second, it conceptualizes the role of risk analysis within a Cultural Political Economy framework: as a powerful epistemic tool for the discursive and regulatory differentiation of an uncertain regulatory terrain (semiosis and structuration), which the Commission wields in its pursuit of a future common European AI market. Third, the paper offers an in-depth empirical reconstruction of the Commission's risk-based semiosis and structuration in AI regulation through qualitative analysis of a substantive sample of documents and expert interviews. The analysis finds that the Commission's use of risk analysis, which outlaws some AI uses as matters of deep value conflict and tightly controls (at least discursively) so-called high-risk AI systems, enables Brussels to fashion its desired trademark of European “cutting-edge AI … trusted throughout the world” in the first place.

Updated: 2023-12-11