Macro Ethics Principles for Responsible AI Systems: Taxonomy and Directions
ACM Computing Surveys (IF 23.8). Pub Date: 2024-06-13, DOI: 10.1145/3672394
Jessica Woodgate 1,2, Nirav Ajmeri 2,3

Responsible AI must be able to make or support decisions that consider human values and can be justified by human morals. Accommodating values and morals in responsible decision making is supported by adopting a perspective of macro ethics, which views ethics through a holistic lens incorporating social context. Normative ethical principles inferred from philosophy can be used to methodically reason about ethics and make ethical judgements in specific contexts. Operationalising normative ethical principles thus promotes responsible reasoning under the perspective of macro ethics. We survey AI and computer science literature and develop a taxonomy of 21 normative ethical principles which can be operationalised in AI. We describe how each principle has previously been operationalised, highlighting key themes that AI practitioners seeking to implement ethical principles should be aware of. We envision that this taxonomy will facilitate the development of methodologies to incorporate normative ethical principles in reasoning capacities of responsible AI systems.




Updated: 2024-06-13