Algorithmic Harm in Consumer Markets
Journal of Legal Analysis (IF 3.0), Pub Date: 2023-08-21, DOI: 10.1093/jla/laad003
Oren Bar-Gill, Cass R. Sunstein, Inbal Talgam-Cohen

Machine learning algorithms are increasingly able to predict what goods and services particular people will buy, and at what price. It is possible to imagine a situation in which relatively uniform, or coarsely set, prices and product characteristics are replaced by far more in the way of individualization. Companies might, for example, offer people shirts and shoes that are particularly suited to their situations, that fit with their particular tastes, and that have prices that fit their personal valuations. In many cases, the use of algorithms promises to increase efficiency and to promote social welfare; it might also promote fair distribution. But when consumers suffer from an absence of information or from behavioral biases, algorithms can cause serious harm. Companies might, for example, exploit such biases in order to lead people to purchase products that have little or no value for them or to pay too much for products that do have value for them. Algorithmic harm, understood as the exploitation of an absence of information or of behavioral biases, can disproportionately affect members of identifiable groups, including women and people of color. Since algorithms exacerbate the harm caused to imperfectly informed and imperfectly rational consumers, their increasing use provides fresh support for existing efforts to reduce information and rationality deficits, especially through optimally designed disclosure mandates. In addition, there is a more particular need for algorithm-centered policy responses. Specifically, algorithmic transparency—transparency about the nature, uses, and consequences of algorithms—is both crucial and challenging; novel methods designed to open the algorithmic “black box” and “interpret” the algorithm’s decision-making process should play a key role. 
In appropriate cases, regulators should also police the design and implementation of algorithms, with a particular emphasis on the exploitation of an absence of information or of behavioral biases.
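The mechanism the abstract describes — a seller that prices each consumer at (or just below) their predicted willingness to pay, even when behavioral biases inflate that willingness above the product's true value — can be made concrete with a toy numerical sketch. All consumers, values, and prices below are hypothetical and chosen only to illustrate the welfare comparison; they are not drawn from the paper.

```python
# Toy sketch of algorithmic harm via personalized pricing.
# Each consumer has a true value for a product and a perceived value,
# which behavioral biases or missing information may push above the
# true value. All numbers are hypothetical.

def consumer_surplus(true_value, price):
    """Surplus the consumer actually receives (negative = harm)."""
    return true_value - price

consumers = [
    {"true_value": 10.0, "perceived_value": 10.0},  # unbiased
    {"true_value": 10.0, "perceived_value": 15.0},  # over-optimistic about value
    {"true_value": 4.0,  "perceived_value": 12.0},  # misinformed: product is near-worthless to them
]

# Uniform (coarsely set) pricing: one price for everyone.
uniform_price = 8.0
uniform_surplus = sum(
    consumer_surplus(c["true_value"], uniform_price)
    for c in consumers
    if c["perceived_value"] >= uniform_price  # consumer buys only if perceived value covers the price
)

# Algorithmic personalized pricing: charge each consumer their
# predicted (perceived) willingness to pay, so everyone buys.
personalized_surplus = sum(
    consumer_surplus(c["true_value"], c["perceived_value"])
    for c in consumers
)

print(uniform_surplus)       # aggregate true surplus under uniform pricing
print(personalized_surplus)  # aggregate true surplus under personalized pricing
```

In this stylized example, uniform pricing leaves aggregate true consumer surplus at 0.0, while pricing each consumer at their biased perceived valuation drives it to -13.0: the algorithm extracts the inflated willingness to pay, so the biased and misinformed consumers are made strictly worse off — the "algorithmic harm" the abstract defines.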

Updated: 2023-08-21