Who Can Say What? Testing the Impact of Interpersonal Mechanisms and Gender on Fairness Evaluations of Content Moderation
Social Media + Society (IF 5.5), Pub Date: 2024-11-26, DOI: 10.1177/20563051241286702
Ina Weber, João Gonçalves, Gina M. Masullo, Marisa Torres da Silva, Joep Hofhuis
Content moderation is commonly used by social media platforms to curb the spread of hateful content. Yet, little is known about how users perceive this practice and which factors may influence their perceptions. Publicly denouncing content moderation—for example, portraying it as a limitation to free speech or as a form of political targeting—may play an important role in this context. Evaluations of moderation may also depend on interpersonal mechanisms triggered by perceived user characteristics. In this study, we disentangle these different factors by examining how the gender, perceived similarity, and social influence of a user publicly complaining about a content-removal decision influence evaluations of moderation. In an experiment (n = 1,586) conducted in the United States, the Netherlands, and Portugal, participants witnessed the moderation of a hateful post, followed by a publicly posted complaint about moderation by the affected user. Evaluations of the fairness, legitimacy, and bias of the moderation decision were measured, as well as perceived similarity and social influence as mediators. The results indicate that arguments about freedom of speech significantly lower the perceived fairness of content moderation. Factors such as social influence of the moderated user impacted outcomes differently depending on the moderated user’s gender. We discuss implications of these findings for content-moderation practices.
Updated: 2024-11-26