People who share encounters with racism are silenced online by humans and machines, but a guideline-reframing intervention holds promise
Proceedings of the National Academy of Sciences of the United States of America (IF 9.4). Pub Date: 2024-09-09. DOI: 10.1073/pnas.2322764121
Cinoo Lee 1,2, Kristina Gligorić 2,3, Pratyusha Ria Kalluri 3, Maggie Harrington 1, Esin Durmus 3, Kiara L. Sanchez 4, Nay San 2,5, Danny Tse 3, Xuan Zhao 2, MarYam G. Hamedani 2, Hazel Rose Markus 1,2, Dan Jurafsky 3,5, Jennifer L. Eberhardt 1,2,6
Are members of marginalized communities silenced on social media when they share personal experiences of racism? Here, we investigate the role of algorithms, humans, and platform guidelines in suppressing disclosures of racial discrimination. In a field study of actual posts from a neighborhood-based social media platform, we find that when users talk about their experiences as targets of racism, their posts are disproportionately flagged for removal as toxic by five widely used moderation algorithms from major online platforms, including the most recent large language models. We show that human users disproportionately flag these disclosures for removal as well. Next, in a follow-up experiment, we demonstrate that merely witnessing such suppression negatively influences how Black Americans view the community and their place in it. Finally, to address these challenges to equity and inclusion in online spaces, we introduce a mitigation strategy: a guideline-reframing intervention that is effective at reducing silencing behavior across the political spectrum.
Updated: 2024-09-09