Algorithmic fairness in precision psychiatry: analysis of prediction models in individuals at clinical high risk for psychosis
The British Journal of Psychiatry (IF 8.7), Pub Date: 2023-11-08, DOI: 10.1192/bjp.2023.141
Derya Şahin, Lana Kambeitz-Ilankovic, Stephen Wood, Dominic Dwyer, Rachel Upthegrove, Raimo Salokangas, Stefan Borgwardt, Paolo Brambilla, Eva Meisenzahl, Stephan Ruhrmann, Frauke Schultze-Lutter, Rebekka Lencer, Alessandro Bertolino, Christos Pantelis, Nikolaos Koutsouleris, Joseph Kambeitz

Background

Computational models hold promise for personalised treatment of psychiatric diseases. Before clinical deployment, their fairness must be evaluated alongside their accuracy: fairness requires that predictive models do not unfairly disadvantage specific demographic groups, and failure to assess it before use risks perpetuating healthcare inequalities. Despite its importance, empirical investigation of fairness in predictive models for psychiatry remains scarce.

Aims

To evaluate fairness in prediction models for development of psychosis and functional outcome.

Method

Using data from the PRONIA study, we examined fairness in 13 published models for prediction of transition to psychosis (n = 11) and functional outcome (n = 2) in people at clinical high risk for psychosis or with recent-onset depression. Using accuracy equality, predictive parity, false-positive error rate balance and false-negative error rate balance, we evaluated relevant fairness aspects for the demographic attributes ‘gender’ and ‘educational attainment’ and compared them with the fairness of clinicians’ judgements.
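
Concretely, each of these criteria compares a standard confusion-matrix rate across demographic groups: accuracy equality compares overall accuracy, predictive parity compares positive predictive value, and the two error-rate-balance criteria compare false-positive and false-negative rates. The Python sketch below (illustrative only, not code from the study; all function and variable names are assumptions) computes these rates per group so that between-group gaps can be inspected.

```python
import numpy as np

def group_fairness_metrics(y_true, y_pred, group):
    """Per-group rates underlying the four fairness criteria:
    accuracy equality (accuracy), predictive parity (PPV),
    false-positive and false-negative error rate balance (FPR, FNR).
    Names are illustrative, not taken from the paper."""
    metrics = {}
    for g in np.unique(group):
        mask = group == g
        yt, yp = y_true[mask], y_pred[mask]
        tp = np.sum((yt == 1) & (yp == 1))
        fp = np.sum((yt == 0) & (yp == 1))
        tn = np.sum((yt == 0) & (yp == 0))
        fn = np.sum((yt == 1) & (yp == 0))
        metrics[g] = {
            "accuracy": (tp + tn) / len(yt),
            "ppv": tp / (tp + fp) if (tp + fp) else float("nan"),
            "fpr": fp / (fp + tn) if (fp + tn) else float("nan"),
            "fnr": fn / (fn + tp) if (fn + tp) else float("nan"),
        }
    return metrics

# Toy example: compare rates between two educational-attainment groups.
y_true = np.array([1, 0, 1, 0, 1, 0, 0, 1])
y_pred = np.array([1, 1, 1, 0, 0, 0, 1, 1])
group = np.array(["low", "low", "low", "low",
                  "high", "high", "high", "high"])
for g, vals in group_fairness_metrics(y_true, y_pred, group).items():
    print(g, {k: round(v, 2) for k, v in vals.items()})
```

A criterion is satisfied when the corresponding rate is approximately equal across groups; for example, a higher false-positive rate in the lower-education group would violate false-positive error rate balance.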

Results

Our findings indicate a systematic bias towards assigning less favourable outcomes to individuals with lower educational attainment in both the prediction models and clinicians' judgements, reflected in higher false-positive rates for this group in 7 of the 11 transition-to-psychosis models. Notably, the bias patterns observed in algorithmic predictions were not significantly more pronounced than those in clinicians' predictions.

Conclusions

Educational bias was present in both algorithmic and clinicians' predictions: more favourable outcomes were assumed for individuals with a higher educational level (years of education). This bias may increase stigma and psychosocial burden in patients with lower educational attainment and lead to suboptimal psychosis prevention in those with higher educational attainment.


