A broader approach to addressing ethical challenges in digital mental health
Nicole Martinez-Martin
World Psychiatry. Pub Date: 2024-09-16. DOI: 10.1002/wps.21237
Galderisi et al1 provide an insightful overview of current ethical challenges in psychiatry, including those presented by digital psychiatry, as well as recommendations for addressing these challenges. As they discuss, “digital psychiatry” encompasses an array of different digital tools, including mental health apps, chatbots, telehealth platforms, and artificial intelligence (AI). These tools hold promise for improving diagnosis and care, and could facilitate access to mental health services by marginalized populations. In particular, digital mental health tools can help expand mental health support in low- and middle-income countries.
Many of the ethical challenges identified by the authors in the use of digital tools reflect inequities and challenges within broader society. For example, in the US, lack of mental health insurance and insufficient representation of racialized minorities in medical research contribute to difficulties with access and fairness in digital psychiatry. In many ways, the ethical challenges presented by digital psychiatry reflect long-standing concerns about who benefits, and who does not, from psychiatry. The array of forward-looking recommendations advanced by Galderisi et al shows that these ethical challenges can also be seen as opportunities for moving towards greater equity and inclusion in psychiatry.
Discussions of the ethics of digital health benefit from broadening the scope of issues to include social context. Galderisi et al refer to inequities in how mental health care is researched, developed and accessed, and to historical power imbalances in psychiatry that have left patient voices undervalued and overlooked. A broader approach to ethical challenges related to digital health technologies recognizes that issues affecting these technologies often emerge from their interactions with the social institutions in which they are developed and applied2. For example, the privacy and safety of digital psychiatry tools must be understood within the context of the specific regulatory environment and infrastructure (e.g., broadband, hardware) in which they are being used.
Digital health tools and medical AI are often promoted as improving cost-effectiveness, but this business-oriented emphasis can obscure discussion of which trade-offs in costs are considered acceptable, such as whether lesser-quality services are deemed acceptable for low-income groups. Institutions that regulate medical devices often struggle when they have to deal with software or AI. Consumers and patients also often find it difficult to obtain information that can help them decide which digital psychiatry tools are appropriate and effective for their needs.
There have been pioneering efforts to assist with evaluating effective digital mental health tools, such as the American Psychiatric Association's mental health app evaluator3. However, new models for evaluation that are responsive to the ways in which clinicians and patients realistically engage with mental health care tools are still needed. For example, some of the measures that regulators or insurance companies use to evaluate and approve digital mental health tools may not capture the aspects of a tool that, from a consumer or patient perspective, offer meaningful improvements to their lives. There has also been growing recognition that meaningful evaluation of the effectiveness of digital health tools needs to look beyond the tool itself, in order to assess its effectiveness as it is used within a particular system4. Greater engagement of diverse communities and people with lived experience during the development of digital psychiatry tools is imperative for improving these tools.
Unfortunately, the hype around digital mental health often goes hand-in-hand with rapid adoption of unproven technologies. For example, large language models (LLMs) and generative AI are being quickly taken up within health care, including psychiatry5. These digital tools are embraced as cost-effective time-savers before there is sufficient opportunity to determine the extent to which they are in fact ready for the purposes for which they are being used6. Potential problems with generative AI in health care continue to emerge, from discriminatory biases in the information provided to the collection and disclosure of personal data7. More caution is needed in the adoption of new digital tools in psychiatry, in order to allow time for evaluation and for guidance on their use for specific purposes.
Privacy continues to pose significant concerns for digital psychiatry. Digital mental health tools often gather information that psychiatrists and patients are not aware of, such as location data, which may seem insignificant but can allow behavioral analyses that infer sensitive or predictive information about users8. In today's data landscape, brokerage of personal data can generate billions of dollars. These data practices can have repercussions for patients that they may not be able to anticipate. Even de-identified data can increasingly be re-identified, and user profiles compiled from such data can be used to target people with fraudulent marketing schemes, or have downstream implications for employment or educational opportunities. Furthermore, in countries such as the US, where mental health care may be unaffordable for many individuals, people may effectively be put in the position of trading data for health care.
Because of fairness and bias issues, there are also real questions about how well digital and AI tools actually work for different populations. One common source of bias is that the data used to train and develop digital tools may be insufficiently representative of the target population, such as participants of diverse races and genders, or participants with disabilities9. The potential for bias goes beyond the question of algorithmic bias, as tools may simply be designed in ways that do not work effectively for different populations, or the use of those tools in specific contexts may lead to unfair outcomes. Addressing fairness will require ensuring that researchers and clinicians from diverse backgrounds are included in the development and design of digital psychiatry tools.
As Galderisi et al note, the discipline and tools of psychiatry have a long history of being used for social control, such as in the criminal justice and educational systems. The tools of digital psychiatry may be applied in ways that put vulnerable and minoritized groups at particular risk of punitive interventions by government institutions. It is, therefore, important that members of the psychiatric profession put considered effort into anticipating and addressing the social and legal implications of the use of digital psychiatry tools in other domains of society.
Development of digital psychiatry tools requires not only identifying specific ethical challenges, but also taking the time to reflect on and envision the system and world that these tools will help create. Galderisi et al set out a number of action items that, taken together, envision a more equitable and inclusive future for psychiatry. This is an important moment to take these opportunities to build new frameworks and systems for psychiatry, in which digital tools can be used to support human empathy and creativity, allowing mental well-being to flourish.