Artificial intelligence, consciousness and psychiatry
World Psychiatry (IF 60.5). Pub Date: 2024-09-16. DOI: 10.1002/wps.21222
Giulio Tononi, Charles Raison

In 1966, a researcher at the Massachusetts Institute of Technology introduced ELIZA, a computer program that simulated a psychotherapist in the Rogerian tradition, rephrasing a patient's words into questions according to simple but effective scripts. This was one of the first (and few) successes of early artificial intelligence (AI). To the dismay of its creator, some people took ELIZA for a real psychotherapist, perhaps because of our innate tendency to project consciousness when we detect intelligence, especially intelligent speech.
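To make the scripting concrete, here is a minimal Python sketch of ELIZA-style rules: each pairs a pattern over the patient's words with a question template. The rules below are hypothetical stand-ins for illustration; J. Weizenbaum's original program used a much richer keyword-and-transformation script.

```python
import re

# Hypothetical ELIZA-style rules: a regex over the patient's words,
# paired with a template that rephrases them as a question.
RULES = [
    (re.compile(r"\bi feel (.+)", re.I), "Why do you feel {}?"),
    (re.compile(r"\bi am (.+)", re.I), "How long have you been {}?"),
    (re.compile(r"\bmy (.+)", re.I), "Tell me more about your {}."),
]
FALLBACK = "Please go on."  # used when no rule matches

def respond(utterance: str) -> str:
    """Return the first matching rule's question, echoing the patient's words."""
    for pattern, template in RULES:
        match = pattern.search(utterance)
        if match:
            return template.format(match.group(1).rstrip(".!?"))
    return FALLBACK

print(respond("I feel trapped by my job."))  # -> Why do you feel trapped by my job?
```

A real Rogerian script would also swap pronouns ("my" to "your"), which this sketch omits; even so, the example shows how little machinery was needed to produce the illusion of an attentive listener.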

ELIZA's stuttering attempt at AI has now become an immensely eloquent golem. ChatGPT can easily outspeak, outwrite and outperform S. Freud. Because large language models (LLMs) benefit from superhuman lexicon, knowledge, memory and speed, artificial brains can now trump natural ones in most tasks.

ELIZA was named after the flower-girl in G.B. Shaw's play Pygmalion, supposedly because it learned to improve its speech with practice. The original myth of Pygmalion – the sculptor who carved the ideal woman Galatea out of ivory and hoped to bring her to life – is even more apt: does the creation of AI portend artificial consciousness, perhaps even superhuman consciousness? Two camps are beginning to emerge, with radically different answers to this question.

According to the dominant computational/functionalist stance in cognitive neuroscience, the answer is yes1. Cognitive neuroscience assumes that we are ultimately machines running sophisticated software (that can derail and be reprogrammed). Neural algorithms recognize objects and scenes, direct attention, hold items in working memory, and store them in long-term memory. Complex neural computations drive cognitive control, decision making, emotional reactions, social behaviors, and of course language. In this view, consciousness must be just another function, perhaps the global broadcasting of information2 or the metacognitive assessment of sensory inputs3. In this case, whenever computers can reproduce the same functions as our brain, just implemented differently (the functionalists’ “multiple realizability”), they will be conscious like we are.

Admittedly, despite LLMs sounding a lot like conscious humans nowadays, there is no principled way to determine whether they are already conscious and, if so, in which ways and to what degree1. Nor is it clear how we might establish whether they feel anything (just asking, we suspect, might not do…).

Cognitive neuroscience typically takes the extrinsic perspective, introduced by Galileo, which has been immensely successful in much of science. From this perspective, consciousness is either a “user illusion”4, or a mysterious “emergent” property. However, as recognized long ago by Leibniz, this leaves experience – what we see, hear, think and feel – entirely unaccounted for. This implicit dualism has plagued not just neuroscience, but also psychiatry from the very beginning: are we treating the brain, the psyche, or both? And if both, how are they related? Is the soul just the brain's ephemeral passenger?

Integrated information theory (IIT) provides a radically different approach5, and this is our own view. IIT takes the intrinsic perspective, starting not from the brain and what it does, but from consciousness and what it is. After all, for each of us, experience is what exists irrefutably, and the world is an inference from within experience – a good one, but still an inference, as psychiatrists should know well.

IIT first characterizes the essential properties of consciousness – those that are irrefutably true of every conceivable experience – and then asks what is required to account for them in physical terms. Crucially, this leads to identifying an experience, in all its richness, with a structure (rather than with a process, a computation, or a function) – a structure that expresses the causal powers of a (neural) substrate in its current state. In fact, IIT provides a calculus for determining, at least in principle, whether a substrate is conscious, in which way, and to what degree.
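In highly compressed form (our schematic notation, not the theory's full formalism), the quantity at the heart of this calculus is an irreducibility measure: how much of a system's cause-effect structure is lost under the partition that divides it least.

```latex
% Schematic only: integrated information as the loss of cause-effect
% structure (CES) under the minimum information partition, for a
% suitable distance D over the set of partitions \mathcal{P}(S).
\varphi(S) \;=\; \min_{P \in \mathcal{P}(S)} D\!\left( \mathrm{CES}(S),\; \mathrm{CES}(S^{P}) \right)
```

On this account, a substrate can be conscious only if φ(S) > 0: its causal powers must be irreducible to those of its parts. Notably, a strictly feed-forward architecture always admits a partition across which nothing is lost, so its φ is zero; this is the formal core of the architectural argument developed below.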

The theory can explain why certain parts of the brain can support consciousness, while others, such as the cerebellum and portions of prefrontal cortex, cannot. It can explain why – due to a breakdown of causal links – consciousness is lost in dreamless sleep, anesthesia, and generalized seizures6. It has also started to account for the quality of experience – the way space feels extended and time flowing7. It leads to many testable predictions, including counterintuitive ones: for example, that a near-silent cortex can support a vivid experience of pure presence. Finally, IIT has spawned the development of a transcranial magnetic stimulation/electroencephalography method that is currently the most specific and sensitive for assessing the presence of consciousness in unresponsive patients8.
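To illustrate the logic behind such perturbational measures (perturb the cortex, then ask how algorithmically compressible the evoked spatiotemporal response is), here is a toy Python sketch built on a Lempel-Ziv-style complexity count. It is a sketch only: published indexes of this kind, such as the perturbational complexity index, operate on statistically thresholded source-level responses and include a normalization step omitted here.

```python
import numpy as np

def lempel_ziv_complexity(bits: str) -> int:
    """Count the phrases in a simple Lempel-Ziv-style parse of a binary string."""
    seen, count, i, n = set(), 0, 0, len(bits)
    while i < n:
        j = i + 1
        while j <= n and bits[i:j] in seen:  # extend the phrase until it is new
            j += 1
        seen.add(bits[i:j])
        count += 1
        i = j
    return count

# Toy stand-in for a binarized TMS-evoked response (channels x time),
# where 1 marks a significant post-stimulus deflection.
rng = np.random.default_rng(0)
evoked = (rng.random((8, 200)) > 0.7).astype(int)
bits = "".join(map(str, evoked.flatten()))
print("LZ phrase count of evoked pattern:", lempel_ziv_complexity(bits))
```

The intuition: a cortex that answers the perturbation with a stereotyped, local wave (as in deep sleep or anesthesia) yields a compressible, low-complexity pattern, whereas a conscious cortex yields a response that is both widespread and differentiated, and hence resists compression.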

If IIT is right, and in sharp contrast to the dominant computational/functionalist view, AI lacks (and will lack) any spark of consciousness: it may talk and behave just as well as or better than any of us (it will be “functionally equivalent”), but it will not be “phenomenally equivalent” (it will feel nothing at all)5. In the words of T. Nagel, there will be nothing “it is like to be” a computer, no matter how intelligent. Just like the cerebellum, the computer has the wrong architecture for consciousness. Even though it may perform flawlessly every “cognitive” function we may care for, including those we usually consider uniquely human, all those functions will unroll “in the dark”. They will unroll as unconsciously as the processes in our brain that smoothly string together phonemes into words and words into sentences to express a fleeting thought.

If IIT is right, attributing consciousness to AI is truly an “existential” mistake – because consciousness is about being, not doing, and AI is about doing, not being. Under selective pressure, biological constraints may promote the co-evolution of intelligence and consciousness (by favoring highly integrated substrates)9. However, in a larger context, intelligence and consciousness can be doubly dissociated. There can be experience without the functional abilities that we associate with intelligence. For example, minimally responsive patients may be unable to do or say anything but may harbor rich subjective experiences8. And there can be great intelligence without consciousness: an eloquent AI may engage in a stimulating conversation and impress us with its intellect, without anything existing besides the stream of sentences we hear – in the words of P. Larkin, “No sight, no sound / No touch or taste or smell, nothing to think with / Nothing to love or link with”.

AI poses a unique and urgent challenge not just for mental health, but for the human condition and our place in nature. Either mainstream computational/functionalist approaches are right, and we – highly constrained and often defective biological machines – will soon be superseded by machines made of silicon that will not just be better and faster but will also enjoy a richer inner life. Or IIT is right, and every human experience is an extraordinary and precious phenomenon, one that requires a very special neural substrate that cannot be replicated by merely simulating its functions.


