PR-PL: A Novel Prototypical Representation Based Pairwise Learning Framework for Emotion Recognition Using EEG Signals
IEEE Transactions on Affective Computing ( IF 9.6 ) Pub Date : 2023-06-23 , DOI: 10.1109/taffc.2023.3288118
Rushuang Zhou, Zhiguo Zhang, Hong Fu, Li Zhang, Linling Li, Gan Huang, Fali Li, Xin Yang, Yining Dong, Yuan-Ting Zhang, Zhen Liang
Affective brain-computer interfaces based on electroencephalography (EEG) are an important branch of affective computing. However, individual differences in EEG emotional data and noisy labels arising from subjective feedback severely limit the effectiveness and generalizability of existing models. To tackle these two critical issues, we propose PR-PL, a novel transfer learning framework with Prototypical Representation based Pairwise Learning. Discriminative and generalizable EEG features are learned to reveal emotions across individuals, and the emotion recognition task is formulated as pairwise learning to improve the model's tolerance to noisy labels. More specifically, prototypical learning is developed to encode the inherent emotion-related semantic structure of EEG data and to align individuals' EEG features to a shared feature space while accounting for the feature separability of both source and target domains. Based on the aligned feature representations, pairwise learning with an adaptive pseudo-labeling method is introduced to encode the proximity relationships among samples and alleviate the effect of label noise on modeling. Extensive results on two benchmark databases (SEED and SEED-IV) under four cross-validation evaluation protocols validate the model's reliability and stability across subjects and sessions. Compared to the literature, the average improvement in emotion recognition across the four evaluation protocols is 2.04% (SEED) and 2.58% (SEED-IV).
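To make the abstract's key ingredients concrete, the following is a minimal NumPy sketch of the two ideas it names: class prototypes with soft (distance-based) assignments, and a pairwise loss with confidence-thresholded pseudo-labeling. This is an illustrative toy, not the authors' PR-PL implementation: the function names, the squared-distance softmax, the inner-product pair similarity, and the 0.9 confidence threshold are all assumptions for exposition.

```python
import numpy as np

def class_prototypes(feats, labels, n_classes):
    """Prototype of each emotion class = mean of its feature vectors."""
    return np.stack([feats[labels == c].mean(axis=0) for c in range(n_classes)])

def proto_probs(feats, protos, tau=1.0):
    """Soft assignment: softmax over negative squared distances to prototypes."""
    d = ((feats[:, None, :] - protos[None, :, :]) ** 2).sum(axis=-1)
    logits = -d / tau
    logits -= logits.max(axis=1, keepdims=True)  # numerical stability
    e = np.exp(logits)
    return e / e.sum(axis=1, keepdims=True)

def pairwise_loss(probs, labels):
    """Binary cross-entropy over sample pairs: the inner product of two
    soft assignments estimates the probability the pair shares a label,
    so the supervision is "same class or not" rather than the raw label."""
    same = (labels[:, None] == labels[None, :]).astype(float)
    sim = np.clip(probs @ probs.T, 1e-7, 1 - 1e-7)
    return -(same * np.log(sim) + (1 - same) * np.log(1 - sim)).mean()

def adaptive_pseudo_labels(probs, threshold=0.9):
    """Keep only (target-domain) samples whose max soft assignment is
    confident; return their pseudo labels and indices."""
    conf = probs.max(axis=1)
    keep = conf >= threshold
    return probs.argmax(axis=1)[keep], np.where(keep)[0]

# Toy two-class demo with well-separated "features".
X = np.array([[0., 0.], [0., 1.], [5., 5.], [5., 6.]])
y = np.array([0, 0, 1, 1])
P = class_prototypes(X, y, n_classes=2)
probs = proto_probs(X, P)
loss = pairwise_loss(probs, y)
```

The pairwise formulation is what gives the claimed tolerance to label noise: a mislabeled sample corrupts only the "same/different" targets of its own pairs, and the confidence threshold keeps low-quality pseudo labels out of the pair set entirely.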

Updated: 2023-06-23