CFDA-CSF: A Multi-Modal Domain Adaptation Method for Cross-Subject Emotion Recognition
IEEE Transactions on Affective Computing (IF 9.6) Pub Date: 2024-01-23, DOI: 10.1109/taffc.2024.3357656
Magdiel Jiménez-Guarneros, Gibran Fuentes-Pineda

Multi-modal classifiers for emotion recognition have become prominent, as the emotional states of subjects can be more comprehensively inferred from Electroencephalogram (EEG) signals and eye movements. However, existing classifiers experience a decrease in performance due to the distribution shift when applied to new users. Unsupervised domain adaptation (UDA) emerges as a solution to address the distribution shift between subjects by learning a shared latent feature space. Nevertheless, most UDA approaches focus on a single modality, while existing multi-modal approaches do not consider that fine-grained structures should also be explicitly aligned and that the learned feature space must be discriminative. In this paper, we propose Coarse and Fine-grained Distribution Alignment with Correlated and Separable Features (CFDA-CSF), which performs a coarse alignment over the global feature space and a fine-grained alignment between modalities from each domain distribution. At the same time, the model learns intra-domain correlated features, while a separable feature space is encouraged on new subjects. We conduct an extensive experimental study across the available sessions on three public datasets for multi-modal emotion recognition: SEED, SEED-IV, and SEED-V. Our method effectively improves recognition performance in every session, achieving average accuracies of 93.05%, 85.87%, and 91.20% for SEED; 85.72%, 89.60%, and 86.88% for SEED-IV; and 88.49%, 91.37%, and 91.57% for SEED-V.
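Since the abstract names the alignment stages but not their loss functions, the sketch below illustrates the coarse-plus-fine-grained idea using a Gaussian-kernel MMD criterion as a stand-in. The encoder sizes, the 310-d EEG and 33-d eye-movement feature dimensions (typical of SEED-style feature extraction), and all function names are assumptions; the correlated- and separable-feature terms of CFDA-CSF are not reproduced here.

```python
# Minimal sketch, NOT the authors' implementation: coarse alignment on the
# fused feature space plus fine-grained per-modality alignment, both using a
# Gaussian-kernel MMD loss (an assumed stand-in criterion).
import torch
import torch.nn as nn

def gaussian_mmd(x, y, sigma=1.0):
    """Biased squared-MMD estimate between two feature batches."""
    def k(a, b):
        return torch.exp(-torch.cdist(a, b) ** 2 / (2 * sigma ** 2))
    return k(x, x).mean() + k(y, y).mean() - 2 * k(x, y).mean()

class MultiModalNet(nn.Module):
    """Per-modality encoders and a classifier on the fused representation.

    Dimensions are hypothetical: 310-d EEG differential-entropy features and
    33-d eye-movement features, as commonly extracted for the SEED datasets.
    """
    def __init__(self, eeg_dim=310, eye_dim=33, hidden=64, n_classes=3):
        super().__init__()
        self.eeg_enc = nn.Sequential(nn.Linear(eeg_dim, hidden), nn.ReLU())
        self.eye_enc = nn.Sequential(nn.Linear(eye_dim, hidden), nn.ReLU())
        self.classifier = nn.Linear(2 * hidden, n_classes)

    def forward(self, eeg, eye):
        z_eeg, z_eye = self.eeg_enc(eeg), self.eye_enc(eye)
        z = torch.cat([z_eeg, z_eye], dim=1)  # fused global feature space
        return z_eeg, z_eye, z, self.classifier(z)

def alignment_loss(model, src_batch, tgt_batch):
    """Coarse term on fused features + fine-grained per-modality terms."""
    s_eeg, s_eye, s_z, _ = model(*src_batch)   # labeled source subjects
    t_eeg, t_eye, t_z, _ = model(*tgt_batch)   # unlabeled target subject
    coarse = gaussian_mmd(s_z, t_z)            # global feature space
    fine = gaussian_mmd(s_eeg, t_eeg) + gaussian_mmd(s_eye, t_eye)
    return coarse + fine
```

In a full UDA pipeline, such an alignment term would be combined with a supervised cross-entropy loss on the source subjects, so the shared latent space stays discriminative while the cross-subject distribution gap shrinks.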

Updated: 2024-01-23