MGEED: A Multimodal Genuine Emotion and Expression Detection Database
IEEE Transactions on Affective Computing ( IF 9.6 ) Pub Date : 2023-06-15 , DOI: 10.1109/taffc.2023.3286351
Yiming Wang 1 , Hui Yu 2 , Weihong Gao 2 , Yifan Xia 3 , Charles Nduka 4

Multimodal emotion recognition has attracted increasing interest from academia and industry in recent years, since it enables emotion detection using various modalities, such as facial expression images, speech, and physiological signals. Although research in this field has grown rapidly, it remains challenging to create a multimodal database containing facial electrical information, owing to the difficulty of capturing natural and subtle facial expression signals, such as optomyography (OMG) signals. To this end, we present the newly developed Multimodal Genuine Emotion and Expression Detection (MGEED) database, the first publicly available database containing facial OMG signals. MGEED comprises recordings from 17 subjects, with over 150K facial images, 140K depth maps, and physiological signals of several modalities, including OMG, electroencephalography (EEG) and electrocardiography (ECG) signals. The participants' emotions are evoked by video stimuli and the data are collected by a multimodal sensing system. With the collected data, an emotion recognition method is developed based on multimodal signal synchronisation, feature extraction, fusion and emotion prediction. The results show that superior performance can be achieved by fusing the visual, EEG and OMG features.
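The fusion step of the pipeline described in the abstract (synchronisation, feature extraction, fusion, prediction) can be sketched as follows. This is a minimal illustration of feature-level fusion of standardised per-modality features; the function names, feature dimensions, and random placeholder data are hypothetical and not taken from the MGEED paper:

```python
import numpy as np

rng = np.random.default_rng(0)

def extract_features(n_samples, dim, rng):
    """Stand-in for modality-specific feature extraction
    (real features would come from images, EEG, or OMG signals)."""
    return rng.standard_normal((n_samples, dim))

def zscore(x):
    """Standardise each feature so modalities with different scales
    contribute comparably after fusion."""
    return (x - x.mean(axis=0)) / (x.std(axis=0) + 1e-8)

def fuse(*feature_sets):
    """Feature-level fusion: concatenate normalised modality features
    along the feature axis."""
    return np.concatenate([zscore(f) for f in feature_sets], axis=1)

n = 120                                   # number of synchronised samples
visual = extract_features(n, 64, rng)     # e.g. facial-image embeddings
eeg = extract_features(n, 32, rng)        # e.g. EEG band-power features
omg = extract_features(n, 16, rng)        # e.g. OMG waveform statistics

fused = fuse(visual, eeg, omg)
print(fused.shape)  # (120, 112): one fused vector per sample
```

The fused vectors would then feed a downstream classifier for emotion prediction; whether fusion happens at the feature level (as here) or at the decision level is a design choice the paper's experiments would determine.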

Updated: 2023-06-15