WiFE: WiFi and Vision Based Unobtrusive Emotion Recognition via Gesture and Facial Expression
IEEE Transactions on Affective Computing (IF 9.6) Pub Date: 2023-06-13, DOI: 10.1109/taffc.2023.3285777
Yu Gu, Xiang Zhang, Huan Yan, Jingyang Huang, Zhi Liu, Mianxiong Dong, Fuji Ren
Emotion plays a critical role in making computers more human-like. As the first and most essential step, emotion recognition has recently emerged as a hot but relatively nascent topic: current research mainly focuses on a single modality (e.g., facial expression), while human emotional expression is inherently multi-modal. To this end, we propose an unobtrusive emotion recognition system leveraging two emotion-rich and tightly coupled modalities, i.e., gesture and facial expression. The system design faces two major challenges, namely, how to capture the emotional expression in both modalities without disturbing the subject, and how to leverage the relationship between modalities for recognizing the emotion. For the former, we explore WiFi and vision for unobtrusive, contactless sensing of gesture and facial expression, respectively. For the latter, we propose a novel deep learning framework named Multi-Source Learning (MSL) to efficiently exploit both the self-correlation within each modality and the cross-correlation between modalities for fine-grained emotion recognition. To evaluate the proposed method, we prototype the system on low-cost commodity WiFi and vision devices, build a first-of-its-kind WiFi-Vision emotion dataset, and conduct extensive experiments. Empirical results not only verify the effectiveness of WiFE in emotion recognition, but also confirm the superiority of multi-modality over single-modality.
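The abstract does not give the internals of the MSL framework, but the idea of combining self-correlation within a modality and cross-correlation between modalities can be illustrated with a minimal attention-based fusion sketch. Everything below (function names, feature dimensions, mean-pooling fusion) is an illustrative assumption, not the paper's actual architecture:

```python
import numpy as np

def softmax(x, axis=-1):
    # numerically stable softmax
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attention(q, k, v):
    # scaled dot-product attention: rows of q attend over rows of k/v
    scores = q @ k.T / np.sqrt(q.shape[-1])
    return softmax(scores, axis=-1) @ v

def fuse_modalities(wifi_feats, vision_feats):
    # self-correlation: each modality attends to itself
    wifi_self = attention(wifi_feats, wifi_feats, wifi_feats)
    vision_self = attention(vision_feats, vision_feats, vision_feats)
    # cross-correlation: each modality attends to the other
    wifi_cross = attention(wifi_self, vision_self, vision_self)
    vision_cross = attention(vision_self, wifi_self, wifi_self)
    # pool over time and concatenate into one joint emotion embedding
    return np.concatenate([wifi_cross.mean(0), vision_cross.mean(0)])

rng = np.random.default_rng(0)
wifi = rng.standard_normal((20, 16))    # e.g., 20 time steps of WiFi CSI gesture features
vision = rng.standard_normal((8, 16))   # e.g., 8 frames of facial-expression features
emb = fuse_modalities(wifi, vision)
print(emb.shape)  # (32,)
```

Note that the two input sequences may have different lengths (WiFi sampling is typically much faster than the camera frame rate); cross-attention handles this naturally, since the query sequence sets the output length. The fused embedding would then feed a downstream emotion classifier.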

Updated: 2023-06-13