LCANet: a model for analysis of students real-time sentiment by integrating attention mechanism and joint loss function
Complex & Intelligent Systems ( IF 5.0 ) Pub Date : 2024-11-13 , DOI: 10.1007/s40747-024-01608-8
Pengyun Hu, Xianpiao Tang, Liu Yang, Chuijian Kong, Daoxun Xia

Recognizing students’ facial expressions in actual classroom situations reveals their emotional states quickly, helping teachers gauge students’ learning pace and adjust their teaching strategies and methods accordingly, thereby improving the quality and effectiveness of classroom teaching. However, most previous facial expression recognition methods suffer from problems such as missing key facial features and imbalanced class distributions in the dataset, resulting in low recognition accuracy. To address these challenges, this paper proposes LCANet, a model built on a fused attention mechanism and a joint loss function, for recognizing students’ emotions in real classroom scenarios. The model uses ConvNeXt V2 as the backbone network to strengthen global feature extraction while directing the model’s attention to the key regions of facial expressions, and it incorporates an improved Channel Spatial Attention (CSA) module to extract more local feature information. Furthermore, to mitigate the class distribution imbalance in facial expression datasets, we introduce a joint loss function. Experimental results show that LCANet achieves good recognition rates on the public emotion datasets FERPlus, RAF-DB and AffectNet, with accuracies of 91.43%, 90.03% and 64.43%, respectively, demonstrating good robustness and generalizability. Additionally, we deployed the model in real classroom scenarios, where it detected and accurately predicted students’ classroom emotions in real time, providing a useful reference for improving teaching in smart-teaching settings.
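The abstract does not specify the exact form of the improved CSA module or the joint loss, so the sketch below is only illustrative. It assumes a CBAM-style channel-plus-spatial attention block and a joint loss that mixes standard cross-entropy with a focal term to counter class imbalance; the class names, hyperparameters (reduction, gamma, alpha) and the 7-class expression setup are assumptions for illustration, not the paper's implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class ChannelSpatialAttention(nn.Module):
    """CBAM-style channel + spatial attention (one assumed form of a CSA module)."""

    def __init__(self, channels: int, reduction: int = 16, spatial_kernel: int = 7):
        super().__init__()
        # Channel attention: squeeze spatial dims, produce per-channel weights.
        self.mlp = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
        )
        # Spatial attention: 2-channel (avg + max) map -> 1-channel mask.
        self.spatial_conv = nn.Conv2d(2, 1, spatial_kernel, padding=spatial_kernel // 2)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, _, _ = x.shape
        avg = self.mlp(x.mean(dim=(2, 3)))
        mx = self.mlp(x.amax(dim=(2, 3)))
        channel_w = torch.sigmoid(avg + mx).view(b, c, 1, 1)
        x = x * channel_w
        spatial_in = torch.cat([x.mean(dim=1, keepdim=True),
                                x.amax(dim=1, keepdim=True)], dim=1)
        spatial_w = torch.sigmoid(self.spatial_conv(spatial_in))
        return x * spatial_w


class JointLoss(nn.Module):
    """Weighted sum of cross-entropy and focal loss (a plausible joint loss for class imbalance)."""

    def __init__(self, gamma: float = 2.0, alpha: float = 0.5):
        super().__init__()
        self.gamma = gamma
        self.alpha = alpha  # mixing weight between the CE and focal terms

    def forward(self, logits: torch.Tensor, targets: torch.Tensor) -> torch.Tensor:
        ce = F.cross_entropy(logits, targets, reduction="none")
        pt = torch.exp(-ce)                    # probability assigned to the true class
        focal = (1.0 - pt) ** self.gamma * ce  # down-weights easy, frequent classes
        return (self.alpha * ce + (1.0 - self.alpha) * focal).mean()


if __name__ == "__main__":
    feats = torch.randn(4, 64, 28, 28)              # e.g. a backbone stage output
    refined = ChannelSpatialAttention(64)(feats)    # same shape, attention-reweighted
    logits = torch.randn(4, 7, requires_grad=True)  # 7 basic expression classes (assumed)
    labels = torch.randint(0, 7, (4,))
    loss = JointLoss()(logits, labels)
    loss.backward()
    print(refined.shape, float(loss))
```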



Updated: 2024-11-13