AtCAF: Attention-based causality-aware fusion network for multimodal sentiment analysis
Information Fusion (IF 14.7) | Pub Date: 2024-10-02 | DOI: 10.1016/j.inffus.2024.102725
Changqin Huang, Jili Chen, Qionghao Huang, Shijin Wang, Yaxin Tu, Xiaodi Huang
Multimodal sentiment analysis (MSA) involves interpreting sentiment from multiple sensory data modalities. Traditional MSA models often overlook causality between modalities, resulting in spurious correlations and ineffective cross-modal attention. To address these limitations, we propose the Attention-based Causality-Aware Fusion (AtCAF) network, designed from a causal perspective. To capture a causality-aware representation of text, we introduce the Causality-Aware Text Debiasing Module (CATDM), which utilizes the front-door adjustment. Furthermore, we employ the Counterfactual Cross-modal Attention (CCoAt) module to integrate causal information into modal fusion, thereby enhancing the quality of aggregation by incorporating more causality-aware cues. AtCAF achieves state-of-the-art performance across three datasets, demonstrating significant improvements in both standard and Out-Of-Distribution (OOD) settings. Specifically, AtCAF outperforms existing models with a 1.5% improvement in ACC-2 on the CMU-MOSI dataset, a 0.95% increase in ACC-7 on the CMU-MOSEI dataset under normal conditions, and a 1.47% enhancement under OOD conditions. CATDM improves category cohesion in feature space, while CCoAt accurately classifies ambiguous samples through context filtering. Overall, AtCAF offers a robust solution for social media sentiment analysis, delivering reliable insights by effectively addressing data imbalance. The code is available at https://github.com/TheShy-Dream/AtCAF .
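To illustrate the general idea behind counterfactual cross-modal attention (not the authors' implementation, which is available at the repository above), the following minimal numpy sketch contrasts a factual attention pass with a counterfactual pass in which the attention weights are replaced by an uninformative uniform distribution; subtracting the two isolates the contribution of the learned attention pattern itself. All function names and dimensions here are illustrative assumptions.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def cross_modal_attention(q, k, v):
    # Scaled dot-product attention: one modality's queries (e.g. text)
    # attend to another modality's keys/values (e.g. audio frames).
    scores = q @ k.T / np.sqrt(q.shape[-1])
    weights = softmax(scores, axis=-1)
    return weights @ v, weights

def counterfactual_effect(q, k, v):
    # Factual pass: output under the learned attention distribution.
    factual, w = cross_modal_attention(q, k, v)
    # Counterfactual pass: imagine the attention carried no information
    # (uniform weights over the other modality's positions).
    uniform = np.full_like(w, 1.0 / w.shape[-1])
    counterfactual = uniform @ v
    # The difference keeps only the effect attributable to the
    # attention pattern, suppressing context that any weighting
    # would have passed through.
    return factual - counterfactual

rng = np.random.default_rng(0)
q = rng.normal(size=(4, 8))   # 4 text tokens, feature dim 8
k = rng.normal(size=(6, 8))   # 6 frames from another modality
v = rng.normal(size=(6, 8))
effect = counterfactual_effect(q, k, v)
print(effect.shape)  # (4, 8)
```

Note that when the queries are uninformative (all zeros), the factual attention is itself uniform and the counterfactual effect vanishes, which matches the intuition that no causal credit should go to an attention pattern that carries no signal.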
Updated: 2024-10-02