A counterfactual explanation method based on modified group influence function for recommendation
Complex & Intelligent Systems ( IF 5.0 ) Pub Date : 2024-07-27 , DOI: 10.1007/s40747-024-01547-4
Yupu Guo , Fei Cai , Zhiqiang Pan , Taihua Shao , Honghui Chen , Xin Zhang

In recent years, recommendation explanation methods have received widespread attention due to their potential to enhance user experience and streamline transactions. In scenarios where auxiliary information such as text and attributes is lacking, counterfactual explanation has emerged as a crucial technique for explaining recommendations. However, existing counterfactual explanation methods face two primary challenges. First, a substantial bias exists in the calculation of the group influence function, leading to increasingly inaccurate predictions as the counterfactual explanation group expands. In addition, the importance of collaborative filtering for counterfactual explanation is overlooked, which results in lengthy, narrow, and inaccurate explanations. To address these issues, we propose a counterfactual explanation method based on a Modified Group Influence Function for recommendation. In particular, via a rigorous formula derivation, we demonstrate that a simple summation of individual influence functions cannot capture the group influence in recommendations. Building upon the improved influence function, we then construct counterfactual groups by iteratively incorporating the training samples that exert the greatest influence on the recommended results, continuously adjusting the parameters to ensure accuracy. Finally, we expand the search space for counterfactual groups by incorporating collaborative filtering information from different users. To evaluate the effectiveness of our method, we employ it to explain the recommendations generated by two common recommendation models, i.e., Matrix Factorization and Neural Collaborative Filtering, on two publicly available datasets. The evaluation showcases the superior performance of the proposed method in providing counterfactual explanations: in the most significant case, it achieves a 17% lead in counterfactual precision over the best baseline explanation method.
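For context (background from the influence-function literature, not a formula taken from this paper): the standard first-order influence function estimates how the loss on a test prediction changes when a single training interaction $z$ is removed, via the model's empirical Hessian:

```latex
\mathcal{I}(z, z_{\text{test}})
  = -\,\nabla_\theta L(z_{\text{test}}, \hat{\theta})^{\top}
     H_{\hat{\theta}}^{-1}\,
     \nabla_\theta L(z, \hat{\theta}),
\qquad
H_{\hat{\theta}} = \frac{1}{n}\sum_{i=1}^{n} \nabla_\theta^{2} L(z_i, \hat{\theta}).
```

Approximating a group's effect by the sum $\sum_{z \in G} \mathcal{I}(z, z_{\text{test}})$ is only a first-order estimate that ignores interactions among the removed samples; this is the kind of bias, growing with group size, that the paper's modified group influence function is intended to correct.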
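The greedy construction described in the abstract can be sketched generically as follows. This is a minimal illustration, not the authors' implementation: `influence` is a hypothetical oracle scoring how much removing a training interaction would lower the target recommendation score, and `rec_score` re-evaluates the model with a candidate group removed (standing in for the paper's parameter re-adjustment step).

```python
def greedy_counterfactual_group(candidates, influence, rec_score,
                                threshold, max_size=10):
    """Iteratively add the most influential interaction until the
    recommendation score drops below `threshold` (i.e., the item would
    no longer be recommended). Hypothetical sketch of the greedy idea."""
    group = []
    remaining = list(candidates)
    while remaining and len(group) < max_size:
        # Pick the interaction with the greatest estimated influence.
        best = max(remaining, key=influence)
        remaining.remove(best)
        group.append(best)
        # A plain sum of individual influences can be biased for groups,
        # so the actual model score is re-checked after each removal.
        if rec_score(group) < threshold:
            return group
    return None  # no counterfactual group found within the size budget


# Toy example: fixed influence scores and an additive score model.
scores = {"a": 0.5, "b": 0.3, "c": 0.1}
result = greedy_counterfactual_group(
    candidates=scores,
    influence=scores.get,
    rec_score=lambda g: 1.0 - sum(scores[i] for i in g),
    threshold=0.4,
)
print(result)  # -> ['a', 'b']
```

In the toy run, removing "a" alone leaves a score of 0.5 (still recommended), so the search adds "b", after which the score 0.2 falls below the threshold and the pair is returned as the counterfactual group.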




Updated: 2024-07-27