Complex & Intelligent Systems (IF 5.0) · Pub Date: 2024-11-23 · DOI: 10.1007/s40747-024-01636-4 · Wenxin Chen, Jinrui Zhang, Deyu Zhang
Federated learning is a distributed machine learning paradigm that trains a shared model on data from many clients. It faces a core challenge: data heterogeneity arising from diverse client settings and environments. Existing methods typically focus on mitigating weight divergence or enhancing aggregation strategies, overlooking the mixed skew in label and feature distributions prevalent in real-world data. To address this, we present FL-Joint, a federated learning framework that aligns label and feature distributions using auxiliary loss functions. The framework adopts a class-balanced classifier as the local model and aligns label and feature distributions locally through auxiliary losses based on class-conditional information and pseudo-labels. This alignment drives client feature distributions to converge toward a shared feature space, refining decision boundaries and improving the global model's generalization ability. Extensive experiments across diverse datasets and heterogeneous data settings show that our method significantly improves accuracy and convergence speed compared to baseline approaches.
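The abstract does not give the exact form of the auxiliary losses, so the following is only an illustrative sketch of the general idea: a local objective combining class-balanced cross-entropy (label alignment) with a term that pulls features of confidently pseudo-labeled samples toward global class prototypes (feature alignment). The specific weighting scheme (effective-number weights), the confidence threshold `tau`, the trade-off `lam`, and all function names are assumptions, not the paper's actual formulation.

```python
import numpy as np

def class_balanced_weights(label_counts, beta=0.999):
    # Effective-number class weights; an assumed choice, since the abstract
    # only states that a "class-balanced classifier" is used.
    eff = (1.0 - np.power(beta, label_counts)) / (1.0 - beta)
    w = 1.0 / eff
    return w / w.sum() * len(label_counts)  # normalize to mean 1

def joint_local_loss(logits, labels, feats, global_protos, label_counts,
                     tau=0.8, lam=0.5):
    """Hypothetical local objective in the spirit of FL-Joint:
    class-balanced cross-entropy plus a pseudo-label
    prototype-alignment term on the feature space."""
    # numerically stable softmax over the logits
    z = logits - logits.max(axis=1, keepdims=True)
    p = np.exp(z) / np.exp(z).sum(axis=1, keepdims=True)
    # label alignment: cross-entropy reweighted per class
    w = class_balanced_weights(label_counts)
    ce = -np.mean(w[labels] * np.log(p[np.arange(len(labels)), labels] + 1e-12))
    # feature alignment: move features of confident samples toward the
    # global prototype of their pseudo-label (class-conditional information)
    pseudo = p.argmax(axis=1)
    confident = p.max(axis=1) >= tau
    if confident.any():
        diff = feats[confident] - global_protos[pseudo[confident]]
        align = np.mean(np.sum(diff ** 2, axis=1))
    else:
        align = 0.0
    return ce + lam * align
```

In a full federated round, each client would minimize this loss locally before the server aggregates the models; the prototype term is what would nudge client feature distributions toward a shared feature space.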
FL-Joint: jointly aligning features and labels in federated learning for data heterogeneity