Federated Distillation in Massive MIMO Networks: Dynamic Training, Convergence Analysis, and Communication Channel-Aware Learning
IEEE Transactions on Cognitive Communications and Networking (IF 7.4) Pub Date: 2024-03-18, DOI: 10.1109/tccn.2024.3378215
Yuchen Mu, Navneet Garg, Tharmalingam Ratnarajah

Federated Distillation (FD) is a novel distributed learning paradigm that shares the privacy-preserving nature of Federated Learning (FL) while offering solutions to challenges of the FL framework, such as the ability to train local models with non-identical architectures. In this paper, a communication channel-aware FD framework is presented for a multi-user massive multiple-input multiple-output (mMIMO) communication system, where zero-forcing (ZF) and minimum mean-squared-error (MMSE) schemes are utilized to null the intra-cell interference. Unlike most existing studies, in which both model parameters and model outputs (logits) are transmitted, we exclusively adopt logits as the information exchanged over the wireless links, reducing the overall communication overhead in each round. Based on the convergence analysis, a dynamic-training-steps FD algorithm (FedTSKD) is proposed to save communication resources and accelerate the training process. Further, a group-based FD algorithm (FedTSKD-G) is proposed for systems experiencing heterogeneous channel conditions such as deep fades. Simulation results on image classification tasks with the ImageNette/STL-10, CIFAR-10/STL-10, and MNIST/FMNIST dataset combinations demonstrate the proposed algorithms' effectiveness and efficiency. Comparison with the FL algorithm shows that the proposed FD algorithm incurs only 1% of FL's communication overhead to achieve the same testing performance.
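The logit-exchange mechanism at the heart of the abstract can be sketched in a few lines. This is an illustrative NumPy toy, not the paper's FedTSKD algorithm; the shared public dataset, the leave-one-out teacher aggregation, and the 1M-parameter model size are assumptions made purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
NUM_CLIENTS, NUM_CLASSES = 4, 10

def softmax(z, temperature=2.0):
    """Temperature-softened softmax, as used in knowledge distillation."""
    e = np.exp((z - z.max(axis=-1, keepdims=True)) / temperature)
    return e / e.sum(axis=-1, keepdims=True)

# Stand-in for each client's per-class average logits computed on a
# shared public dataset (a C x C matrix: one logit vector per class).
uploads = [rng.normal(size=(NUM_CLASSES, NUM_CLASSES))
           for _ in range(NUM_CLIENTS)]

# Server-side aggregation; the teacher signal returned to client k
# excludes k's own contribution (a common FD convention, assumed here).
total = np.sum(uploads, axis=0)
teachers = [(total - uploads[k]) / (NUM_CLIENTS - 1)
            for k in range(NUM_CLIENTS)]

# Distillation step for client 0: KL divergence between the teacher's
# softened outputs and the client's own (the loss it would minimize).
p_teacher = softmax(teachers[0])
p_student = softmax(uploads[0])
kl = np.sum(p_teacher * (np.log(p_teacher) - np.log(p_student)),
            axis=-1).mean()

# Per-round upload size: logits vs. full model parameters.
logit_floats = NUM_CLASSES * NUM_CLASSES      # 100 floats per client
model_floats = 1_000_000                      # e.g. a 1M-parameter CNN
print(f"FD uploads {logit_floats} floats vs. {model_floats} for FL")
```

Because the upload scales with the number of classes rather than the model size, the per-round cost drops by orders of magnitude, which is consistent with the roughly 1% communication overhead the abstract reports relative to FL.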
