Communication-efficient federated learning

Published: 2021-04-27
Journal: PNAS
DOI: 10.1073/pnas.2024789118
Authors: Mingzhe Chen, Nir Shlezinger, H. Vincent Poor, Yonina C. Eldar, Shuguang Cui
Abstract: Federated learning (FL) is an emerging paradigm that enables multiple devices to collaborate in training machine learning (ML) models without having to share their possibly private data. FL requires a multitude of devices to frequently exchange their learned model updates, introducing significant communication overhead that poses a major challenge for FL over realistic networks with limited computational and communication resources. In this article, we propose a communication-efficient FL framework that enables edge devices to efficiently train and transmit model parameters, significantly improving FL performance and convergence speed. Our proposed FL framework paves the way to collaborative ML in large-scale networking systems such as Internet of Things networks.

Data availability: All study data are included in the article and/or [SI Appendix][1]. All code required to reproduce the results reported in this paper is available online at GitHub () and iHub ().

[1]: https://www.pnas.org/lookup/suppl/doi:10.1073/pnas.2024789118/-/DCSupplemental
Keywords: machine learning; federated learning; wireless communications
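
The abstract describes the core FL loop: devices compute local model updates and upload them to a server, and reducing the size of those uploads is what makes the framework communication-efficient. The abstract does not specify the paper's actual compression technique, so the sketch below uses top-k update sparsification as a generic stand-in; the task (linear regression on synthetic data), the device count, and helper names such as `sparsify_top_k` are all illustrative assumptions, not the authors' method.

```python
# A minimal sketch of communication-efficient federated averaging,
# assuming top-k sparsification of model updates. This is NOT the
# paper's algorithm; it only illustrates the pattern the abstract
# describes: local training, compressed uploads, server-side averaging.
import numpy as np

NUM_DEVICES = 10   # hypothetical number of edge devices
DIM = 100          # model parameter dimension
TOP_K_FRAC = 0.1   # fraction of coordinates each device uploads per round
ROUNDS = 100
LR = 0.1

rng = np.random.default_rng(0)
true_w = rng.normal(size=DIM)
# Synthetic per-device data, standing in for private local datasets.
local_data = []
for _ in range(NUM_DEVICES):
    X = rng.normal(size=(32, DIM))
    y = X @ true_w + 0.01 * rng.normal(size=32)
    local_data.append((X, y))

def local_update(w, X, y, lr=LR):
    """One local gradient step on a device; returns the model delta."""
    grad = X.T @ (X @ w - y) / len(y)
    return -lr * grad

def sparsify_top_k(delta, frac):
    """Keep only the largest-magnitude coordinates of the update."""
    k = max(1, int(frac * delta.size))
    idx = np.argpartition(np.abs(delta), -k)[-k:]
    return idx, delta[idx]

w_global = np.zeros(DIM)
for _ in range(ROUNDS):
    aggregate = np.zeros(DIM)
    for X, y in local_data:
        idx, vals = sparsify_top_k(local_update(w_global, X, y), TOP_K_FRAC)
        sparse_update = np.zeros(DIM)
        sparse_update[idx] = vals        # server reconstructs the sparse delta
        aggregate += sparse_update
    w_global += aggregate / NUM_DEVICES  # federated averaging of compressed updates

print("distance to ground truth:", np.linalg.norm(w_global - true_w))
```

Under these assumptions each device uploads only k index-value pairs per round instead of all DIM parameters; practical systems typically combine such sparsification with quantization and error feedback to recover accuracy lost to compression.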