Introduction

Developing resource-efficient solutions (e.g., in terms of networking and compute) for Federated Learning.

Summary

Federated Learning (FL) is an ML technique that trains a model across multiple decentralized edge devices or servers. Each device or server holds its own local data, and training proceeds without the participants ever exchanging that data. Instead, participants perform local optimization steps on their private data and share only parameter updates. FL naturally introduces overheads, including the bandwidth needed to exchange parameter updates and the compute burden placed on participants. Our research focuses on reducing these overheads and bringing FL a step closer to wide adoption.
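
To make the setup concrete, here is a minimal sketch of a single FL round in a FedAvg-like style, using a toy least-squares objective; all function names and parameters are illustrative and not tied to any of the projects below.

```python
import numpy as np

def local_update(weights, data, lr=0.1, steps=5):
    """One participant's local optimization on its private data
    (plain least-squares gradient steps stand in for real training)."""
    X, y = data
    w = weights.copy()
    for _ in range(steps):
        grad = X.T @ (X @ w - y) / len(y)
        w -= lr * grad
    return w - weights          # only the parameter update is shared

def fl_round(weights, clients):
    """Server: collect the clients' updates and average them."""
    updates = [local_update(weights, data) for data in clients]
    return weights + np.mean(updates, axis=0)

# Example: three clients, each with its own private dataset.
rng = np.random.default_rng(0)
clients = [(rng.standard_normal((50, 8)), rng.standard_normal(50)) for _ in range(3)]
w = np.zeros(8)
for _ in range(20):
    w = fl_round(w, clients)
```

Every round, each client uploads (and downloads) a full parameter vector; the DME and compression work below targets exactly this per-round communication cost.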

Details

DRIVE [NeurIPS 2021]: NeurIPS 2021 video

Distributed Mean Estimation (DME) is a central building block in Federated Learning. DRIVE is a novel DME technique with appealing performance and theoretical guarantees. It uses only a single bit per coordinate yet achieves better accuracy than state-of-the-art compression techniques that use a similar, and often even larger, communication budget.
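
As a rough illustration of the one-bit-per-coordinate idea, the sketch below rotates each client's vector with a shared random rotation, transmits only the coordinate signs plus a single scale, and averages the decoded estimates at the server. This is not DRIVE's exact algorithm (the paper uses structured rotations and carefully chosen scales with accompanying guarantees); the names and the scaling choice here are illustrative.

```python
import numpy as np

def shared_rotation(d, seed):
    """Random orthogonal matrix derived from a shared seed.
    (Practical schemes use structured rotations, e.g. randomized
    Hadamard transforms, for O(d log d) cost.)"""
    rng = np.random.default_rng(seed)
    q, _ = np.linalg.qr(rng.standard_normal((d, d)))
    return q

def encode(x, R):
    """Client: rotate, then send one bit per coordinate plus one scalar."""
    z = R @ x
    return np.abs(z).mean(), np.signbit(z)

def decode(scale, bits, R):
    """Server: rebuild an estimate from the signs and the scale."""
    z_hat = scale * np.where(bits, -1.0, 1.0)
    return R.T @ z_hat            # undo the rotation

# Mean estimation: decode each client's message and average.
d, n = 128, 10
R = shared_rotation(d, seed=0)
xs = [np.random.default_rng(i).standard_normal(d) for i in range(n)]
mean_est = np.mean([decode(*encode(x, R), R) for x in xs], axis=0)
true_mean = np.mean(xs, axis=0)
print(np.linalg.norm(mean_est - true_mean) / np.linalg.norm(true_mean))
```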

EDEN [ICML 2022]: ICML 2022 video, Python Package

EDEN is a robust Distributed Mean Estimation (DME) technique that extends DRIVE. It naturally supports heterogeneous communication budgets and lossy transport.
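
One way to picture heterogeneous communication budgets is to let each client quantize its vector with a bit-width matching its uplink capacity. The sketch below uses plain stochastic uniform quantization to make that point; it is not EDEN's quantizer, and the Python package linked above exposes the real implementation.

```python
import numpy as np

def stochastic_quantize(z, bits, rng):
    """Unbiased uniform quantization of a vector to 2**bits levels
    (illustrative only; not EDEN's quantizer)."""
    lo, hi = z.min(), z.max()
    levels = 2 ** bits - 1
    scaled = (z - lo) / (hi - lo) * levels
    q = np.floor(scaled + rng.random(z.shape))   # stochastic rounding
    return lo + q / levels * (hi - lo)

# Heterogeneous budgets: each client picks a bit-width it can afford.
rng = np.random.default_rng(0)
z = rng.standard_normal(64)
for bits in (1, 2, 4):
    err = np.linalg.norm(stochastic_quantize(z, bits, rng) - z) / np.linalg.norm(z)
    print(f"{bits}-bit relative error: {err:.3f}")
```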

DoCoFL [ICML 2023]: ICML 2023 video

DoCoFL is a new framework for downlink compression in the cross-device federated learning setting.

Blogs:

  • VMware Research Group’s EDEN Becomes Part of OpenFL
  • Pushing the Limits of Network Efficiency for Federated Machine Learning
  • DoCoFL: Downlink Compression for Cross-Device Federated Learning

Researchers

External Researchers

  • Amit Portnoy [Ben-Gurion University]
  • Gal Mendelson [Stanford]
  • Kfir Y. Levy [Technion]
  • Michael Mitzenmacher [Harvard]
  • Ran Ben Basat [University College London]
  • Ron Dorfman [VMware and Technion]