Developing resource-efficient solutions for Federated Learning, in terms of both networking and compute.


Federated Learning (FL) is a machine-learning technique that trains a model across multiple decentralized edge devices or servers, each holding its own local data. Training proceeds without exchanging the participants' local data: instead, participants perform local optimization steps on their private data and share only parameter updates. FL naturally introduces overheads, including the bandwidth needed to exchange parameter updates and the compute burden placed on the participants. Our research focuses on reducing these overheads, taking FL a step closer to wide adoption.
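The training loop described above can be illustrated with a minimal FedAvg-style sketch (a toy linear model with synthetic data; all names and parameters here are illustrative, not the group's code). Each client runs local SGD on its private data and shares only its parameter delta; the server averages the deltas.

```python
import numpy as np

rng = np.random.default_rng(0)

def local_update(weights, x, y, lr=0.1, steps=5):
    """One client's local SGD on a linear model; the data never leaves the client."""
    w = weights.copy()
    for _ in range(steps):
        grad = 2 * x.T @ (x @ w - y) / len(y)  # gradient of the MSE loss
        w -= lr * grad
    return w - weights  # share only the parameter delta, not the data

def fedavg_round(weights, clients):
    """Server step: average the clients' deltas and apply them."""
    deltas = [local_update(weights, x, y) for x, y in clients]
    return weights + np.mean(deltas, axis=0)

# Toy setup: three clients whose private data comes from one true model.
w_true = np.array([1.0, -2.0])
clients = []
for _ in range(3):
    x = rng.normal(size=(32, 2))
    clients.append((x, x @ w_true))

w = np.zeros(2)
for _ in range(50):
    w = fedavg_round(w, clients)
```

Note that the only traffic per round is one parameter-sized delta per client, which is exactly the bandwidth overhead that compression techniques such as DRIVE and EDEN target.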


DRIVE [NeurIPS 2021]: GitHub Repo

Distributed Mean Estimation (DME) is a central building block in Federated Learning. DRIVE is a novel DME technique with appealing performance and guarantees: using only a single bit per coordinate, it achieves better accuracy than state-of-the-art compression techniques that use a similar, and often even larger, communication budget.
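The one-bit-per-coordinate idea can be sketched as follows (a simplification, not DRIVE's implementation: a dense random rotation stands in for the structured transform used in practice, and the scale choice here is one natural option). The sender rotates its vector, transmits only the signs plus a single float scale, and the receiver inverts the rotation.

```python
import numpy as np

rng = np.random.default_rng(1)
d = 256
# Shared random rotation; in practice a structured transform is used for speed.
R, _ = np.linalg.qr(rng.normal(size=(d, d)))

def encode(x):
    z = R @ x
    scale = (x @ x) / np.abs(z).sum()   # ||x||^2 / ||Rx||_1
    return np.sign(z), scale            # one bit per coordinate + one float

def decode(bits, scale):
    return scale * (R.T @ bits)         # estimate of the original vector

x = rng.normal(size=d)
bits, scale = encode(x)
x_hat = decode(bits, scale)
```

Despite keeping only one bit per coordinate, the reconstruction error stays bounded relative to the vector's norm, and averaging many such estimates across clients (the DME setting) further reduces the error.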

EDEN [ICML 2022]: GitHub Repo , Python Package

EDEN is a robust Distributed Mean Estimation (DME) technique that extends DRIVE. It naturally supports heterogeneous communication budgets (e.g., clients with different bandwidth constraints) and lossy transport (e.g., packet loss).
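Heterogeneous budgets can be illustrated with a toy sketch (my simplification, not EDEN's algorithm: a dense random rotation and plain stochastic uniform quantization stand in for EDEN's construction). Each client quantizes the rotated vector with its own bit budget, and the server averages the resulting unbiased estimates.

```python
import numpy as np

rng = np.random.default_rng(2)
d = 128
R, _ = np.linalg.qr(rng.normal(size=(d, d)))  # shared random rotation

def quantize(x, bits):
    """Stochastic uniform quantization of the rotated vector with a given budget."""
    z = R @ x
    lo, hi = z.min(), z.max()
    levels = 2 ** bits - 1
    step = (hi - lo) / levels
    # Stochastic rounding keeps the quantizer unbiased on average.
    idx = np.floor((z - lo) / step + rng.random(z.shape))
    return np.clip(idx, 0, levels), lo, step

def dequantize(idx, lo, step):
    return R.T @ (lo + idx * step)  # invert quantization, then the rotation

# Clients with different bit budgets estimate the same vector.
x = rng.normal(size=d)
estimates = [dequantize(*quantize(x, b)) for b in (1, 2, 4, 8)]
x_hat = np.mean(estimates, axis=0)
```

Averaging lets low-budget clients still contribute: the server-side mean is more accurate than the coarsest individual estimate.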


VMware Research Group’s EDEN Becomes Part of OpenFL

Pushing the Limits of Network Efficiency for Federated Machine Learning


External Researchers

  • Amit Portnoy [Ben-Gurion University]
  • Gal Mendelson [Stanford]
  • Michael Mitzenmacher [Harvard]
  • Ran Ben Basat [University College London]
