Introduction
Developing resource-efficient solutions for Federated Learning that reduce its networking and compute overheads.
Summary
Federated Learning (FL) is a machine learning technique that trains a model across multiple decentralized edge devices or servers. Each participant holds its own local data, and training proceeds without that data ever being exchanged. Instead, participants perform local optimization steps using their private data and share only the resulting parameter updates.
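For intuition, here is a minimal FedAvg-style round in Python. This is a sketch, not our system: the least-squares task, learning rate, and all function names are illustrative assumptions.

```python
import numpy as np

def gradient(w, data):
    # Least-squares gradient on one client's private (X, y) dataset.
    X, y = data
    return 2.0 * X.T @ (X @ w - y) / len(y)

def local_update(w_global, data, lr=0.1, steps=5):
    # A participant optimizes locally and shares only its model delta.
    w = w_global.copy()
    for _ in range(steps):
        w -= lr * gradient(w, data)
    return w - w_global

def federated_round(w_global, client_datasets):
    # The server aggregates parameter updates; raw data never leaves a client.
    deltas = [local_update(w_global, d) for d in client_datasets]
    return w_global + np.mean(deltas, axis=0)

rng = np.random.default_rng(0)
clients = [(rng.standard_normal((20, 3)), rng.standard_normal(20)) for _ in range(4)]
w = np.zeros(3)
for _ in range(30):
    w = federated_round(w, clients)
```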
FL naturally introduces overheads, including the bandwidth required to exchange parameter updates and the compute burden placed on participants. Our research focuses on reducing these overheads and taking FL a step closer to wide adoption.
Details
Distributed Mean Estimation (DME) is a central building block in Federated Learning: in every round, the server must estimate the average of the clients' parameter updates from their compressed messages.
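Concretely, suppose each of n clients holds a vector x_i in R^d; the server must produce an estimate of the true mean from the clients' compressed messages, with quality typically measured by the expected squared error:

```latex
\mu = \frac{1}{n}\sum_{i=1}^{n} x_i,
\qquad
\mathrm{MSE} = \mathbb{E}\!\left[\lVert \hat{\mu} - \mu \rVert_2^2\right].
```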
DRIVE is a novel DME technique with strong theoretical guarantees. It uses only a single bit per coordinate, yet achieves better accuracy than state-of-the-art compression techniques that use a similar, and often even larger, communication budget.
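To make the mechanism concrete, here is a minimal sketch of a DRIVE-style encoder/decoder (function names are illustrative, and corner cases such as a zero vector are ignored). The client applies a cheap random rotation, a random sign flip followed by a Hadamard transform, sends only the signs of the rotated coordinates plus a single scalar scale, and the server inverts the rotation. In practice the random signs come from a seed shared with the server rather than being transmitted.

```python
import numpy as np

def fwht(x):
    # Orthonormal fast Walsh-Hadamard transform; len(x) must be a power of two.
    x = x.astype(float).copy()
    h, d = 1, len(x)
    while h < d:
        for i in range(0, d, 2 * h):
            a, b = x[i:i + h].copy(), x[i + h:i + 2 * h].copy()
            x[i:i + h], x[i + h:i + 2 * h] = a + b, a - b
        h *= 2
    return x / np.sqrt(d)

def drive_encode(x, signs):
    z = fwht(signs * x)                      # random rotation R = H * diag(signs)
    scale = np.dot(x, x) / np.abs(z).sum()   # chosen so <estimate, x> = ||x||^2
    return np.sign(z), scale                 # 1 bit per coordinate + one float

def drive_decode(bits, scale, signs):
    # Invert the rotation: R^T = diag(signs) * H for the orthonormal, symmetric H.
    return signs * fwht(scale * bits)

rng = np.random.default_rng(1)
x = rng.standard_normal(8)
signs = rng.choice([-1.0, 1.0], size=8)      # shared randomness (a seed in practice)
x_hat = drive_decode(*drive_encode(x, signs), signs)
```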
EDEN is a robust DME technique that extends DRIVE; it naturally supports heterogeneous communication budgets and lossy transport.
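The sketch below (reusing fwht, signs, and rng from the DRIVE sketch above) illustrates only the heterogeneous-budget interface: each client quantizes its rotated vector at its own bit budget, and the server averages the decoded estimates. It substitutes a plain unbiased stochastic uniform quantizer for EDEN's optimized quantizer, so it should be read as an illustration of the setting rather than of EDEN's actual algorithm.

```python
def quantize(z, bits, rng):
    # Unbiased stochastic uniform quantizer standing in for EDEN's quantizer;
    # assumes z is not constant. `bits` may differ from client to client.
    levels = 2 ** bits - 1
    lo, step = z.min(), (z.max() - z.min()) / levels
    t = (z - lo) / step
    q = np.floor(t)
    q += rng.random(len(z)) < (t - q)        # stochastic rounding keeps E[q] = t
    return q, lo, step                       # q costs `bits` bits per coordinate

def heterogeneous_mean(client_vectors, budgets, signs, rng):
    estimates = []
    for x, bits in zip(client_vectors, budgets):
        z = fwht(signs * x)                  # shared random rotation, as in DRIVE
        q, lo, step = quantize(z, bits, rng)
        estimates.append(signs * fwht(lo + q * step))
    return np.mean(estimates, axis=0)        # server averages decoded estimates

vectors = [rng.standard_normal(8) for _ in range(4)]
mean_hat = heterogeneous_mean(vectors, budgets=[1, 2, 4, 8], signs=signs, rng=rng)
```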
Blogs:
- VMware Research Group’s EDEN Becomes Part of OpenFL
- Pushing the Limits of Network Efficiency for Federated Machine Learning
Researchers
External Researchers
- Amit Portnoy [Ben-Gurion University]
- Gal Mendelson [Stanford]
- Michael Mitzenmacher [Harvard]
- Ran Ben Basat [University College London]