Introduction

DoCoFL is a novel framework for downlink bandwidth reduction in the challenging cross-device federated learning setting.

Abstract

Many compression techniques have been proposed to reduce the communication overhead of Federated Learning training procedures. However, these are typically designed for compressing model updates, which are expected to decay throughout training. As a result, such methods are inapplicable to downlink (i.e., from the parameter server to clients) compression in the cross-device setting, where heterogeneous clients may appear only once during training and thus must download the model parameters. Accordingly, we propose DoCoFL – a new framework for downlink compression in the cross-device setting. Importantly, DoCoFL can be seamlessly combined with many uplink compression schemes, rendering it suitable for bi-directional compression. Through extensive evaluation, we show that DoCoFL offers significant bi-directional bandwidth reduction while achieving accuracy competitive with that of a baseline without any compression.
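The core observation above is that a client appearing for the first time has no previous model state, so update compression (which exploits decaying updates) cannot help it; the downlink must compress the full parameter vector itself. The following minimal sketch illustrates that generic idea with simple uniform quantization of the parameters. It is not DoCoFL's actual algorithm, and all function names here are hypothetical:

```python
# Illustrative sketch only -- NOT DoCoFL's algorithm. It shows the generic
# idea of downlink compression: quantize the *full* model parameters so that
# a newly arriving cross-device client can download them cheaply, since such
# a client has no prior state and cannot apply compressed *updates*.

def quantize(params, bits=4):
    """Uniformly quantize a list of floats to `bits` bits per entry."""
    lo, hi = min(params), max(params)
    levels = (1 << bits) - 1
    scale = (hi - lo) / levels if hi > lo else 1.0
    codes = [round((p - lo) / scale) for p in params]
    return codes, lo, scale

def dequantize(codes, lo, scale):
    """Reconstruct approximate parameters from integer codes."""
    return [lo + c * scale for c in codes]

# A fresh client must download full parameters; 4-bit codes cut downlink
# traffic roughly 8x relative to 32-bit floats, at bounded precision loss.
params = [0.13, -0.42, 0.99, 0.0, -1.0, 0.57]
codes, lo, scale = quantize(params, bits=4)
recovered = dequantize(codes, lo, scale)
max_err = max(abs(p - r) for p, r in zip(params, recovered))
# Uniform quantization error is at most half a quantization step.
print(max_err <= scale / 2 + 1e-9)
```

The trade-off this sketch makes visible is the one the paper targets: fewer bits per parameter means less downlink bandwidth but more reconstruction error, and the framework's job is to keep that error from degrading accuracy.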

Details

  • Paper: https://proceedings.mlr.press/v202/dorfman23a/dorfman23a.pdf
  • Blog post: https://octo.vmware.com/docofl-downlink-compression-for-cross-device-federated-learning/

Date

2023

Authors

Ron Dorfman, Shay Vargaftik, Yaniv Ben-Itzhak, Kfir Y. Levy

Research Areas

  • Machine Learning

Venue

ICML