Introduction

Building efficient and trustworthy machine learning systems.

Summary

Machine learning (ML) models are among the most successful artificial intelligence technologies, with impact across a wide variety of practical applications. However, many concerns have been raised about the efficiency and the "magical" power of these models, such as deep neural networks or decision-tree-based ensemble methods. Our goal is to address these concerns. In our research group, we approach this goal from two perspectives. From the systems standpoint, we aim to develop ML solutions that are efficient in their use of compute and network resources. From the model-analysis standpoint, we investigate techniques for analyzing ML models: explaining their behavior, verifying their properties, and ensuring their secure deployment.

Details

Resource efficiency

ML systems must adhere to resource-efficiency constraints. For instance, federated learning requires low bandwidth; anomaly detection on edge devices requires fast classification and a low memory footprint; cloud systems require fast training to reduce costs. We study efficiency from both compute and network perspectives.
Compute.
The growth in available data and in the complexity of ML models results in excessive consumption of compute resources. It is therefore important to re-examine and better understand the trade-offs of an ML system from a compute-efficiency perspective.
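To make one such trade-off concrete, the sketch below shows post-training quantization of a weight matrix to 8-bit integers, which cuts memory fourfold at the cost of a small approximation error. The matrix and its dimensions are hypothetical, chosen only for illustration; this is a generic technique, not a description of our group's methods.

    import numpy as np

    rng = np.random.default_rng(0)
    w = rng.normal(size=(256, 256)).astype(np.float32)  # hypothetical layer weights

    scale = np.abs(w).max() / 127.0             # symmetric per-tensor scale
    w_q = np.round(w / scale).astype(np.int8)   # stored in 1/4 the memory of float32
    w_hat = w_q.astype(np.float32) * scale      # dequantized approximation

    print("memory ratio:", w_q.nbytes / w.nbytes)     # 0.25
    print("max abs error:", np.abs(w - w_hat).max())  # cost of the compression
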
Network.
Distributed and federated learning systems are key enablers of global, scalable, and fair learning procedures. However, these systems consume significant network resources, which can threaten their feasibility. It is therefore important to better understand the inherent trade-offs between ML performance and network efficiency in different distributed and federated learning systems. Such understanding can inform the design of better architectures and algorithms for distributed and federated learning.
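One well-known way to trade a little accuracy for much less bandwidth is to have each client send only the top-k largest-magnitude coordinates of its model update. The sketch below, with hypothetical dimensions and synthetic updates, illustrates the idea; it is an assumption-laden illustration, not our system.

    import numpy as np

    def sparsify(update, k):
        """Keep only the k largest-magnitude coordinates of a client update."""
        idx = np.argsort(np.abs(update))[-k:]
        return idx, update[idx]

    def aggregate(sparse_updates, dim):
        """Server-side average of sparsified client updates."""
        total = np.zeros(dim)
        for idx, vals in sparse_updates:
            total[idx] += vals
        return total / len(sparse_updates)

    rng = np.random.default_rng(1)
    dim, k = 10_000, 100   # each client sends only 1% of its coordinates
    clients = [rng.normal(size=dim) for _ in range(8)]  # synthetic updates
    avg = aggregate([sparsify(u, k) for u in clients], dim)
    print("bandwidth fraction per client:", k / dim)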

Analysis of ML-model behavior

We currently lack an understanding of the decision-making processes behind ML technology, which is troubling. A natural question, therefore, is whether we can trust the decisions that ML models make. There are several ways to address this problem.
Verification.
One approach is to define properties that we expect ML models to satisfy. Verifying that a model fulfills these properties can reassure users that it behaves as expected.
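As a small illustration of the flavor of such verification, the sketch below uses interval bound propagation (one standard technique, not necessarily the one we use) to check a local robustness property of a tiny, randomly weighted ReLU network: no input within an L-infinity ball of radius eps can flip the predicted class.

    import numpy as np

    def interval_affine(lo, hi, W, b):
        """Propagate an input box [lo, hi] through x -> W @ x + b."""
        W_pos, W_neg = np.maximum(W, 0), np.minimum(W, 0)
        return W_pos @ lo + W_neg @ hi + b, W_pos @ hi + W_neg @ lo + b

    rng = np.random.default_rng(2)
    W1, b1 = rng.normal(size=(8, 4)), rng.normal(size=8)  # toy 2-layer network
    W2, b2 = rng.normal(size=(2, 8)), rng.normal(size=2)

    x, eps = rng.normal(size=4), 0.05   # input point and perturbation radius
    lo, hi = x - eps, x + eps
    lo, hi = interval_affine(lo, hi, W1, b1)
    lo, hi = np.maximum(lo, 0), np.maximum(hi, 0)   # ReLU is monotone
    lo, hi = interval_affine(lo, hi, W2, b2)

    # Property holds if class 0's lower bound beats class 1's upper bound.
    print("verified" if lo[0] > hi[1] else "inconclusive")
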
Explainability.
Another approach is to interpret the decision-making process of ML models. Namely, users can require that a model's decisions be accompanied by explanations that help them understand how the model reached those decisions.
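The simplest instance of this idea is a linear model, where each feature's additive contribution to a decision is itself an explanation. The sketch below, with made-up weights and feature names, shows the style of output a user might receive; richer models require correspondingly richer explanation techniques.

    import numpy as np

    w = np.array([1.5, -2.0, 0.3])        # hypothetical learned weights
    b = -0.1
    x = np.array([0.8, 0.2, 1.0])         # one input instance
    names = ["age", "dose", "weight"]     # hypothetical feature names

    contrib = w * x                       # additive per-feature contributions
    decision = contrib.sum() + b > 0
    order = np.argsort(-np.abs(contrib))  # most influential features first

    print("decision:", decision)
    for i in order:
        print(f"  {names[i]}: {contrib[i]:+.2f}")
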
Security.
A different approach is to explore the attack surface of ML models, both at training and at deployment time, and the impact of such attacks on the underlying systems. These attacks can then inform how to train and deploy models in ways that enhance the robustness of both the models and the systems they run on.
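To give one concrete, standard example of a deployment-time attack (again a generic illustration, not our group's contribution): the fast gradient sign method perturbs an input in the direction that increases the model's loss, as sketched below for a logistic model with synthetic weights.

    import numpy as np

    def predict(w, b, x):
        """Probability of class 1 under a logistic model."""
        return 1.0 / (1.0 + np.exp(-(w @ x + b)))

    rng = np.random.default_rng(3)
    w, b = rng.normal(size=20), 0.0
    x, y = rng.normal(size=20), 1.0       # an input labeled as class 1

    # For logistic loss, the input gradient is (p - y) * w; stepping in its
    # sign direction (FGSM) pushes x toward misclassification.
    p = predict(w, b, x)
    x_adv = x + 0.3 * np.sign((p - y) * w)

    print("clean score:", predict(w, b, x))
    print("adversarial score:", predict(w, b, x_adv))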

Researchers

2022 Interns

External Researchers

  • Aarti Gupta (Princeton)
  • Alexey Ignatiev (Monash University)
  • Arie Gurfinkel (University of Waterloo)
  • Joao Marques-Silva (University of Toulouse)
  • Kuldeep Meel (National University of Singapore)
  • Martin Cooper (University of Toulouse)
  • Moshe Vardi (Rice University)
  • Thomas Gerspacher (University of Toulouse)
  • Toby Walsh (UNSW)