Building trustworthy machine learning systems.
Deep neural networks are among the most successful artificial intelligence technologies, making an impact in a variety of practical applications. However, many concerns have been raised about the 'magical' power of these networks. It is disturbing that we clearly lack an understanding of the decision-making process behind this technology. A natural question, therefore, is whether we can trust the decisions that neural networks make.
There are two closely related ways to address this problem. The first approach is to define properties that we expect a neural network to satisfy. Verifying whether the network fulfills these properties sheds light on the behavior of the function it represents, and verification guarantees can reassure the user that the network behaves as expected. The second approach is to better understand the decision-making process of neural networks. Namely, the user can require that every decision a neural network makes be accompanied by an explanation. Such explanations help the user understand the decision-making process of the network function.
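As a concrete illustration of the first approach, the sketch below uses interval bound propagation, a standard incomplete verification technique, to certify a local robustness property of a tiny hand-written ReLU network: if the property is certified, no input within an epsilon-ball of the given point can change the predicted class. The network weights, the input point, and the perturbation radius are all illustrative assumptions, not taken from any particular system.

```python
# Interval bound propagation (IBP): push an input box [lo, hi] through
# a small ReLU network and check a robustness property on the output bounds.
# All weights and parameters below are made up for illustration.

def affine_bounds(W, b, lo, hi):
    """Bounds of W @ x + b when each x[j] lies in [lo[j], hi[j]]."""
    new_lo, new_hi = [], []
    for row, bias in zip(W, b):
        l = bias + sum(w * (lo[j] if w >= 0 else hi[j]) for j, w in enumerate(row))
        u = bias + sum(w * (hi[j] if w >= 0 else lo[j]) for j, w in enumerate(row))
        new_lo.append(l)
        new_hi.append(u)
    return new_lo, new_hi

def relu_bounds(lo, hi):
    """ReLU is monotone, so it maps interval endpoints to endpoints."""
    return [max(0.0, l) for l in lo], [max(0.0, u) for u in hi]

# A 2-input, 2-hidden-unit, 2-class network (illustrative weights).
W1, b1 = [[1.0, -1.0], [0.5, 1.0]], [0.0, -0.5]
W2, b2 = [[1.0, -1.0], [-1.0, 1.0]], [0.0, 0.0]

def certify_robust(x, eps):
    """True if class 0 provably stays the winner for all inputs within eps of x."""
    lo = [v - eps for v in x]
    hi = [v + eps for v in x]
    lo, hi = affine_bounds(W1, b1, lo, hi)
    lo, hi = relu_bounds(lo, hi)
    lo, hi = affine_bounds(W2, b2, lo, hi)
    # Robust if the worst-case score of class 0 beats the best case of class 1.
    return lo[0] > hi[1]

print(certify_robust([2.0, 0.5], 0.1))   # small perturbation: certified
print(certify_robust([2.0, 0.5], 2.0))   # large perturbation: not certified
```

Because interval bounds are over-approximate, a `False` answer does not prove the property fails; complete verifiers based on SMT or MILP solvers close this gap at higher cost.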
- Aarti Gupta (Princeton)
- Alexey Ignatiev (Monash University)
- Arie Gurfinkel (University of Waterloo)
- Joao Marques-Silva (University of Toulouse)
- Kuldeep Meel (National University of Singapore)
- Martin Cooper (University of Toulouse)
- Moshe Vardi (Rice University)
- Thomas Gerspacher (University of Toulouse)
- Toby Walsh (UNSW)