Introduction

We take a formal verification approach to the analysis of RNNs.

Abstract

Recurrent Neural Networks (RNNs) are among the most successful neural network architectures for processing temporal sequences, e.g. in speech and text recognition. Recently, RNNs have also proven useful in cognitive neuroscience as models of decision-making: they can be trained to solve the same behavioural tasks performed by humans and other animals in decision-making experiments, allowing a direct comparison between networks and experimental subjects. Analysing an RNN is expected to be simpler than analysing neural activity, but in practice reasoning about an RNN's behaviour remains challenging. In this work, we take a formal verification approach to the analysis of RNNs and make two main contributions. First, we analyse the cognitive domain and formally define a set of useful properties for a popular experimental task. Second, we employ and adapt well-known verification techniques, namely polytope propagation, invariant detection, and counterexample-guided abstraction refinement, to perform verification efficiently in our focus domain. Our experiments show that our techniques can effectively solve classes of benchmarks that are challenging for state-of-the-art verification tools.
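To give a flavour of the kind of reachability reasoning the abstract refers to, here is a minimal illustrative sketch. The paper itself uses polytope propagation; this sketch shows the simpler, related idea of interval-bound propagation through one step of a vanilla RNN cell, h' = tanh(W x + U h + b). All weights, dimensions, and the two-step horizon below are hypothetical choices for illustration, not taken from the paper.

```python
import numpy as np

# Hypothetical toy weights for a 2-unit RNN with 3-dimensional inputs.
rng = np.random.default_rng(0)
W = rng.standard_normal((2, 3))   # input-to-hidden weights (hypothetical)
U = rng.standard_normal((2, 2))   # hidden-to-hidden weights (hypothetical)
b = np.zeros(2)                   # bias (hypothetical)

def interval_matvec(M, lo, hi):
    """Bound M @ x for all x in the box [lo, hi], via standard interval arithmetic."""
    Mp, Mn = np.maximum(M, 0.0), np.minimum(M, 0.0)
    return Mp @ lo + Mn @ hi, Mp @ hi + Mn @ lo

def rnn_step_bounds(x_lo, x_hi, h_lo, h_hi):
    """Propagate interval bounds through one RNN step h' = tanh(W x + U h + b)."""
    a_lo, a_hi = interval_matvec(W, x_lo, x_hi)
    c_lo, c_hi = interval_matvec(U, h_lo, h_hi)
    pre_lo, pre_hi = a_lo + c_lo + b, a_hi + c_hi + b
    # tanh is monotone, so applying it to the endpoints preserves sound bounds.
    return np.tanh(pre_lo), np.tanh(pre_hi)

# Propagate a small input box through two time steps from the zero state.
h_lo = h_hi = np.zeros(2)
x_lo, x_hi = np.full(3, -0.1), np.full(3, 0.1)
for _ in range(2):
    h_lo, h_hi = rnn_step_bounds(x_lo, x_hi, h_lo, h_hi)

# The resulting box over-approximates all reachable hidden states.
assert np.all(h_lo <= h_hi)
```

A property such as "the network's output stays below a decision threshold for all inputs in a given range" can then be checked against the over-approximating box; polytope propagation, as used in the paper, plays the same role with tighter (polytope-shaped) reachable sets.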

Date

May 2020

Authors

  • Hongce Zhang
  • Maxwell Shinn
  • Aarti Gupta
  • Arie Gurfinkel
  • Nham Le
  • Nina Narodytska

Type

Conference

Booktitle

The 24th European Conference on Artificial Intelligence (ECAI 2020).