Abstract
The successes of machine learning in recent years have triggered
a fast-growing range of applications.
In important settings, including safety-critical applications and
those where the transparency of decisions is paramount, accurate
predictions do not suffice; the machine learning model is also
expected to explain the predictions it makes, in a form
understandable by human decision makers.
Recent work has proposed explainable models based on decision
sets, which can be viewed as unordered sets of rules subject to
some form of rule non-overlap constraint.
This paper investigates existing solutions for computing decision
sets and identifies a number of drawbacks related to rule overlap,
the succinctness of explanations, the accuracy of the results
achieved, and the efficiency of the proposed approaches.
To address these drawbacks, the paper develops novel SAT-based
solutions for learning decision sets.
Experimental results on computing decision sets for representative
datasets demonstrate that SAT enables solutions that are not only
the most efficient, but that also offer stronger guarantees of
rule non-overlap.