Automated Dependency Plots


In practical applications of machine learning, it is necessary to look beyond standard metrics such as test accuracy in order to validate qualitative properties of a model: monotonicity with respect to a feature or combination of features, the absence of undesirable changes or oscillations in the response, and the absence of differences in outcomes (e.g., discrimination) for a protected class. Partial dependence plots (PDPs), including instance-specific PDPs (i.e., ICE plots), are widely used as a visual way to understand or validate a model. In particular, a PDP is an intuitive line graph that visualizes the model response as one feature is varied while all other features are held fixed. Yet current PDPs suffer from two main drawbacks: (1) a user must manually sort or select interesting plots, and (2) PDPs are usually limited to plots along a single feature. To address these drawbacks, we formalize a method for automatically selecting interesting PDPs, and we extend PDPs beyond single features to show the model response along arbitrary directions, for example in the raw feature space or in a latent space arising from a generative model. We demonstrate the usefulness of our proposed PDP generalization across multiple use cases and datasets, including selecting between two models and understanding out-of-sample behavior.
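As a minimal sketch of the idea described above, the snippet below computes a partial dependence curve along an arbitrary direction vector, where a one-hot direction recovers the standard single-feature PDP. The helper name `directional_pdp`, the toy linear model, and all parameter names are illustrative assumptions, not the paper's implementation.

```python
# Sketch: partial dependence along an arbitrary direction v, assuming a
# fitted model exposed as a predict() callable. Averaging over instances
# gives the PDP; per-instance curves (no mean) would give ICE plots.
import numpy as np

def directional_pdp(predict, X, v, t_grid):
    """Average model response as instances are shifted along direction v.

    predict: callable mapping an (n, d) array to (n,) predictions
    X:       (n, d) data matrix; components orthogonal to v are
             implicitly held fixed per instance
    v:       (d,) direction vector; a one-hot v recovers a standard PDP
    t_grid:  1-D array of displacements along v
    """
    v = np.asarray(v, dtype=float)
    return np.array([predict(X + t * v).mean() for t in t_grid])

# Toy linear model: the directional PDP should be linear in t with
# slope w @ v, which makes the behavior easy to check by eye.
w = np.array([2.0, -1.0, 0.5])
predict = lambda X: X @ w

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))
t_grid = np.linspace(-2.0, 2.0, 5)

# Direction along feature 0 only: slope of the curve equals w[0] = 2.
curve = directional_pdp(predict, X, v=[1.0, 0.0, 0.0], t_grid=t_grid)
```

For a nonlinear model, plotting `curve` against `t_grid` would reveal the monotonicity or oscillation properties discussed above.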

Uncertainty in Artificial Intelligence (UAI)