Explaining distribution shifts in histopathology images across hospitals.

Explainable AI

While modern AI performs well in closed-world contexts with little uncertainty, these systems may fail in uncertain, open-world contexts. For example, individual sensors of an AI system may be noisy or may fail intermittently, producing incorrect or missing values. Beyond uncertainty, the open-world nature of real deployments means that the operating environment can change in complex ways. Some changes are exogenous to the system, such as a shift in foreign policy or a world event, while others directly affect the system's inputs, such as a change in weather, location, or sensor network. Given these deficiencies in modern AI, practical systems still require human oversight. To alleviate these problems, a contextual AI system should be able to explain both its uncertainty and changes in its environment.

This project aims to fill these gaps in contextual AI by (1) providing self-explainable neural networks, (2) combining efficient input and parameter uncertainty methods into a unified uncertainty framework, and (3) developing a general method for explaining distribution shifts between environments. Specifically, we incorporate Shapley feature attribution values [Lundberg & Lee, 2017] as latent representations in deep models, thereby making Shapley explanations first-class citizens in the modeling paradigm [Wang et al., 2021]. We plan to combine the probabilistic circuit method for handling input uncertainty [Khosravi et al., 2019] with an efficient method for handling parameter uncertainty. We explain distribution shifts via transport maps between two distributions [Kulinski & Inouye, 2022a, 2022b]. We also plan to explore the connection of distribution alignment to invariance and causal discovery so that we can enable partially causal explanations. Access to causal structure is expected to unlock both counterfactual explanations and much simpler explanations.
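To give some intuition for the transport-map view of shift explanation, the sketch below fits an empirical optimal transport coupling between a source sample and a target sample and summarizes how much each feature moves under that coupling. This is a minimal illustrative sketch, not the method of Kulinski & Inouye (2022a, 2022b); the toy data, the equal-sample-size assumption, and the Hungarian-matching shortcut for the coupling are assumptions made purely for illustration.

```python
# Minimal sketch (illustration only): explain a distribution shift between a
# "source" and a "target" sample with an empirical optimal transport map,
# then summarize which features moved the most.
# Assumes equal-sized samples and uses the Hungarian algorithm on a squared
# Euclidean cost matrix to obtain a one-to-one coupling.
import numpy as np
from scipy.optimize import linear_sum_assignment

rng = np.random.default_rng(0)

# Toy data: the target is mean-shifted in feature 0 and rescaled in feature 2.
n, d = 200, 3
source = rng.normal(size=(n, d))
target = rng.normal(size=(n, d))
target[:, 0] += 2.0        # mean shift in feature 0
target[:, 2] *= 0.3        # variance shift in feature 2

# Pairwise squared Euclidean costs and the optimal one-to-one matching.
cost = ((source[:, None, :] - target[None, :, :]) ** 2).sum(axis=-1)
rows, cols = linear_sum_assignment(cost)

# The matching defines an empirical transport map T(x_i) = y_{pi(i)};
# its per-feature displacement serves as a simple shift explanation.
displacement = target[cols] - source[rows]
mean_move = displacement.mean(axis=0)              # signed movement per feature
mean_abs_move = np.abs(displacement).mean(axis=0)  # movement magnitude per feature

for j in range(d):
    print(f"feature {j}: mean shift {mean_move[j]:+.2f}, "
          f"mean |shift| {mean_abs_move[j]:.2f}")
```

Running this prints a large signed movement for feature 0 and a noticeable movement magnitude for feature 2, i.e., the transport map localizes which features account for the shift.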

References

Khosravi, P., Choi, Y., Liang, Y., Vergari, A., & Van den Broeck, G. (2019). On tractable computation of expected predictions. In Neural Information Processing Systems (NeurIPS), 32.

Kulinski, S., & Inouye, D. I. (2022a). Towards Explaining Distribution Shifts. arXiv preprint arXiv:2210.10275.

Kulinski, S., & Inouye, D. I. (2022b). Towards Explaining Image-Based Distribution Shifts. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops.

Lundberg, S. M., & Lee, S. I. (2017). A unified approach to interpreting model predictions. In Neural Information Processing Systems (NeurIPS), 30.

Wang, R., Wang, X., & Inouye, D. (2021). Shapley Explanation Networks. In International Conference on Learning Representations (ICLR).

David I. Inouye
Assistant Professor

My research interests include distribution alignment, localized learning, and explainable AI.
