Explaining distribution shifts in histopathology images across hospitals.

Explainable AI

While modern AI performs well in certain, closed-world contexts, these systems may fail in uncertain, open-world contexts. For example, individual sensors of an AI system may be noisy or intermittently fail, causing incorrect or missing values. Beyond uncertainty, the open-world nature of real-world contexts means that the operating environment changes in complex ways. Some changes are exogenous to the system, such as a shift in foreign policy or a world event, while others directly affect the system's inputs, such as a change in weather, location, or sensor network. Given these deficiencies in modern AI, practical systems require human oversight. To alleviate these problems, a contextual AI system should be able to explain both its uncertainty and changes in its environment.

This research area aims to fill these gaps in contextual AI by (1) providing self-explainable neural networks, (2) combining efficient input and parameter uncertainty methods into a unified uncertainty framework, and (3) developing a general method for explaining distribution shifts between environments. Specifically, we incorporate Shapley feature attribution values [Lundberg & Lee, 2017] as latent representations in deep models, thereby making Shapley explanations first-class citizens in the modeling paradigm [Wang et al., 2021]. We study the problem of estimating uncertainty due to missing values [Khosravi et al., 2019]. We explain distribution shifts via transport maps between the source and target distributions [Kulinski & Inouye, 2022a, 2022b]. We have also explored the connection between distribution matching, invariance, and causal inference queries so that we may enable causally inspired explanations via domain counterfactuals [Zhou et al., 2023]. Access to causal structure is expected to unlock both counterfactual explanations and simpler explanations.
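To make the transport-map idea concrete, here is a minimal NumPy sketch that explains a shift between two sample sets as a translation restricted to a few features, in the spirit of the sparse, interpretable transport maps studied in Kulinski & Inouye (2022a, 2022b). The helper `k_sparse_mean_shift`, the synthetic data, and the choice of a simple mean-shift map are illustrative assumptions for this page, not the authors' actual implementation.

```python
import numpy as np

def k_sparse_mean_shift(source, target, k):
    """Explain the shift from `source` to `target` samples as a translation
    restricted to the k most-shifted features (a toy transport map)."""
    delta = target.mean(axis=0) - source.mean(axis=0)  # full mean shift
    top_k = np.argsort(np.abs(delta))[-k:]             # indices of k largest shifts
    sparse_delta = np.zeros_like(delta)
    sparse_delta[top_k] = delta[top_k]
    return sparse_delta  # only k features are used to "explain" the shift

# Synthetic example (illustrative assumption): only dims 2 and 5 actually shift.
rng = np.random.default_rng(0)
source = rng.normal(size=(500, 10))
true_shift = np.zeros(10)
true_shift[2], true_shift[5] = 2.0, -1.5
target = rng.normal(size=(500, 10)) + true_shift

print(k_sparse_mean_shift(source, target, k=2))  # nonzero only near dims 2 and 5
```

Restricting the map to k features trades fidelity for interpretability: a domain expert can inspect the few features that move rather than a full high-dimensional coupling between the two distributions.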

References

Khosravi, P., Choi, Y., Liang, Y., Vergari, A., & Van den Broeck, G. (2019). On tractable computation of expected predictions. In Neural Information Processing Systems (NeurIPS), 32.

Kulinski, S., & Inouye, D. I. (2022a). Towards Explaining Image-Based Distribution Shifts. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops.

Kulinski, S., & Inouye, D. I. (2022b). Towards Explaining Distribution Shifts. arXiv preprint arXiv:2210.10275.

Lundberg, S. M., & Lee, S. I. (2017). A unified approach to interpreting model predictions. In Neural Information Processing Systems (NeurIPS), 30.

Wang, R., Wang, X., & Inouye, D. I. (2021). Shapley Explanation Networks. In International Conference on Learning Representations (ICLR).

Zhou, Z., Bai, R., Kulinski, S., Kocaoglu, M., & Inouye, D. I. (2023). Towards Characterizing Domain Counterfactuals for Invertible Latent Causal Models. arXiv preprint arXiv:2306.11281.

David I. Inouye
Assistant Professor

I research trustworthy ML methods, including distribution alignment, localized learning, and explainable AI.

Publications

In high-stakes domains such as healthcare and hiring, the role of machine learning (ML) in decision-making raises significant fairness …

As Diffusion Models have shown promising performance, many efforts have been made to improve the controllability of Diffusion …

Answering counterfactual queries has important applications such as explainability, robustness, and fairness but is challenging when …

A distribution shift can have fundamental consequences such as signaling a change in the operating environment or significantly …

Spatial reasoning tasks in multi-agent environments such as event prediction, agent type identification, or missing data imputation are …

Distribution shift can have fundamental consequences such as signaling a change in the operating environment or significantly reducing …

Shapley values have become one of the most popular feature attribution explanation methods. However, most prior work has focused on …

While previous distribution shift detection approaches can identify if a shift has occurred, these approaches cannot localize which …

In practical applications of machine learning, it is necessary to look beyond standard metrics such as test accuracy in order to …

We consider objective evaluation measures of explanations of complex black-box machine learning models. We propose simple robust …