Explainable AI

Explaining distribution shifts in histopathology images across hospitals.

This project considers how explanations can aid ML robustness, either by directly identifying issues or by helping a human make better decisions. It relates to the causal ML project, since causality can provide a natural explanation in certain cases. Overall, we view explanations as a way to enhance the robustness of ML systems.
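To make the idea concrete, the sketch below shows one simple way an explanation can localize a distribution shift between two hospitals: train a domain classifier to distinguish samples from the two sources, then attribute its predictions to individual features. This is an illustrative example only, not the published method of any paper listed below; the synthetic feature vectors, the injected shift, and the use of permutation importance are all assumptions made for the sketch.

    # Illustrative sketch (assumed synthetic data, not the project's published method):
    # localize a distribution shift by training a domain classifier to distinguish
    # source-hospital from target-hospital samples, then ranking features by how
    # much they contribute to that separation.
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.inspection import permutation_importance
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(0)

    # Hypothetical per-image feature vectors from two hospitals;
    # feature 0 is shifted between domains, the rest are not.
    X_source = rng.normal(0.0, 1.0, size=(500, 5))
    X_target = rng.normal(0.0, 1.0, size=(500, 5))
    X_target[:, 0] += 2.0  # injected shift in feature 0

    X = np.vstack([X_source, X_target])
    y = np.r_[np.zeros(len(X_source)), np.ones(len(X_target))]  # 0 = source, 1 = target
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

    # A domain classifier that separates the hospitals indicates a shift exists;
    # accuracy near 0.5 would mean no detectable shift.
    clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr)
    print("domain-classifier accuracy:", clf.score(X_te, y_te))

    # Explanation step: features whose permutation most degrades the domain
    # classifier are the ones carrying the shift.
    imp = permutation_importance(clf, X_te, y_te, n_repeats=10, random_state=0)
    for i, score in enumerate(imp.importances_mean):
        print(f"feature {i}: shift importance {score:.3f}")

If the domain classifier performs no better than chance, there is little evidence of a shift; if it separates the two hospitals, the per-feature importances point to where the shift lives (e.g., stain- or scanner-related features in the histopathology setting).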

David I. Inouye
Assistant Professor

I research trustworthy ML methods that are robust to imperfect distributional and computational assumptions using explainability, causality, and collaborative learning.

Publications

Text-to-Image (T2I) Diffusion Models have achieved remarkable performance in generating high quality images. However, enabling precise …

As Diffusion Models have shown promising performance, substantial effort has been made to improve the controllability of Diffusion …

A distribution shift can have fundamental consequences such as signaling a change in the operating environment or significantly …

Spatial reasoning tasks in multi-agent environments such as event prediction, agent type identification, or missing data imputation are …

Distribution shift can have fundamental consequences such as signaling a change in the operating environment or significantly reducing …

Shapley values have become one of the most popular feature attribution explanation methods. However, most prior work has focused on …

While previous distribution shift detection approaches can identify if a shift has occurred, these approaches cannot localize which …

In practical applications of machine learning, it is necessary to look beyond standard metrics such as test accuracy in order to …

We consider objective evaluation measures of explanations of complex black-box machine learning models. We propose simple robust …