Illustration of factual vs counterfactual.

Causal ML

Causality provides a formal language for analyzing and understanding often subtle problems in machine learning; in particular, it can formalize reasonable notions of distribution shift. At its core, causality combines probability with the notion of intervention, and a distribution shift can be viewed as a kind of unknown intervention. This project explores how causality can inspire and help analyze core ML robustness problems.
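To make the "shift as intervention" idea concrete, here is a minimal sketch (the toy structural causal model and its mechanisms are illustrative assumptions, not from the project): intervening on one mechanism of an SCM produces a new environment whose data distribution has shifted, while the untouched mechanisms stay the same.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy structural causal model (assumed for illustration):
#   X := x_mech(U_x),   Y := 2*X + U_y
def sample(n, x_mech=lambda u: u):
    u_x = rng.normal(0.0, 1.0, n)
    u_y = rng.normal(0.0, 0.1, n)
    x = x_mech(u_x)      # mechanism for X (may be intervened on)
    y = 2.0 * x + u_y    # mechanism for Y (left unchanged)
    return x, y

# Original environment vs. a new environment produced by an
# (unknown, to the learner) intervention that shifts X's mechanism.
x0, y0 = sample(10_000)
x1, y1 = sample(10_000, x_mech=lambda u: u + 3.0)

print(x0.mean(), x1.mean())  # X's marginal shifts by roughly 3
```

A model trained on the first environment faces exactly this kind of shift at deployment; the causal view says which mechanisms changed and which remained invariant.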

One recent direction is domain counterfactuals: counterfactuals between two domains that answer, "What would this sample have looked like if it had been observed in the other domain or environment?" Our work has applied domain counterfactuals to distribution shift explanations, counterfactual fairness, and out-of-distribution robustness. We have also worked on estimating counterfactuals given only observational data from each domain by leveraging a sparse-intervention hypothesis.
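A domain counterfactual can be sketched with the standard abduction-action-prediction recipe (the invertible one-dimensional mechanisms below are assumed toy examples, not the project's estimator): recover the latent noise that explains the observed sample in domain A, then push that same noise through domain B's mechanism.

```python
# Toy invertible mechanisms for two domains that share the latent noise U
# but differ in a single parameter (a "sparse" intervention) -- assumed values.
f_A = lambda u: 2.0 * u + 1.0          # domain A: X = 2U + 1
f_A_inv = lambda x: (x - 1.0) / 2.0    # inverse of domain A's mechanism
f_B = lambda u: 2.0 * u - 3.0          # domain B: X = 2U - 3

def domain_counterfactual(x_A):
    u = f_A_inv(x_A)   # abduction: infer the noise behind the factual sample
    return f_B(u)      # prediction: re-generate under domain B's mechanism

print(domain_counterfactual(5.0))  # u = 2.0, so the counterfactual is 1.0
```

In practice the mechanisms are unknown and high-dimensional, which is why estimating such counterfactuals from domain data alone requires extra structure such as the sparse-intervention hypothesis.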

David I. Inouye
Assistant Professor

I research trustworthy ML methods that are robust to imperfect distributional and computational assumptions using explainability, causality, and collaborative learning.

Publications

Spurious correlations can cause model performance to degrade in new environments. Prior causality-inspired work aims to learn invariant …

In high-stakes domains such as healthcare and hiring, the role of machine learning (ML) in decision-making raises significant fairness …

Answering counterfactual queries has important applications such as explainability, robustness, and fairness but is challenging when …

A distribution shift can have fundamental consequences such as signaling a change in the operating environment or significantly …