Conditional distribution alignment

Robust ML

This project aims to improve robustness, including out-of-distribution (OOD) robustness (e.g., domain generalization and domain adaptation) and fairness, which can be viewed as robustness to changes in sensitive attributes. The core problem is distribution shift, i.e., when the test distribution differs from the training distribution. We leverage tools such as causality and distribution matching.
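As a concrete illustration of the distribution-matching idea, the sketch below computes a kernel maximum mean discrepancy (MMD) between representations from two domains and shows how aligning the domains drives it toward zero. The RBF kernel, bandwidth, and toy data are assumptions for illustration only, not the method of any specific publication below.

```python
# A minimal sketch of distribution matching via maximum mean discrepancy (MMD);
# illustrative only -- not the exact method from any paper listed on this page.
import numpy as np

def rbf_kernel(x, y, bandwidth=1.0):
    """RBF kernel matrix between rows of x and rows of y."""
    sq_dists = ((x[:, None, :] - y[None, :, :]) ** 2).sum(-1)
    return np.exp(-sq_dists / (2 * bandwidth ** 2))

def mmd2(x, y, bandwidth=1.0):
    """Biased estimate of the squared MMD between samples x and y."""
    k_xx = rbf_kernel(x, x, bandwidth).mean()
    k_yy = rbf_kernel(y, y, bandwidth).mean()
    k_xy = rbf_kernel(x, y, bandwidth).mean()
    return k_xx + k_yy - 2 * k_xy

# Example: representations from a training domain and a mean-shifted test domain.
rng = np.random.default_rng(0)
train_repr = rng.normal(0.0, 1.0, size=(200, 8))
test_repr = rng.normal(0.5, 1.0, size=(200, 8))  # shifted distribution
print(f"MMD^2 before alignment: {mmd2(train_repr, test_repr):.4f}")

# Aligning the shifted domain (here, simply re-centering it) shrinks the MMD,
# which is the quantity a distribution-matching penalty would minimize during
# representation learning.
aligned = test_repr - test_repr.mean(axis=0) + train_repr.mean(axis=0)
print(f"MMD^2 after alignment:  {mmd2(aligned, train_repr):.4f}")
```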

David I. Inouye
Assistant Professor

I research trustworthy ML methods that are robust to imperfect distributional and computational assumptions using explainability, causality, and collaborative learning.

Publications

Distribution matching (DM) is a versatile domain-invariant representation learning technique that has been applied to tasks such as …

Spurious correlations can cause model performance to degrade in new environments. Prior causality-inspired work aims to learn invariant …

There has been a growing excitement that implicit graph generative models could be used to design or discover new molecules for …

While prior federated learning (FL) methods mainly consider client heterogeneity, we focus on the Federated Domain Generalization (DG) …

Distribution matching can be used to learn invariant representations with applications in fairness and robustness. Most prior works …

A central theme in federated learning (FL) is the fact that client data distributions are often not independent and identically …

Unsupervised distribution alignment estimates a transformation that maps two or more source distributions to a shared aligned …

The unsupervised task of aligning two or more distributions in a shared latent space has many applications including fair …

While previous distribution shift detection approaches can identify if a shift has occurred, these approaches cannot localize which …