
From Invariant Representations to Invariant Data: Provable Robustness to Spurious Correlations via Noisy Counterfactual Matching

Spurious correlations can cause model performance to degrade in new environments. Prior causality-inspired work aims to learn invariant representations (e.g., IRM) but typically underperforms empirical risk minimization (ERM). Recent alternatives …
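
As a point of reference for the IRM-versus-ERM contrast drawn above (this is the standard formulation from Arjovsky et al., not notation from the paper itself), pooled ERM and the IRMv1 objective over training environments e can be sketched as

\min_{f} \sum_{e} R^e(f) \quad \text{(ERM)}, \qquad \min_{\Phi} \sum_{e} \Big[ R^e(\Phi) + \lambda \, \big\| \nabla_{w \mid w = 1.0} R^e(w \cdot \Phi) \big\|^2 \Big] \quad \text{(IRMv1)},

where R^e is the risk in environment e and the penalty asks a single fixed classifier w = 1.0 on top of the representation \Phi to be simultaneously optimal in every environment.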

Imagine for Me: Creative Conceptual Blending of Real Images and Text via Blended Attention

Blending visual and textual concepts into a new visual concept is a unique and powerful trait of human beings that can fuel creativity. However, in practice, cross-modal conceptual blending for humans is prone to cognitive biases, like design …

Towards Characterizing Domain Counterfactuals For Invertible Latent Causal Models

Answering counterfactual queries has important applications such as explainability, robustness, and fairness but is challenging when the causal variables are unobserved and the observations are non-linear mixtures of these latent variables, such as …
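
As an illustrative formulation of the setting named in the title (the notation below is mine, not the paper's), one can picture latent causal variables z generated by a structural causal model with exogenous noise u, and observations given by an invertible nonlinear mixture,

z = h(u), \qquad x = g(z), \quad g \ \text{invertible},

so that a counterfactual query holds u fixed, swaps in an intervened mechanism \tilde{h}, and asks for x^{\mathrm{CF}} = g(\tilde{h}(u)); the difficulty is that only x is observed while g and the latent mechanisms are unknown.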

Benchmarking Algorithms for Federated Domain Generalization

While prior federated learning (FL) methods mainly consider client heterogeneity, we focus on the Federated Domain Generalization (DG) task, which introduces train-test heterogeneity in the FL context. Existing evaluations in this field are limited …

Towards Explaining Distribution Shifts

A distribution shift can have fundamental consequences such as signaling a change in the operating environment or significantly reducing the accuracy of downstream models. Thus, understanding distribution shifts is critical for examining and …

Shapley Explanation Networks

Shapley values have become one of the most popular feature attribution explanation methods. However, most prior work has focused on post-hoc Shapley explanations, which can be computationally demanding (exponential time complexity) and preclude model …
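
For context on the exponential cost mentioned above (this is the standard game-theoretic definition, not the paper's notation), the Shapley value of feature i under value function v over feature set N is

\phi_i(v) = \sum_{S \subseteq N \setminus \{i\}} \frac{|S|!\,(|N| - |S| - 1)!}{|N|!} \big[ v(S \cup \{i\}) - v(S) \big],

a sum over all 2^{|N|-1} subsets of the remaining features, which is what makes exact post-hoc computation exponential in the number of features.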