Distribution Matching with Structural Regularization via Expressive Score-Based Priors

Abstract

Distribution matching (DM) is a versatile domain-invariant representation learning technique that has been applied to tasks such as fair classification, domain adaptation, and domain translation. Existing DM methods face distinct drawbacks: non-parametric methods struggle to scale, adversarial methods are prone to instability and mode collapse, and likelihood-based methods either impose unnecessary biases through fixed priors or incur the cost of learning complex prior distributions. We address a critical limitation: the absence of expressive yet learnable prior distributions that align with geometry-preserving regularization. Our key insight is that gradient-based DM training requires only the prior's score function, not its density, enabling us to model the prior via denoising score matching. This approach eliminates the biases introduced by fixed priors (common in VAEs) and avoids the computational overhead of learning full prior densities (as in normalizing flows). Compared to other diffusion-based priors (e.g., LSGM), our method demonstrates better stability and computational efficiency. Furthermore, experiments demonstrate superior performance across benchmarks, establishing a new paradigm for efficient and flexible distribution matching.
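To make the key insight concrete, the following is a minimal PyTorch sketch of a denoising-score-matching (DSM) prior. The network architecture, the `ScoreNet` and `dsm_loss` names, the toy latent dimension, and the single fixed noise level `sigma` are all illustrative assumptions rather than the paper's exact formulation (which may, for instance, use multiple noise levels); the sketch only shows why a score network can replace an explicit prior density.

```python
import torch
import torch.nn as nn

class ScoreNet(nn.Module):
    """Hypothetical score network: maps a (noised) latent z to an
    estimate of the prior score grad_z log p(z)."""
    def __init__(self, dim: int, hidden: int = 256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(dim, hidden), nn.SiLU(),
            nn.Linear(hidden, hidden), nn.SiLU(),
            nn.Linear(hidden, dim),
        )

    def forward(self, z: torch.Tensor) -> torch.Tensor:
        return self.net(z)

def dsm_loss(score_net: ScoreNet, z: torch.Tensor, sigma: float = 0.1) -> torch.Tensor:
    """Denoising score matching at a single noise level.

    The score of the Gaussian perturbation kernel
    q_sigma(z_tilde | z) = N(z_tilde; z, sigma^2 I), evaluated at
    z_tilde = z + sigma * eps, is (z - z_tilde) / sigma^2 = -eps / sigma,
    which serves as the regression target.
    """
    eps = torch.randn_like(z)
    z_tilde = z + sigma * eps
    target = -eps / sigma
    return ((score_net(z_tilde) - target) ** 2).sum(dim=1).mean()

# Usage: fit the score net on encoder latents, then reuse its output as a
# drop-in substitute for grad_z log p(z) in the DM regularizer; the
# prior's density itself is never evaluated.
latents = torch.randn(128, 16)   # stand-in for encoder outputs
score_net = ScoreNet(dim=16)
opt = torch.optim.Adam(score_net.parameters(), lr=1e-3)
loss = dsm_loss(score_net, latents)
loss.backward()
opt.step()
```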

Publication
International Conference on Machine Learning (ICML)