Monge, Bregman and Occam: Interpretable Optimal Transport in High-Dimensions with Feature-Sparse Maps
Authors: Marco Cuturi, Michal Klein, Pierre Ablin
Optimal transport (OT) theory focuses, among all maps T : ℝ^d → ℝ^d that can morph a probability measure onto another, on those that are the “thriftiest”, i.e. such that the averaged cost c(x, T(x)) between x and its image T(x) be as small as possible. Many computational approaches have been proposed to estimate such Monge maps when c is the squared-Euclidean distance, e.g., using entropic maps (Pooladian and Niles-Weed, 2021), or neural networks (Makkuva et al., 2020; Korotin et al., 2020). We propose a new model for transport maps, built on a family of translation-invariant costs c(x, y) := h(y - x), where h := ½‖·‖₂² + τ and τ is a regularizer. We propose a generalization of the entropic map suitable for h, and highlight a surprising link tying it with the Bregman centroids of the divergence D_h generated by h, and the proximal operator of τ. We show that choosing a sparsity-inducing norm for τ results in maps that apply Occam’s razor to transport, in the sense that the displacement vectors Δ(x) := T(x) - x they induce are sparse, with a sparsity pattern that varies depending on x. We showcase the ability of our method to estimate meaningful OT maps for high-dimensional single-cell transcription data, in the 34000-d space of gene counts for cells, without using dimensionality reduction, thus retaining the ability to interpret all displacements at the gene level.
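When the regularizer τ is a weighted ℓ1 norm, its proximal operator is coordinate-wise soft-thresholding, which is what zeroes out small displacement coordinates. A minimal NumPy sketch (function name and numbers are illustrative, not taken from the paper's code):

```python
import numpy as np

def soft_threshold(z, tau):
    """Proximal operator of tau * ||.||_1: shrinks each coordinate toward 0,
    zeroing out every entry whose magnitude is at most tau."""
    return np.sign(z) * np.maximum(np.abs(z) - tau, 0.0)

# A dense displacement vector: after soft-thresholding, only coordinates
# whose magnitude exceeds tau survive, so the displacement is feature-sparse.
delta = np.array([0.05, -1.2, 0.3, 2.0, -0.01])
sparse_delta = soft_threshold(delta, 0.5)
print(sparse_delta)  # -> [ 0.  -0.7  0.   1.5  0. ]
```

Applied to a transport map's displacement vectors, this kind of shrinkage is what produces sparsity patterns that can vary from one input x to another.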
Flow Matching with Semidiscrete Couplings
March 6, 2026 · research areas: Computer Vision, Methods and Algorithms · conference: ICLR
Flow models parameterized as time-dependent velocity fields can generate data from noise by integrating an ODE. These models are often trained using flow matching, i.e. by sampling random pairs of noise and target points (x0, x1) and ensuring that the velocity field is aligned, on average, with the difference x1 - x0 when evaluated along a segment linking x0 to x1. While these pairs are sampled…
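The training criterion described above can be sketched in a few lines of NumPy; this is a generic flow-matching loss under the standard straight-line (linear interpolation) formulation, not the semidiscrete-coupling variant the paper itself proposes:

```python
import numpy as np

rng = np.random.default_rng(0)

def flow_matching_loss(v, x0, x1, t):
    """MSE between the model velocity v(x_t, t) and the straight-line
    target velocity x1 - x0, evaluated at x_t = (1 - t) x0 + t x1."""
    xt = (1.0 - t[:, None]) * x0 + t[:, None] * x1
    target = x1 - x0
    return np.mean(np.sum((v(xt, t) - target) ** 2, axis=-1))

# Toy check: an "oracle" that reuses the sampled pair to output exactly
# the target displacement field incurs zero loss.
x0 = rng.normal(size=(8, 2))   # noise samples
x1 = rng.normal(size=(8, 2))   # target (data) samples
t = rng.uniform(size=8)        # random times along each segment
oracle = lambda xt, t: x1 - x0
print(flow_matching_loss(oracle, x0, x1, t))  # -> 0.0
```

In practice v would be a neural network and the loss would be minimized over minibatches of (x0, x1, t) triples; how those pairs are sampled is precisely what the coupling choice changes.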
Learning Elastic Costs to Shape Monge Displacements
December 3, 2024 · research area: Methods and Algorithms · conference: NeurIPS
Given a source and a target probability measure, both supported on ℝ^d, the Monge problem aims for the most efficient way to map one distribution to the other. This efficiency is quantified by defining a cost function between source and target data. Such a cost is often set by default in the machine learning literature to the squared-Euclidean distance, c(x, y) = ½‖x - y‖₂². The benefits of using elastic costs, defined…
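An elastic cost, as in the abstracts above, augments the squared-Euclidean term with a regularizer on the displacement y - x. A minimal sketch with an ℓ1 regularizer (the weight tau = 0.5 is an illustrative choice, not a value from the paper):

```python
import numpy as np

def elastic_cost(x, y, tau=0.5):
    """Translation-invariant elastic cost h(y - x): squared-Euclidean term
    plus a sparsity-inducing l1 penalty on the displacement."""
    z = y - x
    return 0.5 * np.sum(z ** 2) + tau * np.sum(np.abs(z))

x = np.zeros(3)
y = np.array([1.0, 0.0, -2.0])
print(elastic_cost(x, y))  # 0.5 * (1 + 4) + 0.5 * (1 + 2) -> 4.0
```

Because the cost depends only on y - x, it is translation invariant; the choice of regularizer is what shapes the geometry of the resulting Monge displacements.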