
In 1991, Brenier proved a theorem that generalizes the polar decomposition of square matrices (factored as PSD $\times$ unitary) to any vector field $F:\mathbb{R}^d\rightarrow\mathbb{R}^d$. The theorem, known as the polar factorization theorem, states that any field $F$ can be recovered as the composition of the gradient of a convex function $u$ with a measure-preserving map $M$, namely $F=\nabla u \circ M$. We propose a practical implementation of this far-reaching theoretical result, and explore possible uses within machine learning. The theorem is closely related to optimal transport (OT) theory, and we borrow from recent advances in the field of neural optimal transport to parameterize the potential $u$ as an input convex neural network. The map $M$ can be either evaluated pointwise using $u^*$, the convex conjugate of $u$, through the identity $M=\nabla u^* \circ F$, or learned as an auxiliary network. Because $M$ is, in general, not injective, we consider the additional task of estimating the ill-posed inverse map that can approximate the pre-image measure $M^{-1}$ using a stochastic generator. We illustrate possible applications of Brenier's polar factorization to non-convex optimization problems, as well as sampling of densities that are not log-concave.
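
To make the conjugate identity $M=\nabla u^* \circ F$ concrete, here is a minimal JAX sketch. It is not the paper's implementation: the toy potential `u` below (a quadratic plus a log-sum-exp of affine maps, hence strongly convex) stands in for the input convex neural network, and $\nabla u^*(\mathbf{y})=\arg\max_{\mathbf{x}}\,\langle \mathbf{x},\mathbf{y}\rangle - u(\mathbf{x})$ is evaluated by plain gradient ascent; all names and hyperparameters are illustrative.

```python
import jax
import jax.numpy as jnp

# Toy strongly convex potential (hypothetical; the paper uses an input
# convex neural network instead): quadratic + log-sum-exp of affine maps,
# which is convex in x by construction.
def u(params, x):
    A, b = params  # A: (k, d), b: (k,)
    return 0.5 * jnp.dot(x, x) + jax.nn.logsumexp(A @ x + b)

grad_u = jax.grad(u, argnums=1)  # x -> grad u(x)

def grad_u_star(params, y, steps=300, lr=0.1):
    """Evaluate grad u*(y) = argmax_x <x, y> - u(x) by gradient ascent.
    The objective is strongly concave here, so plain ascent converges;
    steps and lr are illustrative hyperparameters."""
    g = jax.grad(lambda x: jnp.dot(x, y) - u(params, x))
    x = jnp.zeros_like(y)
    for _ in range(steps):
        x = x + lr * g(x)
    return x

def M(params, F, x):
    """Measure-preserving factor, evaluated pointwise: M = grad u* o F."""
    return grad_u_star(params, F(x))

# Sanity check of the conjugate identity grad u(grad u*(y)) = y, which
# gives back F = grad u o M pointwise.
key = jax.random.PRNGKey(0)
params = (0.5 * jax.random.normal(key, (8, 2)), jnp.zeros(8))
F = lambda x: jnp.array([x[1], -x[0]])  # arbitrary toy vector field
x0 = jnp.array([0.3, -1.2])
print(jnp.allclose(grad_u(params, M(params, F, x0)), F(x0), atol=1e-3))
```

The final check exploits the fact that $\nabla u \circ \nabla u^* = \mathrm{id}$ for a strongly convex differentiable $u$, so $\nabla u \circ M$ reproduces $F$ pointwise for any choice of potential; the role of learning is to pick the $u$ that additionally makes $M$ measure-preserving.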

Related readings and updates.

Monge, Bregman and Occam: Interpretable Optimal Transport in High-Dimensions with Feature-Sparse Maps

Optimal transport (OT) theory focuses, among all maps $T:\mathbb{R}^d\rightarrow\mathbb{R}^d$ that can morph a probability measure onto another, on those that are the "thriftiest", i.e. such that the averaged cost $c(\mathbf{x}, T(\mathbf{x}))$ between $\mathbf{x}$ and its image $T(\mathbf{x})$ be as small as possible. Many computational approaches have been proposed to estimate such Monge maps when $c$ is the…
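
For reference, the Monge problem alluded to in this teaser has the standard form (our transcription, not a quote from the paper): among all maps pushing a source measure $\mu$ onto a target $\nu$, find one minimizing the averaged cost,

$$\min_{T\,:\,T_\sharp\mu=\nu}\ \int_{\mathbb{R}^d} c(\mathbf{x}, T(\mathbf{x}))\,\mathrm{d}\mu(\mathbf{x}),$$

where $T_\sharp\mu=\nu$ denotes the push-forward constraint.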

The Monge Gap: A Regularizer to Learn All Transport Maps

Optimal transport (OT) theory has been used in machine learning to study and characterize maps that can efficiently push forward a probability measure onto another. Recent works have drawn inspiration from Brenier's theorem, which states that when the ground cost is the squared-Euclidean distance, the "best" map to morph a continuous measure in $\mathcal{P}(\mathbb{R}^d)$ into another must be the gradient of a convex function. To…
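
Written out (a standard statement of the theorem, not quoted from the abstract): when $c(\mathbf{x},\mathbf{y})=\tfrac{1}{2}\lVert\mathbf{x}-\mathbf{y}\rVert^2$ and the source measure has a density, the optimal map is the gradient of a convex potential,

$$T^\star=\nabla u,\qquad u:\mathbb{R}^d\rightarrow\mathbb{R}\ \text{convex}.$$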