ENOT: Expectile Regularization for Fast and Accurate Training of Neural Optimal Transport

N. Buzun, M. Bobrin, D. V. Dylov - arXiv preprint arXiv:2403.03777, 2024 - arxiv.org
We present a new extension of the Neural Optimal Transport (NOT) training procedure that accurately and efficiently estimates the optimal transportation plan via a specific regularization on the conjugate potentials. The main bottleneck of existing NOT solvers is the procedure of finding a near-exact approximation of the conjugate operator (i.e., the c-transform), which is done either by optimizing over maximin objectives or by computationally intensive fine-tuning of an initial approximate prediction. We resolve both issues by proposing a new, theoretically justified loss in the form of an expectile regularization that enforces binding conditions on the learned dual potentials. Such regularization provides an upper-bound estimate over the distribution of possible conjugate potentials and makes learning stable, eliminating the need for additional extensive fine-tuning. We formally justify the efficiency of our method, called Expectile-Regularized Neural Optimal Transport (ENOT). ENOT outperforms previous state-of-the-art approaches on the Wasserstein-2 benchmark tasks by a large margin (up to a 3-fold improvement in quality and up to a 10-fold improvement in runtime).
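To make the central idea concrete, below is a minimal sketch of an expectile loss in Python. The function name, signature, and the default tau value are illustrative assumptions rather than the paper's exact objective; the point is only to show the asymmetric weighting that yields an upper-bound (rather than mean) estimate.

```python
import numpy as np

def expectile_loss(residual: np.ndarray, tau: float = 0.9) -> float:
    """Asymmetric squared loss whose minimizer is the tau-expectile.

    For tau > 0.5, positive residuals are weighted more heavily than
    negative ones, so the fitted quantity is pushed toward an upper
    bound on the targets; tau = 0.5 recovers the ordinary squared loss.
    """
    weight = np.where(residual > 0.0, tau, 1.0 - tau)  # asymmetric weights
    return float(np.mean(weight * residual ** 2))
```

In an ENOT-style solver, the residual would, roughly speaking, measure the violation of the dual constraint on sampled pairs (e.g., differences of the form c(x, y) - f(x) - g(y)), so that minimizing the expectile loss biases the learned potential toward an upper bound on the c-transform; the precise construction and its theoretical justification are given in the paper.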