
New First-Order Algorithms for Stochastic Variational Inequalities

Published: 01 January 2022

Abstract

In this paper, we propose two new solution schemes to solve the stochastic strongly monotone variational inequality (VI) problems: the stochastic extra-point solution scheme and the stochastic extra-momentum solution scheme. The first one is a general scheme based on updating the iterative sequence and an auxiliary extra-point sequence. In the case of a deterministic VI model, this approach includes several state-of-the-art first-order methods as its special cases. The second scheme combines two momentum-based directions: the so-called heavy-ball direction and the optimism direction, where only one projection per iteration is required in its updating process. We show that if the variance of the stochastic oracle is appropriately controlled, then both schemes can be made to achieve optimal iteration complexity of $\mathcal{O}\left(\kappa\ln\left(\frac{1}{\epsilon}\right)\right)$ to reach an $\epsilon$-solution for a strongly monotone VI problem with condition number $\kappa$. As a specific application to stochastic VI, we demonstrate how to incorporate a zeroth-order approach for solving stochastic minimax saddle-point problems in our schemes, where only noisy and biased samples of the objective can be obtained, with a total sample complexity of $\mathcal{O}\left(\frac{\kappa}{\epsilon}\right)$.
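The single-projection structure of the extra-momentum scheme can be illustrated with a small sketch. This is an illustrative toy, not the paper's exact method: the step sizes `alpha`, `beta`, `gamma`, the test operator, and the projection set are all assumptions chosen for demonstration. Each iteration combines a heavy-ball term, `gamma * (x - x_prev)`, with an optimistic correction, `beta * (g - g_prev)`, and performs exactly one projection.

```python
import numpy as np

def project_ball(x, radius=10.0):
    # Euclidean projection onto a ball: the single projection per iteration.
    norm = np.linalg.norm(x)
    return x if norm <= radius else x * (radius / norm)

def stochastic_extra_momentum(F, x0, alpha, beta, gamma, sigma, iters, rng):
    # Iterate x_{k+1} = Proj( x_k - alpha*g_k - beta*(g_k - g_{k-1})
    #                          + gamma*(x_k - x_{k-1}) ),
    # where g_k is a (possibly noisy) sample of the operator at x_k.
    x_prev, x = x0.copy(), x0.copy()
    g_prev = F(x_prev) + sigma * rng.standard_normal(x0.shape)
    for _ in range(iters):
        g = F(x) + sigma * rng.standard_normal(x0.shape)
        x_next = project_ball(
            x - alpha * g - beta * (g - g_prev) + gamma * (x - x_prev)
        )
        x_prev, x, g_prev = x, x_next, g
    return x

# Toy strongly monotone operator F(x) = A x + b with A positive definite;
# the VI solution solves A x* = -b (interior to the ball).
rng = np.random.default_rng(0)
A = np.array([[2.0, 0.5], [0.5, 1.5]])
b = np.array([1.0, -1.0])
F = lambda x: A @ x + b
x_star = np.linalg.solve(A, -b)
x_hat = stochastic_extra_momentum(F, np.zeros(2), alpha=0.2, beta=0.1,
                                  gamma=0.05, sigma=0.0, iters=500, rng=rng)
print(np.linalg.norm(x_hat - x_star))  # small residual in the noiseless case
```

In the stochastic setting (`sigma > 0`), the abstract's complexity guarantee requires the oracle variance to be appropriately controlled, e.g. via increasing mini-batch sizes; this sketch leaves that schedule out.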


Cited By

  • (2024) General Procedure to Provide High-Probability Guarantees for Stochastic Saddle Point Problems, Journal of Scientific Computing, 100(1), https://doi.org/10.1007/s10915-024-02567-5, 28 May 2024.
  • (2024) Dynamic stochastic projection method for multistage stochastic variational inequalities, Computational Optimization and Applications, 89(2), pp. 485--516, https://doi.org/10.1007/s10589-024-00594-4, 1 November 2024.


Published In

SIAM Journal on Optimization, Volume 32, Issue 4
DOI: 10.1137/sjope8.32.4

Publisher

Society for Industrial and Applied Mathematics

United States


Author Tags

  1. variational inequality
  2. minimax saddle-point
  3. stochastic first-order method
  4. zeroth-order method

MSC Codes

  1. 90C33
  2. 65K15
  3. 90C47
  4. 90C56
  5. 90C15

Qualifiers

  • Research-article


