Abstract
In this paper, we study asymptotic behaviors of semidefinite programming with a covariance perturbation. We obtain moderate deviations, Cramér-type moderate deviations and a law of the iterated logarithm for the estimates of the optimal value and optimal solutions when the covariance matrix is estimated by its sample covariance. As an example, we also apply the main results to Minimum Trace Factor Analysis.
References
Bentler, P.M.: Lower-bound method for the dimension-free measurement of internal consistency. Soc. Sci. Res. 1, 343–357 (1972)
Bonnans, J.F., Shapiro, A.: Perturbation Analysis of Optimization Problems. Springer, New York (2000)
Chiralaksanakul, A., Morton, D.P.: Assessing policy quality in multi-stage stochastic programming. Stochastic Programming E-Print Series, Humboldt-Universität zu Berlin (2004)
Dembo, A., Zeitouni, O.: Large Deviations Techniques and Applications. Springer, Berlin (1998)
Eichhorn, A., Römisch, W.: Stochastic integer programming: limit theorems and confidence intervals. Math. Oper. Res. 32, 118–135 (2007)
Gao, F.Q., Zhao, X.Q.: Delta method in large deviations and moderate deviations for estimators. Ann. Stat. 39, 1211–1240 (2011)
Guigues, V.: Multistep stochastic mirror descent for risk-averse convex stochastic programs based on extended polyhedral risk measures. Math. Program. 163, 169–212 (2017)
Guigues, V., Juditsky, A., Nemirovski, A.: Non-asymptotic confidence bounds for the optimal value of a stochastic program. Optim. Methods Softw. 32, 1033–1058 (2017)
Guigues, V., Krätschmer, V., Shapiro, A.: Statistical inference and hypotheses testing of risk averse stochastic programs. SIAM J. Optim. 28, 1337–1366 (2018)
Helmke, U., Moore, J.B.: Optimization and Dynamical Systems, 2nd edn. Springer, London (1996)
King, A.J., Rockafellar, R.T.: Asymptotic theory for solutions in statistical estimation and stochastic programming. Math. Oper. Res. 18, 148–162 (1993)
Kleywegt, A.J., Shapiro, A., Homem-de-Mello, T.: The sample average approximation method for stochastic discrete optimization. SIAM J. Optim. 12, 479–502 (2001)
Lan, G., Nemirovski, A., Shapiro, A.: Validation analysis of mirror descent stochastic approximation method. Math. Program. 134, 425–458 (2012)
Petrov, V.V.: Sums of Independent Random Variables. Springer, New York (1975)
Pflug, G.: Asymptotic stochastic programs. Math. Oper. Res. 20, 769–789 (1995)
Pflug, G.: Stochastic programs and statistical data. Ann. Oper. Res. 85, 59–78 (1999)
Pflug, G.: Stochastic optimization and statistical inference. In: Ruszczyński, A., Shapiro, A. (eds.) Stochastic Programming: Handbooks in Operations Research and Management Science, vol. 10. Elsevier, New York (2003)
Römisch, W.: Delta method, infinite dimensional. In: Kotz, S., Read, C.B., Balakrishnan, N., Vidakovic, B. (eds.) Encyclopedia of Statistical Sciences, vol. 16, 2nd edn. Wiley, New York (2006)
Scheinberg, K.: Parametric linear semidefinite programming. In: Wolkowicz, H., Saigal, R., Vandenberghe, L. (eds.) Handbook of Semidefinite Programming, Chapter 4, pp. 93–110. Kluwer Academic Publishers, Boston (2000)
Shapiro, A.: Rank reducibility of a symmetric matrix and sampling theory of minimum trace factor analysis. Psychometrika 47, 187–199 (1982)
Shapiro, A.: Weighted minimum trace factor analysis. Psychometrika 47, 243–264 (1982)
Shapiro, A.: Asymptotic properties of statistical estimators in stochastic programming. Ann. Stat. 17, 841–858 (1989)
Shapiro, A.: Asymptotic analysis of stochastic programs. Ann. Oper. Res. 30, 169–186 (1991)
Shapiro, A.: First and second order analysis of nonlinear semidefinite programs. Math. Program. 77, 301–320 (1997)
Shapiro, A.: Duality, optimality conditions, and perturbation analysis. In: Wolkowicz, H., Saigal, R., Vandenberghe, L. (eds.) Handbook of Semidefinite Programming, Chapter 4, pp. 67–92. Kluwer Academic Publishers, Boston (2000)
Shapiro, A.: Statistical inference of semidefinite programming. Math. Program. (2018). https://doi.org/10.1007/s10107-018-1250-z
Shapiro, A., Ten Berge, J.M.F.: Statistical inference of minimum rank factor analysis. Psychometrika 67, 79–94 (2002)
Shapiro, A., Dentcheva, D., Ruszczyński, A.: Lectures on Stochastic Programming: Modeling and Theory, 2nd edn. SIAM, Philadelphia (2014)
Shapiro, A., Homem-de-Mello, T.: On the rate of convergence of optimal solutions of Monte Carlo approximations of stochastic programs. SIAM J. Optim. 11, 70–86 (2000)
Van der Vaart, A.W., Wellner, J.A.: Weak Convergence and Empirical Processes: With Applications to Statistics. Springer, New York (1996)
Yurinsky, V.: Sums and Gaussian Vectors. Lecture Notes in Mathematics, vol. 1617. Springer, New York (1995)
Acknowledgements
The authors are very grateful to two anonymous referees for their helpful comments and suggestions. M. J. Gao: supported by the National Natural Science Foundation of China (NSFC) Grants 11801184 and 11771157. K. F. C. Yiu: supported by GRF Grant PolyU 152200/14E and PolyU Grant G-YBVQ.
Appendix: Differentiability properties of the optimal value and an optimal solution
For convenience, in this Appendix we recall some results on differentiability properties of the optimal value \(\vartheta (\Sigma )\) and an optimal solution \({\bar{x}}(\Sigma )\) of the following problem (A.1), considered as functions of the matrix \(\Sigma \in {\mathbb {S}}^p\) (see [26]):
which can be viewed as an SDP problem parameterized by the matrix \(\Sigma \in {\mathbb {S}}^p\).
The (Lagrangian) dual of problem (A.1) can be written as
Problems (A.1) and (A.2) are referred to as the primal (P) and dual (D) problems, respectively. We also use the notation \(\sigma :={\mathrm{vec}\,}(\Sigma )\), \({\bar{x}}(\sigma ):={\bar{x}}(\Sigma )\) and \(\vartheta (\sigma ):=\vartheta (\Sigma )\).
Slater condition It is said that the Slater condition holds for the primal problem (P) if there exists \(x^* \in {\mathbb {R}}^n\) such that \(\Sigma +{\mathcal {A}}(x^*)\in {\mathbb {S}}^p_{++} \). If the Slater condition holds, then the optimal values of problems (P) and (D) are equal.
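Numerically, the Slater condition amounts to exhibiting a point at which the constraint matrix is positive definite. The following is a hedged sketch, assuming a generic linear map \({\mathcal {A}}(x)=\sum _i x_i A_i\) with illustrative data (not the paper's specific problem); positive definiteness is checked via the smallest eigenvalue.

```python
import numpy as np

# Hedged sketch: checking the Slater condition at a candidate point x*,
# i.e. whether Sigma + A(x*) is positive definite. The linear map
# A(x) = sum_i x_i A_i and the data below are illustrative assumptions.
def slater_holds(Sigma, A_mats, x_star, tol=1e-10):
    """True iff Sigma + sum_i x_i A_i is positive definite at x_star."""
    M = Sigma + sum(x * A for x, A in zip(x_star, A_mats))
    # Positive definiteness <=> smallest eigenvalue strictly positive.
    return bool(np.linalg.eigvalsh(M).min() > tol)

# Example: Sigma = I, A(x) = x * I; x* = 0 gives the identity, which is PD.
p = 3
Sigma = np.eye(p)
A_mats = [np.eye(p)]
print(slater_holds(Sigma, A_mats, [0.0]))   # True
print(slater_holds(Sigma, A_mats, [-2.0]))  # False: Sigma - 2I is not PD
```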
Let \({\mathcal {W}}_r\) denote the set of matrices \(A \in {\mathbb {S}}^p\) with \({\mathrm{rank}\,}(A) = r \le p\). Then, by Proposition 1.1, Chapter 5 in [10], \({\mathcal {W}}_r\) is a smooth manifold of dimension
and the tangent space of the manifold \({\mathcal {W}}_r\) at \(A \in {\mathcal {W}}_r\) is
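The dimension count can be verified numerically. The sketch below assumes the standard formula \(\dim {\mathcal {W}}_r = pr - r(r-1)/2\) and the (illustrative) local parameterization \(A = BCB^T\) with \(B\in {\mathbb {R}}^{p\times r}\) and \(C\in {\mathbb {S}}^r\) invertible; the rank of the differential of this map at a generic point equals the manifold dimension.

```python
import numpy as np

# Hedged numerical check of dim W_r = p*r - r*(r-1)/2, using the assumed
# parameterization A = B C B^T. The differential at (B, C) is
#   dA = dB C B^T + B C dB^T + B dC B^T,
# and its rank at a generic point equals the dimension of W_r.
def rank_manifold_dim(p, r, seed=0):
    rng = np.random.default_rng(seed)
    B = rng.standard_normal((p, r))   # generic p x r factor
    C = np.eye(r)                     # generic invertible symmetric r x r
    cols = []
    # Directions in B:
    for i in range(p):
        for j in range(r):
            dB = np.zeros((p, r)); dB[i, j] = 1.0
            cols.append((dB @ C @ B.T + B @ C @ dB.T).ravel())
    # Symmetric directions in C:
    for i in range(r):
        for j in range(i, r):
            dC = np.zeros((r, r)); dC[i, j] = dC[j, i] = 1.0
            cols.append((B @ dC @ B.T).ravel())
    return int(np.linalg.matrix_rank(np.column_stack(cols)))

p, r = 5, 2
print(rank_manifold_dim(p, r), p * r - r * (r - 1) // 2)  # both 9
```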
Nondegenerate point It is said that \(x^*\in {\mathbb {R}}^n\) is a nondegenerate point of the mapping \(x \mapsto \Sigma +{\mathcal {A}}(x)\) if, for \(\Upsilon := \Sigma + {\mathcal {A}}(x^*)\) and \(r := {\mathrm{rank}\,}(\Upsilon )\), it holds that
otherwise the point \(x^*\) is said to be degenerate.
1.1 A.1: Differentiability of the optimal value \(\vartheta (\Sigma )\)
Let \({\mathrm{Sol}\,}(P)\) denote the set of optimal solutions of the reference (true) problem (1.1), and let \({\mathrm{Sol}\,}(D)\) be the set of optimal solutions of its dual problem (A.2) for \(\Sigma =\Sigma _0\). By classical convex analysis and Theorem 4.1.9 in [25], the following result holds.
Proposition 3.1
(Proposition 3 in [26]) Suppose that the Slater condition holds for the reference problem (1.1) and its optimal value \(\vartheta (\Sigma _0)\) is finite. Then the set \({\mathrm{Sol}\,}(D)\) is nonempty, convex and compact, and the optimal value function \(\vartheta (\cdot )\) is a continuous convex function that is Fréchet directionally differentiable at \(\Sigma _0\) with
That is,
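The directional differentiability can be illustrated on a toy instance (an assumption for illustration, not the paper's general setting): for \(\vartheta (\Sigma )=\min \{x : \Sigma + xI \succcurlyeq 0\} = -\lambda _{\min }(\Sigma )\), a simple smallest eigenvalue with unit eigenvector \(v\) gives the dual solution \(\Lambda = vv^T\) and \(\vartheta '(\Sigma ;\Delta ) = -\langle \Lambda ,\Delta \rangle = -v^T\Delta v\), which a finite difference confirms.

```python
import numpy as np

# Hedged toy illustration (an assumption, not the paper's general problem):
# vartheta(Sigma) = min { x : Sigma + x I >= 0 } = -lambda_min(Sigma).
# For a simple smallest eigenvalue with unit eigenvector v, the dual optimal
# solution is Lambda = v v^T and vartheta'(Sigma; Delta) = -v^T Delta v.
def vartheta(Sigma):
    return -np.linalg.eigvalsh(Sigma).min()

Sigma = np.diag([0.0, 1.0, 2.0, 3.0])   # smallest eigenvalue 0, simple
rng = np.random.default_rng(1)
D = rng.standard_normal((4, 4))
Delta = D + D.T                          # symmetric perturbation direction

v = np.linalg.eigh(Sigma)[1][:, 0]       # eigenvector of lambda_min
danskin = -v @ Delta @ v                 # closed-form directional derivative

t = 1e-6                                 # finite-difference check
fd = (vartheta(Sigma + t * Delta) - vartheta(Sigma)) / t
print(abs(fd - danskin) < 1e-4)          # True
```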
1.2 A.2: The second order differentiability of the optimal value \(\vartheta (\Sigma )\)
Suppose that \({\mathrm{Sol}\,}(P) = \{x^*\}\) and that \(x^*\) is a nondegenerate point of \(\Sigma _0 +{\mathcal {A}}(\cdot )\), and so \({\mathrm{Sol}\,}(D) = \{\Lambda \}\) is a singleton.
Complementarity condition Assume that the Slater condition holds for the reference problem (1.1). Then, by the first order optimality conditions, for \(x^* \in {\mathrm{Sol}\,}(P)\) and \(\Lambda \in {\mathrm{Sol}\,}(D)\) the following complementarity condition holds
Note that since \((\Sigma _0 +{\mathcal {A}}(x^*)) \succcurlyeq 0\) and \(\Lambda \succcurlyeq 0\), this complementarity condition is equivalent to \((\Sigma _0 +{\mathcal {A}}(x^*)) \Lambda = 0\) and hence \({\mathrm{rank}\,}(\Lambda ) \le p- r\), where
It is said that the strict complementarity condition holds at \(\Lambda \in {\mathrm{Sol}\,}(D)\) if \({\mathrm{rank}\,}(\Lambda ) = p -r\).
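On the same toy instance \(\min \{x : \Sigma + xI \succcurlyeq 0\}\) (an illustrative assumption), complementarity and strict complementarity can be checked directly: at \(x^* = -\lambda _{\min }(\Sigma )\), the matrix \(\Upsilon = \Sigma + x^*I\) annihilates the minimal eigenvector, the dual solution \(\Lambda = vv^T\) satisfies \(\Upsilon \Lambda = 0\), and \({\mathrm{rank}\,}(\Lambda ) = p - r\) when \(\lambda _{\min }\) is simple.

```python
import numpy as np

# Hedged sketch on the toy instance min { x : Sigma + x I >= 0 }:
# check the complementarity condition Upsilon @ Lambda = 0 and strict
# complementarity rank(Lambda) = p - r. The instance is an assumption.
p = 4
Sigma = np.diag([-1.0, 1.0, 2.0, 3.0])   # smallest eigenvalue -1, simple
w, V = np.linalg.eigh(Sigma)
x_star = -w[0]                           # optimal x* = 1
Upsilon = Sigma + x_star * np.eye(p)     # diag(0, 2, 3, 4)
v = V[:, 0]
Lam = np.outer(v, v)                     # dual optimal solution, rank 1

r = int(np.linalg.matrix_rank(Upsilon, tol=1e-8))
print(r, int(np.linalg.matrix_rank(Lam)))        # 3 1, so rank(Lam) = p - r
print(np.allclose(Upsilon @ Lam, 0, atol=1e-8))  # True: complementarity
```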
Suppose also that the strict complementarity condition holds. Let \(\Upsilon = NDN^T\) be the spectral decomposition of the matrix \(\Upsilon =\Sigma _0 +{\mathcal {A}}(x^*)\), and \(\Lambda = EE^T\) for some \(p\times (p-r)\) matrix E of rank \(p-r\) such that \(N^TE = 0\). It is known (see [26]) that the following optimization problem (A.5), depending on \(\Delta \in {\mathbb {S}}^p\), has a unique optimal solution \(J^T\delta \), and that its optimal value is a quadratic function \(\delta ^TQ\delta \), where \( \delta :={\mathrm{vec}\,}(\Delta )\), J is a \(p^2\times n\) matrix and Q is a \(p^2\times p^2\) matrix.
The following result is Theorem 1 in [26], which can be obtained from Section 5.3.6 in [2].
Proposition 3.2
(Theorem 1 in [26]) Suppose that \({\mathrm{Sol}\,}(P) = \{x^*\}\) is a singleton, and that \(x^*\) is a nondegenerate point of \(\Sigma _0 +{\mathcal {A}}(\cdot )\) and the strict complementarity condition holds. Then \({\bar{x}}(\cdot )\) is differentiable at \(\sigma _0 = {\mathrm{vec}\,}(\Sigma _0)\) and
where \(J^T\delta \) is the optimal solution of problem (A.5). Moreover
where \(\Lambda \) is the optimal solution of the dual problem and \(\delta ^TQ\delta \) is the optimal value of problem (A.5).
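A second order expansion of this type can be checked numerically on the toy instance used above (an assumption for illustration): with \(\vartheta (\Sigma ) = -\lambda _{\min }(\Sigma )\) and a simple smallest eigenvalue, classical eigenvalue perturbation theory gives a linear term determined by the dual solution plus an explicit quadratic term, mirroring the structure in Proposition 3.2.

```python
import numpy as np

# Hedged second order check on the toy instance (an illustrative assumption):
# vartheta(Sigma) = -lambda_min(Sigma). For a simple smallest eigenvalue,
#   vartheta(Sigma + t*Delta) = vartheta(Sigma) - t v^T Delta v
#       - t^2 * sum_{j>1} (v_j^T Delta v)^2 / (lam_1 - lam_j) + O(t^3).
p = 4
Sigma = np.diag([0.0, 1.0, 2.0, 3.0])    # simple smallest eigenvalue
rng = np.random.default_rng(3)
D = rng.standard_normal((p, p))
Delta = D + D.T

w, V = np.linalg.eigh(Sigma)
v = V[:, 0]
lin = -v @ Delta @ v                      # first order (dual) term
quad = -sum((V[:, j] @ Delta @ v) ** 2 / (w[0] - w[j]) for j in range(1, p))

t = 1e-3
exact = -np.linalg.eigvalsh(Sigma + t * Delta).min()
approx = -w[0] + t * lin + t ** 2 * quad
print(abs(exact - approx) < 1e-5)         # True: residual is O(t^3)
```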
Gao, M.J., Yiu, K.F.C. Asymptotic behaviors of semidefinite programming with a covariance perturbation. Optim Lett 13, 1631–1649 (2019). https://doi.org/10.1007/s11590-018-1346-7