-
Article
On solving a rank regularized minimization problem via equivalent factorized column-sparse regularized models
The rank regularized minimization problem is an ideal model for the low-rank matrix completion/recovery problem. The matrix factorization approach can transform the high-dimensional rank regularized problem into a l...
-
Article
A Corrected Inexact Proximal Augmented Lagrangian Method with a Relative Error Criterion for a Class of Group-Quadratic Regularized Optimal Transport Problems
The optimal transport (OT) problem and its related problems have attracted significant attention and have been extensively studied in various applications. In this paper, we focus on a class of group-quadratic...
-
Article
Self-adaptive ADMM for semi-strongly convex problems
In this paper, we develop a self-adaptive ADMM that updates the penalty parameter adaptively. When one part of the objective function is strongly convex, i.e., the problem is semi-strongly convex, our algorithm...
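The full abstract does not specify the paper's adaptive update rule, but the general idea of adjusting the ADMM penalty on the fly can be illustrated with the classical residual-balancing heuristic applied to the lasso problem. This is a generic sketch, not the authors' scheme:

```python
import numpy as np

def soft_threshold(v, k):
    # elementwise soft-thresholding: prox of k * ||.||_1
    return np.sign(v) * np.maximum(np.abs(v) - k, 0.0)

def admm_lasso(A, b, lam, rho=1.0, mu=10.0, tau=2.0, iters=200):
    """Lasso min 0.5||Ax-b||^2 + lam||x||_1 via ADMM on the split x = z,
    with the residual-balancing penalty update (a standard heuristic)."""
    m, n = A.shape
    x, z, u = np.zeros(n), np.zeros(n), np.zeros(n)
    AtA, Atb = A.T @ A, A.T @ b
    for _ in range(iters):
        x = np.linalg.solve(AtA + rho * np.eye(n), Atb + rho * (z - u))
        z_old = z
        z = soft_threshold(x + u, lam / rho)
        u = u + x - z
        r = np.linalg.norm(x - z)             # primal residual
        s = rho * np.linalg.norm(z - z_old)   # dual residual
        # keep primal and dual residuals within a factor mu of each other
        if r > mu * s:
            rho *= tau; u /= tau
        elif s > mu * r:
            rho /= tau; u *= tau
    return z
```

Rescaling the scaled dual variable `u` whenever `rho` changes keeps the iteration consistent; the paper's self-adaptive rule and its convergence guarantees under semi-strong convexity are what distinguish it from this heuristic.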
-
Article
Solving graph equipartition SDPs on an algebraic variety
In this paper, we focus on using the low-rank factorization approach to solve the SDP relaxation of a graph equipartition problem, which involves an additional spectral upper bound over the traditional linear ...
-
Article
An inexact projected gradient method with rounding and lifting by nonlinear programming for solving rank-one semidefinite relaxation of polynomial optimization
We consider solving high-order and tight semidefinite programming (SDP) relaxations of nonconvex polynomial optimization problems (POPs) that often admit degenerate rank-one optimal solutions. Instead of solvi...
-
Article
On proximal augmented Lagrangian based decomposition methods for dual block-angular convex composite programming problems
We design inexact proximal augmented Lagrangian based decomposition methods for convex composite programming problems with dual block-angular structures. Our methods are particularly well suited for convex quadra...
-
Article
An efficient implementable inexact entropic proximal point algorithm for a class of linear programming problems
We introduce a class of specially structured linear programming (LP) problems, which has favorable modeling capability for important application problems in different areas such as optimal transport, discrete ...
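For context on the optimal-transport instances mentioned here, the standard entropy-regularized baseline is the Sinkhorn iteration; the sketch below shows that baseline only and is not the paper's inexact entropic proximal point algorithm:

```python
import numpy as np

def sinkhorn(C, a, b, eps=0.1, iters=500):
    """Entropy-regularized optimal transport between marginals a and b
    with cost matrix C, via Sinkhorn's matrix-scaling iterations."""
    K = np.exp(-C / eps)          # Gibbs kernel
    u = np.ones_like(a)
    for _ in range(iters):
        v = b / (K.T @ u)         # scale columns to match marginal b
        u = a / (K @ v)           # scale rows to match marginal a
    return u[:, None] * K * v[None, :]   # transport plan
```

Entropic regularization smooths the LP; the paper's entropic proximal point approach instead solves a sequence of such regularized subproblems whose solutions converge to an exact LP solution.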
-
Article
Doubly nonnegative relaxations for quadratic and polynomial optimization problems with binary and box constraints
We propose a doubly nonnegative (DNN) relaxation for polynomial optimization problems (POPs) with binary and box constraints. This work is an extension of the work by Kim, Kojima and Toh in 2016 from quadratic...
-
Article
An augmented Lagrangian method with constraint generation for shape-constrained convex regression problems
The shape-constrained convex regression problem deals with fitting a convex function to the observed data, where additional constraints are imposed, such as component-wise monotonicity and uniform Lipschitz contin...
-
Article
Subspace quadratic regularization method for group sparse multinomial logistic regression
Sparse multinomial logistic regression has recently received widespread attention. It provides a useful tool for solving multi-classification problems in various fields, such as signal and image processing, ma...
-
Article
On the equivalence of inexact proximal ALM and ADMM for a class of convex composite programming
In this paper, we show that for a class of linearly constrained convex composite optimization problems, an (inexact) symmetric Gauss–Seidel based majorized multi-block proximal alternating direction method of ...
-
Article
Doubly nonnegative relaxations are equivalent to completely positive reformulations of quadratic optimization problems with block-clique graph structures
We study the equivalence among a nonconvex quadratic optimization problem (QOP), its completely positive (CPP) and doubly nonnegative (DNN) relaxations under the assumption that the aggregate and correlative sparsity of the data matrices of the CPP relaxation is represented by a block...
-
Article
On the efficient computation of a generalized Jacobian of the projector over the Birkhoff polytope
We derive an explicit formula, as well as an efficient procedure, for constructing a generalized Jacobian for the projector of a given square matrix onto the Birkhoff polytope, i.e., the set of doubly stochast...
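The paper's contribution is a generalized Jacobian of this projector; the projector itself can be computed generically, e.g. by Dykstra's alternating projections between the affine set of matrices with unit row/column sums (which has a closed-form projection) and the nonnegative orthant. This is a minimal illustration of the projection, not the paper's Jacobian formula:

```python
import numpy as np

def proj_affine(X):
    """Closed-form projection onto {Y : Y @ 1 = 1, Y.T @ 1 = 1}."""
    n = X.shape[0]
    r = 1.0 - X.sum(axis=1)     # row-sum residuals
    c = 1.0 - X.sum(axis=0)     # column-sum residuals
    t = r.sum()
    return (X + np.outer(r, np.ones(n)) / n
              + np.outer(np.ones(n), c) / n
              - (t / n ** 2) * np.ones((n, n)))

def proj_birkhoff(X, iters=2000):
    """Projection onto the Birkhoff polytope (doubly stochastic matrices)
    via Dykstra's algorithm on the affine set and the nonnegative orthant."""
    Y = X.copy()
    p = np.zeros_like(X)   # Dykstra correction for the affine step
    q = np.zeros_like(X)   # Dykstra correction for the orthant step
    for _ in range(iters):
        Z = proj_affine(Y + p)
        p = Y + p - Z
        Y = np.maximum(Z + q, 0.0)
        q = Z + q - Y
    return Y
```

Dykstra's corrections are what make the limit the true Euclidean projection rather than just a feasible point, at the cost of slower (linear-rate) convergence than the semismooth Newton machinery the generalized Jacobian enables.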
-
Article
An efficient Hessian based algorithm for solving large-scale sparse group Lasso problems
The sparse group Lasso is a widely used statistical model which encourages the sparsity both on a group and within the group level. In this paper, we develop an efficient augmented Lagrangian method for large-...
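The sparse group Lasso penalty has a well-known closed-form proximal mapping, the basic building block any solver for this model relies on: elementwise soft-thresholding followed by groupwise shrinkage. A minimal sketch (the paper's contribution is the Hessian-based augmented Lagrangian solver, not this prox):

```python
import numpy as np

def prox_sparse_group(v, groups, lam1, lam2, t=1.0):
    """Prox of t * (lam1 * ||x||_1 + lam2 * sum_g ||x_g||_2) at v.
    `groups` is a list of index lists partitioning the coordinates."""
    # step 1: elementwise soft-thresholding (the l1 part)
    u = np.sign(v) * np.maximum(np.abs(v) - t * lam1, 0.0)
    # step 2: block soft-thresholding per group (the group-l2 part)
    out = np.zeros_like(u)
    for g in groups:
        ng = np.linalg.norm(u[g])
        if ng > t * lam2:
            out[g] = (1.0 - t * lam2 / ng) * u[g]
    return out
```

Groups whose thresholded norm falls below `t * lam2` are zeroed entirely, giving group-level sparsity, while step 1 zeroes individual coordinates within surviving groups.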
-
Article
On the R-superlinear convergence of the KKT residuals generated by the augmented Lagrangian method for convex composite conic programming
Due to the possible lack of primal-dual-type error bounds, it was not clear whether the Karush–Kuhn–Tucker (KKT) residuals of the sequence generated by the augmented Lagrangian method (ALM) for solving convex ...
-
Article
A block symmetric Gauss–Seidel decomposition theorem for convex composite quadratic programming and its applications
For a symmetric positive semidefinite linear system of equations \(\mathcal{Q}\mathbf{x} = \mathbf{b}\) ...
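The block symmetric Gauss–Seidel (sGS) sweep referenced here is, numerically, a forward pass over the blocks followed by a backward pass; the theorem relates one such sweep to an exactly solvable majorized proximal subproblem. The sketch below shows only the plain iterative sweep on a block-partitioned SPD system, not the decomposition theorem itself:

```python
import numpy as np

def sgs_sweep(blocks, b_parts, x_parts):
    """One symmetric Gauss-Seidel sweep for the block system Q x = b,
    where blocks[i][j] is the (i, j) block of the SPD matrix Q."""
    m = len(x_parts)
    # forward pass over blocks 0, 1, ..., m-1
    for i in range(m):
        rhs = b_parts[i] - sum(blocks[i][j] @ x_parts[j]
                               for j in range(m) if j != i)
        x_parts[i] = np.linalg.solve(blocks[i][i], rhs)
    # backward pass over blocks m-2, ..., 0 (last block is not repeated)
    for i in range(m - 2, -1, -1):
        rhs = b_parts[i] - sum(blocks[i][j] @ x_parts[j]
                               for j in range(m) if j != i)
        x_parts[i] = np.linalg.solve(blocks[i][i], rhs)
    return x_parts
```

Iterating the sweep converges to the solution for SPD systems; the value of the decomposition theorem is that a single sweep can be interpreted as an exact minimization of a majorized quadratic, which is what makes multi-block proximal ADMM/ALM variants tractable.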
-
Article
QSDPNAL: a two-phase augmented Lagrangian method for convex quadratic semidefinite programming
In this paper, we present a two-phase augmented Lagrangian method, called QSDPNAL, for solving convex quadratic semidefinite programming (QSDP) problems with constraints consisting of a large number of linear ...
-
Article
Sparse-BSOS: a bounded degree SOS hierarchy for large scale polynomial optimization with sparsity
We provide a sparse version of the bounded degree SOS hierarchy BSOS (Lasserre et al. in EURO J Comp Optim:87–117, 2017) for polynomial optimization problems. It allows one to treat large-scale problems which satisf...
-
Article
Spectral operators of matrices
The class of matrix optimization problems (MOPs) has been recognized in recent years to be a powerful tool to model many important applications involving structured low rank matrices within and beyond the opti...
-
Article
Max-norm optimization for robust matrix recovery
This paper studies the matrix completion problem under arbitrary sampling schemes. We propose a new estimator incorporating both max-norm and nuclear-norm regularization, based on which we can conduct efficien...
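For background on the nuclear-norm part of the proposed estimator: its proximal mapping is singular value soft-thresholding, a standard primitive in matrix completion solvers. This sketch shows that primitive only; the paper's joint max-norm/nuclear-norm estimator is not reproduced here:

```python
import numpy as np

def svt(M, tau):
    """Singular value thresholding: the prox of tau * ||.||_* at M.
    Soft-thresholds the singular values by tau."""
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt
```

Shrinking singular values toward zero promotes low rank, just as soft-thresholding entries promotes sparsity; the max-norm term in the paper's estimator additionally controls the spikiness of the recovered matrix.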