
Improvements to Quantum Interior Point Method for Linear Optimization

Published: 14 January 2025

Abstract

Quantum linear system algorithms (QLSAs) have the potential to speed up Interior Point Methods (IPMs). However, a major bottleneck is the inexactness of quantum tomography to extract classical solutions from quantum states. In addition, QLSAs are sensitive to the condition number, and this sensitivity is exacerbated when the Newton systems arising in IPMs converge to a singular matrix. Recently, an Inexact Feasible Quantum IPM (IF-QIPM) has been developed that addresses the inexactness of QLSAs. However, this method requires a large number of gates and qubits to be implemented. Here, we propose a new IF-QIPM using the normal equation system, which requires fewer gates and qubits. To mitigate the sensitivity to the condition number and other input data-related parameters, we use preconditioning coupled with iterative refinement to obtain better complexity. Finally, we demonstrate the effectiveness of our approach on IBM Qiskit simulators.

1 Introduction

Mathematical optimization problems arise in many fields, and solving them poses significant computational challenges. Researchers have attempted to develop quantum optimization algorithms, such as the Quantum Approximate Optimization Algorithm for unconstrained quadratic binary optimization problems [14], and a quantum subroutine for simplex algorithms [36]. Another class of quantum algorithms includes the Quantum Interior Point Methods (QIPMs) [4, 23, 31], which are hybrid quantum-classical Interior Point Methods (IPMs) that use Quantum Linear System Algorithms (QLSAs) to solve the Newton system at each IPM iteration. Before reviewing prior work on QIPMs for Linear Optimization Problems (LOPs), we provide the necessary definitions, fundamental results, and properties.
Definition 1.1 (LOP: Standard Formulation).
For \(b\in \mathbb {R}^m\), \(c\in \mathbb {R}^n\), and matrix \(A\in \mathbb {R}^{m\times n}\) with \(\operatorname{rank}(A)=m \le n\), the LOP is defined as
\begin{equation*} (\text{P})\quad \begin{aligned}\min \ c^T&x, \\ {\rm s.t. }\;\; Ax &= b, \\ x &\ge 0, \end{aligned} \qquad \qquad (\text{D}) \quad \begin{aligned}\max \ b^Ty,\ \ & \\ {\rm s.t. }\;\; A^Ty +&s = c,\\ &s \ge 0, \end{aligned} \end{equation*}
where \(x\in \mathbb {R}^n\) is the vector of primal variables, and \(y\in \mathbb {R}^m\), \(s\in \mathbb {R}^n\) are vectors of the dual variables. Problem (P) is called the primal problem, and problem (D) is called the dual problem.
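As a concrete illustration of Definition 1.1, the following sketch instantiates a tiny primal-dual pair (all data hypothetical) and checks feasibility and complementarity, which together certify optimality:

```python
import numpy as np

# A toy standard-form LOP (hypothetical data): min c^T x  s.t.  Ax = b, x >= 0.
A = np.array([[1.0, 1.0, 1.0, 0.0],
              [1.0, 3.0, 0.0, 1.0]])   # rank(A) = m = 2 <= n = 4
b = np.array([4.0, 6.0])
c = np.array([-1.0, -2.0, 0.0, 0.0])

assert np.linalg.matrix_rank(A) == A.shape[0]  # full row rank, as required

# A primal-dual optimal pair for this instance, found by inspection:
x = np.array([3.0, 1.0, 0.0, 0.0])             # primal feasible
y = np.array([-0.5, -0.5])                     # dual feasible
s = c - A.T @ y                                # dual slacks

# Feasibility plus complementarity (x^T s = 0) certifies optimality.
assert np.allclose(A @ x, b)
assert np.all(s >= -1e-12)
gap = x @ s                                    # duality gap, 0 at optimality
```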
In each step of IPMs, a Newton system is solved to determine the Newton step. There are four approaches:
(1)
Full Newton System
\begin{equation} \begin{bmatrix}0&A&0\\ A^T&0&I\\ 0&S&X \end{bmatrix}\begin{bmatrix}\Delta y\\ \Delta x\\ \Delta s \end{bmatrix}=\begin{bmatrix}0\\ 0\\ \beta \mu e- Xs \end{bmatrix}, \end{equation}
(FNS)
(2)
Augmented System
\begin{equation} \begin{bmatrix}0&A\\ A^T&-D^{-2} \end{bmatrix}\begin{bmatrix}\Delta y\\ \Delta x \end{bmatrix}=\begin{bmatrix}0\\ s- \beta \mu X^{-1}e \end{bmatrix}, \end{equation}
(AS)
(3)
Normal Equation System
\begin{equation} AD^2A^T \Delta y= Ax- \beta \mu A S^{-1}e, \end{equation}
(NES)
(4)
Orthogonal Subspaces System
\begin{equation} \begin{bmatrix}-XA^T&SV \end{bmatrix}\begin{bmatrix}\Delta y\\ \lambda \end{bmatrix}=\beta \mu e- Xs, \end{equation}
(OSS)
where \(X=\text{diag}(x)\), \(S=\text{diag}(s)\), \(D=S^{-1/2}X^{1/2}\), \(\mu =\frac{x^Ts}{n}\), \(\beta\) with \(0\lt \beta \lt 1\) is the centering parameter, and e is the all-one vector. Further, the columns of V form a basis for the null space of A.
Table 1 shows that NES has the smallest size, since in most practical LOPs \(m\ll n\). In addition, its symmetric positive definite coefficient matrix is favorable, since classically the system can be solved faster with Cholesky factorization or conjugate gradient methods. It is also more adaptable to QLSAs, since QLSAs solve linear systems with a Hermitian matrix; a linear system whose matrix is not Hermitian, like (OSS), must be embedded in a larger system with a Hermitian matrix. Furthermore, solving normal equations has better complexity on quantum machines by using quantum singular value transformation [9]. Thus, NES has a better structure than the others; however, its condition number grows at a faster rate than that of the OSS. The key takeaway is that inexact solutions of NES, FNS, and AS may lead to infeasibility, whereas inexact solutions to OSS preserve feasibility [32]. In this article, we develop an Inexact Feasible QIPM (IF-QIPM) based on a modified version of the NES.
Table 1.
System | Size of System | Symmetric | Positive Definite | Rate of Condition Number
FNS | \(2n+m\) | No | No | \(\mathcal {O}\big (\frac{1}{\mu ^2}\big)\)
AS | \(n+m\) | Yes | No | \(\mathcal {O}\big (\frac{1}{\mu ^2}\big)\)
NES | \(m\) | Yes | Yes | \(\mathcal {O}\big (\frac{1}{\mu ^2}\big)\)
OSS | \(n\) | No | No | \(\mathcal {O}\big (\frac{1}{\mu }\big)\)
Table 1. Characteristics of the Coefficient Matrix of Different Newton Systems
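The structural properties summarized in Table 1 can be verified numerically. Below is a minimal sketch (random hypothetical data) that forms the FNS, AS, and NES coefficient matrices at a strictly positive iterate and checks their size, symmetry, and positive definiteness:

```python
import numpy as np

rng = np.random.default_rng(0)
m, n = 3, 6
A = rng.standard_normal((m, n))
x = rng.uniform(0.5, 2.0, n)          # strictly positive primal iterate
s = rng.uniform(0.5, 2.0, n)          # strictly positive dual slacks

X, S = np.diag(x), np.diag(s)
D2 = np.diag(x / s)                   # D^2 = S^{-1} X

# FNS coefficient matrix: (2n+m) x (2n+m), not symmetric.
FNS = np.block([[np.zeros((m, m)), A,                np.zeros((m, n))],
                [A.T,              np.zeros((n, n)), np.eye(n)],
                [np.zeros((n, m)), S,                X]])

# AS coefficient matrix: (n+m) x (n+m), symmetric but indefinite.
AS = np.block([[np.zeros((m, m)), A],
               [A.T,              -np.diag(s / x)]])

# NES coefficient matrix: m x m, symmetric positive definite.
NES = A @ D2 @ A.T

assert FNS.shape == (2 * n + m, 2 * n + m)
assert AS.shape == (n + m, n + m) and np.allclose(AS, AS.T)
assert NES.shape == (m, m) and np.all(np.linalg.eigvalsh(NES) > 0)
```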

1.1 Related Works

Inexact IPMs. IPMs can be divided into two main types: Feasible IPMs (F-IPMs) and Infeasible IPMs (I-IPMs). F-IPMs require an initial feasible interior point to begin the optimization process. These methods often utilize a self-dual embedding model of the LOP, facilitating the construction of a feasible interior solution [38]. Conversely, I-IPMs start with a solution that is infeasible but strictly positive. Theoretical studies indicate that the best iteration complexity of F-IPMs for LOPs is \(O(\sqrt {n}L)\), where L represents the binary length of the input data. In contrast, the best iteration complexity of I-IPMs for LOPs is \(O(nL)\). Although F-IPMs theoretically outperform I-IPMs in terms of iteration complexity, both types are effective in solving LOPs in practice, as noted in the work of Wright [44].
The prevailing approach to solve LOPs is exact IPMs, in which the Newton direction is calculated by solving NES using Cholesky factorization [38]. Thus, although IPMs enjoy a fast convergence rate, the cost per iteration of IPMs is considerably high when applied to large-scale LOPs. In an effort to reduce the per-iteration cost of IPMs, Inexact Infeasible (II-IPMs) were proposed, in which the Newton system is solved with an iterative method, e.g., using Conjugate Gradient Methods (CGMs) [1, 35].
Freund et al. [15] and Mizuno and Jarre [28] initially explored the convergence properties of II-IPMs through a series of studies. Subsequently, Baryamureeba and Steihaug [6] demonstrated the convergence of a variant of the I-IPM originally proposed by Kojima et al. [24], incorporating an inexact Newton step. Furthermore, Korzak [25] established that his specific version of an II-IPM operates within polynomial time complexity.
As early as the 1980s, partial update techniques were utilized to compute the inexact Newton direction through several rank-one updates to the inverse of the NES matrix. This strategy achieved the total complexity of \(\mathcal {O}(n^3L)\) arithmetic operations for solving LOPs [38]. This concept has been significantly strengthened in recent years through the adoption of advanced techniques such as fast matrix multiplication, spectral sparsification, and stochastic central path methods [11, 26]. By leveraging these modern techniques, the complexity of IPMs can be reduced to \(\mathcal {O}(n^{w}L)\), where \(w \lt 2.3729\) is the matrix multiplication constant [40].
The use of Preconditioned CGMs (PCGMs) in II-IPMs has been extensively studied [1, 35]. Al-Jeiroudi and Gondzio [1] utilized the I-IPM framework of Wright [44] to solve (AS) using a PCGM. In a similar vein, Monteiro and O’Neal [35] applied a PCGM to solve (NES). Inexact linear system algorithms like CG exhibit favorable dependence on the dimension compared to factorization methods and are able to exploit sparsity patterns present in the Newton system. The drawback is that these inexact approaches depend on a bound on the condition number, which can pose a challenge. To tackle the ill-conditioned Newton system, they used the so-called maximum weight basis preconditioner [34].
Further investigations into II-IPMs have been conducted by Bellavia [7], who examined their convergence for general convex optimization problems, and by Zhou and Toh [47], who developed an II-IPM specifically for semidefinite optimization problems. It has been established that the best-known bound for the number of iterations required by II-IPMs to solve LOPs is \(\mathcal {O}(n^2L)\).
All mentioned inexact versions of IPMs tend to be inherently infeasible, as inexact solutions to the conventional formulations of Newton systems (such as FNS, AS, and NES) result in infeasibility. Gondzio [21] showed conceptually that if Newton systems within IPMs can be solved inexactly while still maintaining feasibility, IPMs can achieve the best iteration complexity of \(\mathcal {O}(\sqrt {n}L)\) for quadratic optimization. To leverage this superior complexity, Inexact Feasible IPMs (IF-IPMs) have been introduced. These methods utilize the OSS system to derive inexact but feasible Newton directions [4, 32]. In this article, we present a novel IF-IPM that, at each iteration, employs a modified version of the NES that is solved inexactly. This modification is crucial, as it ensures that the inexact direction remains within the feasible region.
Quantum Interior Point Methods. QIPMs were first proposed by Kerenidis and Prakash [23], who sought to decrease the cost per iteration by classically estimating the Newton step through the use of a QLSA and quantum state tomography. Adopting this approach, Casares and Martin-Delgado [8] developed a predictor-corrector QIPM for Linear Optimization (LO). However, these algorithms were proposed and analyzed using an exact IPM framework, which is invalid because the use of quantum subroutines naturally introduces errors into the solution and leads to inexactness in the Newton step. Specifically, without further safeguards, this inexactness means that the sequence of iterates generated by the algorithms in these works [8, 23] may leave the feasible set, and so convergence cannot be guaranteed.
To address these issues, Augustino et al. [4] proposed an Inexact Infeasible QIPM (II-QIPM) (which closely quantized the II-IPM of Zhou and Toh [47]) and a novel IF-QIPM using OSS. The latter algorithm was shown to solve LOPs to precision \(\zeta \in (0,1)\) using at most
\begin{equation*} \widetilde{\mathcal {O}}_{n, \kappa , \frac{1}{\zeta }} \left(n^2 \frac{\kappa ^2}{\zeta } \right) \end{equation*}
QRAM queries and \(\widetilde{\mathcal {O}}_{n, \kappa , \frac{1}{\zeta }} (n^{2.5})\) arithmetic operations, where \(\kappa\) is an upper bound on the condition number of the Newton system coefficient matrices that arise over the run of the algorithm.
Mohammadisiahroudi et al. [31, 32] specialized the algorithms of Augustino et al. [4] to LO and used Iterative Refinement (IR) techniques to exponentially improve their dependence on the precision and the condition number bound. In particular, Mohammadisiahroudi et al. [31] developed an II-QIPM that addresses the inexactness of QLSAs. In another work, Mohammadisiahroudi et al. [32] improved this complexity by developing a short-step IF-QIPM for LOPs with
\begin{equation*} \widetilde{\mathcal {O}}_{n, \kappa _A, \Vert A\Vert , \omega }(n^{2}L\omega ^2\Vert A\Vert ^2\kappa _A^2) \end{equation*}
QRAM queries and \(\mathcal {O}(n^{2.5}L)\) arithmetic operations, where \(\omega\) is an upper bound for the norm of the optimal solution and \(\kappa _A\) is the condition number of matrix A.
Note that the use of IR techniques indirectly led to another improvement in the complexity, replacing the dependence on a condition number bound \(\kappa\) for the intermediate Newton systems with the condition number \(\kappa _A\) of the input matrix A [33]. IF-QIPMs built on similar techniques have also been developed for linearly constrained quadratic optimization problems in the work of Wu et al. [45] and second-order cone optimization problems in the work of Augustino et al. [5].
A month after the original submission of this work, Apers and Gribling [3] introduced a QIPM for LO that operates independently of any condition number. Under specific conditions, and with access to QRAM, this approach reports a quantum speedup for “tall” LOPs. In these problems, all constraints are inequalities, and the number of constraints substantially exceeds the number of variables. Instead of employing QLSAs to resolve the Newton system, the method calculates Newton steps using spectral approximations of the Hessian. Although their worst-case complexity has no dependence on the condition number, for general LOPs their approach can have an unfavorable dimension dependence of \(n^{7.5}\), and their complexity has a linear dependence on the inverse of the precision. In this article, we show how IR and preconditioning can mitigate the impact of the condition number on the complexity of the proposed IF-QIPM using the modified NES.
Iterative Refinement. IR is a well-established method used to enhance numerical accuracy when solving linear systems of equations [20, 42]. Gleixner et al. [19] pioneered the adaptation of this technique to LOPs, introducing the first IR methodology specifically tailored for constrained optimization. They successfully applied this method to achieve precise solutions to LOPs using limited-precision oracles [18].
In parallel developments, other studies have illustrated the utility of IR in achieving high-precision solutions for LOPs, particularly those emerging in the context of integer optimization [12, 13]. More recently, Mohammadisiahroudi et al. [31] applied IR techniques to obtain exact solutions for LOPs within the framework of limited-precision QIPMs. Their research demonstrated that IR could effectively mitigate the effects of ill-conditioned Newton systems, thereby reducing the overall computational complexity associated with QIPMs.

1.2 Contributions of This Work

The contributions of this work are outlined as follows:
A novel IF-QIPM is proposed by modifying the NES, which ensures that the inexact Newton direction remains within the feasible region. This modification allows our IF-QIPM to achieve the best-known iteration complexity of \(\mathcal {O}(\sqrt {n}L)\), an improvement over traditional II-IPMs that use the NES and exhibit a complexity of \(\mathcal {O}(n^2L)\).
The modified NES features an m-by-m symmetric positive definite matrix, which is advantageous for both classical and quantum linear solvers. Notably, while the state-of-the-art QIPM requires solving systems of dimension \(2n\), our proposed IF-QIPM only needs to solve m-dimensional systems, and in many practical problems, m is usually much smaller than n. Additionally, the favorable structure of the NES facilitates the use of more efficient matrix inversion techniques on quantum computers [9], enhancing the complexity performance of our IF-QIPM.
IR and preconditioning are utilized to mitigate the effects of the condition number of the Newton system. The integration of these techniques into our IF-QIPM results in a speedup relative to classical solvers in terms of dimension, and compared to previous QIPMs, a speedup relative to the condition number and other problem-dependent parameters.
We have implemented a version of our IF-QIPM using the Qiskit platform and the IBM QLSA simulator. Numerical experiments demonstrate that IR and preconditioning can significantly alleviate issues related to the condition number.
The rest of this article is structured as follows. Section 2 reviews some notations and preliminaries. In Section 3, a modified NES is utilized to produce an inexact but feasible Newton step, and a short-step IF-IPM is developed. Section 4 explores how we use the QLSA to solve the modified NES system to develop an IF-QIPM. In Section 5, an IR method coupled with preconditioning is developed to address the impacts of the condition number on the complexity. Finally, numerical experiments using the IBM Qiskit simulator are carried out in Section 6, and Section 7 concludes the article.

2 Preliminaries

We denote the quantity a to the k-th power by \(a^{k}\), and the notation \(a^{(k)}\) indicates the value of a at iteration k of an algorithm. In the rest of the article, \(\Vert M\Vert =\Vert M\Vert _2\) is the 2-norm of matrix M and \(\Vert M\Vert _F\) is the Frobenius norm of M. Further, \(\mathbb {R}^n\) denotes the set of n-dimensional vectors of real numbers and \(\mathbb {C}^n\) denotes the set of n-dimensional vectors of complex numbers.
The set of feasible primal-dual solutions is defined as
\begin{equation*} \mathcal {PD}=\left\lbrace (x,y,s)\in \mathbb {R}^{n} \times \mathbb {R}^m\times \mathbb {R}^n |\ Ax=b,\ A^Ty+s=c, \ (x,s)\ge 0\right\rbrace . \end{equation*}
Then, the set of all feasible interior solutions is defined as
\begin{equation*} \mathcal {PD}^0=\left\lbrace (x,y,s)\in \mathcal {PD}\ |\ (x,s)\gt 0\right\rbrace . \end{equation*}
In this work, we assume that \(\mathcal {PD}^0\) is not empty. By the Strong Duality theorem, optimal solutions exist and belong to the set \(\mathcal {PD}^*\) defined as
\begin{equation*} \mathcal {PD}^*=\left\lbrace (x,y,s)\in \mathcal {PD} \ | \ x^Ts=0\right\rbrace . \end{equation*}
Let \(\zeta \ge 0\), then the set of \(\zeta\)-optimal solutions is defined as
\begin{equation*} \mathcal {PD}_{\zeta } = \left\lbrace (x,y,s)\in \mathcal {PD} \ \Big |\ \frac{x^Ts}{n}\le \zeta \right\rbrace . \end{equation*}
Assuming integer data, we denote the binary length of the input data by
\begin{equation*} L=mn+m+n+\sum _{i,j}\lceil \log (|a_{ij}|+1)\rceil +\sum _{i}\lceil \log (|c_{i}|+1)\rceil +\sum _{j}\lceil \log (|b_{j}|+1)\rceil , \end{equation*}
where \(a_{ij}\) represents the ij-th element of matrix A. It is well established that one can find the exact solution of an LOP using a rounding procedure if the solution provided by the IPM has \(2^{-2L}\) precision [38, 44].
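For instance, the binary length L can be computed directly from the definition; the sketch below (with hypothetical integer data and logarithms taken base 2) illustrates this:

```python
import math
import numpy as np

# Binary length L of the input data for a tiny integer instance (hypothetical data).
A = np.array([[1, 1], [2, -3]])
b = np.array([4, 6])
c = np.array([-1, 5])

m, n = A.shape
L = (m * n + m + n
     + sum(math.ceil(math.log2(abs(v) + 1)) for v in A.flat)
     + sum(math.ceil(math.log2(abs(v) + 1)) for v in c)
     + sum(math.ceil(math.log2(abs(v) + 1)) for v in b))
# For this instance: 8 (size terms) + 6 (A) + 4 (c) + 6 (b) = 24.
```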
Generally, IPMs approach the optimal set by traversing along a central path. The central path is defined as
\begin{equation*} \mathcal {CP}=\Big \lbrace (x,y,s)\in \mathcal {PD}\ \big | \ x_is_i=\mu \ \text{ for }\ i\in \lbrace 1,\dots ,n\rbrace \Big \rbrace , \end{equation*}
where \(\mu =\frac{x^Ts}{n}\). As moving exactly on the central path is not possible, Newton steps need to be calculated in a way that the iterates remain in a neighborhood of the central path. For any \(\theta \in [0,1)\), a small neighborhood of the central path is defined as
\begin{align*} \mathcal {N}(\theta)=\Big \lbrace (x,y,s)\in \mathcal {PD}^0\ \big |\ \Vert XSe-\mu e\Vert _2\le \theta \mu \Big \rbrace . \end{align*}
Algorithm 1 presents the generic scheme of IPMs. In this work, we use a feasible short-step version of IPMs in which \(\beta\) is close to 1, i.e., there is a small reduction in the optimality gap at each iteration. Consequently, we can take a full Newton step, \(\alpha ^{(k)}=1\), at each iteration [38].
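To make the scheme concrete, here is a minimal classical sketch of a short-step feasible IPM that solves the NES exactly at each iteration and takes full Newton steps. The problem data are hypothetical and constructed so that \(x=s=e\), \(y=0\) is a strictly feasible starting point on the central path:

```python
import numpy as np

# Short-step feasible IPM sketch using the NES (illustrative only, not the
# paper's quantum algorithm): exact Newton steps, full step length alpha = 1.
rng = np.random.default_rng(1)
m, n = 2, 5
A = rng.standard_normal((m, n))

# Construct data so that x = s = e, y = 0 is strictly feasible and central.
x = np.ones(n); y = np.zeros(m); s = np.ones(n)
b = A @ x
c = A.T @ y + s

beta = 1.0 - 0.1 / np.sqrt(n)          # small gap reduction per iteration
for _ in range(50):
    mu = x @ s / n
    D2 = np.diag(x / s)
    # NES: A D^2 A^T dy = Ax - beta * mu * A S^{-1} e
    rhs = A @ x - beta * mu * (A @ (1.0 / s))
    dy = np.linalg.solve(A @ D2 @ A.T, rhs)
    ds = -A.T @ dy
    dx = beta * mu / s - x - D2 @ ds
    x, y, s = x + dx, y + dy, s + ds   # full Newton step, alpha = 1

# Feasibility is maintained and the gap shrinks geometrically (mu -> beta*mu).
assert np.allclose(A @ x, b) and np.allclose(A.T @ y + s, c)
assert np.all(x > 0) and np.all(s > 0)
```

Because the direction satisfies the Newton system exactly, one can check that the new gap obeys \(\mu^+ = \beta\mu\), so after 50 iterations \(\mu = \beta^{50}\approx 0.1\) here.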
When we refer to complexity, we mean the number of queries to the controlled version of the input oracles and their inverses. We define \(\mathcal {O}(\cdot)\) as
\begin{equation*} f(x) = \mathcal {O}(g(x)) \iff \text{ there exist } \gamma \in \mathbb {R}, \ c \in {\mathbb {R}}_+~\text{such that}~f(x) \le c g(x) \; \forall \, x \gt \gamma . \end{equation*}
We also define \(\widetilde{\mathcal {O}} (f(x)) =\mathcal {O}(f(x) \cdot \mbox{polylog}(f(x)))\). When the function depends polylogarithmically on other variables, we write \(\widetilde{\mathcal {O}}_{a, b} (f(x)) =\mathcal {O}(f(x) \cdot \mbox{polylog}(a, b, f(x)))\). It should be mentioned that the complexity of a classical algorithm refers to the number of arithmetic operations. When we refer to the complexity of quantum algorithms, we mean the number of queries to the controlled version of the input oracles and their inverses, and we also specify the bound for classical arithmetic operations.

3 Inexact Feasible Newton Step Using NES

In this section, we discuss how an inexact but feasible Newton direction can be calculated using the NES at each iteration of the proposed IF-QIPM.
To compute the Newton step, we need to determine \((\Delta x^{(k)}, \Delta y^{(k)}, \Delta s^{(k)})\) such that
\begin{equation} \begin{aligned}A\Delta x^{(k)}&=0,\\ A^T \Delta y^{(k)} +\Delta s^{(k)}&=0,\\ X^{(k)}\Delta s^{(k)}+S^{(k)}\Delta x^{(k)}&=\beta \mu ^{(k)} e-X^{(k)}s^{(k)}. \end{aligned} \end{equation}
(1)
As discussed in Section 1, we are interested in using NES because it has a smaller symmetric positive definite matrix, favorable for both quantum and classical linear system solvers. First note that an exact solution \(\Delta y\) to NES satisfies
\begin{equation} A(D^{(k)})^2A^T \Delta y^{(k)}= Ax^{(k)}- \beta \mu ^{(k)} A (S^{(k)})^{-1}e. \end{equation}
(2)
Having obtained \(\Delta y^{(k)}\), we then compute \(\Delta s^{(k)}\) and \(\Delta x^{(k)}\) using the following:
\begin{align} \Delta s^{(k)}&=-A^T \Delta y^{(k)}, \end{align}
(3a)
\begin{align} \Delta x^{(k)}&=\beta \mu (S^{(k)})^{-1}e-x^{(k)}-(D^{(k)})^2\Delta s^{(k)}. \end{align}
(3b)
Now, when system (2) is solved inexactly, the resulting solution \(\overline{\Delta y}^{(k)}\) satisfies
\begin{equation*} A(D^{(k)})^2A^T \overline{\Delta y}^{(k)}= Ax^{(k)}- \beta \mu ^{(k)} A (S^{(k)})^{-1}e+r^{(k)}, \end{equation*}
where \(r^{(k)}= A(D^{(k)})^2A^T \overline{\Delta y}^{(k)}-\big (Ax^{(k)}- \beta \mu ^{(k)} A (S^{(k)})^{-1}e\big)\) is the residual. Recovering \(\Delta s^{(k)}\) and \(\Delta x^{(k)}\) from (3a) and (3b), in place of (1) we now have
\begin{equation*} \begin{aligned}A\Delta x^{(k)}&=r^{(k)},\\ A^T \Delta y^{(k)} +\Delta s^{(k)}&=0,\\ X^{(k)}\Delta s^{(k)}+S^{(k)}\Delta x^{(k)}&=\beta \mu ^{(k)} e-X^{(k)}s^{(k)}. \end{aligned} \end{equation*}
While dual feasibility is preserved, the same cannot be said for primal feasibility. To preserve primal feasibility using inexact solutions to (2), one can alternatively solve
\begin{align*} \Delta s^{(k)}&=-A^T \Delta y^{(k)},\\ \Delta x^{(k)}&=\beta \mu ^{(k)} (S^{(k)})^{-1}e-x^{(k)}-(D^{(k)})^2\Delta s^{(k)}-v^{(k)}, \end{align*}
where \(Av^{(k)}=r^{(k)}\). Updating \(\Delta x^{(k)}\) in this way, we correct primal infeasibility, and one can verify that
\begin{equation*} \begin{aligned}A\Delta x^{(k)}&=0,\\ A^T \Delta y^{(k)} +\Delta s^{(k)}&=0,\\ X^{(k)}\Delta s^{(k)}+S^{(k)}\Delta x^{(k)}&=\beta \mu ^{(k)} e-X^{(k)}s^{(k)}+r^{\prime }{}^{(k)}, \end{aligned} \end{equation*}
where \(r^{\prime }{}^{(k)}=-S^{(k)}v^{(k)}\). Next, we describe two procedures to calculate \(v^{(k)}\) efficiently.
Procedure A . Since A has full row rank, we can calculate
\begin{equation*} \hat{A}=A^T(AA^T)^{-1}, \end{equation*}
as a preprocessing step before the IPM starts. Then, in each iteration, we calculate \(v^{(k)}=\hat{A}r^{(k)}\) using classical matrix-vector products. To recover the convergence analysis of Mohammadisiahroudi et al. [32], the residual must satisfy \(\Vert r^{\prime }{}^{(k)}\Vert \le \eta \mu ^{(k)}\) for \(\eta \in [0,1)\). One can show that this requirement amounts to
\begin{equation*} \Vert r^{(k)}\Vert \le \eta \frac{\mu ^{(k)}}{\Vert s^{(k)}\Vert _{\infty }\sigma _{\max }}, \end{equation*}
where \(\sigma _{{\max }}\) is the maximum singular value of A. Since \(\Vert s^{(k)}\Vert _{\infty }\) and \(\sigma _{\max }\) can be exponentially large, this residual bound can be unacceptably small.
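A minimal sketch of Procedure A with hypothetical random data: \(\hat{A}\) is formed once, and each iteration's correction is a matrix-vector product.

```python
import numpy as np

# Procedure A sketch: precompute A_hat = A^T (A A^T)^{-1} once, then correct
# each iteration's NES residual r via v = A_hat r, so that A v = r exactly.
rng = np.random.default_rng(2)
m, n = 3, 7
A = rng.standard_normal((m, n))       # full row rank with probability 1

A_hat = A.T @ np.linalg.inv(A @ A.T)  # preprocessing, done once before the IPM

r = rng.standard_normal(m)            # a hypothetical NES residual
v = A_hat @ r
assert np.allclose(A @ v, r)          # v repairs primal infeasibility: A v = r
```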
Procedure B . Letting \(\hat{B}\) be an arbitrary basis for matrix A, we can calculate
\begin{equation*} v^{(k)}= \left(v_{\hat{B}}^{(k)},v_N^{(k)}\right) = \left(A_{\hat{B}}^{-1}r^{(k)},0\right). \end{equation*}
It is straightforward to verify that \(Av^{(k)}=r^{(k)}\). Now, we show that this procedure coupled with an appropriate modification of the NES leads to a favorable residual bound.
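Procedure B can be sketched similarly (hypothetical data, with the first m columns assumed to form a nonsingular basis):

```python
import numpy as np

# Procedure B sketch: pick an arbitrary basis B of A (here the first m columns,
# assumed nonsingular) and set v = (A_B^{-1} r, 0).
rng = np.random.default_rng(3)
m, n = 3, 7
A = rng.standard_normal((m, n))
B = list(range(m))                    # basic column indices (a hypothetical choice)

r = rng.standard_normal(m)            # a hypothetical NES residual
v = np.zeros(n)
v[B] = np.linalg.solve(A[:, B], r)    # v_B = A_B^{-1} r, v_N = 0

assert np.allclose(A @ v, r)          # again A v = r, with v supported on B only
```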
Since A has full row rank, one can choose an arbitrary basis \(\hat{B}\), and subsequently calculate \(A_{\hat{B}}^{-1}\), \(\hat{A}=A_{\hat{B}}^{-1}A\), and \(\hat{b}=A_{\hat{B}}^{-1}b\). These steps require \(\mathcal {O}(m^2n)\) arithmetic operations and take place only once prior to the first iteration of IPM. The cost of this preprocessing can be reduced by leveraging the structure of A. For example, there is no need for this preprocessing if the problem is in the canonical form as
\begin{equation*} (\text{P}^{\prime })\quad \begin{aligned}\min \ {c^{\prime }}^T&x, \\ {\rm s.t. }\;\; A^{\prime }x &\ge b^{\prime }, \\ x &\ge 0, \end{aligned} \qquad \qquad (\text{D}^{\prime }) \quad \begin{aligned}\max \ {b^{\prime }}^Ty,\ \ & \\ {\rm s.t. }\;\; {A^{\prime }}^Ty &\le {c^{\prime }},\\ &y \ge 0. \end{aligned} \end{equation*}
The reason is that after adding slack variables to the primal constraint, we have \([A^{\prime }, I]\) and the identity part can be used as \(A_{\hat{B}}\), which does not require inversion. In this work, we neglect the cost of preprocessing, since it can be avoided by using the following reformulation:
\begin{equation*} \begin{aligned}\min \ c^T&x, \\ {\rm s.t. }\;\; Ax +u&= b, \\ -Ax +u^{\prime }&= -b, \\ x,u,u^{\prime } &\ge 0. \end{aligned} \end{equation*}
This is a standard-form LOP, but its interior is empty. This issue is remedied by using the self-dual embedding model [46], and we refer the reader to the work of Mohammadisiahroudi et al. [32] for details. While this reformulation avoids the preprocessing computation, the price one pays is working with a larger system. In the rest of this article, we assume that we are working with the preprocessed problem with input data that includes an identity matrix.
Now, we can modify system (NES) with coefficient matrix \(M^{(k)}=A(D^{(k)})^2A^T\) and right-hand side vector \(\sigma ^{(k)} =b-\beta \mu ^{(k)}A(S^{(k)})^{-1}e\) to
\begin{equation*} \hat{M}^{(k)} z^{(k)}=\hat{\sigma }^{(k)}, \end{equation*}
(MNES)
where
\begin{align*} \hat{M}^{(k)} &= \left(D_{\hat{B}}^{(k)}\right)^{-1}A_{\hat{B}}^{-1}M^{(k)}\left(\left(D_{\hat{B}}^{(k)}\right)^{-1}A_{\hat{B}}^{-1}\right)^T = \left(D_{\hat{B}}^{(k)}\right)^{-1}\hat{A}(D^{(k)})^2\hat{A}^T \left(D_{\hat{B}}^{(k)}\right)^{-1},\\ \hat{\sigma }^{(k)}&= \left(D_{\hat{B}}^{(k)}\right)^{-1}A_{\hat{B}}^{-1}\sigma ^{(k)}= \left(D_{\hat{B}}^{(k)}\right)^{-1}\hat{b}-\beta \mu ^{(k)} \left(D_{\hat{B}}^{(k)}\right)^{-1}\hat{A}(S^{(k)})^{-1}e. \end{align*}
We use the following procedure to find the Newton direction by solving the MNES inexactly with QLSA+QTA:
Step 1: Find \(\tilde{z}^{(k)}\) such that \(\hat{M}^{(k)}\tilde{z}^{(k)}=\hat{\sigma }^{(k)}+\hat{r}^{(k)}\) and \(\Vert \hat{r}^{(k)}\Vert \le \frac{\eta }{\sqrt {1+\theta }}\sqrt {\mu ^{(k)}}\).
Step 2: Calculate \(\widetilde{\Delta y}{}^{(k)}=((D_{\hat{B}}^{(k)})^{-1}A_{\hat{B}}^{-1})^T\tilde{z}^{(k)}\).
Step 3: Calculate \(v^{(k)}=(v^{(k)}_{\hat{B}},v^{(k)}_{\hat{N}})=(D_{\hat{B}}^{(k)}\hat{r}^{(k)},0)\).
Step 4: Calculate \(\widetilde{\Delta s}{}^{(k)}=-A^T\widetilde{\Delta y}{}^{(k)}\).
Step 5: Calculate \(\widetilde{\Delta x}{}^{(k)}=\beta \mu ^{(k)} (S^{(k)})^{-1}e-x^{(k)}-(D^{(k)})^2\widetilde{\Delta s}{}^{(k)}-v^{(k)}\).
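The five steps above can be sketched classically as follows (hypothetical data with an identity block so that \(A_{\hat{B}}=I\); the inexact quantum solve of Step 1 is mimicked by injecting a small residual \(\hat{r}\)). The point is that the recovered direction satisfies the first two equations of the Newton system exactly:

```python
import numpy as np

# Steps 1-5: solve the modified NES with an injected residual r_hat and verify
# that the recovered direction is exactly primal and dual feasible.
rng = np.random.default_rng(4)
m, n = 3, 7
A0 = rng.standard_normal((m, n - m))
A = np.hstack([np.eye(m), A0])        # preprocessed data with an identity block
B = list(range(m))                    # use the identity block as basis, A_B = I

x = rng.uniform(0.5, 2.0, n); s = rng.uniform(0.5, 2.0, n)
beta, mu = 0.9, x @ s / n
D2 = np.diag(x / s)
DB = np.diag(np.sqrt(x[B] / s[B]))    # D_B, with D = S^{-1/2} X^{1/2}
DBinv = np.diag(np.sqrt(s[B] / x[B]))

M_hat = DBinv @ A @ D2 @ A.T @ DBinv          # since A_B = I, A_hat = A
sigma = A @ x - beta * mu * (A @ (1.0 / s))
sigma_hat = DBinv @ sigma

r_hat = 1e-3 * rng.standard_normal(m)         # Step 1: inexact solve's residual
z = np.linalg.solve(M_hat, sigma_hat + r_hat)
dy = DBinv @ z                                # Step 2 (A_B^{-T} = I)
v = np.zeros(n); v[B] = DB @ r_hat            # Step 3
ds = -A.T @ dy                                # Step 4
dx = beta * mu / s - x - D2 @ ds - v          # Step 5

assert np.allclose(A @ dx, 0)                 # primal feasibility preserved
assert np.allclose(A.T @ dy + ds, 0)          # dual feasibility preserved
```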
It is noteworthy that this modification technique is similar to the maximum weight basis preconditioning techniques of other works [1, 35]. One major difference is that we modify the NES in an F-IPM setting, whereas others apply preconditioning in I-IPMs. In addition, we preprocess the data once, before starting the IPM, whereas preconditioning requires computing the preconditioner, at \(\mathcal {O}(n^3)\) cost, in each iteration. Thus, we show that the complexity of our approach has better dimension dependence, \(\mathcal {O}(n^{2.5})\), whereas the complexity of other infeasible approaches with preconditioning has \(\mathcal {O}(n^{5})\) dimension dependence. In Section 5, we explore how quantum computing can speed up the preconditioning part.
Lemma 3.1.
For the Newton direction \((\widetilde{\Delta x}{}^{(k)},\widetilde{\Delta y}{}^{(k)},\widetilde{\Delta s}{}^{(k)})\), we have
\begin{equation} \begin{aligned}A\widetilde{\Delta x}{}^{(k)}&=0,\\ A^T \widetilde{\Delta y}{}^{(k)} +\widetilde{\Delta s}{}^{(k)}&=0,\\ X^{(k)}\widetilde{\Delta s}{}^{(k)}+S^{(k)}\widetilde{\Delta x}{}^{(k)}&=\beta \mu ^{(k)} e-X^{(k)}s^{(k)}+{r^{\prime }}^{(k)}, \end{aligned} \end{equation}
(4)
where \({r^{\prime }}^{(k)}=-S^{(k)}v^{(k)}\).
Proof.
For the Newton direction \((\widetilde{\Delta x}{}^{(k)},\widetilde{\Delta y}{}^{(k)},\widetilde{\Delta s}{}^{(k)})\), one can verify that \(\hat{M}^{(k)}\tilde{z}^{(k)}=\hat{\sigma }^{(k)}+\hat{r}^{(k)}\) implies
\begin{align*} M^{(k)}\widetilde{\Delta y}{}^{(k)}&=\sigma ^{(k)}+A_{\hat{B}}D_{\hat{B}}^{(k)}\hat{r}^{(k)}. \end{align*}
For the first equation of (4), we can write
\begin{align*} A\widetilde{\Delta x}{}^{(k)} &= A(\beta \mu ^{(k)} (S^{(k)})^{-1}e-x^{(k)}-(S^{(k)})^{-1}X^{(k)}\widetilde{\Delta s}{}^{(k)}-v^{(k)})\\ &= A(\beta \mu ^{(k)} (S^{(k)})^{-1}e-x^{(k)}-(S^{(k)})^{-1}X^{(k)}(-A^T\widetilde{\Delta y}{}^{(k)})-v^{(k)})\\ &= \beta \mu ^{(k)} A(S^{(k)})^{-1}e-Ax^{(k)}+A(S^{(k)})^{-1}X^{(k)} A^T\widetilde{\Delta y}{}^{(k)}-Av^{(k)}\\ &= \beta \mu ^{(k)} A(S^{(k)})^{-1}e-Ax^{(k)}+\sigma ^{(k)}+A_{\hat{B}}D_{\hat{B}}^{(k)}\hat{r}^{(k)}-Av^{(k)}\\ &= 0. \end{align*}
The second and third equations of (4) are obtained from Steps 4 and 5. □
To have a convergent IPM, we need \(\Vert {r^{\prime }}^{(k)}\Vert _{\infty }\le \eta \mu ^{(k)}\), where \(0\le \eta \lt 1\) is an enforcing parameter. The next lemma gives an analogous residual bound for the modified NES.
Lemma 3.2.
For the Newton direction \((\widetilde{\Delta x}{}^{(k)},\widetilde{\Delta y}{}^{(k)},\widetilde{\Delta s}{}^{(k)})\), if the residual \(\Vert \hat{r}^{(k)}\Vert _{\infty }\le \frac{\eta }{\sqrt {1+\theta }} \sqrt {\mu ^{(k)}}\), then \(\Vert {r^{\prime }}^{(k)}\Vert _{\infty }\le \eta \mu ^{(k)}\).
Proof.
As \((x^{(k)},y^{(k)},s^{(k)})\in \mathcal {N}(\theta)\), we have
\begin{equation*} (1-\theta)\mu ^{(k)}\le x_is_i\le (1+\theta)\mu ^{(k)}. \end{equation*}
Thus,
\begin{equation*} \left\Vert \left(S^{(k)}_{\hat{B}}\right)^{1/2}\left(X_{\hat{B}}^{(k)}\right)^{1/2}\right\Vert _{\infty }=\max _{i\in \hat{B}} \sqrt {x_is_i}\le \max _{i=1}^n \sqrt {x_is_i}\le \sqrt {(1+\theta)\mu ^{(k)}}. \end{equation*}
Now we can conclude that
\begin{align*} \Vert {r^{\prime }}^{(k)}\Vert _{\infty } &= \left\Vert S^{(k)}_{\hat{B}}v^{(k)}_{\hat{B}}\right\Vert _{\infty }= \left\Vert S^{(k)}_{\hat{B}}D_{\hat{B}}^{(k)}\hat{r}^{(k)}\right\Vert _{\infty }\le \left\Vert \left(S^{(k)}_{\hat{B}}\right)^{1/2}\left(X_{\hat{B}}^{(k)}\right)^{1/2}\right\Vert _{\infty }\Vert \hat{r}^{(k)}\Vert _{\infty } \\ &\le \sqrt {(1+\theta)\mu ^{(k)}}\Vert \hat{r}^{(k)}\Vert _{\infty }\le \eta \mu ^{(k)}. \end{align*}
 □
In the next subsection, we analyze the complexity of QLSA+QTA to solve the MNES.

3.1 Solving MNES by QLSA+QTA

Before discussing QLSAs, we should mention that the \(\mathinner {|{z}\rangle }\) notation represents the quantum state corresponding to the unit classical vector z. The basis state \(\mathinner {|{i}\rangle }\) denotes a column vector of dimension p with value 1 in coordinate i and 0 in all other coordinates [10].
The first QLSA was proposed by Harrow et al. [22] and is known as the HHL algorithm. It takes as input a sparse, Hermitian matrix M, and prepares a state \(\mathinner {|{z}\rangle } = \mathinner {|{M^{-1} \sigma }\rangle }\) that is proportional to the solution of the linear system \(Mz = \sigma\). The complexity of the HHL algorithm is \(\tilde{\mathcal {O}}_{d} ({\tau ^2\kappa _M^2}/{\epsilon })\), where d is the dimension of the problem, \(\tau\) is the maximum number of nonzeros found in any row of M, and \(\epsilon\) is the target bound on the error. This complexity bound shows a speedup with respect to the dimension, although it depends on an upper bound for the condition number \(\kappa _M\) of the coefficient matrix. Generally, the HHL algorithm starts from the quantum state \(\mathinner {|{\sigma }\rangle }\) and performs an eigendecomposition of M through the following general steps:
(1)
Represent \(\sigma\) as a quantum state \(\mathinner {|{\sigma }\rangle }=\sum _{i=1}^{d} \sigma _i \mathinner {|{i}\rangle }\).
(2)
Apply the conditional Hamiltonian evolution \(e^{iMt}\) to \(\mathinner {|{\sigma }\rangle }\) for a superposition of different times t. With the phase estimation algorithm, we can decompose \(\mathinner {|{\sigma }\rangle }\) in the eigenbasis of M and find the corresponding eigenvalues \(\lambda _j\). After this stage, the state of the system is close to \(\sum _{j=1}^{d} \beta _j \mathinner {|{u_j}\rangle } \mathinner {|{\lambda _j}\rangle }\), where the \(u_j\) are the eigenvectors of M and \(\mathinner {|{\sigma }\rangle } = \sum _{j=1}^{d}\beta _j \mathinner {|{u_j}\rangle }\).
(3)
By uncomputing the \(\mathinner {|{\lambda _j}\rangle }\) register, we can invert the eigenvalues and prepare the solution state
\begin{equation*} \sum _{j=1}^{d} \beta _j \lambda _{j}^{-1}\mathinner {|{u_j}\rangle } = \mathinner {|{z}\rangle }. \end{equation*}
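These three steps can be mimicked classically to see what the algorithm computes (a NumPy sketch of the underlying linear algebra with random illustrative data, not a quantum implementation):

```python
import numpy as np

rng = np.random.default_rng(0)

# Random Hermitian (here: real symmetric) matrix M and right-hand side sigma
d = 4
M = rng.standard_normal((d, d))
M = (M + M.T) / 2
sigma = rng.standard_normal(d)
sigma /= np.linalg.norm(sigma)  # QLSAs work with the unit state |sigma>

# Step 2, classically: eigendecomposition M = sum_j lambda_j u_j u_j^T and
# expansion of sigma in the eigenbasis, sigma = sum_j beta_j u_j
lam, U = np.linalg.eigh(M)
beta = U.T @ sigma

# Step 3: invert the eigenvalues and recombine, z = sum_j (beta_j / lambda_j) u_j
z = U @ (beta / lam)

assert np.allclose(M @ z, sigma)  # z is proportional to M^{-1} sigma
```

On a quantum device, the decomposition and eigenvalue inversion are performed in superposition via phase estimation and controlled rotations, which is where the speedup in d originates; the classical sketch above costs O(d^3), precisely what the HHL algorithm avoids.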
Following a number of improvements to the HHL algorithm [2, 10, 41, 43], the current state-of-the-art QLSA is attributed to Chakraborty et al. [9], who use variable-time amplitude estimation and so-called block-encoded matrices, whereas the HHL algorithm uses the sparse-encoding model [22].
The block-encoding model was formalized in the work of Low and Chuang [27], and it assumes that one has access to unitaries that store the coefficient matrix in their top-left block:
\begin{equation*} U = \begin{pmatrix} M/\psi & \cdot \\ \cdot & \cdot \end{pmatrix}, \end{equation*}
where \(\psi \ge \Vert M \Vert\) is a normalization factor chosen to ensure that U has operator norm at most 1. Assuming access to QRAM, the QLSA of Chakraborty et al. [9] has \(\tilde{\mathcal {O}}_{d,\kappa _M, \frac{1}{\epsilon } }(\kappa _M \psi)\) complexity, where \(\psi =\Vert M\Vert _F\). For the details of the QLSA and its complexity analysis, the reader is referred to other works [4, 9, 17].
If the linear system of equations has the normal equation form, i.e., \(EE^T z= E\sigma\) for a rectangular matrix E, one can use the block-encoding of E, apply singular value inversion of E to \(\sigma\), and prepare the solution state \(\mathinner {|{E^{\star }\sigma }\rangle }\), where \(E^{\star }=(EE^T)^{-1}E\) is the (Moore–Penrose) pseudoinverse of \(E^T\). In this framework, we can avoid the matrix-matrix product, attaining a total query complexity of \(\tilde{\mathcal {O}}_{d,\kappa _E, \frac{1}{\epsilon } }(\kappa _E \Vert E\Vert _F)\) [9]. Thus, by singular value inversion, we can improve the complexity of QLSA for normal equations from \(\kappa _M=\mathcal {O}(\kappa _E^2)\) to \(\kappa _E\) and from \(\Vert M\Vert _F\) to \(\Vert E\Vert _F\). As the proposed (MNES) has this favorable structure, we can use this framework to obtain better complexity than previous QIPMs.
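The source of this saving is easy to verify numerically: the singular values of \(M=EE^T\) are the squares of those of E, so inverting E directly halves the exponent on the condition number. A small NumPy sketch with random illustrative data:

```python
import numpy as np

rng = np.random.default_rng(1)
m, n = 5, 12
E = rng.standard_normal((m, n))   # full row rank almost surely
sigma_vec = rng.standard_normal(n)

M = E @ E.T
kappa_E = np.linalg.cond(E)       # ratio of extreme singular values of E
kappa_M = np.linalg.cond(M)       # singular values of M are those of E, squared

assert np.isclose(kappa_M, kappa_E**2, rtol=1e-8)

# Solving E E^T z = E sigma is applying (E E^T)^{-1} E, i.e., pinv(E^T)
z_normal = np.linalg.solve(M, E @ sigma_vec)
z_pinv = np.linalg.pinv(E.T) @ sigma_vec
assert np.allclose(z_normal, z_pinv)
```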
While QLSAs provide a quantum state proportional to the solution, it is not possible to extract the classical solution by a single measurement; Quantum Tomography Algorithms (QTAs) are needed to extract the classical solution. Several works have improved QTAs, and the best QTA [39] has \(\mathcal {O}(\frac{d\varrho }{\epsilon })\) complexity, where \(\varrho\) is a bound for the norm of the solution. The direct use of the QLSA from Chakraborty et al. [9] and the QTA by Apeldoorn et al. [39] costs \(\tilde{\mathcal {O}}_{d,\kappa _M, \frac{1}{\epsilon } }(d\kappa _M^2 \Vert M\Vert _F \frac{\Vert \sigma \Vert }{\epsilon })\). Mohammadisiahroudi et al. [29] used an IR approach employing limited-precision QLSA+QTA with \(\tilde{\mathcal {O}}_{d,\kappa _M, \frac{\Vert \sigma \Vert }{\epsilon } }(d\Vert M\Vert _F\kappa _M^2)\) complexity and \(\tilde{\mathcal {O}}_{ \frac{\Vert \sigma \Vert }{\epsilon } }(d^2)\) classical arithmetic operations.
Theorem 3.3.
Using the linear solver of Mohammadisiahroudi et al. [29], the (MNES) system can be built and solved to obtain a solution \({\widetilde{z}}^{(k)}\) satisfying \(\Vert \hat{r}^{(k)}\Vert \le \frac{\eta }{\sqrt {1+\theta }}\sqrt {\mu ^{(k)}}\) with at most \(\tilde{\mathcal {O}}_{m,\kappa _E^{(k)}, \frac{\Vert \hat{\sigma }^{(k)}\Vert }{\mu ^{(k)}} }(m(\kappa _E^{(k)})^2 \Vert E^{(k)}\Vert _F)\) complexity and \(\tilde{\mathcal {O}}_{ \frac{\Vert \hat{\sigma }^{(k)}\Vert }{\mu ^{(k)}} }(mn)\) classical arithmetic operations, where \(E^{(k)}=(D^{(k)}_{\hat{B}})^{-1}\hat{A}D^{(k)}\) and \(\kappa _E^{(k)}\) is the condition number of \(E^{(k)}\).
Proof.
Explicitly forming the (MNES) system on a classical computer would require matrix multiplications costing \(\mathcal {O}(m^2n)\) arithmetic operations. However, we can write (MNES) as
\begin{equation*} E^{(k)}(E^{(k)})^T \tilde{z}{}^{(k)}=\hat{\sigma }^{(k)}=E^{(k)}((X^{(k)})^{1/2}(S^{(k)})^{1/2}e-\beta \mu ^{(k)}(X^{(k)})^{-1/2}(S^{(k)})^{-1/2}e), \end{equation*}
where \(E^{(k)}=(D_{\hat{B}}^{(k)})^{-1}\hat{A}D^{(k)}\). Calculating \(E^{(k)}\) and \(\hat{\sigma }^{(k)}\) requires just \(\mathcal {O}(mn)\) arithmetic operations. Then, we can use the quantum linear system solver of Mohammadisiahroudi et al. [29] with \(\tilde{\mathcal {O}}_{m,\kappa _E^{(k)}, \frac{\Vert \hat{\sigma }^{(k)}\Vert }{\mu ^{(k)}} }(m(\kappa _E^{(k)})^2 \Vert E^{(k)}\Vert _F)\) complexity and \(\tilde{\mathcal {O}}_{ \frac{\Vert \hat{\sigma }^{(k)}\Vert }{\mu ^{(k)}} }(mn)\) classical arithmetic operations. □
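The \(\mathcal {O}(mn)\) cost follows because \(D^{(k)}\) and \((D_{\hat{B}}^{(k)})^{-1}\) are diagonal, so they act as column and row scalings, respectively. A NumPy sketch with hypothetical iterates, where the basis \(\hat{B}\) is taken to be the first m indices purely for illustration:

```python
import numpy as np

rng = np.random.default_rng(2)
m, n = 4, 10
A_hat = rng.standard_normal((m, n))
x = rng.uniform(0.5, 2.0, n)        # hypothetical interior iterate x^(k) > 0
s = rng.uniform(0.5, 2.0, n)        # hypothetical dual slack s^(k) > 0
beta, mu = 0.9, float(x @ s) / n
B_hat = np.arange(m)                # hypothetical basis indices

d = np.sqrt(x / s)                  # diagonal of D^(k)

# E^(k) = (D_Bhat)^{-1} A_hat D^(k): one column and one row scaling, O(mn)
E = (A_hat * d) / d[B_hat][:, None]

# sigma_hat^(k) = E ((XS)^{1/2} e - beta mu (XS)^{-1/2} e), another O(mn)
w = np.sqrt(x * s)
sigma_hat = E @ (w - beta * mu / w)

# Verification against explicit matrix products (O(mn^2), for checking only)
E_ref = np.linalg.inv(np.diag(d[B_hat])) @ A_hat @ np.diag(d)
assert np.allclose(E, E_ref)
assert np.allclose(sigma_hat, E_ref @ (w - beta * mu / w))
```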
We should remark that, in practice, the error of the solution obtained by QLSA+QTA has several sources. Part of the error comes from the algorithmic errors of the QLSA and QTA. Another part has a physical source, namely noise, especially on current NISQ devices. In the theoretical analysis of QLSAs, we address the algorithmic error and assume fault-tolerant quantum computers on which an error correction procedure handles the physical noise. Nonetheless, the proposed framework is designed to be very robust against errors, so it may also tolerate some level of hardware noise in practice. This is further discussed in Section 6.
In the next section, we apply the proposed modification of NES to develop an IF-QIPM.

4 The IF-QIPM Method

We develop the IF-QIPM using the modified NES with the short-step version of IPMs.
As the IF-QIPM is an F-IPM, it requires an initial feasible interior solution to start from. It is well known in the literature [38] that this is not an issue, as one can embed a LOP in a homogeneous self-dual embedding formulation for which the all-one vector is a feasible interior solution. In practice, we also apply the IF-QIPM to the self-dual embedding formulation.
In the next section, we prove the convergence of the IF-QIPM and analyze its complexity.

4.1 Convergence Analysis

To prove the convergence of the IF-QIPM, we use the analysis of the IF-QIPM in the work of Mohammadisiahroudi et al. [32]. The only difference is the choice of the Newton system: Mohammadisiahroudi et al. [32] use the OSS, whereas in the current work we propose the use of the MNES; however, both compute \((\Delta x, \Delta y, \Delta s)\) such that
\begin{align*} A\Delta x&=0,\\ A^T \Delta y +\Delta s&=0,\\ X\Delta s+S\Delta x&=\beta \mu e-X^{(k)}s^{(k)}+r^{\prime }, \end{align*}
where \(\Vert r^{\prime }\Vert \le \frac{\eta }{\sqrt {1+\theta }} \sqrt {\mu }\).
Next we provide relevant theory for our IF-QIPM in Lemmas 4.1 and 4.2 and Theorem 4.3. For the relevant proofs, we refer to Lemmas 3.1 and 3.2 and Theorem 2.6, respectively, in the work of Mohammadisiahroudi et al. [32].
Lemma 4.1.
Let the step \((\Delta x,\Delta y, \Delta s)\) be calculated from (MNES) in each iteration of the IF-IPM. Then
\begin{align*} \Delta x^T\Delta s&=0,\\ (x+\Delta x)^T(s+\Delta s)&\le \left(\beta +\frac{\eta }{\sqrt {1+\theta }}\right)x^Ts,\\ (x+\Delta x)^T(s+\Delta s)&\ge \left(\beta -\frac{\eta }{\sqrt {1+\theta }}\right)x^Ts. \end{align*}
Now, using the results of Lemma 4.1, we can show in Lemma 4.2 that the iterates of the IF-QIPM remain in the neighborhood of the central path.
Lemma 4.2.
Let \((x^0,s^0,y^0)\in \mathcal {N}(\theta)\) for a given \(\theta \in [0,1)\), then \((x^{(k)},s^{(k)},y^{(k)})\in \mathcal {N}(\theta)\) for any \(k\in \mathbb {N}\).
Based on Lemma 4.2, the IF-IPM remains in the neighborhood of the central path, and it converges to the optimal solution if \(\mu ^{(k)}\) converges to zero. In Theorem 4.3, we prove that the algorithm reaches a \(\zeta\)-optimal solution after \(\mathcal {O}(\sqrt {n}\log (\frac{{\mu }_0}{\zeta }))\) iterations.
Theorem 4.3.
The sequence \(\mu ^{(k)}\) converges to zero linearly, and we have \(\mu ^{(k)}\le \zeta\) after \(\mathcal {O}(\sqrt {n}\log (\frac{{\mu }_0}{\zeta }))\) iterations.
This demonstrates that the IF-IPM achieves the best-known iteration complexity, and the proof holds for any values satisfying the following two conditions:
\begin{align} \beta &\le \left(1-\frac{\eta +0.01}{\sqrt {n}}\right), \end{align}
(5)
\begin{align} \frac{\theta ^2 +n(1-\beta)^2+\eta ^2}{2^{3/2}(1-\theta)}+ \eta &\le \theta \left(\beta -\frac{\eta }{\sqrt {n}}\right). \end{align}
(6)
It is not hard to check that \(\theta =0.7\) and \(\eta =0.1\) satisfy these conditions.
An exact solution can be calculated by rounding [44], provided that we terminate with \(\mu ^{(k)}\le \mathcal {O}(2^{- L}).\) Accordingly, the IF-IPM may require \(\mathcal {O}(\sqrt {n}L)\) iterations to determine an exact optimal solution; for more details, see the work of Wright [44, Chapter 3].

4.2 Complexity

The next theorem characterizes the total time complexity of the proposed IF-QIPM for finding a \(\zeta\)-optimal solution of a LOP.
Theorem 4.4.
The proposed IF-QIPM of Algorithm 2 determines a \(\zeta\)-optimal solution using at most
\begin{equation*} \tilde{\mathcal {O}}_{m,\kappa _{\hat{A}}, \Vert \hat{A}\Vert ,\Vert \hat{b}\Vert ,\mu ^0, \frac{\omega }{\zeta }}\left(\sqrt {n}m\kappa _{\hat{A}}^2\Vert \hat{A}\Vert _F\frac{\omega ^5}{\zeta ^5}\right) \end{equation*}
queries to the QRAM and \(\tilde{\mathcal {O}}_{\mu ^0, \frac{1}{\zeta }}(n^{1.5}m)\) classical arithmetic operations.
Proof.
The complexity analysis of the different parts of the IF-QIPM (Algorithm 2) is outlined as follows:
After \(\mathcal {O}(\sqrt {n}\log (\frac{\mu ^0}{\zeta }))\) iterations, the IF-QIPM obtains a \(\zeta\)-optimal solution.
In Theorem 4.2 of Mohammadisiahroudi et al. [31], the norm and condition number bounds of MNES are derived as
\begin{equation*} \Vert \sigma ^{(k)}\Vert =\mathcal {O}\left(\frac{\Vert \hat{A}\Vert +\Vert \hat{b}\Vert }{\zeta }\right), \quad \Vert E^{(k)}\Vert _F=\mathcal {O}\left(\frac{\omega }{\zeta }\Vert \hat{A}\Vert _F\right),\quad \kappa ^{(k)}_E=\mathcal {O}\left(\frac{\omega ^2}{\zeta ^{2}}\kappa _{\hat{A}}\right), \end{equation*}
where \(\omega\) is a bound on \(\Vert x^*,s^*\Vert _{\infty }\) for all \((x^*,y^*,s^*)\in \mathcal {PD}^*\).
Applying Theorem 3.3, the complexity of the quantum subroutine to solve the MNES is
\begin{equation*} \tilde{\mathcal {O}}_{m,\kappa _{\hat{A}}, \Vert \hat{A}\Vert ,\Vert \hat{b}\Vert ,\frac{\omega }{\zeta }}\left(m\kappa _{\hat{A}}^2\Vert \hat{A}\Vert _F\frac{\omega ^5}{\zeta ^5}\right). \end{equation*}
In each iteration of the IF-QIPM, we need to build \(E^{(k)}\) classically and load it into QRAM with \(\mathcal {O}(mn)\) complexity. In addition, some classical matrix products are needed at \(\mathcal {O}(mn)\) cost. Thus, the classical cost per iteration is \(\mathcal {O}(mn)\).
Thus, in the worst case, the IF-QIPM requires
\begin{equation*} \tilde{\mathcal {O}}_{m,\kappa _{\hat{A}}, \Vert \hat{A}\Vert ,\Vert \hat{b}\Vert ,\mu ^0, \frac{\omega }{\zeta }}\left(\sqrt {n}m\kappa _{\hat{A}}^2\Vert \hat{A}\Vert _F\frac{\omega ^5}{\zeta ^5}\right) \end{equation*}
accesses to the QRAM and \(\tilde{\mathcal {O}}_{\mu ^0, \frac{1}{\zeta }}(n^{1.5}m)\) classical arithmetic operations.
 □

4.3 Improving the Error Dependence of the IF-QIPM

To obtain an exact optimal solution, the time complexity contains the exponential term \(2^L\), due to the polynomial dependence on \(\frac{1}{\zeta }\). To address this problem, we can fix \(\zeta =10^{-2}\) and improve the precision by IR in \(\mathcal {O}(L)\) iterations [31]. The first IR method for LO was proposed by Gleixner et al. [19]. Here, we adopt the IR of Mohammadisiahroudi et al. [32], which is designed specifically for the IF-QIPM, and present it as Algorithm 3.
First, we scale the LOP by
\begin{equation*} \rho \leftarrow \max \left\lbrace \Vert A\Vert _F, \Vert b \Vert , \Vert c \Vert \right\rbrace , \end{equation*}
then we solve the scaled problem \((\widetilde{A}, \tilde{b}, \tilde{c}) \leftarrow \frac{1}{\rho } (A,b,c)\) instead of the original problem. Let \((x^*, y^*, s^*)\) be a \(\frac{\zeta }{\rho }\)-optimal solution for the scaled LOP \((\widetilde{A}, \tilde{b}, \tilde{c})\). It is easy to verify that \((x^*, y^*, \rho s^*)\) is a \(\zeta\)-optimal solution for the original problem. With this scaling, we improve the complexity from polynomial to polylogarithmic dependence on the norm of the input data.
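The scaling step and the recovery of a solution of the original problem can be sketched as follows (NumPy; the data are random and purely illustrative):

```python
import numpy as np

def scale_lop(A, b, c):
    """Scale the LOP data (A, b, c) so that all data norms are at most 1."""
    rho = max(np.linalg.norm(A, 'fro'), np.linalg.norm(b), np.linalg.norm(c))
    return A / rho, b / rho, c / rho, rho

# Hypothetical data with large norms
rng = np.random.default_rng(3)
A = 10 * rng.standard_normal((3, 6))
b = 10 * rng.standard_normal(3)
c = 10 * rng.standard_normal(6)

At, bt, ct, rho = scale_lop(A, b, c)
assert max(np.linalg.norm(At, 'fro'),
           np.linalg.norm(bt),
           np.linalg.norm(ct)) <= 1.0

# Recovery: if (x*, y*, s*) is (zeta/rho)-optimal for (At, bt, ct), then
# (x*, y*, rho * s*) is zeta-optimal for (A, b, c), since the duality gap
# scales as x*^T (rho s*) = rho (x*^T s*) <= rho (zeta / rho) = zeta.
```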
In the IR method, we initially solve the LOP with limited precision \(\hat{\zeta }\) and then iteratively improve the precision by solving the following refining problem, again with limited precision \(\hat{\zeta }\):
\begin{equation} \begin{aligned}\min _{\hat{x}}\ \nabla s^T&\hat{x}, \\ {\rm s.t. }\;\; \tilde{A}\hat{x}&= 0, \\ \hat{x}&\ge -\nabla x, \end{aligned} \qquad \qquad \begin{aligned}\max _{\hat{y},\hat{s}} \ -\nabla x^T\hat{s},\ \ & \\ {\rm s.t. }\;\; \tilde{A}^T\hat{y}\ +&\ \hat{s}= \nabla s,\\ &\hat{s}\ge 0, \end{aligned} \end{equation}
(7)
where \(\nabla \gt 1\) is a scaling factor. By changing variables, one can easily reformulate this problem to a standard LOP. The next lemma demonstrates the core idea of the IR technique.
Lemma 4.5.
If \((\hat{x},\hat{y},\hat{s})\) is a \(\hat{\zeta }\)-optimal solution for refining problem (7), then \((x^r, y^r, s^r)\) is a \(\frac{\hat{\zeta }}{\nabla ^2}\)-optimal solution for LOP \((A,b,c)\), where \(x^r=x+\frac{1}{\nabla } \hat{x}\), \(y^r=y+\frac{1}{\nabla } \hat{y}\), and \(s^r=c-A^Ty^r\).
The proof for Lemma 4.5 is analogous to the one of Lemma 6.1 in the work of Mohammadisiahroudi et al. [32]. Based on this result, we develop the IR method as outlined in Algorithm 3.
Theorem 4.6.
The proposed IR-IF-QIPM of Algorithm 3 produces a \(\zeta\)-optimal solution with at most \(\mathcal {O}(\tfrac{\log (\rho /\zeta)}{\log (1/\hat{\zeta })})\) calls to the IF-QIPM with precision \(\hat{\zeta }\).
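The call count in Theorem 4.6 can be illustrated with a short computation (Python; `refinement_calls` is a hypothetical helper, and the modeling assumption that each fixed-precision call shrinks the duality gap by a factor \(\hat{\zeta }\) follows from Lemma 4.5 with the scaling factor chosen as \(\nabla ^2 = 1/\text{gap}\)):

```python
import math

def refinement_calls(rho, zeta, zeta_hat):
    """Hypothetical helper: number of fixed-precision IF-QIPM calls.

    Assuming each call at precision zeta_hat multiplies the current duality
    gap by zeta_hat, reaching a zeta-optimal solution of a problem with data
    norm rho takes ceil(log(rho/zeta) / log(1/zeta_hat)) calls, matching the
    bound of Theorem 4.6.
    """
    return math.ceil(math.log(rho / zeta) / math.log(1.0 / zeta_hat))

# Data norm rho = 1e3, target precision zeta = 1e-8, inner precision 1e-2:
print(refinement_calls(1e3, 1e-8, 1e-2))  # -> 6 low-precision solves
```

The point of the theorem is that six calls at the cheap precision \(10^{-2}\) replace a single solve at precision \(10^{-8}\), whose cost would otherwise scale polynomially in \(10^{8}\).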
Theorem 4.7.
The total time complexity of finding an exact optimal solution with the IR-IF-QIPM is
\begin{equation*} \tilde{\mathcal {O}}_{m,\kappa _{\hat{A}}, \Vert \hat{A}\Vert ,\Vert \hat{b}\Vert ,\mu ^0}\left(\sqrt {n}m\kappa _{\hat{A}}^2\omega ^5 L\right) \end{equation*}
with \(\tilde{\mathcal {O}}_{\mu ^0}(n^{1.5}mL)\) classical arithmetic operations.
The proof is similar to the proof of Mohammadisiahroudi et al. [32, Theorem 6.2].
In the next section, we investigate how preconditioning can help to mitigate the effect of the condition number with respect to \(\kappa _{\hat{A}}\) and \(\omega\).

5 Iteratively Refined IF-QIPM Using Preconditioned NES

To mitigate the impact of the condition number, we need to analyze how the matrices \(M^{(k)}\) and \(E^{(k)}\) evolve through the iterations. As in Theorem 6.8 of Wright [44] or Lemma I.42 of Roos et al. [38], considering the optimal partition \(\mathcal {B}\) and \(\mathcal {N}\), we have
\begin{equation} \frac{x_i^{(k)}}{s_i^{(k)}}=\mathcal {O}\left(\frac{\hat{C}}{\mu ^{(k)}}\right)\rightarrow \infty \text{ for }i\in \mathcal {B}\qquad \text{and} \qquad \frac{x_i^{(k)}}{s_i^{(k)}}=\mathcal {O}\left(\frac{\mu ^{(k)}}{\hat{C}}\right)\rightarrow 0\text{ for }i\in \mathcal {N}, \end{equation}
(8)
where \(\hat{C}\) is a constant dependent on the LOP’s parameters. For a more detailed analysis, see pages 121–124 of the work of Wright [44]. To analyze the condition number of NES, we have
\begin{equation*} M^{(k)}=A(D^{(k)})^2A^T=A_{\mathcal {B}}\left(D_{\mathcal {B}}^{(k)}\right)^2A_{\mathcal {B}}^T+A_{\mathcal {N}}\left(D_{\mathcal {N}}^{(k)}\right)^2A_{\mathcal {N}}^T. \end{equation*}
As the sequence of iterates converges to the optimal set, it is easy to see that \(A_{\mathcal {N}}(D_{\mathcal {N}}^{(k)})^2A_{\mathcal {N}}^T\rightarrow 0\). Thus, the dominant component is the \(A_{\mathcal {B}}(D_{\mathcal {B}}^{(k)})^2A_{\mathcal {B}}^T\) term. If \(A_{\mathcal {B}}\) includes a basis of A, i.e., \(\mbox{rank}(A_{\mathcal {B}})=m\), then the condition number of \(M^{(k)}\) converges to a constant depending on \(\frac{\max _{i\in {\mathcal {B}}} x^2_i}{\min _{i\in {\mathcal {B}}} x^2_i}\). This implies that when the problem is primal nondegenerate, the condition number converges to a constant, although that constant may be as large as \(\mathcal {O}(2^{2L})\) in the worst case. If \(A_{\mathcal {B}}\) has rank less than m, then the condition number of \(M^{(k)}\) goes to infinity at the rate of \(\frac{1}{\mu ^2}\), which can be addressed by IR. In the proposed IF-QIPM, we initially choose the basis \(\bar{B}\). If in each iteration of the IF-QIPM we choose the basis \(\hat{B}^{(k)}\) corresponding to the m largest ratios \(\frac{x_i^{(k)}}{s_i^{(k)}}\), then by modifying MNES we can precondition it too. We refer the reader to the work of Monteiro et al. [34] for an algorithm for determining the basis \(\hat{B}^{(k)}\). Suppose that the LOP is nondegenerate. As the trajectory generated by the IF-QIPM converges to the optimal solution, we have
\begin{align*} \hat{B}^{(k)} & \rightarrow B\\ A_{\mathcal {N}}\left(D_{\mathcal {N}}^{(k)}\right)^2A_{\mathcal {N}}^T&\rightarrow 0\\ M^{(k)} &\rightarrow A_B\left(D_{\mathcal {B}}^{(k)}\right)^2A_{\mathcal {B}}^T\\ \hat{M}^{(k)} = \left(D_{\hat{B}}^{(k)}\right)^{-1}A_{\hat{B}}^{-1}M^{(k)}\left(\left(D_{\hat{B}}^{(k)}\right)^{-1}A_{\hat{B}}^{-1}\right)^T &\rightarrow I. \end{align*}
Thus, \((D_{\hat{B}}^{(k)})^{-1}A_{\hat{B}}^{-1}\) is a precondition for \(M^{(k)}\). When the LOP is degenerate, which is the more general setting, Theorem 2.2.3 of O’Neal [37] asserts that the condition number of \(\hat{M}^{(k)}\) is bounded by \((\bar{\chi })^2,\) where
\begin{equation} \bar{\chi } = \max \left\lbrace \Vert A_B^{-1} A\Vert _F :\ A_B {\rm ~is~a~basis~of~} A\right\rbrace . \end{equation}
(9)
Furthermore, based on Lemma 2.2.2 of O’Neal [37], we have \(\Vert \hat{M}^{(k)}\Vert _F=\mathcal {O}(\bar{\chi })\).
To utilize this preconditioning method within our IF-QIPM framework, we adopt the following procedure:
Step 1: Choose basis \(\hat{B}^{(k)}\) as the indices of the m largest ratios \(\frac{x_i^{(k)}}{s_i^{(k)}}\) such that the columns of \(A_{\hat{B}^{(k)}}\) are linearly independent.
Step 2: Build block-encoding of \((D_{\hat{B}}^{(k)})^{-1}\) and \(A_{\hat{B}}\).
Step 3: Calculate \(A_{\hat{B}}^{-1}\) on the quantum computer.
Step 4: Build \(\hat{M}^{(k)}=(D_{\hat{B}}^{(k)})^{-1}A_{\hat{B}}^{-1}M^{(k)}((D_{\hat{B}}^{(k)})^{-1}A_{\hat{B}}^{-1})^T\) on the quantum computer.
Step 5: Find \(\tilde{z}^{(k)}\) such that \(\hat{M}^{(k)}\tilde{z}^{(k)}=\hat{\sigma }^{(k)}+\hat{r}^{(k)}\) and \(\Vert \hat{r}^{(k)}\Vert \le \frac{\eta }{\sqrt {1+\theta }}\sqrt {\mu ^{(k)}}\) on the quantum computer.
Step 6: Calculate \(\widetilde{\Delta y}{}^{(k)}=((D_{\hat{B}}^{(k)})^{-1}A_{\hat{B}}^{-1})^T\tilde{z}^{(k)}\).
Step 7: Calculate \(v^{(k)}=(v^{(k)}_{\hat{B}},v^{(k)}_{\hat{N}})=(D_{\hat{B}}^{(k)}\hat{r}^{(k)},0)\).
Step 8: Calculate \(\widetilde{\Delta s}{}^{(k)}=c-A^Ty^{(k)}-s^{(k)}-A^T\widetilde{\Delta y}{}^{(k)}\).
Step 9: Calculate \(\widetilde{\Delta x}{}^{(k)}=\beta \mu ^{(k)} (S^{(k)})^{-1}e-x^{(k)}-(D^{(k)})^2\widetilde{\Delta s}{}^{(k)}-v^{(k)}\).
As we can see, the only change in the Newton step calculation is that the basis \(\hat{B}^{(k)}\) changes at each iteration; thus, we need to calculate \(A_{\hat{B}}^{-1}\) at each iteration. One can verify that the Newton steps calculated with the preceding steps admit the properties of Lemma 3.2. It is straightforward to provide a convergence proof of an IF-QIPM that uses this procedure, since it produces feasible inexact iterates satisfying \(\Vert \hat{r}^{(k)}\Vert \le \frac{\eta }{\sqrt {1+\theta }}\sqrt {\mu ^{(k)}}\). Thus, the iteration complexity of the IF-QIPM using Preconditioned NES (PNES) is \(\mathcal {O}(\sqrt {n}\log (\frac{\mu ^0}{\zeta }))\). However, the cost per iteration is different, as classically calculating \(A_{\hat{B}}^{-1}\) requires \(\mathcal {O}(nm^2)\) arithmetic operations in the worst case. If only a few columns of \(\hat{B}^{(k)}\) change from one iteration to the next, then one can update \(A_{\hat{B}}^{-1}\) by the Sherman-Morrison-Woodbury formula, which is computationally cheaper than a full inversion. Here, we instead apply the inverse of the block-encoding of \(A_{\hat{B}}\) and build a block-encoding of \(\hat{E}^{(k)}=(D_{\hat{B}}^{(k)})^{-1}A_{\hat{B}}^{-1}E^{(k)}\) on a quantum machine. As the complexity of applying the inverse of a block-encoding has a logarithmic dependence on \(\frac{1}{\zeta }\), we can perform that calculation with high precision, i.e., \(\zeta = 2^{-\mathcal {O}(L)}\). Thus, the errors in those steps are negligible, and the analysis of the IF-QIPM in Section 4 remains valid.
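The nine steps above can be traced classically to check the resulting Newton step (a NumPy sketch in which an exact solve with \(\hat{r}^{(k)}=0\) stands in for QLSA+QTA; the feasible iterate is constructed from random, hypothetical data):

```python
import numpy as np

rng = np.random.default_rng(4)
m, n = 3, 7

# Hypothetical strictly feasible primal-dual iterate for data (A, b, c)
A = rng.standard_normal((m, n))
x = rng.uniform(0.5, 2.0, n)
s = rng.uniform(0.5, 2.0, n)
y = rng.standard_normal(m)
b, c = A @ x, A.T @ y + s            # choose data so (x, y, s) is feasible

beta = 0.9
mu = float(x @ s) / n
d2 = x / s                           # diagonal of (D^(k))^2

# Step 1: basis = indices of the m largest x_i/s_i (assumed independent here)
B = np.argsort(-d2)[:m]
A_B = A[:, B]

# Steps 2-4 (classical stand-in for the block-encoded computation):
# preconditioner P = (D_B)^{-1} A_B^{-1} and M_hat = P M P^T
P = np.diag(1.0 / np.sqrt(d2[B])) @ np.linalg.inv(A_B)
M = A @ np.diag(d2) @ A.T
M_hat = P @ M @ P.T

# Step 5: exact solve replaces QLSA+QTA, i.e., residual r_hat = 0
sigma = A @ (x - beta * mu / s)      # NES right-hand side
z = np.linalg.solve(M_hat, P @ sigma)

# Steps 6-9 (with r_hat = 0 the correction v of Step 7 vanishes)
dy = P.T @ z
ds = c - A.T @ y - s - A.T @ dy      # = -A^T dy at a feasible iterate
dx = beta * mu / s - x - d2 * ds

# The computed step is feasible: A dx = 0 and A^T dy + ds = 0
assert np.allclose(A @ dx, 0, atol=1e-8)
assert np.allclose(A.T @ dy + ds, 0, atol=1e-10)
```

Because \(\hat{M}^{(k)}\tilde{z}=P\sigma\) and \(\hat{M}^{(k)}=PM^{(k)}P^T\) with P invertible, Step 6 recovers exactly the NES solution \(\Delta y = P^T\tilde{z}\), which the assertions confirm numerically.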
To estimate the asymptotic scaling of the overall complexity, we begin by analyzing the cost of solving the PNES at each iteration. As discussed in Section 3.1, the PNES is still an NES, so we need to build the block-encoding of \(\hat{E}^{(k)}\); then we can use the singular value transformation of Chakraborty et al. [9] and find the solution of the PNES by QLSA+QTA with \(\tilde{\mathcal {O}}_{m,\kappa _{\hat{E}}^{(k)}, \frac{\Vert \hat{\sigma }^{(k)}\Vert }{\mu ^{(k)}} }(m(\kappa _{\hat{E}}^{(k)})^2 \Vert \hat{E}^{(k)}\Vert _F)\) quantum complexity and \(\tilde{\mathcal {O}}_{ \frac{\Vert \hat{\sigma }^{(k)}\Vert }{\mu ^{(k)}} }(mn)\) classical arithmetic operations. As in IR, we scale the LOP initially, so in our analysis here we assume that \(\Vert A\Vert _F\le 1\).
Proposition 5.1.
Suppose that A and \(A_{\hat{B}}\) are stored in a QRAM data structure. Then, one can prepare a block-encoding of \(\hat{E}^{(k)}\) using \(\widetilde{\mathcal {O}}_{m,n,\frac{1}{\epsilon }}(1)\) accesses to the QRAM and \(\widetilde{\mathcal {O}}(n)\) arithmetic operations.
Proof.
First, observe that we always have classical access to \(x^{(k)}\) and \(s^{(k)}\). We can therefore store the nonzero entries of the matrices \(D^{(k)}\) and \((D_{\hat{B}}^{(k)})^{-1}\) in QRAM using \(\widetilde{\mathcal {O}}_n (n)\) classical operations. From here, the work of Gilyén et al. [17, Lemma 50] asserts that \(\widetilde{\mathcal {O}}_{n, \frac{1}{\xi _D}} (1)\) accesses to QRAM suffice to construct an \((\alpha _{D}, \log (n) + 2, \xi _D)\)-block-encoding of \(D^{(k)}\) and an \((\alpha _{D^{-1}}, \log (n) + 2, \xi _D)\)-block-encoding of \((D_{\hat{B}}^{(k)})^{-1}\). As \(D^{(k)}\) and \((D_{\hat{B}}^{(k)})^{-1}\) are diagonal matrices, they can be efficiently encoded in QRAM, and the normalization parameters \(\alpha _{D^{-1}}\) and \(\alpha _{D}\) have upper bound \(\mathcal {O}(\frac{\omega }{\epsilon })\), as analyzed in the work of Mohammadisiahroudi et al. [31]. Likewise, with A and \(A_{\hat{B}}\) stored in QRAM, invoking the work of Gilyén et al. [17, Lemma 50], we can construct a \((\Vert A \Vert _F , \log (n) + 2, \xi _A)\)-block-encoding of A and a \((\Vert A_{\hat{B}} \Vert _F , \log (n) + 2, \xi _A)\)-block-encoding of \(A_{\hat{B}}\), using \(\widetilde{\mathcal {O}}_{m, n, \frac{1}{\xi _A}} (1)\) accesses to the QRAM.
Having prepared block-encodings of A and \(D^{(k)}\), we can take their product to prepare an \((\alpha _{D} \Vert A \Vert _F , \mathcal {O}(\log (n)), \xi _M)\)-block-encoding of \(M^{(k)}\) as
\begin{equation*} U_A U_{D^2} U_{A^{\top }} = U_{M}. \end{equation*}
Similarly, having prepared a \((\Vert A_{\hat{B}} \Vert _F , \log (n) + 2, \xi _A)\)-block-encoding of \(A_{\hat{B}}\), applying the work of Gilyén [16, Corollary 3.4.13], we can prepare a \((\Vert A_{\hat{B}} \Vert _F \kappa _{A_{\hat{B}}}, \log (n) + 2, \xi _A)\)-block-encoding of \((A_{\hat{B}})^{-1}\). From here, applying the work of Chakraborty et al. [9, Lemma 4], we can take the product of the block-encodings of \((A_{\hat{B}})^{-1}\) and \((D_{\hat{B}}^{(k)})^{-1}\), which yields an \((\alpha _{D^{-1}} \kappa _{A_{\hat{B}}} \Vert A_{\hat{B}} \Vert _F, \mathcal {O}(\log (n)), \xi _P)\)-block-encoding of \(P=(D_{\hat{B}}^{(k)})^{-1}(A_{\hat{B}})^{-1}\).
Observe that we have prepared block-encodings \(U_E\) and \(U_P\) for \(E^{(k)}\) and P such that \(U_P U_E\) corresponds to an \((\alpha _{D^{-1}} \alpha _{D} \kappa _{A_{\hat{B}}} \Vert A \Vert _F \Vert A_{\hat{B}} \Vert _F, \mathcal {O}(\log (n)), \xi _P)\)-block-encoding of \(\hat{E}^{(k)}=PE^{(k)}\), since the product of block-encodings is a block-encoding of the product [9, Lemma 4]. The complexity result follows from observing that constructing the unitary \(U_{\hat{E}}\) has a gate cost of \(\widetilde{\mathcal {O}}_{m,n,\frac{1}{\epsilon }}(1)\), where we have properly set the parameters \(\xi _A\) and \(\xi _D\) such that \(U_{\hat{E}}\) implements \(\hat{E}^{(k)}\) up to error \(\epsilon\). We also needed \(\widetilde{\mathcal {O}}_n (n)\) classical operations to create the QRAM data structures for \(D^{(k)}\) and \((D_{\hat{B}}^{(k)})^{-1}\). The proof is complete. □
Corollary 5.2.
Suppose A, \(A_{\hat{B}}\), \(x^{(k)}\), \(s^{(k)}\), and \(\hat{\sigma }^{(k)}\) are stored in a QRAM data structure and define
\begin{equation*} \alpha := \alpha _{D^{-1}} \alpha _{D}. \end{equation*}
Then, one can obtain an \(\frac{\eta }{\sqrt {1+\theta }} \sqrt {\mu ^{(k)}}\)-precise solution (in \(\ell _{\infty }\)-norm) to the linear system
\begin{equation*} \hat{M}^{(k)}\tilde{z}^{(k)}=\hat{\sigma }^{(k)}, \end{equation*}
using at most
\begin{equation*} {\widetilde{\mathcal {O}}_{m, n, \frac{1}{\epsilon }} \left(m \alpha \kappa _{\hat{E}}^2 \right)} \end{equation*}
QRAM accesses and \(\mathcal {O}(mn)\) arithmetic operations.
Proof.
Using the linear systems algorithm from Mohammadisiahroudi et al. [29] with the subnormalization factor \(\alpha ^2_{D^{-1}} \alpha ^2_{D} \kappa _{A_{\hat{B}}}^2 \Vert A \Vert ^2_F \Vert A_{\hat{B}} \Vert _F^2\) and condition number \(\kappa _{\hat{M}}\) gives the result. □
For both \(\alpha _{D^{-1}}\) and \(\alpha _{D}\), we can use upper bound \(\mathcal {O}(\frac{\omega }{\epsilon })\). Thus, \(\alpha =\mathcal {O}(\frac{\omega ^2}{\epsilon ^2})\). We are now in a position to state the complexity of the Iteratively Refined IF-QIPM (IR-IF-QIPM) using PNES.
Theorem 5.3.
Suppose that the LOP data \(A,b,c\) is stored in QRAM. Then, the IR-IF-QIPM using PNES obtains an \(\epsilon\)-precise solution to the primal-dual LO pair (P)-(D) using at most
\begin{equation*} \widetilde{\mathcal {O}}_{ n, \Vert A\Vert , \Vert b\Vert , \kappa _{A}, \frac{\mu ^0}{\epsilon }} \left(\sqrt {n} m \bar{\chi }^2 \omega ^2 \right) \end{equation*}
QRAM accesses and \(\widetilde{\mathcal {O}}_{\Vert A\Vert , \Vert b\Vert ,\frac{1}{\epsilon }} (mn^{1.5})\) arithmetic operations.
Proof.
The result follows upon adjusting the proof of Theorem 4.4 to account for the result in Corollary 5.2. □
Based on the analysis in the work of O’Neal [37], we have \(\alpha =\mathcal {O}(\omega ^2)\) and \(\kappa _{\hat{E}}=\mathcal {O}(\bar{\chi })\). Thus, we can simplify the quantum complexity to
\begin{equation*} \widetilde{\mathcal {O}}_{ n, \Vert A\Vert , \Vert b\Vert , \kappa _{A}, \frac{\mu ^0}{\epsilon }} \left(\sqrt {n} m \bar{\chi }^2 \omega ^2 \right) \end{equation*}
with \(\widetilde{\mathcal {O}}_{\Vert A\Vert , \Vert b\Vert ,\frac{1}{\epsilon }} (mn^{1.5})\) arithmetic operations.

6 Numerical Experiments

In this section, we present a series of numerical experiments aimed at elucidating the behavior of various QIPMs. Additionally, we compare the impact of employing IR versus preconditioning techniques. It is important to note that our analysis presupposes access to QRAM, yet physical QRAM infrastructure has not been realized. Likewise, QLSAs remain beyond the capabilities of existing quantum hardware.
Consequently, our experiments are conducted using the IBM Qiskit HHL simulator, and our numerical results cannot be extrapolated to gauge the performance of QIPMs and QLSAs on actual quantum hardware. Simulating quantum computers on classical computers is known to be exponentially time consuming, which precludes any empirical time comparison between classical and quantum methodologies at this juncture. Nevertheless, we are able to simulate QLSAs for problems with a limited number of variables and manageable condition numbers.
Our primary focus therefore centers on presenting numerical findings pertaining to QIPMs, and we refrain from presenting QLSA results in this article. Interested readers are referred to the work of Mohammadisiahroudi et al. [29] for comprehensive numerical experiments related to QLSAs. Each of the algorithms discussed in this article has been implemented in Python and is readily accessible in our GitHub repository at https://github.com/QCOL-LU. Our Python package encompasses a versatile array of QIPMs designed to solve linear, semidefinite, and second-order cone optimization problems. To enhance their versatility and efficacy, we have also incorporated IR techniques into both QLSAs and QIPMs. Users have the flexibility to conduct experiments with QIPMs using either classical or quantum linear solvers, with the option to employ preconditioning.
For our experimental setup, we employ the LOP generators described in the work of Mohammadisiahroudi et al. [30]. These generators have been demonstrated to produce randomly generated LOPs with predefined optimal and interior solutions, thereby facilitating the evaluation of IF-QIPMs. Furthermore, these generators offer users the flexibility to control various characteristics of the problems, including the condition number of the coefficient matrix—a critical parameter for assessing QIPMs’ performance. Our numerical experiments were conducted on a workstation equipped with Dual Intel Xeon CPU E5-2630 @ 2.20 GHz, featuring 20 cores and 64 GB of RAM.
To illustrate how the condition numbers of different Newton systems evolve in IPMs, Figure 1 shows the condition number trends of the different linear systems for four problems. As Figure 1(a) shows, the condition numbers of FNS, OSS, and NES converge to a constant for nondegenerate LOPs with a well-conditioned matrix A. However, the condition number of the AS may go to infinity, even for nondegenerate well-conditioned problems, as the iterates approach the unique optimal solution. For nondegenerate problems with ill-conditioned matrices (see Figure 1(b)), the condition numbers of NES, OSS, and FNS still converge to a constant, but that constant can be very large, e.g., \(10^{12}\). If the LOP is degenerate (see Figure 1(c)), the condition numbers of all Newton systems go to infinity as the iterates approach the optimal solution. However, the FNS and OSS have a better rate than the NES and AS. The worst case occurs when the problem is degenerate and the matrix A is ill conditioned (see Figure 1(d)). In this case, the condition number of NES can be as large as \(10^{20}\) for \(\mu =10^{-6}\). As these figures illustrate, the condition number of the Newton systems is affected by the condition number of matrix A and the degeneracy status of the problem. Generally, OSS has a better condition number than the NES. In the following figures, we show how IR and preconditioning can mitigate the condition number growth of the Newton systems, especially for the NES.
Fig. 1.
Fig. 1. Condition number trend of different Newton systems for different types of LOPs.
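The qualitative behavior in Figure 1 can be reproduced with a small synthetic experiment (a NumPy sketch; it places iterates on an idealized central path with \(x_is_i=\mu\) for all i and tracks the condition number of the NES matrix as \(\mu \rightarrow 0\); the data and basis choices are purely illustrative):

```python
import numpy as np

def nes_condition(A, B_mask, mu):
    """kappa of the NES matrix A D^2 A^T on an idealized central path:
    x_i/s_i -> 1/mu for basic indices and -> mu for nonbasic ones."""
    d2 = np.where(B_mask, 1.0 / mu, mu)
    return np.linalg.cond(A @ np.diag(d2) @ A.T)

rng = np.random.default_rng(5)
m, n = 3, 8
A = rng.standard_normal((m, n))

# Nondegenerate case: m basic columns span R^m, so kappa stabilizes
B_full = np.arange(n) < m
kn1, kn2 = nes_condition(A, B_full, 1e-2), nes_condition(A, B_full, 1e-6)

# Degenerate case: only 2 basic columns, rank(A_B) < m, so kappa blows up
B_def = np.arange(n) < 2
kd1, kd2 = nes_condition(A, B_def, 1e-2), nes_condition(A, B_def, 1e-6)

print(kn2 / kn1)  # stays bounded: kappa converges to a constant
print(kd2 / kd1)  # grows roughly like (mu_1 / mu_2)^2: rate 1/mu^2
```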
Figure 2 shows how the condition number behaves in different IF-QIPMs when solving a nondegenerate problem with an ill-conditioned matrix A. As we can see, PNES has a significantly smaller condition number, even better than OSS. IR, however, does not help with the condition number here: for this type of problem, stopping the IF-QIPM early and restarting it does not change the condition number, since it is almost constant and determined by the condition number of A. The performance of IF-QIPMs for solving a primal degenerate LOP with a well-conditioned coefficient matrix A is depicted in Figure 3. For PNES, there is a theoretical uniform condition number bound that is independent of the algorithm's iterates and depends only on the problem's input data. Yet, for this problem, as for many others, in particular degenerate ones, this upper bound can be extremely large, and the condition number grows at a slightly lower but asymptotically similar rate compared to MNES. For degenerate problems, however, IR does help with the condition number. As Figure 3(b) shows, in the IR steps we stop the IF-QIPMs early, at \(\mu =10^{-2}\), so the condition number remains bounded. We then restart the IF-QIPM on the refining problem, where the condition number is as low as the initial one. Thus, with IR, the condition number does not grow above a fixed upper bound.
Fig. 2.
Fig. 2. Effect of preconditioning on the condition number sequence of linear systems arising in IR-IF-QIPM to solve a nondegenerate LOP with \(\kappa _A=10^{6}\).
Fig. 3.
Fig. 3. Condition number sequence of linear systems arising in different IF-QIPMs to solve a degenerate LOP with \(\kappa _A=10\).
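The IR mechanism just described — stop the solver at coarse precision, solve a refining problem, and accumulate the correction — can be illustrated on a plain linear system with a deliberately inexact solver. This is only a schematic of classical iterative refinement, not the paper's IR-IF-QIPM; the 1% noise level of the coarse oracle and the test matrix are assumptions made for the demonstration.

```python
import numpy as np

def iterative_refinement(solve_coarse, A, b, tol=1e-10, max_iter=20):
    """Accumulate corrections from a low-precision solver: each step
    solves only for the current residual, so no single subproblem needs
    more accuracy than the coarse oracle provides."""
    x = np.zeros_like(b)
    for _ in range(max_iter):
        r = b - A @ x
        if np.linalg.norm(r) <= tol * np.linalg.norm(b):
            break
        x = x + solve_coarse(r)
    return x

rng = np.random.default_rng(1)
n = 20
B = rng.standard_normal((n, n))
A = B @ B.T + np.eye(n)          # SPD test matrix (illustrative)
b = rng.standard_normal(n)

def coarse(r):
    # Stand-in for an inexact (e.g., quantum) solver: an exact solve
    # perturbed by roughly 1% relative error.
    return np.linalg.solve(A, r) * (1 + 1e-2 * rng.standard_normal())

x = iterative_refinement(coarse, A, b)
print(np.linalg.norm(A @ x - b) / np.linalg.norm(b))
```

Although each call to `coarse` is only about two digits accurate, the error contracts geometrically across refinement steps, so the accumulated solution reaches high precision, which is the same leverage IR gives the QLSA inside the IF-QIPM.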
Figure 4 shows how IR coupled with preconditioning keeps the condition number of the NES bounded during the QIPM iterations for this challenging degenerate LOP with an ill-conditioned matrix. All in all, IR mitigates the impact of degeneracy on the condition number of the Newton systems, while preconditioning is effective for problems with an ill-conditioned matrix A.
Fig. 4.
Fig. 4. Condition number sequence of linear systems arising in different IF-QIPMs to solve a degenerate LOP with \(\kappa _A=10^{6}\).
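The effect of preconditioning on an ill-conditioned system can be seen in a minimal sketch. The example below is not the preconditioner analyzed in the paper (or by Monteiro et al. [34]); it uses generic Jacobi (diagonal) scaling on a matrix whose ill-conditioning comes purely from bad scaling, which is the simplest setting where a preconditioner removes the dependence on \(\kappa\) of the scaling.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 50

# Well-conditioned SPD core, then an artificial scaling that plays the
# role of an ill-conditioned coefficient matrix (large kappa).
B = rng.standard_normal((n, n))
A0 = B @ B.T + n * np.eye(n)          # cond(A0) is modest
S = np.logspace(0, 6, n)              # scaling factors 1 ... 10^6
M = (S[:, None] * A0) * S[None, :]    # M = diag(S) A0 diag(S), badly conditioned

# Jacobi preconditioning: P = D^{-1/2} M D^{-1/2} with D = diag(M)
d = np.sqrt(np.diag(M))
P = M / np.outer(d, d)

print(f"cond(M) = {np.linalg.cond(M):.1e}, cond(P) = {np.linalg.cond(P):.1e}")
```

Here the scaling-induced ill-conditioning cancels exactly under the diagonal preconditioner, leaving only the modest condition number of the core matrix; the paper's PNES achieves the analogous uniform bound for the Newton systems of the IPM.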
We also solved 100 randomly generated problems with the different IF-QIPMs using the IBM Qiskit simulator. Figure 5 shows statistics for the solved problems. With the OSS, we could not solve problems with more than eight variables, since the Qiskit simulator limits the size of the linear systems it can solve to 16, and the OSS has a nonsymmetric n-by-n coefficient matrix. With the MNES, in contrast, we could solve an LOP with a million variables and 16 constraints, since the dimension of the MNES depends only on the number of constraints. These results show that the proposed IF-QIPM is more adaptable to near-term devices. We can also see that IR coupled with preconditioning enables the solution of problems with a larger condition number. Finally, with both inner and outer IR, we improved the precision from \(10^{-1}\) to \(10^{-4}\) on average.
Fig. 5.
Fig. 5. Statistics of 100 randomly generated LOPs solved by IF-QIPM using the Qiskit simulator (max time = 2 hours).

7 Conclusion

In this article, we proposed an IF-QIPM in which we solve a modified NES with QLSA+QTA. In addition, we applied IR and preconditioning to mitigate the effect of the condition number on the complexity of QIPMs. Applying these classical ideas leads to the following improvements of QIPMs:
  • By modifying the NES, in each iteration of the proposed IF-QIPM, we solve a linear system with an m-by-m symmetric positive definite matrix, which is smaller than the OSS with its n-by-n nonsymmetric matrix. In other words, the proposed IF-QIPM needs fewer qubits and gates.
  • We use an IR scheme coupled with preconditioning of the NES that yields a uniform bound on the condition number and speeds up QIPMs with respect to precision and condition number.
  • By preconditioning the NES in the quantum setting, we achieve a speedup with respect to dimension compared to classical inexact approaches.
Table 2 lists the complexities of some recent classical and quantum IPMs. As we can see, the IR-IF-QIPM with the MNES matches the best complexity of the IR-IF-QIPM using the OSS, with slightly better dependence on \(\Vert A\Vert\), due to using the quantum solver of Mohammadisiahroudi et al. [29]. By switching to the PNES, the complexity improves with respect to dimension compared with classical preconditioned IPMs. Computing the preconditioner on a quantum machine naturally gives better complexity with respect to dimension, but the challenge is addressing the normalization factors in the block-encoding. Another difference is the dependence on \(\bar{\phi }\) instead of \(\omega\), where \(\omega\) is an upper bound on the norm of the optimal solution. It should be mentioned that both \(\omega\) and \(\bar{\phi }\) are constants depending on the input data; however, for some problems, they can be extremely large. As the numerical results demonstrate, one advantage of IR and preconditioning is bounding the condition number: IR largely prevents the condition number of the Newton system from growing for degenerate problems, and preconditioning alleviates the impact of the condition number of matrix A. All in all, QIPMs have the potential to speed up the solution of LOPs with respect to dimension compared to classical IPMs, but they depend more strongly on the condition number and the norm of the coefficient matrix. In this article, we explored classical ideas, such as a better formulation of the Newton system and IR coupled with preconditioning, to narrow this gap. Table 2 shows that the proposed IR-IF-QIPM using the modified NES or the PNES outperforms previous quantum and classical IPMs with respect to worst-case complexity.
| Algorithm | System | Linear System Solver | Quantum Complexity | Classical Complexity | Bound for \(\kappa\) |
|---|---|---|---|---|---|
| IPM with Partial Updates [38] | NES | Low-rank updates | — | \(\mathcal {O}(n^{3}L)\) | — |
| F-IPM [38] | NES | Cholesky | — | \(\mathcal {O}(n^{3.5}L)\) | — |
| II-IPM [35] | PNES | PCG | — | \(\mathcal {O}(n^{5}L\bar{\chi }^2)\) | \(\bar{\chi }^2\) |
| II-QIPM [31] | NES | QLSA+QTA | \(\tilde{\mathcal {O}}_{n,\kappa _{A}, \omega }(n^{4}L\kappa _A^{4}\omega ^{19}\Vert A\Vert)\) | \(\tilde{\mathcal {O}}_{ \omega }(n^{4}L)\) | \(\mathcal {O}(\kappa _{\hat{A}}^{2} \omega ^5)\) |
| IF-QIPM [32] | OSS | QLSA+QTA | \(\tilde{\mathcal {O}}_{n,\kappa _{A}, \omega }(n^{2}L\kappa _A^2\omega ^{5}\Vert A\Vert)\) | \(\tilde{\mathcal {O}}_{\mu ^0}(n^{2.5}L)\) | \(\mathcal {O}(\kappa _{\hat{A}} \omega ^2)\) |
| The proposed IR-IF-IPM | MNES | CG | — | \(\tilde{\mathcal {O}}_{\mu ^0}(n^{2.5}L\kappa _A^2\omega ^4)\) | \(\mathcal {O}(\kappa _{\hat{A}}^{2} \omega ^4)\) |
| The proposed IR-IF-IPM | PNES | PCG | — | \(\tilde{\mathcal {O}}_{\mu ^0}(n^{3.5}L\bar{\chi }^2)\) | \(\bar{\chi }^2\) |
| The proposed IR-IF-QIPM | MNES | QLSA+QTA | \(\tilde{\mathcal {O}}_{n,\kappa _{\hat{A}}, \Vert \hat{A}\Vert ,\Vert \hat{b}\Vert ,\mu ^0}(\sqrt {n}\, m L\kappa _{\hat{A}}^2\omega ^5)\) | \(\tilde{\mathcal {O}}_{\mu ^0}(n^{2.5}L)\) | \(\mathcal {O}(\kappa _{\hat{A}}^{2} \omega ^4)\) |
| The proposed IR-IF-QIPM | PNES | QLSA+QTA | \(\widetilde{\mathcal {O}}_{ n, \kappa _{A}, \Vert A\Vert , \Vert b\Vert , \mu ^0} (\sqrt {n}\, m L \bar{\chi }^2 \omega ^2)\) | \(\tilde{\mathcal {O}}_{\mu ^0}(n^{2.5}L)\) | \(\bar{\chi }^2\) |

Table 2. Complexity of Different IPMs for LO
Using the MNES also enables regularizing the Newton system; it is worth exploring regularization in the quantum setting to further address the impact of the condition number on QIPMs. In addition, the proposed IR-IF-QIPM with the PNES can be generalized to other conic problems, such as semidefinite optimization, where the size of the Newton systems may grow quadratically for large-scale problems.

Footnotes

1. The \(\widetilde{\mathcal {O}}_{\alpha , \beta } (g(x))\) notation indicates that quantities polylogarithmic in \(\alpha , \beta\), and \(g(x)\) are suppressed.
2. The algorithmic speedup proposed in the work of Apers and Gribling [3] applies to LOPs where the bit-complexity of each entry in \(A, b, c\) is at most \(\text{polylog}(n,m)\), and \(n\ge m^{10}\), with n and m denoting the number of constraints and variables in their standard dual form, respectively.

References

[1] Ghussoun Al-Jeiroudi and Jacek Gondzio. 2009. Convergence analysis of the inexact infeasible interior-point method for linear optimization. Journal of Optimization Theory and Applications 141, 2 (2009), 231–247.
[2] Andris Ambainis. 2012. Variable time amplitude amplification and quantum algorithms for linear algebra problems. In Proceedings of the 29th Symposium on Theoretical Aspects of Computer Science (STACS ’12), Vol. 14. 636–647.
[3] Simon Apers and Sander Gribling. 2023. Quantum speedups for linear programming via interior point methods. arXiv preprint arXiv:2311.03215 (2023).
[4] Brandon Augustino, Giacomo Nannicini, Tamás Terlaky, and Luis F. Zuluaga. 2023. A quantum interior point method for semidefinite optimization problems. Quantum 7 (2023), 1110.
[5] Brandon Augustino, Tamás Terlaky, Mohammadhossein Mohammadisiahroudi, and Luis F. Zuluaga. 2021. An Inexact-Feasible Quantum Interior Point Method for Second-Order Cone Optimization. Technical Report 21T-009. ISE, Lehigh University.
[6] Venansius Baryamureeba and Trond Steihaug. 2006. On the convergence of an inexact primal-dual interior point method for linear programming. In Large-Scale Scientific Computing, Ivan Lirkov, Svetozar Margenov, and Jerzy Waśniewski (Eds.). Springer, Berlin, Germany, 629–637.
[7] Stefania Bellavia. 1998. Inexact interior-point method. Journal of Optimization Theory and Applications 96, 1 (1998), 109–121.
[8] P. A. M. Casares and M. A. Martin-Delgado. 2020. A quantum interior-point predictor–corrector algorithm for linear programming. Journal of Physics A: Mathematical and Theoretical 53, 44 (2020), 445305.
[9] Shantanav Chakraborty, András Gilyén, and Stacey Jeffery. 2018. The power of block-encoded matrix powers: Improved regression techniques via faster Hamiltonian simulation. arXiv preprint arXiv:1804.01973 (2018).
[10] Andrew M. Childs, Robin Kothari, and Rolando D. Somma. 2017. Quantum algorithm for systems of linear equations with exponentially improved dependence on precision. SIAM Journal on Computing 46, 6 (2017), 1920–1950.
[11] Michael B. Cohen, Yin Tat Lee, and Zhao Song. 2021. Solving linear programs in the current matrix multiplication time. Journal of the ACM 68, 1 (Jan. 2021), Article 3, 39 pages.
[12] Fabio D’Andreagiovanni and Ambros M. Gleixner. 2016. Towards an accurate solution of wireless network design problems. In Combinatorial Optimization. Lecture Notes in Computer Science, Vol. 9849. Springer, 135–147.
[13] Leon Eifler and Ambros Gleixner. 2023. A computational status update for exact rational mixed integer programming. Mathematical Programming 197, 2 (2023), 793–812.
[14] Edward Farhi, Jeffrey Goldstone, and Sam Gutmann. 2014. A quantum approximate optimization algorithm. arXiv preprint arXiv:1411.4028 (2014). https://arxiv.org/abs/1411.4028
[15] Roland W. Freund, Florian Jarre, and Shinji Mizuno. 1999. Convergence of a class of inexact interior-point algorithms for linear programs. Mathematics of Operations Research 24, 1 (1999), 50–71.
[16] András Gilyén. 2019. Quantum Singular Value Transformation and Its Algorithmic Applications. Ph.D. Dissertation. University of Amsterdam.
[17] András Gilyén, Yuan Su, Guang Hao Low, and Nathan Wiebe. 2019. Quantum singular value transformation and beyond: Exponential improvements for quantum matrix arithmetics. In Proceedings of the 51st Annual ACM SIGACT Symposium on Theory of Computing. 193–204.
[18] Ambros Gleixner and Daniel E. Steffy. 2020. Linear programming using limited-precision oracles. Mathematical Programming 183, 1 (2020), 525–554.
[19] Ambros M. Gleixner, Daniel E. Steffy, and Kati Wolter. 2016. Iterative refinement for linear programming. INFORMS Journal on Computing 28, 3 (2016), 449–464.
[20] Gene H. Golub and Charles F. Van Loan. 2013. Matrix Computations. JHU Press, Baltimore, MD, USA.
[21] Jacek Gondzio. 2013. Convergence analysis of an inexact feasible interior point method for convex quadratic programming. SIAM Journal on Optimization 23, 3 (2013), 1510–1527.
[22] Aram W. Harrow, Avinatan Hassidim, and Seth Lloyd. 2009. Quantum algorithm for linear systems of equations. Physical Review Letters 103, 15 (2009), 150502.
[23] Iordanis Kerenidis and Anupam Prakash. 2020. A quantum interior point method for LPs and SDPs. ACM Transactions on Quantum Computing 1, 1 (2020), 1–32.
[24] Masakazu Kojima, Nimrod Megiddo, and Shinji Mizuno. 1993. A primal–dual infeasible-interior-point algorithm for linear programming. Mathematical Programming 61, 1 (1993), 263–280.
[25] Janos Korzak. 2000. Convergence analysis of inexact infeasible-interior-point algorithms for solving linear programming problems. SIAM Journal on Optimization 11, 1 (2000), 133–148.
[26] Yin Tat Lee and Aaron Sidford. 2015. Efficient inverse maintenance and faster algorithms for linear programming. In Proceedings of the 2015 IEEE 56th Annual Symposium on Foundations of Computer Science. 230–249.
[27] Guang Hao Low and Isaac L. Chuang. 2019. Hamiltonian simulation by qubitization. Quantum 3 (2019), 163.
[28] Shinji Mizuno and Florian Jarre. 1999. Global and polynomial-time convergence of an infeasible-interior-point algorithm using inexact computation. Mathematical Programming 84, 1 (1999), 105–122.
[29] Mohammadhossein Mohammadisiahroudi, Brandon Augustino, Ramin Fakhimi, Giacomo Nannicini, and Tamás Terlaky. 2023. Accurately Solving Linear Systems with Quantum Oracles. Technical Report 23T-006. ISE, Lehigh University.
[30] Mohammadhossein Mohammadisiahroudi, Ramin Fakhimi, Brandon Augustino, and Tamás Terlaky. 2024. Generating linear, semidefinite, and second-order cone optimization problems for numerical experiments. Optimization Methods and Software 39, 4 (2024), 725–755.
[31] Mohammadhossein Mohammadisiahroudi, Ramin Fakhimi, and Tamás Terlaky. 2024. Efficient use of quantum linear system algorithms in inexact infeasible IPMs for linear optimization. Journal of Optimization Theory and Applications 202 (2024), 146–183.
[32] Mohammadhossein Mohammadisiahroudi, Ramin Fakhimi, Zeguan Wu, and Tamás Terlaky. 2023. An inexact feasible interior point method for linear optimization with high adaptability to quantum computers. arXiv preprint arXiv:2307.14445 (2023).
[33] Mohammadhossein Mohammadisiahroudi and Tamás Terlaky. 2020. Quantum IPMs for linear optimization. In Encyclopedia of Optimization. Springer International Publishing, Cham, 1–11.
[34] Renato D. C. Monteiro, Jerome W. O’Neal, and Takashi Tsuchiya. 2004. Uniform boundedness of a preconditioned normal matrix used in interior-point methods. SIAM Journal on Optimization 15, 1 (2004), 96–100.
[35] Renato D. C. Monteiro and Jerome W. O’Neal. 2003. Convergence Analysis of a Long-Step Primal-Dual Infeasible Interior-Point LP Algorithm Based on Iterative Linear Solvers. Technical Report. ISyE, Georgia Institute of Technology. http://www.optimization-online.org/DB_FILE/2003/10/768.pdf
[36] Giacomo Nannicini. 2022. Fast quantum subroutines for the simplex method. Operations Research. Published online, October 18, 2022.
[37] Jerome W. O’Neal. 2006. The Use of Preconditioned Iterative Linear Solvers in Interior-Point Methods and Related Topics. Georgia Institute of Technology.
[38] Cornelis Roos, Tamás Terlaky, and J.-Ph. Vial. 2005. Interior Point Methods for Linear Optimization. Springer, New York, NY, USA.
[39] Joran van Apeldoorn, Arjan Cornelissen, András Gilyén, and Giacomo Nannicini. 2023. Quantum tomography using state-preparation unitaries. In Proceedings of the 2023 Annual ACM-SIAM Symposium on Discrete Algorithms (SODA ’23). 1265–1318.
[40] Jan van den Brand. 2020. A deterministic linear program solver in current matrix multiplication time. In Proceedings of the 14th Annual ACM-SIAM Symposium on Discrete Algorithms. 259–278.
[41] Almudena Carrera Vazquez, Ralf Hiptmair, and Stefan Woerner. 2022. Enhancing the quantum linear systems algorithm using Richardson extrapolation. ACM Transactions on Quantum Computing 3, 1 (2022), 1–37.
[42] James Hardy Wilkinson. 1963. Rounding Errors in Algebraic Processes. Prentice Hall, Englewood Cliffs, NJ, USA.
[43] Leonard Wossnig, Zhikuan Zhao, and Anupam Prakash. 2018. Quantum linear system algorithm for dense matrices. Physical Review Letters 120, 5 (2018), 050502.
[44] Stephen J. Wright. 1997. Primal-Dual Interior-Point Methods. SIAM.
[45] Zeguan Wu, Mohammadhossein Mohammadisiahroudi, Brandon Augustino, Xiu Yang, and Tamás Terlaky. 2023. An inexact feasible quantum interior point method for linearly constrained quadratic optimization. Entropy 25, 2 (2023), 330.
[46] Yinyu Ye, Michael J. Todd, and Shinji Mizuno. 1994. An \(O(\sqrt {nL})\)-iteration homogeneous and self-dual linear programming algorithm. Mathematics of Operations Research 19, 1 (1994), 53–67.
[47] Guanglu Zhou and Kim-Chuan Toh. 2004. Polynomiality of an inexact infeasible interior point algorithm for semidefinite programming. Mathematical Programming 99, 2 (2004), 261–282.

Cited By

  • (2025) Quantum computing inspired iterative refinement for semidefinite optimization. Mathematical Programming. DOI: 10.1007/s10107-024-02183-z. Online publication date: 11-Jan-2025.
Published In

ACM Transactions on Quantum Computing, Volume 6, Issue 1 (March 2025), 259 pages
EISSN: 2643-6817
DOI: 10.1145/3696818
Editors: Travis S. Humble, Mingsheng Ying
Guest Editors: A.Y. Matsuura, Timothy G. Mattson

Publisher

Association for Computing Machinery, New York, NY, United States

Publication History

Published: 14 January 2025
Online AM: 29 October 2024
Accepted: 08 October 2024
Revised: 20 August 2024
Received: 10 October 2023
Published in TQC Volume 6, Issue 1

Author Tags

  1. Quantum linear system algorithm
  2. quantum interior point method
  3. linear optimization
  4. iterative refinement
  5. preconditioning

Funding Sources

  • Defense Advanced Research Projects Agency
  • National Science Foundation CAREER
