Research article | Open access

Quantum Linear System Solver Based on Time-optimal Adiabatic Quantum Computing and Quantum Approximate Optimization Algorithm

Published: 04 March 2022
  Abstract

    We demonstrate that with an optimally tuned scheduling function, adiabatic quantum computing (AQC) can readily solve a quantum linear system problem (QLSP) with O(κ poly(log(κ/ε))) runtime, where κ is the condition number, and ε is the target accuracy. This is near optimal with respect to both κ and ε, and is achieved without relying on complicated amplitude amplification procedures that are difficult to implement. Our method is applicable to general non-Hermitian matrices, and the cost as well as the number of qubits can be reduced when restricted to Hermitian matrices, and further to Hermitian positive definite matrices. The success of the time-optimal AQC implies that the quantum approximate optimization algorithm (QAOA) with an optimal control protocol can also achieve the same complexity in terms of the runtime. Numerical results indicate that QAOA can yield the lowest runtime compared to the time-optimal AQC, vanilla AQC, and the recently proposed randomization method.

    1 Introduction

    Linear system solvers are used ubiquitously in scientific computing. Quantum algorithms for solving large systems of linear equations, also called the quantum linear system problem (QLSP), have received much attention recently [8, 10, 11, 12, 16, 17, 30, 33, 34]. The goal of QLSP is to efficiently compute \( \mathinner {|{x}\rangle }=A^{-1}\mathinner {|{b}\rangle }/\Vert A^{-1}\mathinner {|{b}\rangle }\Vert _2 \) on a quantum computer, where \( A\in {\mathbb {C}}^{N\times N} \), and \( \mathinner {|{b}\rangle }\in {\mathbb {C}}^N \) is a normalized vector (for simplicity we assume \( N=2^n \) and \( \Vert A\Vert _2 = 1 \)). The ground-breaking Harrow, Hassidim, and Lloyd (HHL) algorithm obtains \( \mathinner {|{x}\rangle } \) with cost \( {\mathcal {O}}(\text{poly}(n) \kappa ^2 /\epsilon) \), where \( \kappa = \Vert A\Vert \Vert A^{-1}\Vert \) is the condition number of A, and \( \epsilon \) is the target accuracy. On the other hand, the best classical iterative algorithm is the conjugate gradient method, whose cost is at least \( {\mathcal {O}}(N \sqrt {\kappa }\log (1/\epsilon)) \), under the additional assumptions that A is Hermitian positive definite and that a matrix-vector product can be performed with \( {\mathcal {O}}(N) \) cost [29]. Direct methods based on the Gaussian elimination procedure remove the dependence on \( \kappa \), but their dependence on N is typically super-linear even for sparse matrices [21]. Therefore, the HHL algorithm can potentially be exponentially faster than classical algorithms with respect to N. The undesirable dependence with respect to \( \epsilon \) is due to the use of the quantum phase estimation (QPE) algorithm. Recent progress based on the linear combination of unitaries (LCU) [12] and quantum signal processing (QSP) [16, 22] has further improved the scaling to \( {\mathcal {O}}(\kappa ^2 \text{poly}(\log (\kappa /\epsilon))) \) under different query models, without using QPE.
However, the \( {\mathcal {O}}(\kappa ^2) \) scaling can be rather intrinsic to the methods, at least before complex techniques such as the variable time amplitude amplification (VTAA) algorithm [2] are applied.
    The VTAA algorithm is a generalization of the conventional amplitude amplification algorithm, and makes it possible to quadratically amplify the success probability of quantum algorithms in which different branches stop at different times. In [2], VTAA was first used to improve the complexity of the HHL algorithm to \( \widetilde{{\mathcal {O}}}(\kappa /\epsilon ^3) \). In [12], the authors further combined the VTAA algorithm with a low-precision phase estimate to improve the complexity of LCU to \( \widetilde{{\mathcal {O}}}(\kappa \, \text{poly}(\log (\kappa /\epsilon))) \), which is near-optimal with respect to both \( \kappa \) and \( \epsilon \). It is worth noting that the VTAA algorithm is a complicated procedure and can be difficult to implement. Thus, it remains of great interest to obtain alternative algorithms that solve QLSP with near-optimal complexity scaling without resorting to VTAA.
    Some of the alternative routes for solving QLSP are provided by adiabatic quantum computing (AQC) [1, 19] and a closely related method called the randomization method (RM) [6, 30]. The key idea of both AQC and RM is to solve QLSP as an eigenvalue problem with respect to a transformed matrix. Assuming that Hamiltonian simulation can be performed efficiently on a quantum computer, it is shown that the runtime of RM scales as \( {\mathcal {O}}(\kappa \log (\kappa)/\epsilon) \) [30], which achieves near-optimal complexity with respect to \( \kappa \) without using the VTAA algorithm as a subroutine. The key idea of the RM is to approximately follow the adiabatic path based on the quantum Zeno effect (QZE) using a Monte Carlo method. Although RM is inspired by AQC, the runtime complexity of the (vanilla) AQC is at least \( {\mathcal {O}}(\kappa ^2/\epsilon) \) [1, 7, 30]. Therefore, the RM is at least quadratically faster than AQC with respect to \( \kappa \).
    In this article, we find that with a simple modification of the scheduling function used to traverse the adiabatic path, the gap between AQC and RM can be fully closed, in the following two respects. (1) We propose a family of rescheduled AQC algorithms called AQC(p). Assuming \( \kappa \) (or an upper bound of it) is known, we demonstrate that for any matrix A (possibly non-Hermitian or dense), when \( 1\lt p\lt 2 \), the runtime complexity of AQC(p) is only \( {\mathcal {O}}(\kappa /\epsilon) \). Thus, AQC(p) removes a logarithmic factor with respect to \( \kappa \) compared to RM. (2) We propose another rescheduled algorithm called AQC(exp), whose runtime is \( {\mathcal {O}}(\kappa ~\text{poly}(\log (\kappa /\epsilon))) \). The main benefit of AQC(exp) is the improved dependence with respect to the accuracy \( \epsilon \), and this is the near-optimal complexity (up to logarithmic factors) with respect to both \( \kappa \) and \( \epsilon \). The scheduling function of AQC(exp) is also universal, because we do not even need knowledge of an upper bound of \( \kappa \). Existing works along this line [15, 25] only suggest a runtime complexity of \( {\mathcal {O}}(\kappa ^3~\text{poly}(\log (\kappa /\epsilon))) \), which improves the dependence with respect to \( \epsilon \) at the expense of a much weaker dependence on \( \kappa \). Our main technical contribution is to again improve the dependence on \( \kappa \). Since the cost of any generic QLSP solver cannot be less than \( {\mathcal {O}}(\kappa) \) [17], our result achieves the near-optimal complexity up to logarithmic factors. We remark that in the AQC-based algorithms, only the total runtime T depends on \( \kappa \).
    Beyond the runtime complexity, we also discuss gate-efficient approaches to implement our AQC(p) and AQC(exp) methods. In particular, assume that we are given access to the same query models as those in [12]: the sparse input model of a d-sparse matrix A and the prepare oracle of the state \( \mathinner {|{b}\rangle } \). We demonstrate that, when the adiabatic dynamics is simulated using the truncated Dyson series method [23], the query complexity of the AQC(p) method scales as \( {\mathcal {O}}(d\kappa /\epsilon \log (d\kappa /\epsilon)) \), and that of the AQC(exp) method scales as \( {\mathcal {O}}(d\kappa ~\text{poly}\log (d\kappa /\epsilon)) \). Both algorithms scale almost linearly in \( \kappa \), and the AQC(exp) method can achieve near-optimal scaling in both \( \kappa \) and \( \epsilon \). Furthermore, the asymptotic scaling of the AQC(exp) method is the same as that of LCU with VTAA [12, Theorem 5]. However, the AQC(exp) method avoids the use of the complex VTAA routine, which significantly simplifies its practical implementation.
    The quantum approximate optimization algorithm (QAOA) [14], as a quantum variational algorithm, has received much attention recently thanks to its feasibility on near-term quantum devices. Due to the natural connection between AQC and QAOA, our result immediately suggests that the time complexity for solving QLSP with QAOA is also at most \( {\mathcal {O}}(\kappa ~\text{poly}(\log (\kappa /\epsilon))) \), which is also confirmed by numerical results. We also remark that both the QAOA and AQC schemes prepare an approximate solution to the QLSP as a pure state, while RM prepares a mixed state. Moreover, all methods above can be efficiently implemented on gate-based computers and are much simpler than those using the VTAA algorithm as a subroutine.

    2 Quantum Linear System Problem and Vanilla AQC

    Assume \( A\in {\mathbb {C}}^{N\times N} \) is an invertible matrix with condition number \( \kappa \) and \( \Vert A\Vert _2 = 1 \). Let \( \mathinner {|{b}\rangle }\in {\mathbb {C}}^{N} \) be a normalized vector. Given a target error \( \epsilon \), the goal of QLSP is to prepare a normalized state \( \mathinner {|{x_{\text{a}}}\rangle } \), which is an \( \epsilon \)-approximation of the normalized solution of the linear system \( \mathinner {|{x}\rangle }=A^{-1}\mathinner {|{b}\rangle }/\Vert A^{-1}\mathinner {|{b}\rangle }\Vert _2 \), in the sense that \( \Vert \mathinner {|{x_{\text{a}}}\rangle }\mathinner {\langle {x_{\text{a}}}|}-\mathinner {|{x}\rangle }\mathinner {\langle {x}|}\Vert _2 \le \epsilon \).
    For simplicity, we first assume that A is Hermitian positive definite, and we will discuss the generalization to the non-Hermitian case later.
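For pure states, the density-matrix distance in this definition has a closed form in terms of the overlap: \( \Vert \mathinner {|{x_{\text{a}}}\rangle }\mathinner {\langle {x_{\text{a}}}|}-\mathinner {|{x}\rangle }\mathinner {\langle {x}|}\Vert _2=\sqrt {1-|\mathinner {\langle {x_{\text{a}}}|x\rangle }|^2} \), a standard identity. The following NumPy sketch (our own illustration with random states, not part of the original text) verifies it numerically:

```python
import numpy as np

# two random normalized pure states (illustrative only)
rng = np.random.default_rng(0)
xa = rng.standard_normal(4) + 1j * rng.standard_normal(4)
x = rng.standard_normal(4) + 1j * rng.standard_normal(4)
xa /= np.linalg.norm(xa)
x /= np.linalg.norm(x)

# operator (spectral) norm of the difference of the two rank-1 projectors
rho_err = np.linalg.norm(np.outer(xa, xa.conj()) - np.outer(x, x.conj()), 2)
# for pure states this distance equals sqrt(1 - fidelity)
assert abs(rho_err - np.sqrt(1.0 - abs(np.vdot(xa, x))**2)) < 1e-12
```

Thus bounding the infidelity \( 1-|\mathinner {\langle {x_{\text{a}}}|x\rangle }|^2 \) and bounding the density-matrix distance are equivalent up to a square root.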
    The first step to design an AQC-based algorithm for solving QLSP is to transform the QLSP to an equivalent eigenvalue problem. Here, we follow the procedure introduced in [30]. Let \( Q_{b}=I_N-\mathinner {|{b}\rangle }\mathinner {\langle {b}|} \) . We introduce
    \( \begin{equation*} H_0=\sigma _x \otimes Q_b=\begin{pmatrix} 0 & Q_b\\ Q_b & 0 \end{pmatrix}, \end{equation*} \)
    then \( H_0 \) is a Hermitian matrix and the null space of \( H_0 \) is \( \text{Null}(H_0)=\text{span}\lbrace \mathinner {|{{\widetilde{b}}}\rangle },\mathinner {|{\bar{b}}\rangle }\rbrace \) . Here, \( \mathinner {|{{\widetilde{b}}}\rangle }=\mathinner {|{0,b}\rangle }:=(b,0)^{\top },\mathinner {|{\bar{b}}\rangle }=\mathinner {|{1,b}\rangle }:=(0,b)^{\top } \) . The dimension of \( H_0 \) is \( 2N \) and one ancilla qubit is needed to enlarge the matrix block. We also define
    \( \begin{equation*} H_1=\sigma _{+}\otimes (AQ_b)+\sigma _{-}\otimes (Q_bA)=\begin{pmatrix} 0 & AQ_b\\ Q_bA & 0 \end{pmatrix}. \end{equation*} \)
    Here, \( \sigma _{\pm }=\frac{1}{2}(\sigma _x\pm {\mathrm{i}}\sigma _y) \) . Note that if \( \mathinner {|{x}\rangle } \) satisfies \( A\mathinner {|{x}\rangle }\propto \mathinner {|{b}\rangle } \) , we have \( Q_bA\mathinner {|{x}\rangle }=Q_b\mathinner {|{b}\rangle }=0 \) . Then \( \text{Null}(H_1)=\text{span}\lbrace \mathinner {|{{\widetilde{x}}}\rangle },\mathinner {|{\bar{b}}\rangle }\rbrace \) with \( \mathinner {|{{\widetilde{x}}}\rangle }=\mathinner {|{0,x}\rangle } \) . Since \( Q_b \) is a projection operator, the gap between 0 and the rest of the eigenvalues of \( H_0 \) is 1. The gap between 0 and the rest of the eigenvalues of \( H_1 \) is bounded from below by \( 1/\kappa \) (see Appendix A).
    QLSP can be solved if we can prepare the zero-energy state \( \mathinner {|{{\widetilde{x}}}\rangle } \) of \( H_1 \) , which can be achieved by the AQC approach. Let \( H(f(s)) = (1-f(s))H_0 + f(s)H_1, 0\le s\le 1 \) . The function \( f:[0,1]\rightarrow [0,1] \) is called a scheduling function, and is a strictly increasing mapping with \( f(0) = 0, f(1) = 1 \) . The simplest choice is \( f(s)=s \) , which gives the “vanilla AQC”. We sometimes omit the s-dependence as \( H(f) \) to emphasize the dependence on f. Note that for any s, \( \mathinner {|{\bar{b}}\rangle } \) is always in \( \text{Null}(H(f(s))) \) , and there exists a state \( \mathinner {|{{\widetilde{x}}(s)}\rangle }=\mathinner {|{0,x(s)}\rangle } \) , such that \( \text{Null}(H(f(s)))=\lbrace \mathinner {|{{\widetilde{x}}(s)}\rangle },\mathinner {|{\bar{b}}\rangle }\rbrace \) . In particular, \( \mathinner {|{{\widetilde{x}}(0)}\rangle }=\mathinner {|{{\widetilde{b}}}\rangle } \) and \( \mathinner {|{{\widetilde{x}}(1)}\rangle }=\mathinner {|{{\widetilde{x}}}\rangle } \) ; and therefore, \( \mathinner {|{{\widetilde{x}}(s)}\rangle } \) is the desired adiabatic path. Let \( P_0(s) \) be the projection to the subspace \( \text{Null}(H(f(s))) \) , which is a rank-2 projection operator \( P_0(s)=\mathinner {|{{\widetilde{x}}(s)}\rangle }\mathinner {\langle {{\widetilde{x}}(s)}|}+\mathinner {|{\bar{b}}\rangle }\mathinner {\langle {\bar{b}}|} \) . Furthermore, the eigenvalue 0 is separated from the rest of the eigenvalues of \( H(f(s)) \) by a gap
    \( \begin{equation} \Delta (f(s))\ge \Delta _*(f(s)) := 1-f(s)+f(s)/\kappa . \end{equation} \)
    (1)
    We refer to Appendix A for the derivation.
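The construction of \( H_0 \), \( H_1 \) and the gap bound in Equation (1) can be checked numerically on small instances. The sketch below is a dense-matrix illustration in NumPy; the small random Hermitian positive definite A is our own testing assumption, not part of the algorithm. It verifies the null spaces and the lower bound \( \Delta (f)\ge 1-f+f/\kappa \):

```python
import numpy as np

rng = np.random.default_rng(0)
N = 4
G = rng.standard_normal((N, N))
A = G @ G.T + N * np.eye(N)        # Hermitian positive definite (illustrative)
A /= np.linalg.norm(A, 2)          # normalize so ||A||_2 = 1
kappa = np.linalg.cond(A, 2)
b = rng.standard_normal(N)
b /= np.linalg.norm(b)

Qb = np.eye(N) - np.outer(b, b)    # Qb = I - |b><b|
Z = np.zeros((N, N))
H0 = np.block([[Z, Qb], [Qb, Z]])           # sigma_x (x) Qb
H1 = np.block([[Z, A @ Qb], [Qb @ A, Z]])   # sigma_+ (x) (A Qb) + sigma_- (x) (Qb A)

# null-space vectors |0, x> and |1, b>
x = np.linalg.solve(A, b)
x /= np.linalg.norm(x)
x_tilde = np.concatenate([x, np.zeros(N)])
b_bar = np.concatenate([np.zeros(N), b])
assert np.allclose(H1 @ x_tilde, 0) and np.allclose(H1 @ b_bar, 0)
assert np.allclose(H0 @ b_bar, 0)
assert np.allclose(Qb @ Qb, Qb)    # Qb is a projection operator

# gap between 0 and the rest of the spectrum of H(f) = (1-f)H0 + fH1
for f in np.linspace(0.0, 1.0, 21):
    ev = np.sort(np.abs(np.linalg.eigvalsh((1 - f) * H0 + f * H1)))
    gap = ev[2]                    # the eigenvalue 0 is doubly degenerate
    assert gap >= (1 - f) + f / kappa - 1e-9
```

At \( f=0 \) the computed gap is exactly 1, and at \( f=1 \) it is bounded from below by \( 1/\kappa \), consistent with the statements above.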
    Consider the adiabatic evolution
    \( \begin{equation} \frac{1}{T}{\mathrm{i}} \partial _s \mathinner {|{\psi _T(s)}\rangle } = H(f(s))\mathinner {|{\psi _T(s)}\rangle }, \quad \mathinner {|{\psi _T(0)}\rangle }=\mathinner {|{{\widetilde{b}}}\rangle }, \end{equation} \)
    (2)
    where \( 0 \le s \le 1 \), and the parameter T is called the runtime of AQC. The quantum adiabatic theorem [19, Theorem 3] states that for any \( 0\le s\le 1 \),
    \( \begin{equation} |1-\mathinner {\langle {\psi _T(s)|P_0(s)|\psi _T(s)}\rangle }|\le \eta ^2(s), \end{equation} \)
    (3)
    where
    \( \begin{equation} \eta (s)=C\left\lbrace \frac{\Vert H^{(1)}(0)\Vert _2}{T \Delta ^2(0)} + \frac{\Vert H^{(1)}(s)\Vert _2}{T \Delta ^2(f(s))} + \frac{1}{T}\int _0^s \left(\frac{\Vert H^{(2)}(s^{\prime })\Vert _2}{\Delta ^2(f(s^{\prime }))} + \frac{\Vert H^{(1)}(s^{\prime })\Vert ^2_2}{\Delta ^3(f(s^{\prime }))}\right)ds^{\prime }\right\rbrace . \end{equation} \)
    (4)
    The derivatives of H are taken with respect to s, i.e., \( H^{(k)}(s) := \frac{d^k}{ds^k} H(f(s)), k = 1,2 \) . Throughout the article, we shall use a generic symbol C to denote constants independent of \( s,\Delta ,T \) .
    Intuitively, the quantum adiabatic theorem in Equation (3) says that, if the initial state is an eigenstate corresponding to the eigenvalue 0, then for large enough T the state \( \mathinner {|{\psi _T(s)}\rangle } \) will almost stay in the eigenspace of \( H(f(s)) \) corresponding to the eigenvalue 0, where there is a double degeneracy and only one of the eigenstates, \( \mathinner {|{{\widetilde{x}}(s)}\rangle } \), is on the desired adiabatic path. However, such degeneracy does not break the effectiveness of AQC, for the following reason. Note that \( \mathinner {\langle {\bar{b}|\psi _T(0)}\rangle }=0 \) and \( H(f(s))\mathinner {|{\bar{b}}\rangle }=0 \) for all \( 0\le s\le 1 \), so the Schrödinger dynamics in Equation (2) implies \( \mathinner {\langle {\bar{b}|\psi _T(s)}\rangle }=0 \), which prevents any transition of \( \mathinner {|{\psi _T(s)}\rangle } \) to \( \mathinner {|{\bar{b}}\rangle } \). Therefore, the adiabatic path stays along \( \mathinner {|{{\widetilde{x}}(s)}\rangle } \). Using \( \mathinner {\langle {\bar{b}|\psi _T(s)}\rangle }=0 \), we have \( P_0(s)\mathinner {|{\psi _T(s)}\rangle }=\mathinner {|{{\widetilde{x}}(s)}\rangle }\mathinner {\langle {{\widetilde{x}}(s)|\psi _T(s)}\rangle } \). Therefore, the estimate in Equation (3) becomes
    \( \begin{equation*} 1-|\mathinner {\langle {\psi _T(s)|{\widetilde{x}}(s)}\rangle }|^2\le \eta ^2(s). \end{equation*} \)
    This also implies that (see appendix)
    \( \begin{equation*} \Vert \mathinner {|{\psi _T(s)}\rangle }\mathinner {\langle {\psi _T(s)}|}-\mathinner {|{{\widetilde{x}}(s)}\rangle }\mathinner {\langle {{\widetilde{x}}(s)}|}\Vert _2\le \eta (s). \end{equation*} \)
    Therefore, \( \eta (1) \) gives an upper bound on the distance between the density matrix of the final state and that of the desired solution.
    If we simply assume that \( \Vert H^{(1)}\Vert _2, \Vert H^{(2)}\Vert _2 \) are bounded by constants, and use the worst-case bound \( \Delta \ge \kappa ^{-1} \), we arrive at the conclusion that in order to have \( \eta (1)\le \epsilon \), the runtime of the vanilla AQC must be \( T\gtrsim \kappa ^3/\epsilon \).
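The dependence of the adiabatic error on the runtime T can be observed directly on a toy instance. The sketch below is an illustrative dense simulation with SciPy; the small random instance and the simple midpoint-frozen time discretization are our own assumptions, not the paper's implementation. It integrates Equation (2) with the vanilla schedule \( f(s)=s \) and checks that the infidelity decreases as T grows:

```python
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(0)
N = 4
G = rng.standard_normal((N, N))
A = G @ G.T + N * np.eye(N)        # Hermitian positive definite (illustrative)
A /= np.linalg.norm(A, 2)
b = rng.standard_normal(N)
b /= np.linalg.norm(b)
Qb = np.eye(N) - np.outer(b, b)
Z = np.zeros((N, N))
H0 = np.block([[Z, Qb], [Qb, Z]])
H1 = np.block([[Z, A @ Qb], [Qb @ A, Z]])

x = np.linalg.solve(A, b)
x /= np.linalg.norm(x)
x_tilde = np.concatenate([x, np.zeros(N)])  # target |0, x>

def run_aqc(T, f=lambda s: s, steps=4000):
    """Integrate (1/T) i d/ds |psi> = H(f(s)) |psi> with midpoint-frozen propagators."""
    psi = np.concatenate([b, np.zeros(N)]).astype(complex)  # |psi_T(0)> = |0, b>
    h = 1.0 / steps
    for m in range(steps):
        s = (m + 0.5) * h
        H = (1 - f(s)) * H0 + f(s) * H1
        psi = expm(-1j * T * h * H) @ psi
    return psi

infidelity = [1 - abs(np.vdot(run_aqc(T), x_tilde))**2 for T in (5.0, 200.0)]
assert infidelity[1] < infidelity[0]   # longer runtime -> better approximation
```

The same driver can be reused with the rescheduled \( f(s) \) of the later sections by passing a different `f`.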

    3 AQC(P) Method

    Our goal is to reduce the runtime by choosing a proper scheduling function. The key observation is that the accuracy of AQC depends not only on the gap \( \Delta (f(s)) \) but also on the derivatives of \( H(f(s)) \) , as revealed in the estimate in Equation (4). Therefore, it is possible to improve the accuracy if a proper time schedule allows the Hamiltonian \( H(f(s)) \) to slow down when the gap is close to 0. We consider the following schedule [1, 19]:
    \( \begin{equation} \dot{f}(s) = c_p \Delta _*^p(f(s)), \quad f(0) = 0, \quad p \gt 0{.} \end{equation} \)
    (5)
    Here, \( \Delta _* \) is defined in Equation (1), and \( c_p = \int _0^1 \Delta _*^{-p}(u) du \) is a normalization constant chosen so that \( f(1) = 1 \) . When \( 1 \lt p \le 2 \) , Equation (5) can be explicitly solved as
    \( \begin{equation} f(s) = \frac{\kappa }{\kappa - 1}\left[1-\left(1+s(\kappa ^{p-1}-1)\right)^{\frac{1}{1-p}}\right]{.} \end{equation} \)
    (6)
    Note that as \( s\rightarrow 1 \), \( \Delta _{*}(f(s))\rightarrow \kappa ^{-1} \); therefore, the dynamics of \( f(s) \) slows down as \( f\rightarrow 1 \) and the gap decreases towards \( \kappa ^{-1} \). We refer to the adiabatic dynamics in Equation \( (2) \) with the schedule in Equation \( (5) \) as the AQC(p) scheme. Our main result is given in Theorem 1 (see Appendix D for the proof).
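The closed-form schedule in Equation (6) is easy to tabulate classically. A minimal NumPy sketch (the particular values of \( \kappa \) and p are illustrative assumptions) checks the boundary conditions and the slowdown near \( s=1 \):

```python
import numpy as np

def f_aqc_p(s, kappa, p):
    """Explicit AQC(p) schedule of Eq. (6), valid for 1 < p <= 2."""
    return kappa / (kappa - 1.0) * (
        1.0 - (1.0 + s * (kappa**(p - 1.0) - 1.0))**(1.0 / (1.0 - p)))

kappa, p = 10.0, 1.5               # illustrative choices
s = np.linspace(0.0, 1.0, 101)
f = f_aqc_p(s, kappa, p)

assert abs(f[0]) < 1e-12 and abs(f[-1] - 1.0) < 1e-12  # f(0) = 0, f(1) = 1
assert np.all(np.diff(f) > 0)                          # strictly increasing
# f'(s) = c_p * Delta_*^p decreases with s, so the first increment
# exceeds the last: the schedule slows down where the gap approaches 1/kappa
assert f[1] - f[0] > f[-1] - f[-2]
```

The slowdown near \( s=1 \) is exactly the mechanism described above: the schedule spends most of the runtime where \( \Delta _* \) is smallest.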
    Theorem 1.
    Let \( A\in {\mathbb {C}}^{N\times N} \) be a Hermitian positive definite matrix with condition number \( \kappa \) . For any choice of \( 1 \lt p \lt 2 \) , the error of the AQC(p) scheme satisfies
    \( \begin{equation} \Vert \mathinner {|{\psi _T(1)}\rangle }\mathinner {\langle {\psi _T(1)}|}-\mathinner {|{{\widetilde{x}}}\rangle }\mathinner {\langle {{\widetilde{x}}}|}\Vert _2 \le C \kappa /T. \end{equation} \)
    (7)
    Therefore, in order to prepare an \( \epsilon \)-approximation of the solution of QLSP, it suffices to choose the runtime \( T = {\mathcal {O}}(\kappa /\epsilon) \). Furthermore, when \( p=1,2 \), the bound for the runtime becomes \( T={\mathcal {O}}(\kappa \log (\kappa)/\epsilon) \).
    The runtime complexity of the AQC(p) method with respect to \( \kappa \) is only \( {\mathcal {O}}(\kappa) \). Compared to Reference [30], AQC(p) further removes the \( \log (\kappa) \) dependence when \( 1\lt p\lt 2 \), and hence reaches the optimal complexity with respect to \( \kappa \). Interestingly, though not explicitly mentioned in [30], the success of RM for solving QLSP relies on a proper choice of the scheduling function, which approximately corresponds to AQC(p=1). It is this scheduling function, rather than the QZE or its Monte Carlo approximation per se, that achieves the desired \( {\mathcal {O}}(\kappa \log \kappa) \) scaling with respect to \( \kappa \). The speedup of AQC(p) over the vanilla AQC is closely related to the quadratic speedup of the optimal time complexity of AQC for Grover's search [1, 19, 27, 28], in which the optimal time scheduling reduces the runtime from \( T\sim {\mathcal {O}}(N) \) (i.e., no speedup compared to classical algorithms) to \( T\sim {\mathcal {O}}(\sqrt {N}) \) (i.e., the Grover speedup). In fact, the choice of the scheduling function in Reference [28] corresponds to AQC(p=2), and that in Reference [19] corresponds to AQC(1<p<2).

    4 AQC(Exp) Method

    Although AQC(p) achieves the optimal runtime complexity with respect to \( \kappa \), the dependence on \( \epsilon \) is still \( {\mathcal {O}}(\epsilon ^{-1}) \), which prevents the method from achieving high accuracy. It turns out that when T is sufficiently large, the dependence on \( \epsilon \) can be improved to \( {\mathcal {O}}(\text{poly} \log (1/\epsilon)) \) by choosing an alternative scheduling function.
    The basic observation is as follows. In the AQC(p) method, the adiabatic error bound we consider, i.e., Equation (4), is the so-called instantaneous adiabatic error bound, which holds for all \( s\in [0,1] \). However, when using AQC for solving QLSP, it suffices to focus only on the error bound at the final time \( s=1 \). It turns out that this allows us to obtain a tighter error bound. In fact, such an error bound can be exponentially small with respect to the runtime [1, 15, 25, 32]. Roughly speaking, under the additional assumption that the derivatives of the Hamiltonian \( H(f(s)) \) of any order vanish at \( s=0,1 \), the adiabatic error can be bounded by \( c_1\exp (-c_2T^{\alpha }) \) for some positive constants \( c_1, c_2, \alpha \). Furthermore, it is proved in [15] that if the target eigenvalue is simple, then \( c_1 = {\mathcal {O}}(\Delta _*^{-1}) \) and \( c_2 = {\mathcal {O}}(\Delta _*^3) \). Note that \( \Delta _* \ge \kappa ^{-1} \) for QLSP; thus, according to this bound, to obtain an \( \epsilon \)-approximation it suffices to choose \( T = {\mathcal {O}}(\kappa ^3~\text{poly}(\log (\kappa /\epsilon))) \). This is an exponential speedup with respect to \( \epsilon \), but the dependence on the condition number becomes cubic again.
    However, it is possible to reduce the runtime if the change of the Hamiltonian is slow when the gap is small, as we have already seen in the AQC(p) method. For QLSP, the gap monotonically decreases, and the smallest gap occurs uniquely at the final time, where the Hamiltonian \( H(s) \) can be set to vary slowly by requiring its derivatives to vanish at the boundary.
    We consider the following schedule
    \( \begin{equation} f(s) = c_e^{-1} \int _0^s \exp \left(-\frac{1}{s^{\prime }(1-s^{\prime })}\right){\mathrm{d}} s^{\prime }, \end{equation} \)
    (8)
    where \( c_e = \int _0^1\exp \left(-1/(s^{\prime }(1-s^{\prime }))\right){\mathrm{d}} s^{\prime } \) is a normalization constant such that \( f(1) = 1 \) . This schedule can assure that \( H^{(k)}(0) = H^{(k)}(1) = 0 \) for all \( k\ge 1 \) . We refer to the adiabatic dynamics Equation (2) with the schedule in Equation (8) as the AQC(exp) scheme. Our main result is given in Theorem 2 (see Appendix E for the proof).
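Since Equation (8) has no closed form, \( f(s) \) must be tabulated by numerical quadrature, as noted later in this section. A minimal SciPy sketch (the quadrature routine and tolerances are our own choices for illustration) evaluates the schedule and checks its defining properties:

```python
import numpy as np
from scipy.integrate import quad

def f_aqc_exp(s):
    """AQC(exp) schedule of Eq. (8), computed by numerical quadrature."""
    # smooth bump that vanishes, with all derivatives, at t = 0 and t = 1
    bump = lambda t: np.exp(-1.0 / (t * (1.0 - t))) if 0.0 < t < 1.0 else 0.0
    c_e, _ = quad(bump, 0.0, 1.0)          # normalization so that f(1) = 1
    val, _ = quad(bump, 0.0, s)
    return val / c_e

assert abs(f_aqc_exp(0.0)) < 1e-12
assert abs(f_aqc_exp(1.0) - 1.0) < 1e-6
assert abs(f_aqc_exp(0.5) - 0.5) < 1e-6    # the bump is symmetric about s = 1/2
assert f_aqc_exp(0.01) < 1e-6              # extremely flat start: H^(k)(0) = 0
```

The flatness at both endpoints is what enforces \( H^{(k)}(0)=H^{(k)}(1)=0 \) for all \( k\ge 1 \), the property used in the exponential error bound.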
    Theorem 2.
    Let \( A\in {\mathbb {C}}^{N\times N} \) be a Hermitian positive definite matrix with condition number \( \kappa \) . Then for large enough \( T\gt 0 \) , the final time error \( \Vert \mathinner {|{\psi _T(1)}\rangle }\mathinner {\langle {\psi _T(1)}|} - \mathinner {|{{\widetilde{x}}}\rangle }\mathinner {\langle {{\widetilde{x}}}|}\Vert _2 \) of the AQC(exp) scheme is bounded by
    \( \begin{equation} C\log (\kappa)\exp \left(-C\left(\frac{\kappa \log ^2\kappa }{T}\right)^{-\frac{1}{4}}\right). \end{equation} \)
    (9)
    Therefore, for any \( \kappa \gt e \) and \( 0\lt \epsilon \lt 1 \), in order to prepare an \( \epsilon \)-approximation of the solution of QLSP, it suffices to choose the runtime \( T = {\mathcal {O}}(\kappa \log ^2(\kappa)\log ^4(\tfrac{\log \kappa }{\epsilon })) \).
    Compared with RM and AQC(p), although the \( \log (\kappa) \) dependence reoccurs, AQC(exp) achieves an exponential speedup over RM and AQC(p) with respect to \( \epsilon \) (hence its name), and is thus more suitable for preparing the solution of QLSP with high fidelity. Furthermore, the time scheduling of AQC(exp) is universal, and AQC(exp) does not require knowledge of an upper bound of \( \kappa \).
    We remark that the performance of the AQC(exp) method is sensitive to the perturbations in the scheduling function, which can affect the final error in the AQC(exp) method. This is similar to the finite precision effect in the adiabatic Grover search reported in [18]. Therefore, the scheduling function should be computed to sufficient accuracy on classical computers using numerical quadrature, and implemented accurately as a control protocol on quantum computers.

    5 Gate-based Implementation of AQC

    We briefly discuss how to implement AQC(p) and AQC(exp) on a gate-based quantum computer. Since \( \mathinner {|{\psi _T(s)}\rangle } = \mathcal {T}\exp (-{\mathrm{i}} T\int _0^sH(f(s^{\prime }))ds^{\prime })\mathinner {|{\psi _T(0)}\rangle } \) , where \( \mathcal {T} \) is the time-ordering operator, it is sufficient to implement an efficient time-dependent Hamiltonian simulation of \( H(f(s)) \) .
    One straightforward approach is to use the Trotter splitting method. The lowest order approximation takes the form
    \( \begin{equation} \begin{split} \mathcal {T}\exp \left(-{\mathrm{i}} T\int _0^sH(f(s^{\prime })){\mathrm{d}} s^{\prime }\right) \approx \prod _{m=1}^M \exp \left(-{\mathrm{i}} T h H(f(s_m))\right)\\ \approx \prod _{m=1}^M \exp \left(-{\mathrm{i}} T h (1-f(s_m))H_0\right)\exp \left(-{\mathrm{i}} T h f(s_m)H_1\right), \end{split} \end{equation} \)
    (10)
    where \( h = s/M, s_m = mh \). It is proved in [31] that the error of such an approximation is \( {\mathcal {O}}(\text{poly}(\log (N))T^2/M) \), which indicates that to achieve an \( \epsilon \)-approximation, it suffices to choose \( M = {\mathcal {O}}(\text{poly}(\log (N))T^2/\epsilon) \). On a quantum computer, the operations \( e^{-{\mathrm{i}} \tau H_0} \) and \( e^{-{\mathrm{i}} \tau H_1} \) require a time-independent Hamiltonian simulation process, which can be implemented via techniques such as LCU and QSP [4, 22]. For a d-sparse matrix A, according to [5], the query complexity of a single step with simulation time \( \tau \) is \( \widetilde{{\mathcal {O}}}(d\tau \log (d\tau /\epsilon)) \), where we write \( f=\widetilde{{\mathcal {O}}}(g) \) if \( f={{\mathcal {O}}}(g~\text{poly}\log (g)) \). Here, the \( \widetilde{{\mathcal {O}}} \) means that we neglect the \( \log \log \) factors. Note that the total sum of the simulation times of the single steps is exactly T regardless of the choice of M, and the total query complexity is \( \widetilde{{\mathcal {O}}}(dT\log (dT/\epsilon)) \). Using Theorems 1 and 2, the query complexities of AQC(p) and AQC(exp) are \( \widetilde{{\mathcal {O}}}(d\kappa /\epsilon \log (d\kappa /\epsilon)) \) and \( \widetilde{{\mathcal {O}}}(d\kappa ~\text{poly}(\log (d\kappa /\epsilon))) \), respectively. Nevertheless, M scales as \( {\mathcal {O}}(T^2) \) with respect to the runtime T, which implies that the number of time slices should be at least \( {\mathcal {O}}(\kappa ^2) \). Therefore, the gate complexity scales superlinearly with respect to \( \kappa \). The scaling of the Trotter expansion can be improved using high-order Trotter–Suzuki formulas as well as the recently developed commutator-based error analysis [13], but we will not pursue this direction here.
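The first-order splitting of Equation (10) can be sketched classically as follows. This is a dense illustration; the small random instance, the vanilla schedule, and the error tolerance are our own assumptions. Comparing against the same time grid without splitting isolates the \( {\mathcal {O}}(T^2/M) \) splitting error:

```python
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(0)
N = 4
G = rng.standard_normal((N, N))
A = G @ G.T + N * np.eye(N)        # Hermitian positive definite (illustrative)
A /= np.linalg.norm(A, 2)
b = rng.standard_normal(N)
b /= np.linalg.norm(b)
Qb = np.eye(N) - np.outer(b, b)
Z = np.zeros((N, N))
H0 = np.block([[Z, Qb], [Qb, Z]])
H1 = np.block([[Z, A @ Qb], [Qb @ A, Z]])

T, M = 10.0, 4000
h = 1.0 / M
psi_split = np.concatenate([b, np.zeros(N)]).astype(complex)  # |0, b>
psi_ref = psi_split.copy()
for m in range(1, M + 1):
    fm = m * h                     # vanilla schedule f(s) = s at s_m = m h
    # one factor of Eq. (10): exp(-i T h (1-f) H0) exp(-i T h f H1)
    psi_split = expm(-1j * T * h * (1 - fm) * H0) @ (
        expm(-1j * T * h * fm * H1) @ psi_split)
    # reference on the same grid, without splitting H into H0 and H1
    psi_ref = expm(-1j * T * h * ((1 - fm) * H0 + fm * H1)) @ psi_ref

assert abs(np.linalg.norm(psi_split) - 1.0) < 1e-8  # each factor is unitary
assert np.linalg.norm(psi_split - psi_ref) < 0.05   # O(T^2/M) splitting error
```

Doubling M roughly halves the splitting error, which is the first-order behavior quoted above.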
    There is an efficient way to directly perform the time evolution of \( H(f(s)) \) without using the splitting strategy, following the algorithm proposed by Low and Wiebe in [23], where the time-dependent Hamiltonian simulation is performed based on a truncated Dyson expansion. A detailed discussion on how to implement this algorithm in a gate-efficient way is presented in [20, Appendix C], and here we summarize the basic idea as follows. Assume that we are given two input query models: \( \mathcal {P}_A \) that gives the locations and values of the nonzero entries of the matrix A, and \( \mathcal {P}_B \) that produces the quantum state \( \mathinner {|{b}\rangle } \). Here, the input query models are the same as those in [12]. Then, one can construct the block-encoding representations of the matrix A [16] and the matrix \( Q_b \) with \( {\mathcal {O}}(1) \) additional primitive gates. Next, the block-encodings of A and \( Q_b \) can be applied to build the block-encodings of \( H_0 \) and \( H_1 \), and then the HAM-T model, which is a block-encoding of the select oracle of the time-dependent Hamiltonian \( H(s) \) evaluated at different time steps and serves as the input model in the truncated Dyson series method [23]. Finally, after the construction of HAM-T, the adiabatic dynamics can be simulated following the procedure for solving time-dependent Schrödinger equations discussed in [23].
    The costs of AQC(p) and AQC(exp) are summarized in Table 1; for both methods, almost linear dependence with respect to \( \kappa \) is achieved. The almost linear dependence on \( \kappa \) cannot be improved to \( {\mathcal {O}}(\kappa ^{1-\delta }) \) for any \( \delta \gt 0 \) [17]. Thus, both AQC(p) and AQC(exp) are almost optimal with respect to \( \kappa \), and AQC(exp) further achieves an exponential speedup with respect to \( \epsilon \).
    Table 1. Computational Costs of AQC(p) and AQC(exp) via a Time-dependent Hamiltonian Simulation using the Truncated Dyson Expansion [23]
    Queries: AQC(p) \( {\mathcal {O}}(d\kappa /\epsilon \log (d\kappa /\epsilon)) \); AQC(exp) \( {\mathcal {O}}(d\kappa \text{ poly}(\log (d\kappa /\epsilon))) \)
    Qubits: AQC(p) \( {\mathcal {O}}(n+\log (d\kappa /\epsilon)) \); AQC(exp) \( \widetilde{{\mathcal {O}}}(n+\log (d\kappa /\epsilon)) \)
    Primitive gates: AQC(p) \( {\mathcal {O}}(nd\kappa /\epsilon ~\text{poly}(\log (d\kappa /\epsilon))) \); AQC(exp) \( {\mathcal {O}}(nd\kappa ~ \text{poly}(\log (d\kappa /\epsilon))) \)

    6 QAOA for Solving QLSP

    The QAOA scheme [14] considers the following parameterized wavefunction
    \( \begin{equation} \mathinner {|{\psi _{\theta }}\rangle }:=e^{-{\mathrm{i}}\gamma _P H_1} e^{-{\mathrm{i}}\beta _P H_0}\cdots e^{-{\mathrm{i}}\gamma _1 H_1} e^{-{\mathrm{i}}\beta _1 H_0}\mathinner {|{\psi _i}\rangle }. \end{equation} \)
    (11)
    Here, \( \theta \) denotes the set of \( 2P \) adjustable real parameters \( \lbrace \beta _i,\gamma _i\rbrace _{i=1}^{P} \), and \( \mathinner {|{\psi _i}\rangle } \) is an initial wavefunction. The goal of QAOA is to choose \( \mathinner {|{\psi _i}\rangle } \) and to tune \( \theta \), so that \( \mathinner {|{\psi _\theta }\rangle } \) approximates a target state. In the context of QLSP, we may choose \( \mathinner {|{\psi _i}\rangle }=\mathinner {|{{\widetilde{b}}}\rangle } \), and each step of the QAOA ansatz in Equation (11) can be efficiently implemented using the quantum singular value transformation [16]. More specifically, as discussed in Section 5 and in [20], the block-encodings of \( H_0 \) and \( H_1 \) can be efficiently constructed via the input models for the matrix A and the vector \( \mathinner {|{b}\rangle } \). Then, the quantum singular value transformation can be directly applied to simulate \( e^{-{\mathrm{i}} \beta H_0} \) and \( e^{-{\mathrm{i}} \gamma H_1} \). According to [16, Corollary 62], the cost of each single simulation scales linearly in time and logarithmically in precision, and hence the total complexity of implementing a QAOA ansatz scales linearly in the total runtime of QAOA, defined as \( T:=\sum _{i=1}^P(|\beta _i|+|\gamma _i|) \), and logarithmically in precision. Notice that for a sufficiently large P, the optimal Trotter splitting method becomes a special case of Equation (11). Hence, Theorem 2 implies that with an optimal choice of \( \lbrace \beta _i,\gamma _i\rbrace _{i=1}^P \), the QAOA runtime T is at most \( {\mathcal {O}}(\kappa ~\text{poly}(\log (\kappa /\epsilon))) \). We remark that the validity of such an upper bound requires a sufficiently large P and an optimal choice of \( \theta \). On the other hand, our numerical results suggest that the same scaling can be achieved with a much smaller P.
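A direct matrix-level sketch of the ansatz in Equation (11) is given below; the small dense instance and the random parameters are illustrative assumptions. It also checks that the ansatz never leaves the orthogonal complement of \( \mathinner {|{\bar{b}}\rangle } \), a property used later in this section:

```python
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(0)
N = 4
G = rng.standard_normal((N, N))
A = G @ G.T + N * np.eye(N)        # Hermitian positive definite (illustrative)
A /= np.linalg.norm(A, 2)
b = rng.standard_normal(N)
b /= np.linalg.norm(b)
Qb = np.eye(N) - np.outer(b, b)
Z = np.zeros((N, N))
H0 = np.block([[Z, Qb], [Qb, Z]])
H1 = np.block([[Z, A @ Qb], [Qb @ A, Z]])
psi_i = np.concatenate([b, np.zeros(N)]).astype(complex)   # |psi_i> = |tilde b>

def qaoa_state(theta):
    """Alternating ansatz of Eq. (11); theta = (beta_1..beta_P, gamma_1..gamma_P)."""
    P = len(theta) // 2
    psi = psi_i.copy()
    for beta, gamma in zip(theta[:P], theta[P:]):
        psi = expm(-1j * gamma * H1) @ (expm(-1j * beta * H0) @ psi)
    return psi

theta = rng.standard_normal(6)     # P = 3 layers, random (untuned) parameters
psi = qaoa_state(theta)
# |bar b> = |1, b> is a zero eigenvector of both H0 and H1, so the overlap
# <bar b | psi_theta> stays exactly zero for any parameters
b_bar = np.concatenate([np.zeros(N), b])
assert abs(np.vdot(b_bar, psi)) < 1e-10
```

Tuning \( \theta \) is the job of the classical outer loop discussed next.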
    For a given P, the optimal \( \theta \) maximizes the fidelity as
    \( \begin{equation*} \max _{\theta } F_{\theta }:=|\mathinner {\langle {\psi _{\theta }|{\widetilde{x}}}\rangle }|^2. \end{equation*} \)
    However, the maximization of the fidelity requires the knowledge of the exact solution \( \mathinner {|{{\widetilde{x}}}\rangle }, \) which is not practical. We may instead solve the following minimization problem
    \( \begin{equation} \min _\theta \mathinner {\langle {\psi _\theta |H_1^2|\psi _\theta }\rangle }. \end{equation} \)
    (12)
Since the null space of \( H_1 \) is of dimension 2, one might worry that the unconstrained minimizer \( \mathinner {|{\psi _{\theta }}\rangle } \) has only a small overlap with \( \mathinner {|{{\widetilde{x}}}\rangle } \) . However, this is not a problem due to the choice of the initial state \( \mathinner {|{\psi _i}\rangle }=\mathinner {|{{\widetilde{b}}}\rangle } \) . Notice that by the variational principle the minimizer \( \mathinner {|{\psi _\theta }\rangle } \) maximizes \( \mathinner {\langle {\psi _\theta |P_0(1)|\psi _\theta }\rangle } \) . Using the fact that \( e^{-{\mathrm{i}} \beta H_0}\mathinner {|{\bar{b}}\rangle }=e^{-{\mathrm{i}} \gamma H_1}\mathinner {|{\bar{b}}\rangle }=\mathinner {|{\bar{b}}\rangle } \) for any \( \beta ,\gamma \) , we obtain \( \mathinner {\langle {\bar{b}|\psi _\theta }\rangle }=\mathinner {\langle {\bar{b}|{\widetilde{b}}}\rangle }=0, \) which means the QAOA ansatz prevents the transition to \( \mathinner {|{\bar{b}}\rangle } \) , similar to AQC. Then, \( \mathinner {\langle {\psi _\theta |P_0(1)|\psi _\theta }\rangle }=\mathinner {\langle {\psi _\theta |{\widetilde{x}}}\rangle }\mathinner {\langle {{\widetilde{x}}|\psi _\theta }\rangle }=F_\theta \) , so the minimizer of Equation (12) indeed maximizes the fidelity.
For every choice of \( \theta \) , we evaluate the expectation value \( \mathinner {\langle {\psi _\theta |H_1^2|\psi _\theta }\rangle } \) . Then, the next \( \theta \) is adjusted on a classical computer towards minimizing the objective function. The process is repeated until convergence. Efficient classical algorithms for the optimization of parameters in QAOA are currently an active topic of research, including methods using gradient optimization [24, 36], Pontryagin’s maximum principle [3, 35], and reinforcement learning [9, 26], to name a few. Algorithm 1 describes the procedure of using QAOA to solve QLSP.
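As an illustration of this feedback loop, the following is a minimal classical simulation sketch (not Algorithm 1 itself, which evaluates the ansatz on a quantum device): it builds the ansatz of Equation (11) by dense matrix exponentials and minimizes the surrogate objective of Equation (12) with a derivative-free optimizer. The random Hermitian `H0`, `H1` and the initial state are placeholders, not the actual QLSP operators.

```python
import numpy as np
from scipy.linalg import expm
from scipy.optimize import minimize

rng = np.random.default_rng(0)

# Small random Hermitian stand-ins for H0 and H1 (placeholders only,
# chosen to keep the example self-contained).
n = 4
H0 = rng.standard_normal((n, n)); H0 = (H0 + H0.T) / 2
H1 = rng.standard_normal((n, n)); H1 = (H1 + H1.T) / 2
psi_init = np.zeros(n, dtype=complex); psi_init[0] = 1.0  # stand-in for |b~>

def qaoa_state(theta):
    # Alternating ansatz of Eq. (11): layer i applies exp(-i beta_i H0),
    # then exp(-i gamma_i H1), starting from the initial state.
    betas, gammas = np.split(np.asarray(theta), 2)
    psi = psi_init.copy()
    for beta, gamma in zip(betas, gammas):
        psi = expm(-1j * beta * H0) @ psi
        psi = expm(-1j * gamma * H1) @ psi
    return psi

def objective(theta):
    # Measurable surrogate of Eq. (12): <psi_theta| H1^2 |psi_theta>.
    v = H1 @ qaoa_state(theta)
    return np.real(np.vdot(v, v))

P = 3
x0 = rng.standard_normal(2 * P)                 # initial angles
res = minimize(objective, x0, method="Nelder-Mead")
T_runtime = np.abs(res.x).sum()                  # T = sum_i (|beta_i| + |gamma_i|)
print(res.fun, T_runtime)
```

For the true QLSP one would substitute the Hamiltonians of the main text and read off \( T=\sum _{i=1}^P(|\beta _i|+|\gamma _i|) \) as the QAOA runtime.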
Compared to AQC(p) and AQC(exp), the QAOA method has two potential advantages. The first is that QAOA provides the possibility of going beyond AQC-based algorithms. Notice that the Trotter splitting method is a special case of the QAOA ansatz in Equation (11). If the angles \( \lbrace \beta _i,\gamma _i\rbrace _{i=1}^P \) have been properly optimized (which is a very strong assumption and will be further discussed later), the total QAOA runtime T will by definition be comparable to or even shorter than the runtime of AQC with the best scheduling function (after discretization). Second, implementing AQC(p) and AQC(exp) via an operator splitting method requires the time interval to be explicitly split into a large number of subintervals, while numerical results indicate that the number of intervals P in QAOA can be much smaller. This could reduce the depth of the quantum circuit. Compared to AQC, QAOA has the additional advantage that it consists of only \( 2P \) time-independent Hamiltonian simulation problems, once \( \theta \) is known.
Despite the potential advantages, several severe caveats of using QAOA for QLSP arise when we look beyond the time complexity. The first is that classical optimization of the angles \( \lbrace \beta _i,\gamma _i\rbrace _{i=1}^{P} \) can be difficult. Commonly used classical optimization algorithms, such as the gradient descent method, are likely to get stuck at local optimizers and thus yield sub-optimal performance. The cost of the classical optimization is also hard to know a priori. The optimization may require many iterations, which can diminish the gain from the runtime reduction. The second is related to the accurate computation of the objective function \( O(\theta ^{(k)}) \) . Note that the minimal spectral gap of \( H_1 \) is \( {\mathcal {O}}(\kappa ^{-1}) \) . In order to obtain an \( \epsilon \) -approximation, the precision of measuring \( O(\theta)=\mathinner {\langle {\psi _{\theta }|H_1^2|\psi _\theta }\rangle } \) should be at least \( {\mathcal {O}}(\epsilon ^2/\kappa ^2) \) . Hence \( {\mathcal {O}}(\kappa ^4/\epsilon ^4) \) repeated measurements may be needed to achieve the desired accuracy.
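To make the last estimate concrete, here is a back-of-the-envelope sketch (our illustration, with \( \kappa \) and \( \epsilon \) chosen arbitrarily): estimating an expectation value to additive precision \( \delta \) by independent measurements requires on the order of \( 1/\delta ^2 \) samples, and with \( \delta \sim \epsilon ^2/\kappa ^2 \) this reproduces the \( {\mathcal {O}}(\kappa ^4/\epsilon ^4) \) count quoted above.

```python
# Illustrative shot-count estimate (assumes the standard 1/delta^2
# Monte Carlo sampling cost for estimating an expectation value).
kappa, eps = 10.0, 0.01
delta = eps**2 / kappa**2      # required additive precision of O(theta)
shots = 1.0 / delta**2         # variance scaling -> kappa^4 / eps^4
print(f"delta ~ {delta:.1e}, shots ~ {shots:.1e}")
```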

    7 Generalization to Non-Hermitian Matrices

Now we discuss the case when A is not Hermitian positive definite. First, we still assume that A is Hermitian (but not necessarily positive definite). In this case, we adopt the family of Hamiltonians introduced in [30], which overcomes the difficulty caused by the indefiniteness of A at the expense of enlarging the Hilbert space to dimension \( 4N \) (so two ancilla qubits are needed to enlarge the matrix block). Here, we define
    \( \begin{equation*} H_0=\sigma _+ \otimes \left[(\sigma _z \otimes I_N)Q_{+,b}\right] + \sigma _- \otimes \left[Q_{+,b}(\sigma _z \otimes I_N)\right], \end{equation*} \)
    where \( Q_{+,b} = I_{2N}-\mathinner {|{+,b}\rangle }\mathinner {\langle {+,b}|} \) , and \( \mathinner {|{\pm }\rangle }=\frac{1}{\sqrt {2}}(\mathinner {|{0}\rangle }\pm \mathinner {|{1}\rangle }) \) . The null space of \( H_0 \) is \( \text{Null}(H_0)=\text{span}\lbrace \mathinner {|{0,-,b}\rangle }, \mathinner {|{1,+,b}\rangle }\rbrace \) . We also define
    \( \begin{equation*} H_1=\sigma _+ \otimes \left[(\sigma _x \otimes A)Q_{+,b}\right] + \sigma _- \otimes \left[Q_{+,b}(\sigma _x \otimes A)\right]. \end{equation*} \)
    Note that \( \text{Null}(H_1)=\text{span}\lbrace \mathinner {|{0,+,x}\rangle },\mathinner {|{1,+,b}\rangle }\rbrace \) . Therefore, the solution of the QLSP can be obtained if we can prepare the zero-energy state \( \mathinner {|{0,+,x}\rangle } \) of \( H_1 \) .
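These null-space identities are easy to check numerically. The sketch below is our illustration; it assumes the convention \( \sigma _+=\mathinner {|{0}\rangle }\mathinner {\langle {1}|} \) , \( \sigma _-=\mathinner {|{1}\rangle }\mathinner {\langle {0}|} \) (under which the stated null spaces come out exactly), builds \( H_0 \) and \( H_1 \) for a random Hermitian indefinite A, and verifies that the claimed states are annihilated.

```python
import numpy as np

rng = np.random.default_rng(1)
N = 8

# Random Hermitian (generically indefinite) A with ||A||_2 = 1, normalized b.
A = rng.standard_normal((N, N)) + 1j * rng.standard_normal((N, N))
A = (A + A.conj().T) / 2
A /= np.linalg.norm(A, 2)
b = rng.standard_normal(N) + 1j * rng.standard_normal(N)
b /= np.linalg.norm(b)

# Single-qubit operators (assumed convention: sigma_+ = |0><1|).
sp = np.array([[0, 1], [0, 0]], dtype=complex)   # sigma_+
sm = sp.conj().T                                 # sigma_-
sz = np.diag([1.0, -1.0]).astype(complex)
sx = np.array([[0, 1], [1, 0]], dtype=complex)
ket0 = np.array([1, 0], dtype=complex)
ket1 = np.array([0, 1], dtype=complex)
plus = (ket0 + ket1) / np.sqrt(2)
minus = (ket0 - ket1) / np.sqrt(2)

plus_b = np.kron(plus, b)
Q = np.eye(2 * N) - np.outer(plus_b, plus_b.conj())     # Q_{+,b}
SzI = np.kron(sz, np.eye(N))
SxA = np.kron(sx, A)

H0 = np.kron(sp, SzI @ Q) + np.kron(sm, Q @ SzI)
H1 = np.kron(sp, SxA @ Q) + np.kron(sm, Q @ SxA)

# Null-space checks from the text.
x = np.linalg.solve(A, b); x /= np.linalg.norm(x)
v0 = np.kron(ket0, np.kron(minus, b))   # |0,-,b>
v1 = np.kron(ket1, np.kron(plus, b))    # |1,+,b>
vx = np.kron(ket0, np.kron(plus, x))    # |0,+,x>
print(np.linalg.norm(H0 @ v0), np.linalg.norm(H0 @ v1), np.linalg.norm(H1 @ vx))
```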
The family of Hamiltonians for AQC(p) is still given by \( H(f(s)) = (1-f(s))H_0 + f(s)H_1, 0\le s\le 1 \) . Similar to the case of Hermitian positive definite matrices, there is a double degeneracy of the eigenvalue 0, and we aim at preparing one of the eigenstates via time-optimal adiabatic evolution. More precisely, for any s, \( \mathinner {|{1,+,b}\rangle } \) is always in \( \text{Null}(H(f(s))) \) , and there exists a state \( \mathinner {|{{\widetilde{x}}(s)}\rangle } \) with \( \mathinner {|{{\widetilde{x}}(0)}\rangle } = \mathinner {|{0,-,b}\rangle }, \mathinner {|{{\widetilde{x}}(1)}\rangle } = \mathinner {|{0,+,x}\rangle } \) , such that \( \text{Null}(H(f(s)))=\lbrace \mathinner {|{{\widetilde{x}}(s)}\rangle },\mathinner {|{1,+,b}\rangle }\rbrace \) . Such degeneracy does not influence the adiabatic computation starting with \( \mathinner {|{0,-,b}\rangle } \) , for the same reason as in the Hermitian positive definite case (also discussed in [30]), and the error of AQC(p) is still bounded by \( \eta (s) \) given in Equation (4).
Furthermore, the eigenvalue 0 is separated from the rest of the eigenvalues of \( H(f(s)) \) by a gap \( \Delta (f(s))\ge \sqrt {(1-f(s))^2+(f(s)/\kappa)^2} \) [30]. For technical simplicity, noting that \( \sqrt {(1-f)^2+(f/\kappa)^2} \ge (1-f+f/\kappa)/\sqrt {2} \) for all \( 0 \le f \le 1 \) , we define the lower bound of the gap to be \( \Delta _*(f) = (1-f+f/\kappa)/\sqrt {2} \) , which is exactly proportional to that of the Hermitian positive definite case. Therefore, we can use exactly the same time schedules as in the Hermitian positive definite case to perform the AQC(p) and AQC(exp) schemes, whose properties are stated in the following theorems (see Appendices D and E for the proofs).
    Theorem 3.
    Let \( A\in {\mathbb {C}}^{N\times N} \) be a Hermitian matrix (not necessarily positive definite) with condition number \( \kappa \) . For any choice of \( 1 \lt p \lt 2 \) , the AQC(p) scheme gives
    \( \begin{equation} \Vert \mathinner {|{\psi _T(s)}\rangle }\mathinner {\langle {\psi _T(s)}|}-\mathinner {|{0,+,x}\rangle }\mathinner {\langle {0,+,x}|}\Vert _2 \le C \kappa /T. \end{equation} \)
    (13)
    Therefore, in order to prepare an \( \epsilon - \) approximation of the solution of QLSP, it suffices to choose the runtime \( T = {\mathcal {O}}(\kappa /\epsilon) \) . Furthermore, when \( p=1,2 \) , the bound of the runtime becomes \( T={\mathcal {O}}(\kappa \log (\kappa)/\epsilon) \) .
    Theorem 4.
    Let \( A\in {\mathbb {C}}^{N\times N} \) be a Hermitian matrix (not necessarily positive definite) with condition number \( \kappa \) . Then, for large enough \( T\gt 0 \) , the final time error \( \Vert \mathinner {|{\psi _T(1)}\rangle }\mathinner {\langle {\psi _T(1)}|} - \mathinner {|{0,+,x}\rangle }\mathinner {\langle {0,+,x}|}\Vert _2 \) of the AQC(exp) scheme is bounded by
    \( \begin{equation} C\log (\kappa)\exp \left(-C\left(\frac{\kappa \log ^2\kappa }{T}\right)^{-\frac{1}{4}}\right). \end{equation} \)
    (14)
    Therefore, for any \( \kappa \gt e \) , \( 0\lt \epsilon \lt 1 \) , in order to prepare an \( \epsilon - \) approximation of the solution of QLSP, it suffices to choose the runtime \( T = {\mathcal {O}}(\kappa \log ^2(\kappa)\log ^4(\tfrac{\log \kappa }{\epsilon })) \) .
For a general square matrix \( A\in {\mathbb {C}}^{N\times N} \) , following [17] we may transform it into the Hermitian case at the expense of further doubling the dimension of the Hilbert space. Consider an extended QLSP \( \mathfrak {A}\mathinner {|{\mathfrak {x}}\rangle } = \mathinner {|{\mathfrak {b}}\rangle } \) in dimension \( 2N \) where
\( \begin{equation*} \mathfrak {A} = \sigma _+ \otimes A + \sigma _- \otimes A^{\dagger } =\left(\begin{array}{cc} 0 & A \\ A^\dagger & 0 \end{array}\right), \quad \mathinner {|{\mathfrak {b}}\rangle } = \mathinner {|{1,b}\rangle }. \end{equation*} \)
    Note that \( \mathfrak {A} \) is a Hermitian matrix of dimension \( 2N \) , with condition number \( \kappa \) and \( \Vert \mathfrak {A}\Vert _2 = 1 \) , and \( \mathinner {|{\mathfrak {x}}\rangle } := \mathinner {|{1,x}\rangle } \) solves the extended QLSP. Therefore, we can directly apply AQC(p) and AQC(exp) for Hermitian matrix \( \mathfrak {A} \) to prepare an \( \epsilon \) -approximation of x. The total dimension of the Hilbert space becomes \( 8N \) for non-Hermitian matrix A (therefore three ancilla qubits are needed).
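A quick numerical check of the dilation (our illustration; which block of the right-hand side carries \( \mathinner {|{b}\rangle } \) depends on the \( \sigma _\pm \) ordering convention, so we simply verify the block structure): \( \mathfrak {A} \) is Hermitian, its eigenvalues are \( \pm \) the singular values of A (so the condition number is preserved), and solving the \( 2N \) -dimensional system recovers \( x=A^{-1}b \) in one block.

```python
import numpy as np

rng = np.random.default_rng(2)
N = 8
A = rng.standard_normal((N, N)) + 1j * rng.standard_normal((N, N))  # generic non-Hermitian
A /= np.linalg.norm(A, 2)                       # enforce ||A||_2 = 1
b = rng.standard_normal(N); b = b / np.linalg.norm(b)

# Hermitian dilation, block ordering as in the displayed matrix.
Z = np.zeros((N, N))
Afrak = np.block([[Z, A], [A.conj().T, Z]])

sing = np.linalg.svd(A, compute_uv=False)       # singular values of A
evals = np.abs(np.linalg.eigvalsh(Afrak))       # |eigenvalues| of the dilation

# Solve the dilated system with b placed in the first block:
# Afrak (u; v) = (A v; A^dag u), so u = 0 and v = A^{-1} b.
y = np.linalg.solve(Afrak, np.concatenate([b, np.zeros(N)]))
x = np.linalg.solve(A, b)
print(np.linalg.norm(y[N:] - x), np.linalg.norm(y[:N]))
```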

    8 Numerical Results

    We first report the performance of AQC(p), AQC(exp), and QAOA for a series of Hermitian positive definite dense matrices with varying condition numbers, together with the performance of RM and vanilla AQC. The details of the setup of the numerical experiments are given in Appendix F.
Figure 1 shows how the total runtime T depends on the condition number \( \kappa \) and the accuracy \( \epsilon \) for the Hermitian positive definite case. The numerical scaling is reported in Table 2. For the \( \kappa \) dependence, although RM and AQC(1) share the same asymptotic linear complexity with respect to \( \kappa \) , we observe that the preconstant of RM is larger due to its Monte Carlo strategy and its mixed state nature, which results in the same scaling of errors in fidelity and density (see Appendix C for a detailed explanation). The asymptotic scaling of the vanilla AQC is at least \( {\mathcal {O}}(\kappa ^2) \) . When higher fidelity (0.999) is desired, the cost of vanilla AQC becomes too expensive, and we only report the timing of AQC(p), AQC(exp), and QAOA. For the \( \kappa \) dependence tests, the depth of QAOA ranges from 8 to 60. For the \( \epsilon \) dependence test, the depth of QAOA is fixed to be 20. We find that the runtime for AQC(p), AQC(exp), and QAOA depends approximately linearly on \( \kappa \) , while QAOA has the smallest runtime overall. It is also interesting to observe that although the asymptotic scalings of AQC(1) and AQC(2) are both bounded by \( {\mathcal {O}}(\kappa \log \kappa) \) instead of \( {\mathcal {O}}(\kappa) \) , the numerical performance of AQC(2) is much better than that of AQC(1); in fact, the scaling is very close to that with the optimal value of p. For the \( \epsilon \) dependence, the scaling of RM and AQC(p) is \( {\mathcal {O}}(1/\epsilon) \) , which agrees with the error bound. Again the preconstant of RM is slightly larger. Our results also confirm that AQC(exp) depends only poly-logarithmically on \( \epsilon \) . Note that when \( \epsilon \) is relatively large, AQC(exp) requires a longer runtime than AQC(p), but it eventually outperforms AQC(p) when \( \epsilon \) is small enough.
The numerical scaling of QAOA with respect to \( \epsilon \) is found to be only \( {\mathcal {O}}(\log ^{1.5}(1/\epsilon)) \) together with the smallest preconstant.
    Fig. 1. Simulation results for the Hermitian positive definite example. Top (left/right): the runtime to reach desired fidelity \( 0.99/0.999 \) as a function of the condition number. Bottom: a log-log plot of the runtime as a function of the accuracy with \( \kappa = 10 \) .
method | scaling w.r.t. \( \kappa \) | scaling w.r.t. \( 1/\epsilon \)
vanilla AQC | 2.2022 | /
RM | 1.4912 | 1.3479
AQC(1) | 1.4619 | 1.0482
AQC(1.25) | 1.3289 | 1.0248
AQC(1.5) | 1.2262 | 1.0008
AQC(1.75) | 1.1197 | 0.9899
AQC(2) | 1.1319 | 0.9904
AQC(exp) | 1.3718 | 0.5377
AQC(exp) | / | 1.7326 (w.r.t. \( \log (1/\epsilon) \))
QAOA | 1.0635 | 0.4188
QAOA | / | 1.4927 (w.r.t. \( \log (1/\epsilon) \))
Table 2. Numerical Scaling of the Runtime as a Function of the Condition Number and the Accuracy, Respectively, for the Hermitian Positive Definite Example
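The exponents in Table 2 are slopes of least-squares fits on a log-log scale. The snippet below (our illustration with synthetic runtimes, not the paper's data) shows such a fitting procedure:

```python
import numpy as np

# Fit log T = a * log(kappa) + c by least squares. The runtimes below are
# synthetic (generated with T ~ kappa^1.1 plus a small perturbation).
kappas = np.array([10., 20., 40., 80., 160.])
T = 3.0 * kappas**1.1 * (1 + 0.01 * np.sin(kappas))   # mock runtimes
a, c = np.polyfit(np.log(kappas), np.log(T), 1)
print(f"fitted exponent ~ {a:.3f}")
```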
    Figure 2 and Table 3 demonstrate the simulation results for non-Hermitian matrices. We find that numerical performances of RM, AQC(p), AQC(exp), and QAOA are similar to that of the Hermitian positive definite case. Again QAOA obtains the optimal performance in terms of the runtime. The numerical scaling of the optimal AQC(p) is found to be \( {\mathcal {O}}(\kappa /\epsilon) \) , while the time complexity of QAOA and AQC(exp) is only \( {\mathcal {O}}(\kappa ~\text{poly}(\log (\kappa /\epsilon))) \) .
    Fig. 2. Simulation results for the non-Hermitian example. Top: the runtime to reach 0.999 fidelity as a function of the condition number. Bottom: a log-log plot of the runtime as a function of the accuracy with \( \kappa = 10 \) .
method | scaling w.r.t. \( \kappa \) | scaling w.r.t. \( 1/\epsilon \)
vanilla AQC | 2.1980 | /
RM | / | 1.2259
AQC(1) | 1.4937 | 0.9281
AQC(1.25) | 1.3485 | 0.9274
AQC(1.5) | 1.2135 | 0.9309
AQC(1.75) | 1.0790 | 0.9378
AQC(2) | 1.0541 | 0.9425
AQC(exp) | 1.3438 | 0.4415
AQC(exp) | / | 0.9316 (w.r.t. \( \log (1/\epsilon) \))
QAOA | 0.8907 | 0.3283
QAOA | / | 0.7410 (w.r.t. \( \log (1/\epsilon) \))
Table 3. Numerical Scaling of the Runtime as a Function of the Condition Number and the Accuracy, Respectively, for the Non-Hermitian Example

    9 Discussion

By reformulating QLSP into an eigenvalue problem, AQC provides an alternative route to solve QLSP other than those based on phase estimation (such as HHL) and those based on the approximation of matrix functions (such as LCU and QSP). However, the scaling of the vanilla AQC is at least \( {\mathcal {O}}(\kappa ^2/\epsilon) \) , which is unfavorable with respect to both \( \kappa \) and \( \epsilon \) . Thanks to the explicit information of the energy gap along the adiabatic path, we demonstrate that we may reschedule the AQC and dramatically improve its performance for solving QLSP. When the target accuracy \( \epsilon \) is relatively large, the runtime complexity of the AQC(p) method ( \( 1\lt p\lt 2 \) ) is reduced to \( {\mathcal {O}}(\kappa /\epsilon) \) ; for highly accurate calculations with a small \( \epsilon \) , the AQC(exp) method is more advantageous, and its runtime complexity is \( {\mathcal {O}}(\kappa ~\text{poly}(\log (\kappa /\epsilon))) \) . To our knowledge, our AQC(exp) method provides the first example of an adiabatic algorithm that simultaneously achieves near-linear dependence on the spectral gap and poly-logarithmic dependence on the precision.
    Due to the close connection between AQC and QAOA, the runtime complexity of QAOA for solving QLSP is also bounded by \( {\mathcal {O}}(\kappa ~\text{poly}(\log (\kappa /\epsilon))) \) . Both AQC and QAOA can be implemented on gate-based quantum computers. Our numerical results can be summarized using the following relation:
    \( \begin{equation*} \text{QAOA}\lesssim \text{AQC(exp)}\lesssim \text{AQC}(p)\lt \text{RM}\lt \text{vanilla AQC}. \end{equation*} \)
Here, \( A\lt B \) means that the runtime of A is smaller than that of B. The two relations written with \( \lesssim \) come with caveats: \( \text{QAOA}\lesssim \text{AQC(exp)} \) holds only when the optimal \( \theta \) has been found, while \( \text{AQC(exp)}\lesssim \text{AQC}(p) \) holds only when \( \epsilon \) is sufficiently small. While the runtime complexity of AQC(exp) readily provides an upper bound on the runtime complexity of QAOA, numerical results indicate that the optimizer of QAOA often involves a much smaller depth, and hence the dynamics of QAOA does not necessarily follow the adiabatic path. Therefore, it is of interest to find alternative routes to directly prove the scaling of the QAOA runtime without relying on AQC. In the work [20], our AQC based algorithm has been combined with the eigenvector filtering technique. Reference [20] also proposed another AQC inspired quantum linear system solver, which is based on the quantum Zeno effect. Both methods scale linearly in \( \kappa \) and logarithmically in \( 1/\epsilon \) . We expect our AQC based QLSP solvers may serve as useful subroutines in other quantum algorithms as well.

    A The Gap of \( H(f(s)) \) for Hermitian Positive Definite Matrices

    The Hamiltonian \( H(f) \) can be written in the block matrix form as
\( \begin{equation} {H}(f) = \left(\begin{array}{cc} 0 & ((1-f)I + fA)Q_b \\ Q_b((1-f)I + fA) & 0 \end{array}\right) {.} \end{equation} \)
    (15)
    Let \( \lambda \) be an eigenvalue of H, then
\( \begin{align*} 0 &= \det \left(\begin{array}{cc} \lambda I & -((1-f)I + fA)Q_b \\ -Q_b((1-f)I + fA) & \lambda I \end{array}\right) \\ &= \det \left(\lambda ^2 I - ((1-f)I + fA)Q_b^2((1-f)I + fA)\right), \end{align*} \)
where the second equality holds because the bottom two blocks commute. Thus, \( \lambda ^2 \) is an eigenvalue of \( ((1-f)I + fA)Q_b^2((1-f)I + fA) \) , and \( \Delta ^2(f) \) equals the smallest non-zero eigenvalue of \( ((1-f)I + fA)Q_b^2((1-f)I + fA) \) . Since XY and YX share the same non-zero eigenvalues for any matrices X and Y, \( \Delta ^2(f) \) also equals the smallest non-zero eigenvalue of \( Q_b((1-f)I + fA)^2Q_b \) .
Now we focus on the matrix \( Q_b((1-f)I + fA)^2Q_b \) . Note that \( \mathinner {|{b}\rangle } \) is the unique eigenstate corresponding to the eigenvalue 0, and all eigenstates corresponding to non-zero eigenvalues must be orthogonal to \( \mathinner {|{b}\rangle } \) . Therefore,
\( \begin{align*} {\Delta }^2(f) &= \inf _{\mathinner {\langle {b|\varphi }\rangle } = 0, \mathinner {\langle {\varphi |\varphi }\rangle } = 1} \mathinner {\langle {\varphi |Q_b((1-f)I + fA)^2Q_b|\varphi }\rangle } \\ &= \inf _{\mathinner {\langle {b|\varphi }\rangle } = 0, \mathinner {\langle {\varphi |\varphi }\rangle } = 1} \mathinner {\langle {\varphi |((1-f)I + fA)^2|\varphi }\rangle } \\ &\ge \inf _{\mathinner {\langle {\varphi |\varphi }\rangle } = 1} \mathinner {\langle {\varphi |((1-f)I + fA)^2|\varphi }\rangle } \\ &= (1-f+f/\kappa)^2 {,} \end{align*} \)
    and \( {\Delta }(f) \ge {\Delta }_*(f) = 1-f+f/\kappa \) .
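The bound \( \Delta (f)\ge 1-f+f/\kappa \) can also be checked numerically. A minimal sketch (our illustration, using a random Hermitian positive definite A with spectrum in \( [1/\kappa ,1] \) ):

```python
import numpy as np

rng = np.random.default_rng(3)
N, kappa = 8, 10.0

# Hermitian positive definite A with eigenvalues spread over [1/kappa, 1].
U, _ = np.linalg.qr(rng.standard_normal((N, N)))
A = U @ np.diag(np.linspace(1 / kappa, 1.0, N)) @ U.T

b = rng.standard_normal(N); b /= np.linalg.norm(b)
Qb = np.eye(N) - np.outer(b, b)

def gap(f):
    # Square root of the smallest non-zero eigenvalue of Qb ((1-f)I + fA)^2 Qb.
    B = (1 - f) * np.eye(N) + f * A
    w = np.linalg.eigvalsh(Qb @ B @ B @ Qb)
    return np.sqrt(w[w > 1e-10].min())

fs = np.linspace(0, 1, 21)
margins = [gap(f) - (1 - f + f / kappa) for f in fs]
print(min(margins))   # non-negative: Delta(f) >= Delta_*(f)
```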

    B Relations Among Different Measurements of Accuracy

    We show two relations that connect the error of density matrix distance and the error of fidelity, which are used in our proof for AQC(p) and AQC(exp). Following the notations in the main text, let \( \mathinner {|{{\widetilde{x}}(s)}\rangle } \) denote the desired eigenpath of \( H(f(s)) \) corresponding to the 0 eigenvalue, and \( \text{Null}(H(f(s)))=\lbrace \mathinner {|{{\widetilde{x}}(s)}\rangle },\mathinner {|{\bar{b}}\rangle }\rbrace \) . \( P_0(s) \) denotes the projector onto the eigenspace corresponding to the 0 eigenvalue.
    Lemma 5.
    (i) The following equation holds,
    \( \begin{equation} |1-\mathinner {\langle {\psi _T(s)|P_0(s)|\psi _T(s)}\rangle }| = 1-\left|\mathinner {\langle {\psi _T(s)|{\widetilde{x}}(s)}\rangle }\right|^2 = \Vert \mathinner {|{\psi _T(s)}\rangle }\mathinner {\langle {\psi _T(s)}|} - \mathinner {|{{\widetilde{x}}(s)}\rangle }\mathinner {\langle {{\widetilde{x}}(s)}|}\Vert _2^2. \end{equation} \)
    (16)
    (ii) Assume that
    \( \begin{equation*} |1-\mathinner {\langle {\psi _T(s)|P_0(s)|\psi _T(s)}\rangle }|\le \eta ^2(s). \end{equation*} \)
    Then the fidelity can be bounded from below by \( 1-\eta ^2(s) \) , and the 2-norm error of the density matrix can be bounded from above by \( \eta (s) \) .
    Proof.
It suffices to prove only part (i). Since \( \mathinner {|{\bar{b}}\rangle } \) is the eigenstate of both \( H_0 \) and \( H_1 \) corresponding to the 0 eigenvalue, we have \( H(f(s))\mathinner {|{\bar{b}}\rangle }=((1-f(s))H_0+f(s)H_1)\mathinner {|{\bar{b}}\rangle }=0 \) , and thus \( \frac{d}{ds} \mathinner {\langle {\bar{b}|\psi _T(s)}\rangle } = 0 \) . Together with the initial condition \( \mathinner {\langle {\bar{b}|\psi _T(0)}\rangle }=0 \) , the overlap of \( \mathinner {|{\bar{b}}\rangle } \) and \( \mathinner {|{\psi _T(s)}\rangle } \) remains 0 for the whole time period, i.e., \( \mathinner {\langle {\bar{b}|\psi _T(s)}\rangle }=0. \) Since \( P_0(s)=\mathinner {|{{\widetilde{x}}(s)}\rangle }\mathinner {\langle {{\widetilde{x}}(s)}|}+\mathinner {|{\bar{b}}\rangle }\mathinner {\langle {\bar{b}}|} \) , we have \( P_0(s)\mathinner {|{\psi _T(s)}\rangle }=\mathinner {|{{\widetilde{x}}(s)}\rangle }\mathinner {\langle {{\widetilde{x}}(s)|\psi _T(s)}\rangle } \) . Therefore,
    \( \begin{equation*} |1-\mathinner {\langle {\psi _T(s)|P_0(s)|\psi _T(s)}\rangle }| = |1-\mathinner {\langle {\psi _T(s)|{\widetilde{x}}(s)}\rangle }\mathinner {\langle {{\widetilde{x}}(s)|\psi _T(s)}\rangle }| = 1-\left|\mathinner {\langle {\psi _T(s)|{\widetilde{x}}(s)}\rangle }\right|^2. \end{equation*} \)
To prove the second equation, let \( M = \mathinner {|{\psi _T(s)}\rangle }\mathinner {\langle {\psi _T(s)}|} - \mathinner {|{{\widetilde{x}}(s)}\rangle }\mathinner {\langle {{\widetilde{x}}(s)}|} \) . Since \( \Vert M\Vert _2^2 = \lambda _{\max }(M^\dagger M) \) , we study the eigenvalues of \( M^\dagger M \) by first computing
    \( \begin{equation*} M^\dagger M = \mathinner {|{\psi _T(s)}\rangle }\mathinner {\langle {\psi _T(s)}|} + \mathinner {|{{\widetilde{x}}(s)}\rangle }\mathinner {\langle {{\widetilde{x}}(s)}|} - \mathinner {\langle {\psi _T(s)|{\widetilde{x}}(s)}\rangle }\mathinner {|{\psi _T(s)}\rangle }\mathinner {\langle {{\widetilde{x}}(s)}|} - \mathinner {\langle {{\widetilde{x}}(s)|\psi _T(s)}\rangle }\mathinner {|{{\widetilde{x}}(s)}\rangle }\mathinner {\langle {\psi _T(s)}|}. \end{equation*} \)
    Since for any \( \mathinner {|{y}\rangle } \in \text{span}\lbrace \mathinner {|{\psi _T(s)}\rangle },\mathinner {|{{\widetilde{x}}(s)}\rangle }\rbrace ^{\bot } \) , \( M^\dagger M\mathinner {|{y}\rangle } = 0 \) , and
\( \begin{align*} M^\dagger M \mathinner {|{\psi _T(s)}\rangle } &= (1-\left|\mathinner {\langle {\psi _T(s)|{\widetilde{x}}(s)}\rangle }\right|^2)\mathinner {|{\psi _T(s)}\rangle }, \\ M^\dagger M \mathinner {|{{\widetilde{x}}(s)}\rangle } &= (1-\left|\mathinner {\langle {\psi _T(s)|{\widetilde{x}}(s)}\rangle }\right|^2)\mathinner {|{{\widetilde{x}}(s)}\rangle }, \end{align*} \)
    we have \( \Vert M\Vert _2^2 = \lambda _{\max }(M^{\dagger }M) = 1-\left|\mathinner {\langle {\psi _T(s)|{\widetilde{x}}(s)}\rangle }\right|^2 \) .□
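The identity of Equation (16) relating the 2-norm distance of the density matrices to the infidelity is also easy to verify numerically (our illustration with random pure states):

```python
import numpy as np

rng = np.random.default_rng(4)
N = 16

def rand_state(rng, N):
    # Random normalized complex vector (a generic pure state).
    v = rng.standard_normal(N) + 1j * rng.standard_normal(N)
    return v / np.linalg.norm(v)

psi, x = rand_state(rng, N), rand_state(rng, N)
M = np.outer(psi, psi.conj()) - np.outer(x, x.conj())
lhs = np.linalg.norm(M, 2)**2          # || |psi><psi| - |x><x| ||_2^2
rhs = 1 - abs(np.vdot(psi, x))**2      # infidelity 1 - |<psi|x>|^2
print(lhs, rhs)                        # the two agree, as in Eq. (16)
```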

    C Difference between the Scalings of AQC(p) and RM with Respect to Infidelity

    In our numerical test, we observe that to reach a desired fidelity, RM encounters a much larger pre-constant than AQC(p). This is due to the following reason. Although the runtime of both RM and AQC(p) scales as \( {\mathcal {O}}(1/\epsilon) \) where \( \epsilon \) is the 2-norm error of the density matrix, the scalings with respect to the infidelity are different. More specifically, Lemma 5 shows that for AQC, the square of the 2-norm error is exactly equal to the infidelity. Thus, in order to reach infidelity \( 1-F \) using AQC(p), it suffices to choose the runtime to be \( T = {\mathcal {O}}(\kappa /\sqrt {1-F}) \) . Meanwhile, it has been proved in [6] that the runtime complexity of RM is \( {\widetilde{\mathcal {O}}}(\kappa /(1-F)) \) . Therefore, to reach a desired fidelity, the runtime of AQC(p) will be smaller than that of RM, as shown in our numerical examples.
    We further verify the statement above by studying the relation between the 2-norm error of the density matrix and the infidelity for AQC(p), AQC(exp), and RM using the positive definite example with \( \kappa = 10 \) . In AQC(p) and AQC(exp), we change the runtime to obtain approximations with different errors and infidelity. In RM we vary the number of exponential operators to obtain different levels of accuracy. All other numerical treatments remain unchanged. As shown in Figure 3, the infidelity is exactly the square of 2-norm error in the case of AQC(p) and AQC(exp), while the infidelity of RM only scales approximately linearly with respect to 2-norm error. This also verifies that although the runtime of both AQC(p) and RM scales linearly with respect to \( \epsilon \) , the runtime of AQC(p) can be much smaller to reach desired fidelity.
    Fig. 3. Relation between 2-norm error and infidelity of AQC and RM.

    D Proof of Theorems 1 and 3

The proof of Theorems 1 and 3 rests on a delicate cancellation between the time derivatives \( \Vert H^{(1)}\Vert _2 \) , \( \Vert H^{(2)}\Vert _2 \) and the gap \( \Delta (f(s)) \) in the error bound, and can be completed by carefully analyzing the \( \kappa \) -dependence of each term in \( \eta (s) \) given in Equation (4). Note that in both cases \( {H}(f) = (1-f)H_0+fH_1 \) , and we let \( \Delta _*(f) = (1-f+f/\kappa)/\sqrt {2} \) , since this choice of \( \Delta _* \) serves as a lower bound of the spectral gap in both Theorems 1 and 3. We first compute the derivatives of \( H(f(s)) \) by the chain rule as
    \( \begin{align*} H^{(1)}(s) = \frac{d}{ds}{H}(f(s)) = \frac{d {H}(f(s))}{df}\frac{df(s)}{ds} = (H_1-H_0)c_p {\Delta }_*^p(f(s)) {,} \end{align*} \)
    and
\( \begin{align*} H^{(2)}(s) &= \frac{d}{ds}H^{(1)}(s) = \frac{d}{ds}\left((H_1-H_0)c_p {\Delta }_*^p(f(s))\right) \\ &= (H_1-H_0)c_pp{\Delta }_*^{p-1}(f(s))\frac{d{\Delta }_*(f(s))}{df}\frac{df(s)}{ds} \\ &= \frac{1}{\sqrt {2}}(-1+1/\kappa)(H_1-H_0)c_p^2p{\Delta }_*^{2p-1}(f(s)){.} \end{align*} \)
    Then the first two terms of \( \eta (s) \) can be rewritten as
\( \begin{align*} & \frac{\Vert H^{(1)}(0)\Vert _2}{T \Delta ^2(0)} + \frac{\Vert H^{(1)}(s)\Vert _2}{T \Delta ^2(f(s))} \le \frac{\Vert H^{(1)}(0)\Vert _2}{T {\Delta }_*^2(0)} + \frac{\Vert H^{(1)}(s)\Vert _2}{T {\Delta }_*^2(f(s))} \\ =\ & \frac{\Vert (H_1-H_0)c_p {\Delta }_*^p(f(0))\Vert _2}{T {\Delta }_*^2(0)} + \frac{\Vert (H_1-H_0)c_p {\Delta }_*^p(f(s))\Vert _2}{T {\Delta }_*^2(f(s))} \\ \le \ & \frac{C}{T}\left(c_p{\Delta }_*^{p-2}(0) + c_p{\Delta }_*^{p-2}(f(s))\right) \\ \le \ & \frac{C}{T}\left(c_p{\Delta }_*^{p-2}(0) + c_p{\Delta }_*^{p-2}(1)\right). \end{align*} \)
    Here, C stands for a general positive constant independent of \( s,\Delta , \) and T. To compute the remaining two terms of \( \eta (s) \) , we use the following change of variable
    \( \begin{equation*} u = f(s^{\prime }), \quad du = \frac{d}{ds^{\prime }}f(s^{\prime })ds^{\prime } = c_p {\Delta }_*^p(f(s^{\prime }))ds^{\prime }, \end{equation*} \)
    and the last two terms of \( \eta (s) \) become
\( \begin{align*} &\frac{1}{T}\int _0^s \frac{\Vert H^{(2)}\Vert _2}{\Delta ^2}ds^{\prime } \le \frac{1}{T}\int _0^s \frac{\Vert H^{(2)}\Vert _2}{\Delta _*^2}ds^{\prime } \\ =\ &\frac{1}{T}\int _0^s \frac{\Vert \frac{1}{\sqrt {2}}(-1+1/\kappa)(H_1-H_0)c_p^2p{\Delta }_*^{2p-1}(f(s^{\prime }))\Vert _2}{{\Delta }_*^2(f(s^{\prime }))}ds^{\prime } \\ =\ & \frac{1}{T}\int _0^{f(s)} \frac{\Vert \frac{1}{\sqrt {2}}(-1+1/\kappa)(H_1-H_0)c_p^2p{\Delta }_*^{2p-1}(u)\Vert _2}{{\Delta }_*^2(u)}\frac{du}{c_p{\Delta }_*^p(u)} \\ \le \ & \frac{C}{T}\left((1-1/\kappa)c_p\int _0^{f(s)}{\Delta }_*^{p-3}(u) du\right) \\ \le \ & \frac{C}{T}\left((1-1/\kappa)c_p\int _0^{1}{\Delta }_*^{p-3}(u) du\right){,} \end{align*} \)
    and similarly
\( \begin{align*} &\frac{1}{T}\int _0^s \frac{\Vert H^{(1)}\Vert ^2_2}{\Delta ^3}ds^{\prime } \le \frac{1}{T}\int _0^s \frac{\Vert H^{(1)}\Vert ^2_2}{\Delta _*^3}ds^{\prime } \\ =\ & \frac{1}{T}\int _0^s \frac{\Vert (H_1-H_0)c_p {\Delta }_*^p(f(s^{\prime }))\Vert ^2_2}{{\Delta }_*^3(f(s^{\prime }))}ds^{\prime } \\ =\ & \frac{1}{T}\int _0^{f(s)} \frac{\Vert (H_1-H_0)c_p {\Delta }_*^p(u)\Vert ^2_2}{{\Delta }_*^3(u)}\frac{du}{c_p{\Delta }_*^p(u)} \\ \le \ & \frac{C}{T}\left(c_p \int _0^{f(s)}{\Delta }_*^{p-3}(u) du\right) \\ \le \ & \frac{C}{T}\left(c_p \int _0^{1}{\Delta }_*^{p-3}(u) du\right){.} \end{align*} \)
Summarizing all the terms above, an upper bound of \( \eta (s) \) is
\( \begin{align*} \eta (s) &\le \frac{C}{T}\left\lbrace \left(c_p{\Delta }_*^{p-2}(0)+ c_p{\Delta }_*^{p-2}(1)\right) + \left((1-1/\kappa)c_p\int _0^{1}{\Delta }_*^{p-3}(u) du\right) + \left(c_p \int _0^{1}{\Delta }_*^{p-3}(u) du\right)\right\rbrace \\ &= \frac{C}{T}\left\lbrace 2^{-(p-2)/2}\left(c_p+ c_p\kappa ^{2-p}\right) + \left((1-1/\kappa)c_p\int _0^{1}{\Delta }_*^{p-3}(u) du\right) + \left(c_p \int _0^{1}{\Delta }_*^{p-3}(u) du\right)\right\rbrace {.} \end{align*} \)
    Finally, since for \( 1\lt p\lt 2 \)
    \( \begin{equation*} c_p = \int _0^1 {\Delta }_*^{-p}(u) du = \frac{2^{p/2}}{p-1}\frac{\kappa }{\kappa -1}(\kappa ^{p-1}-1), \end{equation*} \)
    and
    \( \begin{align*} \int _0^{1}{\Delta }_*^{p-3}(u) du = \frac{2^{-(p-3)/2}}{2-p} \frac{\kappa }{\kappa -1} (\kappa ^{2-p}-1) {,} \end{align*} \)
    we have
\( \begin{align*} \eta (s) \le &\frac{C}{T}\Big \lbrace \frac{\kappa }{\kappa -1}(\kappa ^{p-1}-1) + \frac{\kappa }{\kappa -1}(\kappa -\kappa ^{2-p}) \\ &+ \frac{\kappa }{\kappa -1}(\kappa ^{p-1}-1) (\kappa ^{2-p}-1) + \left(\frac{\kappa }{\kappa -1}\right)^2(\kappa ^{p-1}-1)(\kappa ^{2-p}-1)\Big \rbrace {.} \end{align*} \)
    The leading term of the bound is \( {\mathcal {O}}(\kappa /T) \) when \( 1\lt p\lt 2 \) .
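The closed-form expressions for \( c_p \) and \( \int _0^1 {\Delta }_*^{p-3}(u)\,du \) used above can be cross-checked against numerical quadrature (our illustration, for one choice of \( \kappa \) and p):

```python
import numpy as np
from scipy.integrate import quad

kappa, p = 10.0, 1.5
Delta_star = lambda u: (1 - u + u / kappa) / np.sqrt(2)

# c_p = int_0^1 Delta_*^{-p}(u) du versus its closed form from the text.
c_p_num, _ = quad(lambda u: Delta_star(u)**(-p), 0, 1)
c_p_cf = 2**(p / 2) / (p - 1) * kappa / (kappa - 1) * (kappa**(p - 1) - 1)

# int_0^1 Delta_*^{p-3}(u) du versus its closed form from the text.
I_num, _ = quad(lambda u: Delta_star(u)**(p - 3), 0, 1)
I_cf = 2**(-(p - 3) / 2) / (2 - p) * kappa / (kappa - 1) * (kappa**(2 - p) - 1)
print(c_p_num, c_p_cf, I_num, I_cf)
```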
    Now we consider the limiting case when \( p = 1,2 \) . Note that the bound for \( \eta (s) \) can still be written as
\( \begin{align*} \eta (s) &\le \frac{C}{T}\left\lbrace \left(c_p{\Delta }_*^{p-2}(0) + c_p{\Delta }_*^{p-2}(1)\right) + \left((1-1/\kappa)c_p\int _0^{1}{\Delta }_*^{p-3}(u) du\right) + \left(c_p \int _0^{1}{\Delta }_*^{p-3}(u) du\right)\right\rbrace \\ &= \frac{C}{T}\Big \lbrace 2^{-(p-2)/2}\left(c_p+ c_p\kappa ^{2-p}\right) + (1-1/\kappa)c_pc_{3-p} + c_p c_{3-p}\Big \rbrace {.} \end{align*} \)
    Straightforward computation shows that
    \( \begin{equation*} c_1 = \int _0^1 {\Delta }_*^{-1}(u) du = \sqrt {2}\frac{\kappa }{\kappa -1}\log (\kappa), \end{equation*} \)
    and
    \( \begin{equation*} c_2 = \int _0^1 {\Delta }_*^{-2}(u) du = 2\frac{\kappa }{\kappa -1}(\kappa -1){.} \end{equation*} \)
    Hence, when \( p=1,2 \) ,
    \( \begin{equation*} \eta (s) \le \frac{C}{T}\Big \lbrace 2^{-(p-2)/2}\left(c_p+ c_p\kappa ^{2-p}\right) + (1-1/\kappa)c_1c_2 + c_1 c_2\Big \rbrace \le C\frac{\kappa \log (\kappa)}{T}. \end{equation*} \)
    This completes the proof of Theorems 1 and 3.
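The limiting-case constants \( c_1 \) and \( c_2 \) can likewise be verified by direct quadrature; the following check is ours, for illustration only:

```python
import numpy as np

kappa = 25.0
n = 400_000
u = (np.arange(n) + 0.5) / n                 # midpoint rule on [0, 1]
delta = (1 - u + u / kappa) / np.sqrt(2)     # Delta_*(u)

c1 = np.sqrt(2) * kappa / (kappa - 1) * np.log(kappa)
c2 = 2 * kappa                               # 2 * kappa/(kappa-1) * (kappa-1)

assert abs(np.mean(delta**-1) - c1) / c1 < 1e-6
assert abs(np.mean(delta**-2) - c2) / c2 < 1e-6
```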

    E Proof of Theorems 2 and 4

We provide a rigorous proof of the error bound for the AQC(exp) scheme. We mainly follow the methodology of [25], together with some of the technical treatments of [15]. Our main contribution is to carefully track the explicit constant dependence in the adiabatic theorem, which is the key to obtaining the \( \widetilde{{\mathcal {O}}}(\kappa) \) scaling. In the AQC(exp) scheme, the Hamiltonian is \( H(s) = (1-f(s))H_0 + f(s)H_1 \) with \( \Vert H_0\Vert , \Vert H_1\Vert \le 1 \) and
    \( \begin{equation} f(s) = \frac{1}{c_e}\int _0^s \exp \left(-\frac{1}{s^{\prime }(1-s^{\prime })}\right){\mathrm{d}} s^{\prime }{.} \end{equation} \)
    (17)
The normalization constant \( c_e = \int _0^1 \exp (-\frac{1}{t(1-t)})dt \approx 0.0070 \) . Let \( U_T(s) \) denote the corresponding unitary evolution operator, and \( P_0(s) \) denote the projector onto the eigenspace corresponding to the eigenvalue 0. We use \( \Delta _*(f) = (1-f+f/\kappa)/\sqrt {2} \) since this serves as a lower bound on the spectral gap in both the cases of Theorems 2 and 4.
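The schedule (17) and the constant \( c_e \) are straightforward to evaluate numerically; the following sketch (ours, for illustration) tabulates f on a grid by the midpoint rule and confirms \( c_e \approx 0.0070 \) as well as the symmetry \( f(1/2)=1/2 \) :

```python
import numpy as np

def f_exp(s, n=100_000):
    """AQC(exp) schedule f(s) = (1/c_e) * int_0^s exp(-1/(t(1-t))) dt, Eq. (17)."""
    t = (np.arange(n) + 0.5) / n              # midpoint grid on (0, 1)
    w = np.exp(-1.0 / (t * (1 - t)))          # integrand; vanishes fast at 0 and 1
    c_e = np.mean(w)                          # normalization constant c_e
    F = np.cumsum(w) / n                      # running integral of the weight
    return np.interp(s, t, F) / c_e, c_e

s = np.linspace(0, 1, 5)
fs, c_e = f_exp(s)
assert abs(c_e - 0.0070) < 5e-4               # c_e ~ 0.0070
assert fs[0] < 1e-3 and abs(fs[-1] - 1) < 1e-6
assert abs(fs[2] - 0.5) < 1e-3                # f(1/2) = 1/2 by symmetry
```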
We first restate the theorems in a unified form, with more technical details, as follows.
    Theorem 6.
    Assume the condition number \( \kappa \gt e \) . Then the final time adiabatic error \( |1-\mathinner {\langle {\psi _T(1)|P_0(1)|\psi _T(1)}\rangle }| \) of AQC(exp) can be bounded by \( \eta _1^2 \) where
    (a) for arbitrary N,
    \( \begin{equation*} \eta _1^2 = A_1D\log ^2\kappa \left(C_2\frac{\kappa \log ^2\kappa }{T} N^4\right)^{N}, \end{equation*} \)
    where \( A_1,D, \) and \( C_2 \) are positive constants which are independent of T, \( \kappa \) and N,
    (b) if T is large enough such that
    \( \begin{equation*} 16eA_1^{-1}D\left(\frac{4\pi ^2}{3}\right)^3\frac{\kappa \log ^2\kappa }{T} \le 1, \end{equation*} \)
    then
    \( \begin{equation*} \eta _1^2 = C_1\log ^2\kappa \exp \left(-\left(C_2\frac{\kappa \log ^2\kappa }{T}\right)^{-\frac{1}{4}}\right), \end{equation*} \)
    where \( A_1,D,C_1, \) and \( C_2 \) are positive constants which are independent of T and \( \kappa \) .
    Corollary 7.
    For any \( \kappa \gt e, 0\lt \epsilon \lt 1 \) , to prepare an \( \epsilon \) -approximation of the solution of QLSP using AQC(exp), it is sufficient to choose the runtime \( T = {\mathcal {O}}(\kappa \log ^2\kappa \log ^4(\tfrac{\log \kappa }{\epsilon })) \) .
    Proof.
We start the proof by considering the projector \( P(s) \) onto an invariant space of \( H(s) \) ; it satisfies
    \( \begin{equation} {\mathrm{i}} \frac{1}{T} \partial _sP(s) = [H(s),P(s)], \quad P^2(s) = P(s) {.} \end{equation} \)
    (18)
    We try the ansatz (only formally)
    \( \begin{equation} P(s) = \sum _{j=0}^{\infty } E_{j}(s)T^{-j}. \end{equation} \)
    (19)
Substituting it into the Heisenberg equation and matching terms of the same order, we get
    \( \begin{equation} [H(s),E_0(s)] = 0, \quad {\mathrm{i}}\partial _s E_j(s) = [H(s), E_{j+1}(s)], \quad E_j(s) = \sum _{m=0}^jE_m(s)E_{j-m}(s) {.} \end{equation} \)
    (20)
    It has been proved in [25] that the solution of Equation (20) with initial condition \( E_0 = P_0 \) is given by
    \( \begin{align} &#x0026;E_0(s) = P_0(s) = -(2\pi {\mathrm{i}})^{-1} \oint _{\Gamma (s)}(H(s)-z)^{-1}dz{,} \end{align} \)
    (21)
    \( \begin{align} &#x0026;E_{j}(s) = (2\pi)^{-1}\oint _{\Gamma (s)}(H(s)-z)^{-1}[E_{j-1}^{(1)}(s),P_0(s)](H(s)-z)^{-1}dz + S_j(s) - 2P_0(s)S_j(s)P_0(s), \end{align} \)
    (22)
    where \( \Gamma (s) = \lbrace z\in {\mathbb {C}}: |z| = \Delta (s)/2\rbrace \) and
    \( \begin{equation} S_j(s) = \sum _{m=1}^{j-1}E_m(s)E_{j-m}(s){.} \end{equation} \)
    (23)
    Furthermore, given \( E_0=P_0 \) , such a solution is unique.
In general, Equation (19) does not converge, so for an arbitrary positive integer N we define the truncated series
    \( \begin{equation} P_{N}(s) = \sum _{j=0}^{N}E_{j}(s)T^{-j}{.} \end{equation} \)
    (24)
    Then
    \( \begin{align*} &#x0026;{\mathrm{i}} \frac{1}{T} P_{N}^{(1)} - [H,P_{N}] = {\mathrm{i}}\frac{1}{T} \sum _{j=0}^{N}E_{j}^{(1)}T^{-j} - \sum _{j=0}^{N}[H,E_{j}]T^{-j} ={\mathrm{i}} T^{-(N+1)}E_{N}^{(1)}{.} \end{align*} \)
In Lemma 10, we prove that \( P_{N}(0)=P_0(0) \) and \( P_{N}(1) = P_0(1) \) ; hence the adiabatic error becomes
\( \begin{align*} |1-\mathinner {\langle {\psi _T(1)|P_0(1)|\psi _T(1)}\rangle }| &#x0026;= |\mathinner {\langle {\psi _T(0)|P_0(0)|\psi _T(0)}\rangle }-\mathinner {\langle {\psi _T(0)|U_T(1)^{-1}P_0(1)U_T(1)|\psi _T(0)}\rangle }|\\ &#x0026;\le \Vert P_0(0)-U_{T}(1)^{-1}P_0(1)U_{T}(1)\Vert \\ &#x0026;= \Vert P_N(0)-U_{T}(1)^{-1}P_N(1)U_{T}(1)\Vert \\ &#x0026;= \left\Vert \int _{0}^1ds\frac{d}{ds}\left(U_T^{-1}P_{N}U_T\right)\right\Vert {.} \end{align*} \)
    Straightforward computations show that
    \( \begin{align*} \frac{d}{ds}(U_T^{-1}) = -U_T^{-1}\frac{d}{ds}(U_T)U_T^{-1} = -U_T^{-1}\frac{T}{{\mathrm{i}}} HU_TU_T^{-1} = -\frac{T}{{\mathrm{i}}} U_T^{-1}H{,} \end{align*} \)
    \( \begin{align*} \frac{d}{ds}\left(U_T^{-1}P_{N}U_T\right) &#x0026;= \frac{d}{ds}(U_T^{-1})P_{N}U_T + U_T^{-1}\frac{d}{ds}(P_{N})U_T + U_T^{-1}P_{N}\frac{d}{ds}(U_T) \\ &#x0026;= -\frac{T}{{\mathrm{i}}}U_T^{-1}HP_{N}U_T + U_T^{-1}\frac{T}{{\mathrm{i}}}[H,P_{N}]U_T + U_T^{-1}T^{-N}E_{N}^{(1)}U_T + \frac{T}{{\mathrm{i}}}U_T^{-1}P_{N}HU_T \\ &#x0026;= T^{-N}U_T^{-1}E_{N}^{(1)}U_T{,} \end{align*} \)
    therefore,
    \( \begin{align*} |1-\mathinner {\langle {\psi _T(1)|P_0(1)|\psi _T(1)}\rangle }| \le \left\Vert \int _{0}^1T^{-N}U_T^{-1}E_{N}^{(1)}U_Tds\right\Vert \le T^{-N} \max _{s\in [0,1]}\Vert E_{N}^{(1)}\Vert {.} \end{align*} \)
    In Lemma 15, we prove that (the constant \( c_f = 4\pi ^2/3 \) )
    \( \begin{align*} \Vert E_{N}^{(1)}\Vert &#x0026;\le A_1A_2^{N}A_3\frac{[(N+1)!]^4}{(1+1)^2(N+1)^2} \\ &#x0026;= \frac{A_1}{4}D\log ^2\kappa \left[ A_1^{-1}c_f^3\frac{16}{\Delta }D\log ^2\kappa \right]^{N}\frac{[(N+1)!]^4}{(N+1)^2}\\ &#x0026; \le \frac{A_1}{4}D\log ^2\kappa \left[ 16A_1^{-1}Dc_f^3\kappa \log ^2\kappa \right]^{N}\frac{[(N+1)!]^4}{(N+1)^2}\\ &#x0026; \le A_1D\log ^2\kappa \left[ 16A_1^{-1}Dc_f^3\kappa \log ^2\kappa N^4\right]^{N}, \end{align*} \)
    where the last inequality comes from the fact that \( [(N+1)!]^4/(N+1)^2 \le 4N^{4N} \) . This completes the proof of part (a).
    When T is large enough, we now choose
    \( \begin{equation*} N = \left\lfloor \left(16eA_1^{-1}Dc_f^3\frac{\kappa \log ^2\kappa }{T}\right)^{-\frac{1}{4}} \right\rfloor \ge 1 {,} \end{equation*} \)
    then
    \( \begin{align*} |1-\mathinner {\langle {\psi _T(1)|P_0(1)|\psi _T(1)}\rangle }| &#x0026; \le A_1D\log ^2\kappa \left[ 16A_1^{-1}Dc_f^3\frac{\kappa \log ^2\kappa }{T} N^4\right]^{N} \\ &#x0026; \le A_1D\log ^2\kappa \exp \left(-\left(16eA_1^{-1}Dc_f^3\frac{\kappa \log ^2\kappa }{T}\right)^{-\frac{1}{4}}\right) {.} \end{align*} \)
    This completes the proof of part (b).□
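All constants in Theorem 6 are explicit. Purely as an illustration (this is not part of the proof), they can be evaluated numerically using the definitions of \( D \) and \( A_1 \) given in Lemmas 14 and 15 below, yielding the smallest T for which the threshold condition of part (b) holds:

```python
import numpy as np

# Explicit constants appearing in the proof (c_e evaluated by quadrature)
t = (np.arange(200_000) + 0.5) / 200_000
c_e = np.mean(np.exp(-1.0 / (t * (1 - t))))          # ~ 0.0070
c_f = 4 * np.pi**2 / 3
D = c_f * 2048 * np.sqrt(2) * np.e**2 / c_e          # Lemma 14
A1 = 0.5 / (c_f**2 * (1 + 2 * c_f**2))               # Lemma 15

kappa = 100.0
# Smallest T for which 16 e A1^{-1} D c_f^3 kappa log^2(kappa) / T <= 1
T_min = 16 * np.e / A1 * D * c_f**3 * kappa * np.log(kappa)**2

assert abs(c_e - 0.0070) < 5e-4
assert 0 < A1 < 0.5 and T_min > 0
```

The resulting constants are large; as noted in the main text, they serve to establish the asymptotic \( \widetilde{{\mathcal {O}}}(\kappa) \) scaling rather than a practically tight runtime estimate.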
The remaining part is devoted to some preliminary results regarding \( H \) and \( E \) , and to the technical estimates for the growth of \( E_j \) . It is worth mentioning in advance that the proof involves many derivatives of contour integrals, and none of these derivatives acts on the contour itself. Specifically, since \( (H(s)-z)^{-1} \) is analytic for any \( 0 \lt |z| \lt \Delta (s) \) , for any \( s_0 \in (0,1) \) there exists a small enough neighborhood \( B_{\delta }(s_0) \) such that \( \forall s \in B_{\delta }(s_0) \) , \( \oint _{\Gamma (s)} G(s,(H(s)-z)^{-1})dz = \oint _{\Gamma (s_0)} G(s,(H(s)-z)^{-1})dz \) for any smooth mapping G. This means that locally the contour integral is unchanged under a smooth change of the contour, and thus the derivatives do not involve derivatives of the contour. In the spirit of this trick, we write the resolvent as \( R(z,s,s_0) = (H(s)-z)^{-1} \) for \( 0 \le s \le 1, 0 \le s_0 \le 1, z \in {\mathbb {C}} \) with \( |z| = \Delta (s_0)/2 \) , and let \( R^{(k)} \) denote the k-th partial derivative with respect to s, i.e., \( \frac{\partial ^k}{\partial s^k}R(z,s,s_0) \) ; that is, by writing \( R^{(k)} \) we only consider the explicit s-derivatives brought by H.
    Lemma 8.
    (a) \( H(s) \in C^{\infty } \) with \( H^{(k)}(0) = H^{(k)}(1) = 0 \) for all \( k \ge 1 \) .
(b) There is a gap \( \Delta (s) \ge \Delta _*(s) = ((1-f(s))+f(s)/\kappa)/\sqrt {2} \) , which separates 0 from the rest of the spectrum.
    The following lemma gives the bound for the derivatives of H.
    Lemma 9.
    For every \( k \ge 1, 0 \lt s \lt 1 \) ,
    \( \begin{equation} \Vert H^{(k)}(s)\Vert \le b(s)a(s)^k\frac{(k!)^2}{(k+1)^2} {,} \end{equation} \)
    (25)
    where
    \( \begin{equation*} b(s) = \frac{2e}{c_e}\exp \left(-\frac{1}{s(1-s)}\right)[s(1-s)]^2, \quad a(s) = \left(\frac{2}{s(1-s)}\right)^2 {.} \end{equation*} \)
    Proof.
We first compute the derivatives of f. Let \( g(s) = -s(1-s) \) and \( h(y) = \exp (1/y) \) , then \( f^{\prime }(s) = c_e^{-1}h(g(s)) \) . By the chain rule for higher-order derivatives (Faà di Bruno’s formula),
    \( \begin{equation*} f^{(k+1)}(s) = c_e^{-1}\sum \frac{k!}{m_1!1!^{m_1}m_2!2!^{m_2}\cdots m_k!k!^{m_k}}h^{(m_1+m_2+\cdots +m_k)}(g(s))\prod _{j=1}^k\left(g^{(j)}(s)\right)^{m_j}, \end{equation*} \)
    where the sum is taken over all k-tuples of non-negative integers \( (m_1,\ldots ,m_k) \) satisfying \( \sum _{j=1}^k jm_j = k \) . Note that \( g^{(j)}(s) = 0 \) for \( j \ge 3 \) , and the sum becomes
    \( \begin{align*} f^{(k+1)}(s) &#x0026;= c_e^{-1}\sum _{m_1+2m_2=k} \frac{k!}{m_1!1!^{m_1}m_2!2!^{m_2}}h^{(m_1+m_2)}(g(s))\left(g^{(1)}(s)\right)^{m_1}\left(g^{(2)}(s)\right)^{m_2} \\ &#x0026;= c_e^{-1}\sum _{m_1+2m_2=k} \frac{k!}{m_1!m_2!2^{m_2}}h^{(m_1+m_2)}(g(s))\left(2s-1\right)^{m_1}2^{m_2} \\ &#x0026;= c_e^{-1}\sum _{m_1+2m_2=k} \frac{k!}{m_1!m_2!}h^{(m_1+m_2)}(g(s))\left(2s-1\right)^{m_1}{.} \end{align*} \)
    To compute the derivatives of h, we use the chain rule again to get (the sum is over \( \sum _{j=1}^m jn_j = m \) )
    \( \begin{align*} h^{(m)}(y) &#x0026;= \sum \frac{m!}{n_1!1!^{n_1}n_2!2!^{n_2}\cdots n_m!m!^{n_m}}\exp (1/y)\prod _{j=1}^m\left(\frac{d^{j}(1/y)}{dy^j}\right)^{n_j} \\ &#x0026;= \sum \frac{m!}{n_1!1!^{n_1}n_2!2!^{n_2}\cdots n_m!m!^{n_m}}\exp (1/y)\prod _{j=1}^m\left((-1)^jj!y^{-j-1}\right)^{n_j} \\ &#x0026;= \sum \frac{(-1)^m m!}{n_1!n_2!\cdots n_m!}\exp (1/y)y^{-m-\sum n_j}. \end{align*} \)
Since \( 0 \le n_j \le m/j \) , the number of tuples \( (n_1,\ldots ,n_m) \) is at most \( (m+1)(m/2+1)(m/3+1)\ldots (m/m+1) = \binom{2m}{m} \lt 2^{2m} \) , so for \( 0 \lt y \lt 1 \) and \( m \le k \) we have
    \( \begin{equation*} |h^{(m)}(y)| \le 2^{2k}k!\exp (1/y)y^{-2k}. \end{equation*} \)
    Therefore \( f^{(k+1)} \) can be bounded as
    \( \begin{align*} |f^{(k+1)}(s)| &#x0026;\le c_e^{-1}\sum _{m_1+2m_2=k} \frac{k!}{m_1!m_2!}2^{2k}k!\exp \left(-\frac{1}{s(1-s)}\right)\left(\frac{1}{s(1-s)}\right)^{2k}|2s-1|^{m_1} \\ &#x0026;\le c_e^{-1}\exp \left(-\frac{1}{s(1-s)}\right)\left(\frac{2}{s(1-s)}\right)^{2k}(k!)^2\sum _{m_1\le k}\frac{1}{m_1!} \\ &#x0026;\le ec_e^{-1}\exp \left(-\frac{1}{s(1-s)}\right)\left(\frac{2}{s(1-s)}\right)^{2k}(k!)^2. \end{align*} \)
Replacing \( k+1 \) by k, for every \( k \ge 1 \) we obtain
    \( \begin{align*} |f^{(k)}(s)| &#x0026;\le ec_e^{-1}\exp \left(-\frac{1}{s(1-s)}\right)\left(\frac{2}{s(1-s)}\right)^{2(k-1)}((k-1)!)^2 \\ &#x0026; \le 4ec_e^{-1}\exp \left(-\frac{1}{s(1-s)}\right)\left(\frac{2}{s(1-s)}\right)^{2(k-1)}\frac{(k!)^2}{(k+1)^2} {.} \end{align*} \)
    Noting that \( \Vert H_0\Vert \le 1, \Vert H_1\Vert \le 1 \) and \( H^{(k)} = (H_1-H_0)f^{(k)} \) , we complete the proof of bounds for \( H^{(k)} \) .□
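Since \( f^{\prime } \) and \( f^{\prime \prime } \) are available in closed form, the bound (25) can be spot-checked numerically for \( k=1,2 \) (using \( \Vert H^{(k)}\Vert \le 2|f^{(k)}| \) ); the following check is ours, for illustration only:

```python
import numpy as np

t = (np.arange(200_000) + 0.5) / 200_000
c_e = np.mean(np.exp(-1.0 / (t * (1 - t))))   # normalization of f, ~ 0.0070

s = np.linspace(1e-3, 1 - 1e-3, 10_001)
phi = s * (1 - s)
w = np.exp(-1.0 / phi)

f1 = w / c_e                                  # f'(s), directly from Eq. (17)
f2 = f1 * (1 - 2 * s) / phi**2                # f''(s), by differentiating f'

b = (2 * np.e / c_e) * w * phi**2
a = (2.0 / phi) ** 2

# ||H^{(k)}|| <= 2|f^{(k)}| must lie below b a^k (k!)^2 / (k+1)^2
assert np.all(2 * np.abs(f1) <= b * a / 4)
assert np.all(2 * np.abs(f2) <= b * a**2 * 4 / 9)
```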
The following result demonstrates that the derivatives of the \( E_{j} \) ’s vanish on the boundary.
    Lemma 10.
    (a) For all \( k \ge 1 \) , \( E_{0}^{(k)}(0) = P_0^{(k)}(0) = 0, E_{0}^{(k)}(1) =P_0^{(k)}(1) = 0 \) .
    (b) For all \( j \ge 1, k \ge 0 \) , \( E_{j}^{(k)}(0) = E_{j}^{(k)}(1) = 0 \) .
    Proof.
We will repeatedly use the fact that \( R^{(k)}(0) = R^{(k)}(1) = 0 \) for all \( k \ge 1 \) . This can be proved by taking the k-th order derivative of the equation \( (H-z)R = I \) , which gives
    \( \begin{equation*} R^{(k)} = -R\sum _{l=1}^k \binom{k}{l}(H-z)^{(l)}R^{(k-l)} = -R\sum _{l=1}^k \binom{k}{l}H^{(l)}R^{(k-l)}{.} \end{equation*} \)
    (a) This is a straightforward result by the definition of \( E_0 \) and the fact that \( R^{(k)} \) ’s vanish on the boundary.
(b) We prove by induction with respect to j. For \( j = 1 \) , Equation (22) shows that
    \( \begin{equation*} E_{1} = (2\pi)^{-1}\oint _{\Gamma }R[P_0^{(1)},P_0]Rdz{.} \end{equation*} \)
Therefore, each term in the derivatives of \( E_{1} \) must involve at least one derivative of R or of \( P_0 \) , which means the derivatives of \( E_{1} \) must vanish on the boundary.
Assume the conclusion holds for all indices less than j. Then for j, first, each term of the derivatives of \( S_j \) must involve the derivative of some \( E_{m} \) with \( m \lt j \) , which means the derivatives of \( S_j \) must vanish on the boundary. Furthermore, for a similar reason, Equation (22) shows that the derivatives of \( E_{j} \) must vanish on the boundary.□
Before we proceed, we recall three technical lemmas introduced in [15, 25]. Throughout, let \( c_f = 4\pi ^2/3 \) denote an absolute constant.
    Lemma 11.
    Let \( \alpha \gt 0 \) be a positive real number, \( p,q \) be non-negative integers, and \( r = p+q \) . Then,
    \( \begin{equation*} \sum _{l=0}^k \binom{k}{l}\frac{[(l+p)!(k-l+q)!]^{1+\alpha }}{(l+p+1)^2(k-l+q+1)^2} \le c_f\frac{[(k+r)!]^{1+\alpha }}{(k+r+1)^2}{.} \end{equation*} \)
    Lemma 12.
    Let k be a non-negative integer, then
    \( \begin{equation*} \sum _{l=0}^k\frac{1}{(l+1)^2(k+1-l)^2} \le c_f\frac{1}{(k+1)^2}{.} \end{equation*} \)
    Lemma 13.
    Let \( A(s),B(s) \) be two smooth matrix-valued functions defined on \( [0,1] \) satisfying
    \( \begin{equation*} \Vert A^{(k)}(s)\Vert \le a_1(s)a_2(s)^k\frac{[(k+p)!]^{1+\alpha }}{(k+1)^2}, \quad \Vert B^{(k)}(s)\Vert \le b_1(s)b_2(s)^k\frac{[(k+q)!]^{1+\alpha }}{(k+1)^2}, \end{equation*} \)
    for some non-negative functions \( a_1,a_2,b_1,b_2 \) , non-negative integers \( p,q, \) and for all \( k \ge 0 \) . Then, for every \( k \ge 0,0 \le s \le 1 \) ,
    \( \begin{equation*} \Vert (A(s)B(s))^{(k)}\Vert \le c_fa_1(s)b_1(s)\max \lbrace a_2(s),b_2(s)\rbrace ^k\frac{[(k+r)!]^{1+\alpha }}{(k+1)^2}, \end{equation*} \)
    where \( r = p+q \) .
Next we bound the derivatives of the resolvent. This bound provides the most important improvement over the general adiabatic bound.
    Lemma 14.
    For all \( k \ge 0 \) ,
    \( \begin{equation*} \Vert R^{(k)}(z,s_0,s_0)\Vert \le \frac{2}{\Delta (s_0)}\left(D\log ^2\kappa \right)^k\frac{(k!)^4}{(k+1)^2}, \end{equation*} \)
    where
    \( \begin{equation*} D = c_f\frac{2048\sqrt {2}e^2}{c_e}. \end{equation*} \)
    Proof.
We prove by induction, and for simplicity we will omit the explicit dependence on the arguments \( z, s, \) and \( s_0 \) . The estimate obviously holds for \( k = 0 \) . Assume the estimate holds for all orders less than k. Taking the k-th order derivative of the equation \( (H-z)R = I \) , we get
    \( \begin{equation*} R^{(k)} = -R\sum _{l=1}^k \binom{k}{l}(H-z)^{(l)}R^{(k-l)} = -R\sum _{l=1}^k \binom{k}{l}H^{(l)}R^{(k-l)}{.} \end{equation*} \)
    Using Lemma 9 and the induction hypothesis, we have
    \( \begin{align*} \Vert R^{(k)}\Vert _2 &#x0026;\le \frac{2}{\Delta }\sum _{l=1}^k\binom{k}{l}ba^l\frac{(l!)^2}{(l+1)^2}\frac{2}{\Delta }\left(D\log ^2\kappa \right)^{k-l}\frac{[(k-l)!]^4}{(k-l+1)^2}. \end{align*} \)
    To proceed we need to bound the term \( \Delta ^{-1}ba^l \) for \( l \ge 1 \) . Let us define
    \( \begin{equation*} F(s) = \frac{c_e}{2^{2l}2\sqrt {2}e}\Delta ^{-1}_*(s)b(s)a(s)^l = \frac{\exp (-\frac{1}{s(1-s)})}{(1-f(s)+f(s)/\kappa)[s(1-s)]^{2l-2}}{.} \end{equation*} \)
Note that \( F(0)=F(1)=0, F(s) \gt 0 \) for \( s\in (0,1) \) , and \( F(1/2+t)\gt F(1/2-t) \) for \( t \in (0,1/2) \) , so there exists a maximizer \( s_* \in [1/2,1) \) such that \( F(s) \le F(s_*), \forall s \in [0,1] \) . Furthermore, \( F^{\prime }(s_*) = 0 \) . We now compute \( F^{\prime } \) as
    \( \begin{align*} &#x0026;[(1-f+f/\kappa)[s(1-s)]^{2l-2}]^2F^{\prime }(s) \\ = &#x0026; \exp \left(-\frac{1}{s(1-s)}\right)\frac{1-2s}{s^2(1-s)^2}(1-f+f/\kappa)[s(1-s)]^{2l-2} \\ &#x0026; \quad - \exp \left(-\frac{1}{s(1-s)}\right)\left[(-f^{\prime }+f^{\prime }/\kappa)[s(1-s)]^{2l-2} + (1-f+f/\kappa)(2l-2)[s(1-s)]^{2l-3}(1-2s)\right] \\ = &#x0026; \exp \left(-\frac{1}{s(1-s)}\right)[s(1-s)]^{2l-4} \\ &#x0026; \times \Big [ (1-f+f/\kappa)(1-2s)[1-(2l-2)s(1-s)] - \exp \left(-\frac{1}{s(1-s)}\right)c_e^{-1}(-1+1/\kappa)s^2(1-s)^2\Big ] \\ =&#x0026; \exp \left(-\frac{1}{s(1-s)}\right)[s(1-s)]^{2l-4}G(s), \end{align*} \)
    where
    \( \begin{equation*} G(s)= (1-f+f/\kappa)(1-2s)[1-(2l-2)s(1-s)] + \exp \left(-\frac{1}{s(1-s)}\right)c_e^{-1}(1-1/\kappa)s^2(1-s)^2{.} \end{equation*} \)
    The sign of \( F^{\prime }(s) \) for \( s \in (0,1) \) is the same as the sign of \( G(s) \) .
    We now show that \( s_* \) cannot be very close to 1. Precisely, we will prove that for all \( s \in [1-\frac{c}{l\log \kappa },1) \) with \( c = \sqrt {c_e}/4 \approx 0.021 \) , \( G(s) \lt 0 \) . For such s, we have
    \( \begin{equation*} 1-f+f/\kappa \ge f(1/2)/\kappa \gt 0, \end{equation*} \)
    \( \begin{equation*} 1-2s \lt -1/2, \end{equation*} \)
    and
    \( \begin{equation*} 1-(2l-2)s(1-s) \ge 1-(2l-2)(1-s) \ge 1-\frac{2c}{\log \kappa } \ge 1/2{,} \end{equation*} \)
    then
    \( \begin{align*} &#x0026;(1-f+f/\kappa)(1-2s)[1-(2l-2)s(1-s)] \le - \frac{f(1/2)}{4\kappa } = -\frac{1}{8\kappa }{.} \end{align*} \)
    On the other hand,
    \( \begin{align*} \exp \left(-\frac{1}{s(1-s)}\right) &#x0026; \le \exp \left(-\left(1-\frac{c}{l\log \kappa }\right)^{-1}\frac{l\log \kappa }{c}\right) \\ &#x0026; = \kappa ^{-(1-\frac{c}{l\log \kappa })^{-1}\frac{l}{c}} \\ &#x0026; \le \kappa ^{-l/c} \\ &#x0026; \le \kappa ^{-1} {,} \end{align*} \)
    then
    \( \begin{align*} &#x0026;\exp \left(-\frac{1}{s(1-s)}\right)c_e^{-1}(1-1/\kappa)s^2(1-s)^2 \\ \le &#x0026; \frac{1}{\kappa }\frac{1}{c_e}\left(\frac{c}{l\log \kappa }\right)^2 \\ \le &#x0026; \frac{1}{16\kappa } {.} \end{align*} \)
    Therefore, for all \( s \in [1-\frac{c}{l\log \kappa },1] \) we have \( G(s) \le -1/(16\kappa) \lt 0 \) , which indicates \( s_* \le 1-\frac{c}{l\log \kappa } \) .
    We are now ready to bound \( F(s) \) . From the equation \( G(s_*) = 0 \) , we get
    \( \begin{equation*} \frac{\exp \left(-\frac{1}{s_*(1-s_*)}\right)}{1-f+f/\kappa } = \frac{(1-2s_*)[1-(2l-2)s_*(1-s_*)]}{c_e^{-1}(-1+1/\kappa)s_*^2(1-s_*)^2} {,} \end{equation*} \)
    which gives
    \( \begin{align*} F(s) &#x0026;\le F(s_*) \\ &#x0026;= \frac{(1-2s_*)[1-(2l-2)s_*(1-s_*)]}{c_e^{-1}(-1+1/\kappa)[s_*(1-s_*)]^{2l}} \\ &#x0026; \le \frac{2s_*-1}{c_e^{-1}(1-1/\kappa)[s_*(1-s_*)]^{2l}} \\ &#x0026; \le 2c_e\cdot 2^{2l}(1-s_*)^{-2l} \\ &#x0026; \le 2c_e\cdot 2^{2l}\left(\frac{l\log \kappa }{c}\right)^{2l} \\ &#x0026; = 2c_e\left(\frac{64}{c_e}\right)^l(\log \kappa)^{2l}l^{2l}\\ &#x0026; \le \frac{2c_e}{e^2}\left(\frac{64e^2}{c_e}\right)^l(\log \kappa)^{2l}(l!)^2{.} \end{align*} \)
    The last inequality comes from the fact \( l^l \le e^{l-1}l! \) , which can be derived from the fact that
    \( \begin{equation*} \sum _{i=1}^n \log i\ge \int _1^n\log x{\mathrm{d}} x=n\log n-(n-1). \end{equation*} \)
    By definition of \( F(s) \) we immediately get
    \( \begin{equation*} \Delta ^{-1}ba^l \le \frac{2\sqrt {2}e}{c_e}4^lF \le \frac{4\sqrt {2}}{e}\left(\frac{256e^2}{c_e}\right)^l(\log \kappa)^{2l}(l!)^2 {.} \end{equation*} \)
    Now we go back to the estimate of \( R^{(k)} \) . By Lemma 11,
    \( \begin{align*} \Vert R^{(k)}\Vert _2 &#x0026;\le \frac{2}{\Delta }\sum _{l=1}^k\binom{k}{l}ba^l\frac{(l!)^2}{(l+1)^2}\frac{2}{\Delta }\left(D\log ^2\kappa \right)^{k-l}\frac{[(k-l)!]^4}{(k-l+1)^2} \\ &#x0026; \le \frac{2}{\Delta }\sum _{l=1}^k\binom{k}{l}\frac{8\sqrt {2}}{e}\left(\frac{256e^2}{c_e}\right)^l(\log \kappa)^{2l}(l!)^2\frac{(l!)^2}{(l+1)^2}\left(D\log ^2\kappa \right)^{k-l}\frac{[(k-l)!]^4}{(k-l+1)^2} \end{align*} \)
    \( \begin{align*} &#x0026; \le \frac{2}{\Delta } (D\log ^2\kappa)^kc_f^{-1}\sum _{l=1}^k\binom{k}{l}\frac{(l!)^4[(k-l)!]^4}{(l+1)^2(k-l+1)^2} \\ &#x0026; \le \frac{2}{\Delta }(D\log ^2\kappa)^k\frac{(k!)^4}{(k+1)^2}{.} \end{align*} \)
    This completes the proof.□
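The key intermediate bound on \( \Delta ^{-1}ba^l \) can likewise be checked numerically on a grid; the following sketch (ours, for illustration) confirms it for small l, with a comfortable margin:

```python
import numpy as np
import math

kappa = 10.0
n = 400_000
s = (np.arange(n) + 0.5) / n                 # midpoint grid on (0, 1)
phi = s * (1 - s)
w = np.exp(-1.0 / phi)
c_e = np.mean(w)                             # normalization of f
f = np.cumsum(w) / n / c_e                   # schedule f(s)

delta = (1 - f + f / kappa) / np.sqrt(2)     # Delta_*(f(s)) <= Delta(s)
b = (2 * np.e / c_e) * w * phi**2
a = (2.0 / phi) ** 2

for l in (1, 2, 3):
    lhs = np.max(b * a**l / delta)
    rhs = (4 * np.sqrt(2) / np.e) * (256 * np.e**2 / c_e) ** l \
          * np.log(kappa) ** (2 * l) * math.factorial(l) ** 2
    assert lhs <= rhs
```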
The next lemma is the main technical result, which gives the bound on the derivatives of \( E_{j} \) defined in Equation (20).
    Lemma 15.
    (a) For all \( k \ge 0 \) ,
\( \begin{equation} \Vert E_0^{(k)}\Vert = \Vert P_0^{(k)}\Vert \le (D\log ^2\kappa)^k\frac{(k!)^4}{(k+1)^2}{.} \end{equation} \)
    (26)
    (b) For all \( k \ge 0, j \ge 1 \) ,
    \( \begin{equation} \Vert E_{j}^{(k)}\Vert \le A_1A_2^{j}A_3^{k}\frac{[(k+j)!]^4}{(k+1)^2(j+1)^2}, \end{equation} \)
    (27)
    with
    \( \begin{align*} A_1 &#x0026;= \frac{1}{2}\left[c_f^2\left(1+2c_f^2\right)\right]^{-1},\\ A_2 &#x0026;= A_1^{-1}c_f^3\frac{16}{\Delta }D\log ^2\kappa ,\\ A_3 &#x0026;= D\log ^2\kappa {.} \end{align*} \)
    Remark 16.
    The choice of \( A_1,A_2 \) can be rewritten as
    \( \begin{align*} c_f^3\frac{16}{\Delta }D\log ^2\kappa &#x0026; = A_1A_2, \\ c_f^2\left(1+2c_f^2\right)A_1 &#x0026;= \frac{1}{2}. \end{align*} \)
    Furthermore, using \( c_f\gt 1 \) , we have
    \( \begin{equation*} c_f^3\frac{16}{\Delta }\frac{A_3}{A_2} =A_1\le \frac{1}{2}{.} \end{equation*} \)
    These relations will be used in the proof later.
    Proof. (a) By Lemma 14,
\( \begin{equation*} \Vert P_0^{(k)}(s_0)\Vert = \left\Vert (2\pi {\mathrm{i}})^{-1} \oint _{\Gamma (s_0)}R^{(k)}(z,s_0,s_0)dz\right\Vert \le (D\log ^2\kappa)^k\frac{(k!)^4}{(k+1)^2}{.} \end{equation*} \)
(b) We prove by induction with respect to j. For \( j = 1 \) , Equation (22) gives
    \( \begin{equation*} \Vert E_1^{(k)}\Vert = \Vert (2\pi)^{-1}\oint _{\Gamma }\frac{d^k}{ds^k}(R[P_{0}^{(1)},P_0]R)dz\Vert \le \frac{\Delta }{2}\Vert \frac{d^k}{ds^k}(R[P_{0}^{(1)},P_0]R)\Vert {.} \end{equation*} \)
    By Lemmas 13 and 14,
    \( \begin{align*} \Vert E_1^{(k)}\Vert &#x0026;\le \Delta c_f^3\left(\frac{2}{\Delta }\right)^2D\log ^2\kappa (D\log ^2\kappa)^k\frac{[(k+1)!]^4}{(k+1)^2} \\ &#x0026;\le A_1A_2A_3^k\frac{[(k+1)!]^4}{(k+1)^2(1+1)^2} {.} \end{align*} \)
Now assume the estimate holds for all indices less than j. For j, by Lemmas 12 and 13 and the induction hypothesis,
    \( \begin{align*} \Vert S_j^{(k)}\Vert &#x0026;\le \sum _{m=1}^{j-1}c_fA_1A_2^mA_1A_2^{j-m}A_3^k\frac{[(k+j)!]^4}{(k+1)^2(m+1)^2(j-m+1)^2} \\ &#x0026;= A_1^2A_2^jA_3^k\frac{[(k+j)!]^4}{(k+1)^2}c_f\sum _{m=1}^{j-1}\frac{1}{(m+1)^2(j-m+1)^2} \\ &#x0026;\le c_f^2A_1^2A_2^jA_3^k\frac{[(k+j)!]^4}{(k+1)^2(j+1)^2} {.} \end{align*} \)
    Again by Lemmas 13, 14, and the induction hypothesis,
    \( \begin{align*} \qquad \qquad \Vert E_j^{(k)}\Vert &#x0026;\le \left\Vert \frac{d^k}{ds^k}\left((2\pi)^{-1}\oint _{\Gamma }R[E_{j-1}^{(1)},P_0]Rdz\right)\right\Vert + \left\Vert \frac{d^k}{ds^k}S_j\right\Vert + \left\Vert \frac{d^k}{ds^k}\left(2P_0S_jP_0\right)\right\Vert \\ \qquad \qquad &#x0026;\le \Delta c_f^3\left(\frac{2}{\Delta }\right)^2A_1A_2^{j-1}A_3 \frac{1}{j^2}A_3^k\frac{[(k+j)!]^4}{(k+1)^2} +c_f^2A_1^2A_2^jA_3^k\frac{[(k+j)!]^4}{(k+1)^2(j+1)^2} \\ \qquad \qquad &#x0026; \quad \quad + 2c_f^2c_f^2A_1^2A_2^j\frac{1}{(j+1)^2}A_3^k\frac{[(k+j)!]^4}{(k+1)^2} \\ \qquad \qquad &#x0026; \le c_f^3\frac{16}{\Delta }A_1A_2^{j-1}A_3^{k+1}\frac{[(k+j)!]^4}{(k+1)^2(j+1)^2} +c_f^2A_1^2A_2^jA_3^k\frac{[(k+j)!]^4}{(k+1)^2(j+1)^2} \\ \qquad \qquad &#x0026; \quad \quad + 2c_f^4A_1^2A_2^jA_3^k\frac{[(k+j)!]^4}{(k+1)^2(j+1)^2} \\ \qquad \qquad &#x0026; = \left[c_f^3\frac{16}{\Delta }\frac{A_3}{A_2} + c_f^2\left(1+2c_f^2\right)A_1\right] \times \left[A_1A_2^jA_3^k\frac{[(k+j)!]^4}{(k+1)^2(j+1)^2}\right] \\ \qquad \qquad &#x0026; \le A_1A_2^jA_3^k\frac{[(k+j)!]^4}{(k+1)^2(j+1)^2}{.} \end{align*} \)

    F Details of the Numerical Treatments and Examples

For simulation purposes, the AQC schemes are carried out by numerically propagating the corresponding time-dependent Schrödinger equation with a time step size of 0.2. We use the gradient descent method to optimize QAOA and record the runtime corresponding to the lowest error in each case. In QAOA, we also use the true fidelity to measure the error. RM is a Monte Carlo method; each RM calculation involves performing 200 independent runs to obtain the density matrix \( \rho ^{(i)} \) for the i-th repetition, and we then use the averaged density \( \bar{\rho } = 1/n_{\text{rep}}\sum \rho ^{(i)} \) to compute the error. We report the averaged runtime of each single RM calculation. We perform calculations for a series of 64-dimensional Hermitian positive definite dense matrices \( A_1 \) , and 32-dimensional non-Hermitian dense matrices \( A_2 \) , with varying condition number \( \kappa \) .
    For concreteness, for the Hermitian positive definite example, we choose \( A = U\Lambda U^{\dagger } \) . Here, U is an orthogonal matrix obtained by Gram–Schmidt orthogonalization (implemented via a QR factorization) of the discretized periodic Laplacian operator given by
    \( \begin{equation} L = \left(\begin{array}{cccccc} 1 &#x0026; -0.5 &#x0026; &#x0026; &#x0026; &#x0026;-0.5 \\ -0.5 &#x0026; 1 &#x0026; -0.5 &#x0026; &#x0026; &#x0026; \\ &#x0026; -0.5 &#x0026; 1 &#x0026; -0.5 &#x0026; &#x0026; \\ &#x0026; &#x0026; \ddots &#x0026;\ddots &#x0026; \ddots &#x0026; \\ &#x0026; &#x0026; &#x0026; -0.5&#x0026; 1&#x0026; -0.5 \\ -0.5 &#x0026; &#x0026; &#x0026; &#x0026; -0.5 &#x0026; 1 \\ \end{array}\right){.} \end{equation} \)
    (28)
\( \Lambda \) is chosen to be a diagonal matrix with diagonal entries uniformly distributed in \( [1/\kappa ,1] \) . More precisely, \( \Lambda = \text{diag}(\lambda _1,\lambda _2,\ldots ,\lambda _{N}) \) with \( \lambda _k = 1/\kappa + (k-1)h, h = (1-1/\kappa)/(N-1) \) . This construction ensures that A is a Hermitian positive definite matrix with \( \Vert A\Vert _2 = 1 \) and condition number \( \kappa \) . We choose \( \mathinner {|{b}\rangle } = \sum _{k=1}^{N}u_k / \Vert \sum _{k=1}^{N}u_k\Vert _2 \) where \( \lbrace u_k\rbrace \) is the set of the column vectors of U. Here, \( N=64 \) .
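This construction translates directly into a few lines of NumPy; the following sketch (ours, not the simulation code used in the experiments; \( \kappa =10 \) is an illustrative choice) builds A and \( \mathinner {|{b}\rangle } \) and verifies the claimed properties:

```python
import numpy as np

def hermitian_example(N=64, kappa=10.0):
    """Build A = U Lambda U^dag: U from QR (Gram-Schmidt) of the periodic
    Laplacian L in Eq. (28), eigenvalues uniformly spaced in [1/kappa, 1]."""
    shift = np.roll(np.eye(N), 1, axis=1) + np.roll(np.eye(N), -1, axis=1)
    L = np.eye(N) - 0.5 * shift                 # diag 1, off-diag and corners -0.5
    U, _ = np.linalg.qr(L)
    lam = 1 / kappa + np.arange(N) * (1 - 1 / kappa) / (N - 1)
    A = U @ np.diag(lam) @ U.T
    b = U.sum(axis=1)                           # sum of the columns of U
    return A, b / np.linalg.norm(b)

A, b = hermitian_example()
assert np.allclose(A, A.T)                      # Hermitian
assert np.isclose(np.linalg.norm(A, 2), 1.0)    # ||A||_2 = 1
assert np.isclose(np.linalg.cond(A), 10.0)      # condition number kappa
```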
For the non-Hermitian example, we choose \( A = U\Lambda V^{\dagger } \) . Here, U is the same as in the Hermitian positive definite case, except that the dimension is reduced to \( N=32 \) . \( \Lambda = \text{diag}(\lambda _1,\lambda _2,\ldots ,\lambda _N) \) with \( \lambda _k = (-1)^k(1/\kappa +(k-1)h), h = (1-1/\kappa)/(N-1) \) . V is an orthogonal matrix obtained by Gram–Schmidt orthogonalization of the matrix
    \( \begin{equation} K = \left(\begin{array}{cccccc} 2 &#x0026; -0.5 &#x0026; &#x0026; &#x0026; &#x0026;-0.5 \\ -0.5 &#x0026; 2 &#x0026; -0.5 &#x0026; &#x0026; &#x0026; \\ &#x0026; -0.5 &#x0026; 2 &#x0026; -0.5 &#x0026; &#x0026; \\ &#x0026; &#x0026; \ddots &#x0026;\ddots &#x0026; \ddots &#x0026; \\ &#x0026; &#x0026; &#x0026; -0.5&#x0026; 2&#x0026; -0.5 \\ -0.5 &#x0026; &#x0026; &#x0026; &#x0026; -0.5 &#x0026; 2 \\ \end{array}\right){.} \end{equation} \)
    (29)
This construction ensures that A is non-Hermitian with \( \Vert A\Vert _2 = 1 \) and condition number \( \kappa \) . We choose the same \( \mathinner {|{b}\rangle } \) as in the Hermitian positive definite example.
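Analogously, the non-Hermitian test matrix can be assembled and checked as follows (again our illustrative sketch, with \( \kappa =10 \) as an assumed example value):

```python
import numpy as np

def nonhermitian_example(N=32, kappa=10.0):
    """Build A = U Lambda V^dag with alternating-sign spectrum; U and V come
    from QR (Gram-Schmidt) of the matrices in Eqs. (28) and (29)."""
    shift = np.roll(np.eye(N), 1, axis=1) + np.roll(np.eye(N), -1, axis=1)
    U, _ = np.linalg.qr(np.eye(N) - 0.5 * shift)       # from L, Eq. (28)
    V, _ = np.linalg.qr(2 * np.eye(N) - 0.5 * shift)   # from K, Eq. (29)
    h = (1 - 1 / kappa) / (N - 1)
    lam = (-1.0) ** np.arange(1, N + 1) * (1 / kappa + np.arange(N) * h)
    return U @ np.diag(lam) @ V.T

A = nonhermitian_example()
assert not np.allclose(A, A.T)                  # non-Hermitian
assert np.isclose(np.linalg.norm(A, 2), 1.0)    # singular values in [1/kappa, 1]
assert np.isclose(np.linalg.cond(A), 10.0)      # condition number kappa
```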

    Acknowledgments

    We thank Rolando Somma, Yu Tong and Nathan Wiebe for helpful discussions.

    References

    [1]
    Tameem Albash and Daniel A. Lidar. 2018. Adiabatic quantum computation. Rev. Mod. Phys. 90, 1 (2018), 015002.
    [2]
    Andris Ambainis. 2012. Variable time amplitude amplification and quantum algorithms for linear algebra problems. In Proceedings of the STACS’12 (29th Symposium on Theoretical Aspects of Computer Science). Vol. 14. LIPIcs, Paris, France, 636–647.
    [3]
Seraph Bao, Silken Kleer, Ruoyu Wang, and Armin Rahmani. 2018. Optimal control of superconducting gmon qubits using Pontryagin’s minimum principle: Preparing a maximally entangled state with singular bang-bang protocols. Phys. Rev. A 97, 6 (2018), 062343.
    [4]
Dominic W. Berry, Andrew M. Childs, Richard Cleve, Robin Kothari, and Rolando D. Somma. 2015. Simulating Hamiltonian dynamics with a truncated Taylor series. Phys. Rev. Lett. 114, 9 (2015), 090502.
    [5]
    Dominic W. Berry, Andrew M. Childs, and Robin Kothari. 2015. Hamiltonian simulation with nearly optimal dependence on all parameters. In Proceedings of the 2015 IEEE 56th Annual Symposium on Foundations of Computer Science. IEEE, Piscataway, NJ, 792–809.
    [6]
    Sergio Boixo, Emanuel Knill, and Rolando D. Somma. 2009. Eigenpath traversal by phase randomization. Quantum Info. Comput. 9 (2009), 833–855.
    [7]
    Sergio Boixo and Rolando D. Somma. 2010. Necessary condition for the quantum adiabatic approximation. Phys. Rev. A 81, 3 (2010), 032308.
    [8]
    Carlos Bravo-Prieto, Ryan LaRose, M. Cerezo, Yigit Subasi, Lukasz Cincio, and Patrick J. Coles. 2020. Variational Quantum Linear Solver. arxiv:1909.05820. Retrieved from https://arxiv.org/abs/1909.05820.
    [9]
    Marin Bukov, Alexandre G.R. Day, Dries Sels, Phillip Weinberg, Anatoli Polkovnikov, and Pankaj Mehta. 2018. Reinforcement learning in different phases of quantum control. Phys. Rev. X 8, 3 (2018), 031086.
    [10]
Yudong Cao, Anargyros Papageorgiou, Iasonas Petras, Joseph Traub, and Sabre Kais. 2013. Quantum algorithm and circuit design solving the Poisson equation. New J. Phys. 15, 1 (2013), 013021.
    [11]
Shantanav Chakraborty, András Gilyén, and Stacey Jeffery. 2019. The power of block-encoded matrix powers: Improved regression techniques via faster Hamiltonian simulation. In Proceedings of the 46th International Colloquium on Automata, Languages, and Programming (ICALP 2019)(Leibniz International Proceedings in Informatics (LIPIcs), Vol. 132). Schloss Dagstuhl–Leibniz-Zentrum fuer Informatik, Dagstuhl, Germany, 33:1–33:14.
    [12]
    Andrew M. Childs, Robin Kothari, and Rolando D. Somma. 2017. Quantum algorithm for systems of linear equations with exponentially improved dependence on precision. SIAM J. Comput. 46, 6 (2017), 1920–1950.
    [13]
Andrew M. Childs, Yuan Su, Minh C. Tran, Nathan Wiebe, and Shuchen Zhu. 2021. Theory of Trotter error with commutator scaling. Phys. Rev. X 11, 1 (2021), 011020.
    [14]
    Edward Farhi, Jeffrey Goldstone, and Sam Gutmann. 2014. A Quantum Approximate Optimization Algorithm. arXiv:1411.4028. Retrieved from https://arxiv.org/abs/1411.4028.
    [15]
    Yimin Ge, András Molnár, and J. Ignacio Cirac. 2016. Rapid adiabatic preparation of injective projected entangled pair states and gibbs states. Phys. Rev. Lett. 116 (2016), 080503.
    [16]
    András Gilyén, Yuan Su, Guang Hao Low, and Nathan Wiebe. 2019. Quantum singular value transformation and beyond: Exponential improvements for quantum matrix arithmetics. In Proceedings of the 51st Annual ACM SIGACT Symposium on Theory of Computing (STOC 2019). Association for Computing Machinery, New York, NY, 193–204.
    [17]
    Aram W. Harrow, Avinatan Hassidim, and Seth Lloyd. 2009. Quantum algorithm for linear systems of equations. Phys. Rev. Lett. 103, 15 (2009), 150502.
    [18]
Itay Hen. 2019. How quantum is the speedup in adiabatic unstructured search? Quant. Inf. Proc. 18, 6 (2019), 162.
    [19]
    Sabine Jansen, Mary-Beth Ruskai, and Ruedi Seiler. 2007. Bounds for the adiabatic approximation with applications to quantum computation. J. Math. Phys. 48, 10 (2007), 102111.
    [20]
    Lin Lin and Yu Tong. 2020. Optimal polynomial based quantum eigenstate filtering with application to solving quantum linear systems. Quantum 4 (2020), 361.
    [21]
    Joseph W. H. Liu. 1992. The multifrontal method for sparse matrix solution: Theory and practice. SIAM Rev. 34, 1 (1992), 82–109.
    [22]
    Guang Hao Low and Isaac L. Chuang. 2017. Optimal Hamiltonian simulation by quantum signal processing. Phys. Rev. Lett. 118, 1 (2017), 010501.
    [23]
    Guang Hao Low and Nathan Wiebe. 2019. Hamiltonian Simulation in the Interaction Picture. arXiv:1805.00675. Retrieved from https://arxiv.org/abs/1805.00675.
    [24]
    Yvon Maday and Gabriel Turinici. 2003. New formulations of monotonically convergent quantum control algorithms. J. Chem. Phys. 118, 18 (2003), 8191–8196.
    [25]
    Gheorghe Nenciu. 1993. Linear adiabatic theory. Exponential estimates. Comm. Math. Phys. 152, 3 (1993), 479–496.
    [26]
    Murphy Yuezhen Niu, Sergio Boixo, Vadim N. Smelyanskiy, and Hartmut Neven. 2019. Universal quantum control through deep reinforcement learning. npj Quantum Inf. 5, 1 (2019), 33.
    [27]
    Ali T. Rezakhani, W.-J. Kuo, Alioscia Hamma, Daniel A. Lidar, and Paolo Zanardi. 2009. Quantum adiabatic brachistochrone. Phys. Rev. Lett. 103 (2009), 080502.
    [28]
    Jérémie Roland and Nicolas J. Cerf. 2002. Quantum search by local adiabatic evolution. Phys. Rev. A 65, 4 (2002), 042308.
    [29]
    Yousef Saad. 2003. Iterative Methods for Sparse Linear Systems. Vol. 82. SIAM, Philadelphia, PA.
    [30]
    Yiğit Subaşı, Rolando D. Somma, and Davide Orsucci. 2019. Quantum algorithms for systems of linear equations inspired by adiabatic quantum computing. Phys. Rev. Lett. 122, 6 (2019), 060504.
    [31]
    Wim van Dam, Michele Mosca, and Umesh Vazirani. 2001. How powerful is adiabatic quantum computation? In Proceedings of the 42nd IEEE Symposium on Foundations of Computer Science. IEEE, Piscataway, NJ, 279–287.
    [32]
    Nathan Wiebe and Nathan S. Babcock. 2012. Improved error-scaling for adiabatic quantum evolutions. New J. Phys. 14, 1 (2012), 1–10.
    [33]
    Leonard Wossnig, Zhikuan Zhao, and Anupam Prakash. 2018. Quantum linear system algorithm for dense matrices. Phys. Rev. Lett. 120, 5 (2018), 050502.
    [34]
    Xiaosi Xu, Jinzhao Sun, Suguru Endo, Ying Li, Simon C. Benjamin, and Xiao Yuan. 2021. Variational algorithms for linear algebra. Science Bulletin (2021), in press.
    [35]
    Zhi-Cheng Yang, Armin Rahmani, Alireza Shabani, Hartmut Neven, and Claudio Chamon. 2017. Optimizing variational quantum algorithms using Pontryagin’s minimum principle. Phys. Rev. X 7, 2 (2017), 021027.
    [36]
    Wusheng Zhu and Herschel Rabitz. 1998. A rapid monotonically convergent iteration algorithm for quantum optimal control over the expectation value of a positive definite operator. J. Chem. Phys. 109, 2 (1998), 385–391.



        Published In

        ACM Transactions on Quantum Computing, Volume 3, Issue 2 (June 2022), 142 pages
        EISSN: 2643-6817
        DOI: 10.1145/3514178

        Publisher

        Association for Computing Machinery, New York, NY, United States

        Publication History

        Published: 04 March 2022
        Accepted: 01 November 2021
        Revised: 01 September 2021
        Received: 01 April 2020
        Published in TQC Volume 3, Issue 2

        Author Tags

        1. Quantum linear system problem
        2. adiabatic quantum computing
        3. quantum approximate optimization algorithm

        Qualifiers

        • Research-article
        • Refereed

        Funding Sources

        • Department of Energy
        • Quantum Algorithm Teams Program
        • Google Quantum Research Award
        • NSF Quantum Leap Challenge Institute (QLCI) program
