Abstract
We introduce an algorithm to compute expectation values of time-evolved observables on digital quantum computers that requires only bounded average circuit depth to reach arbitrary precision, i.e. produces an unbiased estimator with finite average depth. This finite depth comes with an attenuation of the measured expectation value by a known amplitude, requiring more shots per circuit. The average gate count per circuit for simulation time t is \({\mathcal{O}}({t}^{2}{\mu }^{2})\) with μ the sum of the Hamiltonian coefficients, without dependence on precision, providing a significant improvement over previous algorithms. With shot noise, the average runtime is \({\mathcal{O}}({t}^{2}{\mu }^{2}{\epsilon }^{-2})\) to reach precision ϵ. Because the cost depends only on the sum of the coefficients, the algorithm is particularly well adapted to non-sparse Hamiltonians. The algorithm generalizes to time-dependent Hamiltonians, appearing for example in adiabatic state preparation. These properties make it particularly suitable for present-day relatively noisy hardware that supports only circuits with moderate depth.
Introduction
Hamiltonian dynamics is one of the most promising applications of current and near-term quantum computers1,2. It is believed to be exponentially faster to perform on quantum computers and finds multiple applications, either directly or indirectly as subroutines of other quantum algorithms3,4,5,6,7,8. Given a Hamiltonian H and a simulation time t, there exist several ways to implement U(t) = e^{itH} on a digital quantum computer where only one or two-qubit gates can be used. The simplest approach uses product formulas such as Trotterization to decompose the dynamics into elementary gates \(U(t)\,\approx\, {e}^{it{H}_{1}}\ldots {e}^{it{H}_{n}}\)2,9,10. These methods have been generalized and improved in many ways11,12,13,14,15,16,17,18,19,20,21,22,23,24. In particular, the introduction of randomness25,26,27,28,29,30 such as in the qDRIFT algorithm31,32,33,34,35,36,37,38,39 leads to particularly good theoretical performance for non-sparse Hamiltonians H, i.e. when the number of terms in the Hamiltonian grows faster than \({\mathcal{O}}(L)\) where L is the number of qubits. These algorithms have in common that they only approximate the continuous time evolution with finite-depth circuits, and display discretization errors (sometimes called Trotter errors) that vanish only with infinitely many gates. This is particularly problematic for adiabatic state preparation (since Trotter errors generically lead to an effective heating)5,40,41 and in applications like quantum chemistry42,43,44,45,46,47 where extreme precision is desired. More sophisticated algorithms have better theoretical scalings, such as linear combination of unitaries19,48,49 (LCU), quantum signal processing50,51,52,53,54, or quantum walks55,56, but require a large resource overhead. As a consequence, at least in the near term, exact continuous-time dynamics are considered to be the prerogative of non-gate-based, analog quantum simulators57,58,59,60,61.
In this Article we introduce an algorithm to compute Hamiltonian dynamics of observables \(\langle {\mathcal{M}}(t)\rangle =\langle 0| {e}^{-iHt}{\mathcal{M}}{e}^{iHt}| 0\rangle\) on digital quantum computers without discretization error even with a finite number of gates, for any Hamiltonian H and unitary \({\mathcal{M}}\). This means that the algorithm produces an unbiased estimator for expectation values of time-evolved observables, with a bounded average number of gates. It also applies to probability amplitudes such as \(\langle 0| {e}^{iHt}| 0\rangle\). The expectation value \(\langle {\mathcal{M}}(t)\rangle\) is obtained as the average over random circuits drawn from a judiciously chosen distribution, multiplied by a known factor. One can choose freely the angle τ of the rotations entering the circuit, which only modifies the multiplicative factor and not the precision of the result. It allows one to decrease the gate count at the cost of an increased number of shots on a quantum computer, yielding a natural noise-mitigation technique. The optimal choice of gate angle yields a shot overhead of order \({\mathcal{O}}(1)\) with an average gate count per circuit \({\mathcal{O}}({t}^{2}{\mu }^{2})\) where μ is the sum of the coefficients of the Hamiltonian, without any dependence on the precision desired, in contrast with all previous algorithms. Since the gate count per circuit depends only on μ, the algorithm is particularly adapted to non-sparse Hamiltonians. It also straightforwardly generalizes to time-dependent Hamiltonians without any overhead. Taking into account the shot noise, the runtime of our algorithm is thus \({\mathcal{O}}({t}^{2}{\mu }^{2}{\epsilon }^{-2})\) to reach precision ϵ on the expectation value measured. This is in contrast with all previous shot-based algorithms, where the dependence on ϵ is at least \({\epsilon }^{-2}\log 1/\epsilon\) because the gate count per circuit depends on ϵ. We demonstrate the algorithm with numerical simulations of the electronic structure of the stretched water molecule and of a 2D Ising model, where it outperforms both Trotter formulas and other randomized compilation techniques.
Results
Simulating small-angle gates with large-angle gates
We first introduce the basic mechanism that underpins our algorithm. For a real number 0 < p ≤ 1, an angle 0 ≤ τ ≤ π/2 and O any operator such that O² = I, we have the equality
where
This is obtained from the fact that \({e}^{i\tau O}=I\cos \tau +iO\sin \tau\) for any τ, when O² = I. This relation can be interpreted as follows: the random unitary operator that is equal to the rotation \({e}^{i\tau O}\) with probability p and to identity with probability 1 − p is equal to the (non-random, deterministic) unitary operator \({e}^{i{\tau }^{{\prime} }O}\), up to a rescaling factor λ. This factor λ will be called attenuation factor. Inverting the relation (2), we can instead see τ and \({\tau }^{{\prime} }\) as parameters and deduce the function \(p(\tau ,{\tau }^{{\prime} })\) as
Let us check that for \(0\le {\tau }^{{\prime} }\le \tau \le \pi /2\), this quantity is between 0 and 1. We have
Since \(\sin \tau\) and \((1-\cos \tau )\tan {\tau }^{{\prime} }\) are both positive in this regime, we see that the fraction in parentheses is smaller than 1. Hence, \({\partial }_{{\tau }^{{\prime} }}p\ge 0\). Moreover, since \(p(\tau ,{\tau }^{{\prime} }=0)=0\) and \(p(\tau ,{\tau }^{{\prime} }=\tau )=1\), we conclude that for \(0\le {\tau }^{{\prime} }\le \tau \le \pi /2\) we always have 0 ≤ p ≤ 1.
It follows that in this regime \(p(\tau ,{\tau }^{{\prime} })\) can be interpreted as a probability. Let us now explain how (1) can be used to express a circuit with small gate angles in terms of circuits with large gate angles drawn randomly, through the following minimal example. We consider the state \(\vert \psi \rangle ={e}^{i{\tau }^{{\prime} }O}\vert 0\rangle\) with \(\vert 0\rangle\) a given initial state, and an observable \({\mathcal{M}}\), with \(0\le {\tau }^{{\prime} }\le \tau\). Using (1), the expectation value of \({\mathcal{M}}\) within \(\vert \psi \rangle\) can be written as
This equation means that \(\langle 0| {e}^{-i{\tau }^{{\prime} }O}{\mathcal{M}}{e}^{i{\tau }^{{\prime} }O}| 0\rangle\) can be obtained on average by deleting with probability 1 − p the rotations \({e}^{\pm i\tau O}\) in \(\langle 0| {e}^{-i\tau O}{\mathcal{M}}{e}^{i\tau O}| 0\rangle\), up to a rescaling factor \({\lambda }^{-2}\). This factor \({\lambda }^{-2}\) increases the statistical shot noise when measuring these quantum expectation values on a quantum computer. To reach precision ϵ on \(\langle 0| {e}^{-i{\tau }^{{\prime} }O}{\mathcal{M}}{e}^{i{\tau }^{{\prime} }O}| 0\rangle\), one has to do around \(1/{(\epsilon {\lambda }^{2})}^{2}\) shots on each of the quantum expectation values on the right-hand side of (5). We note that similar mechanisms have been used recently to implement arbitrary rotation angles using predefined rotation angles62.
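As a sanity check, the identity (1) can be verified numerically. The sketch below is a minimal example, assuming O is the Pauli-X matrix and using closed forms for p and λ obtained by matching the I and O components of (1) with the cos/sin decomposition above (the displayed equations (2)–(3) are not reproduced in this text, so these expressions are a reconstruction):

```python
import numpy as np

# Pauli X as an example of an operator O with O^2 = I
O = np.array([[0, 1], [1, 0]], dtype=complex)
I2 = np.eye(2, dtype=complex)

def rot(theta):
    # e^{i theta O} = cos(theta) I + i sin(theta) O, valid because O^2 = I
    return np.cos(theta) * I2 + 1j * np.sin(theta) * O

tau, tau_p = 0.9, 0.3   # large gate angle tau and target angle tau' <= tau
# reconstructed closed forms for p and lambda (assumption, see lead-in)
p = np.tan(tau_p) / (np.tan(tau_p) * (1 - np.cos(tau)) + np.sin(tau))
lam = p * np.sin(tau) / np.sin(tau_p)

lhs = p * rot(tau) + (1 - p) * I2       # average of the random unitary
rhs = lam * rot(tau_p)                  # attenuated rotation lambda * e^{i tau' O}
print(np.allclose(lhs, rhs))            # True
```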
Let us now write this simple example more formally in a general context. We consider a circuit that contains G rotations \({e}^{i{\tau }^{{\prime} }O}\) and that produces the wave function \(\vert \psi \rangle\). For a subset S of these rotations, we denote by \(\vert {\psi }_{S}\rangle\) the wave function obtained with the same circuit but with the rotations \({e}^{i{\tau }^{{\prime} }O}\) contained in S deleted and the remaining rotations not in S replaced by \({e}^{i\tau O}\). Then, for any observable \({\mathcal{M}}\), by repeatedly applying (1), we have
where the sum runs over all the possible subsets \(S,{S}^{{\prime} }\) of gates to delete in the original circuit. The expectation value \(\langle \psi | {\mathcal{M}}| \psi \rangle\) that originally involves a circuit with angle \({\tau }^{{\prime} } >\, 0\) can thus be exactly computed with a circuit involving only larger angles \(\tau\, >\, {\tau }^{{\prime} }\) but in which some rotations are randomly removed with probability 1 − p. The attenuation factor \({\lambda }^{2G}\) increases the number of shots to run on the quantum computer to reach a given precision by a factor \({\lambda }^{-4G}\).
Continuous time limit
Let us now apply the previous mechanism to Hamiltonian simulation. We consider a Hamiltonian H that we write as a linear combination of unitaries
with \({O}_{n}^{2}=I\), which is always possible by writing it as a sum of products of Pauli matrices. Up to replacing O_n by −O_n, we can always assume c_n > 0. We would like to compute the expectation value of a unitary \({\mathcal{M}}\) within the wave function \(\vert \psi (t)\rangle ={e}^{iHt}\vert \psi (0)\rangle\). To that end, we use the Trotter product formula to write
As \({\tau }^{{\prime} }\to 0\), this formula involves \({\mathcal{O}}(1/{\tau }^{{\prime} })\) rotations \({e}^{i{\tau }^{{\prime} }{c}_{n}{O}_{n}}\) with an angle that goes to 0. It requires circuits that are increasingly deep with increasingly smaller angles. Let us apply the previous mechanism to this equation. For each n = 1, …, N, we choose 0 < τ_n ≤ π/2 and implement \({e}^{i{\tau }^{{\prime} }{c}_{n}{O}_{n}}\) using the previously explained protocol with angle τ_n, for \({\tau }^{{\prime} } > 0\) such that \({\tau }^{{\prime} }\le {\tau }_{n}\) for all n. We introduce here a different gate angle τ_n for every term in the Hamiltonian, but for simplicity it could be taken the same for all n. In the sequence of rotations appearing on the right-hand side of (8), instead of each rotation \({e}^{i{\tau }^{{\prime} }{c}_{n}{O}_{n}}\) we apply \({e}^{i{\tau }_{n}{O}_{n}}\) with probability p_n given by
and identity with probability 1 − p_n. The corresponding attenuation factor λ_n is
We are thus writing the time evolution operator eitH as a statistical expectation value over random unitaries as
where we introduced \(M=t/{\tau }^{{\prime} }\) assumed to be an integer, and the random unitary operator V(m) defined as
Here, \({U}_{n}^{(m)}\) are independent random unitary operators that are equal to I with probability 1 − p_n, and equal to \({e}^{i{\tau }_{n}{O}_{n}}\) with probability p_n. The limit when \({\tau }^{{\prime} }\to 0\) of the product of attenuation factors is given by
In V(m), for finite \({\tau }^{{\prime} } >\, 0\), there is a non-zero probability that two unitaries \({U}_{n}^{(m)}\) are not identity. We write
with \({\tilde{V}}^{(m)}\) the random unitary operator equal to I with probability \(1-{{\prod }_{i}(1-p_i)}\times{{\sum }_{n}{p}_{n}/(1-{p}_{n})}\), and for all n = 1, …, N, equal to \({e}^{i{\tau }_{n}{O}_{n}}\) with probability \({p}_{n}/(1-{p}_{n})\times {{\prod }_{i}(1-p_i)}\). The operator δV^{(m)} is thus different from 0 with probability
When it is not 0, the 2-norm of δV^{(m)} is bounded by 2. Hence \(| | {\mathbb{E}}[\delta {V}^{(m)}]| | \le 2q({\tau }^{{\prime} })\). It follows that we have
with
Hence the term δU vanishes when taking the Trotter limit \({\tau }^{{\prime} }\to 0\). The sequence of \(M=t/{\tau }^{{\prime} }\) unitaries \({\tilde{V}}^{(m)}\) are all independent, and each of them is different from I with probability
In probability theory, a process where events occur randomly and independently with probability ηδt in each time window δt converges by definition to a so-called Poisson process with rate η when δt → 0. Hence, in the limit \({\tau }^{{\prime} }\to 0\), the occurrence of unitaries \({\tilde{V}}^{(m)}\) that are different from I exactly converges to a Poisson process with rate \(\mathop{\sum }\nolimits_{n = 1}^{N}\frac{{c}_{n}}{\sin {\tau }_{n}}\), applied during a time t. Since when \({\tilde{V}}^{(m)}\,\ne\, I\), the conditional probability that \({\tilde{V}}^{(m)}={e}^{i{\tau }_{n}{O}_{n}}\) is
we obtain that in the limit \({\tau }^{{\prime} }\to 0\), the application of gate \({e}^{i{\tau }_{n}{O}_{n}}\) exactly follows a Poisson process with rate \(\frac{{c}_{n}}{\sin {\tau }_{n}}\) during a time t. Since there is at most one gate per Trotter step in \({\tilde{V}}^{(m)}\), the rotations \({e}^{i{\tau }_{n}{O}_{n}}\) drawn from the Poisson process for different n's are ordered only according to their time of occurrence, not according to n.
Namely, for each rotation \({e}^{i{\tau }_{n}{O}_{n}}\) we obtain a sequence of gate times \(0 < {t}_{n}^{(1)} < \ldots < {t}_{n}^{({m}_{n})}\, <\, t\) drawn from a Poisson process with rate \({c}_{n}/\sin {\tau }_{n}\). From the Poisson process, the number of such times m_n follows a Poisson distribution with parameter \(\frac{t{c}_{n}}{\sin {\tau }_{n}}\), and once m_n has been drawn randomly, the random times \({t}_{n}^{(j)}\) are real numbers drawn uniformly and independently between 0 and t. For a given realization T of all these times for different n's, we denote by \(\vert {\psi }_{T}\rangle\) the wave function obtained by applying the rotations \({e}^{i{\tau }_{n}{O}_{n}}\) ordered by time of occurrence \({t}_{n}^{(i)}\), irrespective of n. Such a configuration \(\vert {\psi }_{T}\rangle\) is depicted schematically in Fig. 1. Then, denoting \({\mathbb{E}}\) the expectation value with respect to two independent configurations \(T,{T}^{{\prime} }\), we have
This is the fundamental relation that defines our algorithm. Remarkably, the exact expectation value at time t under continuous time dynamics can be obtained without discretization error, by only applying rotations with fixed application time τ_n > 0. The number of rotations N_rot in each circuit is a random variable. Its average is
which is bounded independently of the precision wanted on the result, and only controlled by τ_n. We will study the statistical distribution of this random variable in more detail in a section below.
Statement of the algorithm
We state here the algorithm deduced from the previous explanations as a list of instructions that can be implemented by the reader. It takes as input a Hamiltonian H decomposed as in (7), a unitary observable \({\mathcal{M}}\), a time t and an initial state \(\vert \psi (0)\rangle\), and outputs \(\langle \psi (t)| {\mathcal{M}}| \psi (t)\rangle\). It takes the angles 0 < τ_n ≤ π/2, for n = 1, …, N, as parameters. A minimal classical emulation of these steps is sketched after the list.
1. Prepare the system in the initial state \(\vert \psi (0)\rangle\).

2. For each n ∈ {1, …, N}, draw an integer m_n from a Poisson distribution with parameter \(\frac{{c}_{n}t}{\sin {\tau }_{n}}\). Then draw m_n real numbers \({t}_{n}^{(i)}\), where i ∈ {1, …, m_n}, uniformly at random between 0 and t. These \(M\equiv \mathop{\sum }\nolimits_{n = 1}^{N}{m}_{n}\) numbers are collected into a set T.

3. Deduce then the sequence k_1, …, k_M ∈ {1, …, N} such that the index n corresponding to the j-th smallest element of T is k_j. For m = 1, …, M in this order, apply the rotation \({e}^{i{\tau }_{{k}_{m}}{O}_{{k}_{m}}}\) on the system.

4. Apply the observable \({\mathcal{M}}\) on the system.

5. Repeat steps (2) and (3) with τ_n replaced by −τ_n.

6. Measure the overlap with the initial state and divide the result by Λ² given in (13). On a quantum computer, this step requires performing a Hadamard test.

7. Repeat steps (1) to (6) a number of times and average the results.
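As referenced above, the following is a minimal classical emulation of steps (1)–(7) for a toy two-qubit Hamiltonian, replacing the Hadamard test of step (6) by an exact computation of the overlap. The Hamiltonian, observable, gate angle and number of samples are illustrative assumptions, and the closed form used for Λ, namely \(\Lambda =\exp (-t{\sum }_{n}{c}_{n}\tan ({\tau }_{n}/2))\), is a reconstruction of the small-\({\tau }^{{\prime} }\) limit of the λ_n since Eq. (13) is not reproduced in this text:

```python
import numpy as np
from functools import reduce

rng = np.random.default_rng(1)

# Toy 2-qubit Hamiltonian H = sum_n c_n O_n with O_n Pauli strings and c_n > 0
X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)
I2 = np.eye(2, dtype=complex)
terms = [reduce(np.kron, [Z, Z]), reduce(np.kron, [X, I2]), reduce(np.kron, [I2, X])]
coeffs = np.array([1.0, 0.7, 0.7])
t, tau = 1.0, 0.3                        # simulation time and common gate angle
M_obs = reduce(np.kron, [Z, I2])         # unitary observable
psi0 = np.zeros(4, dtype=complex); psi0[0] = 1.0

def rotation(O, angle):
    return np.cos(angle) * np.eye(4) + 1j * np.sin(angle) * O    # O^2 = I

def sample_state():
    """Steps (2)-(3): draw Poisson gate times, apply rotations ordered by time."""
    events = []
    for c, O in zip(coeffs, terms):
        m = rng.poisson(t * c / np.sin(tau))
        events += [(s, O) for s in rng.uniform(0, t, size=m)]
    psi = psi0.copy()
    for _, O in sorted(events, key=lambda e: e[0]):
        psi = rotation(O, tau) @ psi
    return psi

# Attenuation factor: assumed closed form Lambda = exp(-t sum_n c_n tan(tau/2))
Lambda2 = np.exp(-2 * t * np.sum(coeffs) * np.tan(tau / 2))

# Steps (4)-(7), with the Hadamard test replaced by an exact overlap computation
samples = [np.vdot(sample_state(), M_obs @ sample_state()).real for _ in range(50_000)]
estimate = np.mean(samples) / Lambda2

# Exact reference value <psi(t)| M |psi(t)> with psi(t) = e^{iHt} psi0
Ham = sum(c * O for c, O in zip(coeffs, terms))
w, V = np.linalg.eigh(Ham)
psi_t = V @ (np.exp(1j * w * t) * (V.conj().T @ psi0))
print(estimate, np.vdot(psi_t, M_obs @ psi_t).real)
```

The two independent calls to sample_state() in each sample play the roles of the two independent configurations T and T′.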
Measurement of the time evolution operator
The algorithm presented above can be adapted to computing the amplitude \(\langle \psi (0)| {e}^{iHt}| \psi (0)\rangle\). It is often called the Loschmidt amplitude in the condensed matter literature and can be used to determine ground state energies as well as microcanonical expectation values7,63. With the same notations as in Eq. (20), we have \(\langle \psi (0)| {e}^{iHt}| \psi (0)\rangle ={\mathbb{E}}[\langle \psi (0)| {\psi }_{T}\rangle ]/\Lambda\). One thus skips steps (4) and (5) in the algorithm above, and replaces the attenuation factor Λ² by Λ in step (6).
Sampling cost
Computing the expectation value \(\langle \psi (t)| {\mathcal{M}}| \psi (t)\rangle\) through formula (20) involves a statistical average of quantum expectation values. When performing the measurement on a quantum computer, one has to choose a number of shots S to do per configuration, i.e. how to distribute the resources between the number of configurations and the accuracy of each configuration. If we neglect compilation time, the optimal number of shots can be seen to be S = 1 (see Supplementary Information). In that case, denoting o ∈ {+1, −1} the measurement outcome for every single-shot circuit, the expectation values and variances are
Hence, after dividing the measured values by Λ², denoting \(\bar{o}\in \{+{\Lambda }^{-2},-{\Lambda }^{-2}\}\) the random variable that is an unbiased estimator for \(\langle \psi (t)| {\mathcal{M}}| \psi (t)\rangle\), we have
This means that to reach precision ϵ on the result, one has to perform at most around 1/(ϵ²Λ⁴) shots. These expressions can moreover be used to derive confidence intervals on the expectation value of the observable \({\mathcal{M}}\) using Hoeffding's inequality. Our algorithm can thus provide guarantees that the expectation value of a time-evolved observable has a given high probability of lying within a window of some size ϵ.
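For concreteness, the Hoeffding bound can be turned into a shot budget as follows (a minimal sketch; the value of Λ² used in the example call is an arbitrary placeholder):

```python
import numpy as np

def shots_for_precision(eps, delta, Lambda2):
    # Single-shot estimators rescaled by 1/Lambda^2 take values in
    # [-1/Lambda^2, +1/Lambda^2], so Hoeffding's inequality guarantees
    # |estimate - true value| < eps with probability >= 1 - delta once
    # M >= 2 ln(2/delta) / (eps^2 Lambda^4).
    return int(np.ceil(2 * np.log(2 / delta) / (eps**2 * Lambda2**2)))

# placeholder attenuation factor Lambda^2 = e^{-1/2}
print(shots_for_precision(eps=0.01, delta=0.05, Lambda2=np.exp(-0.5)))
```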
Optimal angle on a perfect quantum computer
Formula (20) holds for any choice of gate angle τ_n. Increasing the gate angle τ_n decreases the number of rotations per circuit \({\mathbb{E}}[{N}_{{\rm{rot}}}]\), but also decreases the attenuation factor Λ², and so increases the number of shots Λ⁻⁴ that have to be performed to reach a given precision. The total number of rotations \(R(\{{\tau }_{n}\})={\mathbb{E}}[{N}_{{\rm{rot}}}]{\Lambda }^{-4}\) to perform to get a precision of order \({\mathcal{O}}(1)\) on the result is thus a non-trivial function of the gate angles τ_n, which reaches a minimum at some \({\tau }_{n}^{* }\) that can be computed numerically. At large t, we find analytically that the optimal angle is
for all n = 1, …, N, where \(\mu =\mathop{\sum }\nolimits_{n = 1}^{N}{c}_{n}\) is the sum of the coefficients of the Hamiltonian (see Supplementary Information). In that limit we find Λ⁴ = e⁻¹, which remains finite, and \({\mathbb{E}}[{N}_{{\rm{rot}}}]=4{t}^{2}{\mu }^{2}\). Hence the average gate count per circuit scales as
The number of shots to perform to reach precision ϵ scales as \({\mathcal{O}}({\epsilon }^{-2})\), and so the runtime (number of shots times number of gates per shot) to reach precision ϵ scales as \({\mathcal{O}}({t}^{2}{\mu }^{2}{\epsilon }^{-2})\). But remarkably, the precision ϵ does not enter the average gate count per circuit, which remains finite even when ϵ → 0. This differs from usual algorithms such as Trotter, qDRIFT or LCU, whose gate count per circuit we recall in Table 1.
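A quick numerical check of these statements is sketched below, under assumptions: a common angle τ for all terms, \({\mathbb{E}}[{N}_{{\rm{rot}}}]=2t\mu /\sin \tau\) for the forward and backward circuits together, and the reconstructed closed form \(\Lambda =\exp (-t\mu \tan (\tau /2))\), since Eqs. (13) and (24) are not reproduced in this text. Minimizing the resulting cost function recovers Λ⁴ ≈ e⁻¹ and \({\mathbb{E}}[{N}_{{\rm{rot}}}]\approx 4{t}^{2}{\mu }^{2}\) at the optimum:

```python
import numpy as np

t_mu = 20.0                                # illustrative value of the product t * mu
taus = np.linspace(1e-4, np.pi / 2, 200_000)

# Assumed closed forms (see lead-in):
#   E[N_rot] = 2 t mu / sin(tau)   (forward plus backward circuit)
#   Lambda   = exp(-t mu tan(tau / 2))
n_rot = 2 * t_mu / np.sin(taus)
cost = n_rot * np.exp(4 * t_mu * np.tan(taus / 2))   # R(tau) = E[N_rot] * Lambda^{-4}

i = np.argmin(cost)
print("tau*           ", taus[i], " vs 1/(2 t mu) =", 1 / (2 * t_mu))
print("E[N_rot](tau*) ", n_rot[i], " vs 4 (t mu)^2 =", 4 * t_mu**2)
print("Lambda^4(tau*) ", np.exp(-4 * t_mu * np.tan(taus[i] / 2)), " vs 1/e =", np.exp(-1))
```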
Optimal angle on a noisy quantum computer
The optimal angle value (24) holds for a perfect, noiseless quantum computer. We now consider the large-t optimal angle for a reasonable model of hardware noise, which consists in assuming that the application of each gate \({e}^{i{\tau }_{n}{O}_{n}}\) comes with a signal attenuation \({e}^{-{r}_{n}}\) with some r_n > 0. This kind of noise model is known to be a rough first approximation of the effect of the noise on current hardware. Moreover, other types of noise can be converted into this global depolarizing noise through some noise mitigation techniques6,64,65. In state-of-the-art trapped-ion devices typical two-qubit gate fidelities are \({e}^{-r}\) ~ 99.8%, in which case r ~ 2 × 10⁻³ (ref. 66). There are on average \(\frac{t{c}_{n}}{\sin {\tau }_{n}}\) such rotations per configuration. The damping due to hardware imperfection is thus
The total attenuation of the signal is Λ²q. Assuming r_n small, the optimal times \({\tau }_{n}^{* }\) that minimize the total gate count per circuit at large t are now (see Supplementary Information)
The total attenuation Λ²q then scales as
To reach precision ϵ on the result, our algorithm thus requires \(1/{({\Lambda }^{2}q\epsilon )}^{2}\) shots. Importantly, the dependence on the precision is only polynomial. As a comparison, let us consider the same noise model for a Trotter implementation of the time evolution. For simplicity, we take r_n = r constant. In the Trotter algorithm, absent hardware error, a Trotter step of size τ has error N²tCτ for the first-order Trotter formula, with some coefficient C. To reach precision ϵ, we thus necessarily have to take τ < ϵ/(N²tC), hence run a circuit with more than N³t²C/ϵ gates. For local Hamiltonians, this scaling improves to N²t²C/ϵ. Then, under the same noise model, the imperfections in the hardware incur a signal attenuation of order \({e}^{-{N}^{3}{t}^{2}Cr/\epsilon }\). The number of shots M_Trotter thus has to be at least
In contrast with our algorithm, the dependence on the precision is exponential. In particular, our algorithm requires exponentially fewer shots for a given precision ϵ when
For local Hamiltonians, the bound would be N² instead of N³. In both cases, we see that our algorithm always outperforms Trotter at high precision in the presence of noise.
Distribution of the number of gates per circuit
We stated in Eq. (25) that the average gate count in the random circuits generated by our algorithm is \({\mathcal{O}}({t}^{2}{\mu }^{2})\), without any dependence on the precision ϵ achieved on the result. In our algorithm, the number of rotations is a random variable
with \({\mathcal{P}}(\lambda )\) denoting a Poisson distribution with parameter λ. Indeed, the number of each rotation \({e}^{i{\tau }_{n}{O}_{n}}\) follows a Poisson distribution with parameter \(\frac{t{c}_{n}}{\sin {\tau }_{n}}\), and the sum of Poisson random variables follows a Poisson distribution with parameter given by the sum of their parameters. The variance of this random variable is
At the optimal angle on a perfect quantum computer (24), this is
Hence the gate count per circuit is on average \({\mathcal{O}}({t}^{2}{\mu }^{2})\) with standard deviation \({\mathcal{O}}(t\mu )\).
A particularity of our algorithm is that, because the Poisson distribution is unbounded, the random number of gates in the random circuits is unbounded. Using Chernoff's bound, we have that the probability that the number of rotations N_rot exceeds (k + 1) times the average \({\mathbb{E}}[{N}_{{\rm{rot}}}]\) is67
The number of gates is thus strongly concentrated around the average value. Let us assume that one imposes a maximal number of rotations that can be run on the quantum computer, namely one discards all circuits with more than \((k+1){\mathbb{E}}[{N}_{{\rm{rot}}}]\) rotations for a given k. We note first that for all practical purposes, taking say \(k=7/\sqrt{{\mathbb{E}}[{N}_{{\rm{rot}}}]}=3.5/(t\mu )\) at optimal gate angle and large tμ ensures a negligible discard probability < 10⁻¹⁰. More theoretically, one will have to draw \({\mathcal{O}}({e}^{{k}^{2}{\mathbb{E}}[{N}_{{\rm{rot}}}]/(2(k+1))})\) circuits before discarding one circuit and biasing the result. Before that, one achieves a precision \({\mathcal{O}}({e}^{-{k}^{2}{\mathbb{E}}[{N}_{{\rm{rot}}}]/(4(k+1))})\) on the expectation value measured. For this to be smaller than some required precision η, with \({\mathbb{E}}[{N}_{{\rm{rot}}}]=4{t}^{2}{\mu }^{2}\) at the optimal gate angle, one can take \(k={\mathcal{O}}(\sqrt{\log (1/\eta )}/(t\mu ))\) at large tμ and small η, in the regime \(\sqrt{\log (1/\eta )}/(t\mu )\to 0\). It follows that the maximal number of rotations that one needs to include to guarantee precision η is
We thus see that the maximal gate count per circuit of our algorithm possesses a dependence on the precision. However, the average gate count per circuit is independent of the precision, contrary to algorithms like LCU where the gate count per circuit is deterministic and depends on the precision for all circuits. This translates into the fact that the runtime of our algorithm, i.e. number of shots times number of gates per shot, scales as
to reach precision ϵ on the expectation value measured. This is in contrast with previously known algorithms, whose runtime would be the gate count per circuit in Table 1 times ϵ⁻². Our algorithm thus has a better runtime scaling with respect to the precision ϵ.
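To make the tail estimate concrete, the discard probability for a cutoff at \((k+1){\mathbb{E}}[{N}_{{\rm{rot}}}]\) can be evaluated directly from the Poisson distribution (a sketch with an illustrative value of tμ; the choice k = 3.5/(tμ) is the one quoted above):

```python
from scipy.stats import poisson

t_mu = 10.0                           # illustrative value of t * mu
mean_rot = 4 * t_mu**2                # E[N_rot] at the optimal gate angle
k = 3.5 / t_mu                        # choice quoted above: k = 3.5/(t mu)
cutoff = (k + 1) * mean_rot           # discard circuits with more rotations than this
print(poisson.sf(cutoff, mean_rot))   # discard probability: negligible, < 1e-10
```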
Time-dependent Hamiltonian
The algorithm can be generalized straightforwardly to dynamics with a time-dependent Hamiltonian H(t). Let us decompose
with time-dependent coefficients cn(t). For simplicity and without loss of generality, we will assume cn(t) > 0 for all times. We denote \(\vert \psi (t)\rangle ={\mathcal{T}}\exp (i\mathop{\int}\nolimits_{0}^{t}{\rm{d}}sH(s))\vert \psi (0)\rangle\), namely the state \(\vert \psi (t)\rangle\) that satisfies the differential equation
As in the time-independent case, this time-ordered exponential can be obtained as the limit of a product formula
For each n = 1, …, N, we choose 0 < τ_n ≤ π/2 and implement \({e}^{i{\tau }^{{\prime} }{c}_{n}(j{\tau }^{{\prime} }){O}_{n}}\) using (1) with angle τ_n. In the sequence of rotations appearing on the right-hand side of (39), instead of each rotation \({e}^{i{\tau }^{{\prime} }{c}_{n}(s){O}_{n}}\) we thus apply \({e}^{i{\tau }_{n}{O}_{n}}\) with probability p_n given by \({p}_{n}=\frac{{\tau }^{{\prime} }{c}_{n}(s)}{\sin {\tau }_{n}}+{\mathcal{O}}({({\tau }^{{\prime} })}^{2})\), that now depends on time s. As in the time-independent case, this probability is proportional to \({\tau }^{{\prime} }\) as \({\tau }^{{\prime} }\to 0\), and this random choice is tried \({\mathcal{O}}(1/{\tau }^{{\prime} })\) times. The limit \({\tau }^{{\prime} }\to 0\) thus exactly defines a Poisson process but with a time-dependent rate given by \(\frac{{c}_{n}(s)}{\sin {\tau }_{n}}\). This Poisson process with a time-dependent rate can be mapped to an ordinary Poisson process after a mapping of the time. Namely, we introduce the function
as well as \({z}_{n}^{-1}(u)\) its inverse, i.e. such that \({z}_{n}^{-1}({z}_{n}(u))=u\). The random times generated by the Poisson process with time-dependent rate \(\frac{{c}_{n}(s)}{\sin {\tau }_{n}}\) are obtained by applying the function \({z}_{n}^{-1}\) to the times generated by a Poisson process with constant rate \(\frac{1}{\sin {\tau }_{n}}\) during a time z_n(t).
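This time-warp construction can be implemented directly, as in the sketch below. The coefficient function c_n(s), the angle τ_n and the numerical-inversion grid are illustrative assumptions, and z_n(u) is taken as the integral of c_n(s) from 0 to u, consistent with the mapping described above (the displayed definition of z_n is not reproduced in this text):

```python
import numpy as np

rng = np.random.default_rng(2)

# Illustrative time-dependent coefficient c_n(s) > 0 and gate angle tau_n (assumptions)
c = lambda s: 1.0 + 0.5 * np.sin(np.pi * s)
tau_n, t = 0.2, 3.0

# z_n(u) = integral of c_n(s) from 0 to u, tabulated on a grid (trapezoid rule)
grid = np.linspace(0.0, t, 10_001)
z_grid = np.concatenate([[0.0], np.cumsum(0.5 * (c(grid[1:]) + c(grid[:-1])) * np.diff(grid))])

def z_inv(u):
    return np.interp(u, z_grid, grid)            # inverse function z_n^{-1}

# Homogeneous Poisson process with rate 1/sin(tau_n) on [0, z_n(t)], then warp back
z_t = z_grid[-1]
m_n = rng.poisson(z_t / np.sin(tau_n))           # number of rotations e^{i tau_n O_n}
gate_times = np.sort(z_inv(rng.uniform(0.0, z_t, size=m_n)))
print(m_n, gate_times[:5])
```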
As for the product of the attenuation factors λ_n, it becomes
To summarize, the algorithm for the time-independent case is modified as follows. We replace step (2) of the time-independent case by
2. For each n ∈ {1, …, N}, draw an integer m_n from a Poisson distribution with parameter \(\frac{{z}_{n}(t)}{\sin {\tau }_{n}}\). Then draw m_n real numbers \({\tilde{t}}_{n}^{(i)}\), where i ∈ {1, …, m_n}, uniformly at random between 0 and z_n(t), and set \({t}_{n}^{(i)}={z}_{n}^{-1}({\tilde{t}}_{n}^{(i)})\). These \(M\equiv \mathop{\sum }\nolimits_{n = 1}^{N}{m}_{n}\) numbers \({t}_{n}^{(i)}\) are collected into a set T.
Then in step (3), the rotations \({e}^{i{\tau }_{{k}_{m}}{O}_{{k}_{m}}}\) are applied as before, ordered by the times \(s\in \{{t}_{n}^{(i)}\}\) at which they occur. For the backward propagation in step (5) one has to reverse the order of the gates (this was not necessary in the time-independent case, as the Poisson process was invariant under time reversal). The total attenuation Λ of step (6) is replaced by (41).
This algorithm produces the exact expectation value of an observable under the time-dependent Hamiltonian H(t), without errors arising from discretizing the continuous function cn(t) or Trotter errors.
Background Hamiltonian
In our algorithm, one can choose an arbitrary different gate angle τ_n for every rotation \({e}^{i{\tau }_{n}{O}_{n}}\). This possibility of choosing different gate angles can be used to improve the algorithm in the following way. Let us consider a subset of gates \({\mathcal{O}}\subset {\{{O}_{n}\}}_{n = 1,..,N}\) that all commute among themselves, i.e. \([O,{O}^{{\prime} }]=0\) for all \(O,{O}^{{\prime} }\in {\mathcal{O}}\), and denote
The gate angle τ_p of the rotations corresponding to these terms involved in (20) can be chosen freely. Let us investigate what happens if we take the limit τ_p → 0 for \({O}_{p}\in {\mathcal{O}}\). We note that this limit is not the Trotter limit \({\tau }^{{\prime} }\to 0\) appearing in (8). Here, the limit \({\tau }^{{\prime} }\to 0\) has been taken first at fixed τ_p, and we then take the limit τ_p → 0. In this limit, the rate of the Poisson process corresponding to these gates \(\frac{{c}_{p}}{\sin {\tau }_{p}}\) diverges to ∞, while the rates of the Poisson process of the other gates \(\frac{{c}_{n}}{\sin {\tau }_{n}}\) not in \({\mathcal{O}}\) are not affected. Hence, when drawing the times at which the rotations are applied, almost all of them will correspond to gates in \({\mathcal{O}}\). If we take two times t_1 < t_2 at which rotations of terms not in \({\mathcal{O}}\) are applied, there will be typically \({\mathcal{O}}(\frac{{c}_{p}}{{\tau }_{p}})\) rotations \({e}^{i{\tau }_{p}{O}_{p}}\) applied in between when τ_p → 0. Hence, since all these terms commute among themselves, when we take τ_p → 0, these rotations will combine into \({e}^{i({t}_{2}-{t}_{1}){H}_{{\rm{background}}}}\).
Therefore the algorithm can be modified as follows. In step (2) we only draw times corresponding to terms not in \({\mathcal{O}}\), and use them to produce an ordered sequence of gates as in step (3). However, between each application of these rotations \({e}^{i{\tau }_{n}{O}_{n}}\) and \({e}^{i{\tau }_{m}{O}_{m}}\) at times s_n < s_m, we apply \({e}^{i\delta s{H}_{{\rm{background}}}}\) with δs = s_m − s_n the difference between the two instants at which the rotations are applied. The Hamiltonian H_background thus plays the role of a "background" Hamiltonian that is always applied on the system, but on top of which rotations \({e}^{i{\tau }_{n}{O}_{n}}\) with \({O}_{n}\,\notin\, {\mathcal{O}}\) are applied.
The upshot of this improvement is that it reduces the attenuation factor Λ, which is now computed including only the terms not in \({\mathcal{O}}\).
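A sketch of the modified sampling loop for a diagonal background Hamiltonian is shown below (a toy two-qubit example; the specific terms, coefficients and angles are illustrative assumptions, and the estimator would now divide by the attenuation factor computed from the sampled, non-background terms only):

```python
import numpy as np
from functools import reduce

rng = np.random.default_rng(4)

X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)
I2 = np.eye(2, dtype=complex)

# Commuting (diagonal) subset kept as the background Hamiltonian
H_bg = reduce(np.kron, [Z, Z])                   # single ZZ term, coefficient 1
# Remaining terms, sampled via the Poisson process: X rotations with coefficients c_n
sampled = [(0.7, reduce(np.kron, [X, I2])), (0.7, reduce(np.kron, [I2, X]))]

t, tau = 1.0, 0.3
diag_bg = np.diag(H_bg)                          # H_bg is diagonal in the Z basis

def background(dt):
    return np.diag(np.exp(1j * dt * diag_bg))    # exact e^{i dt H_background}

def rotation(O, angle):
    return np.cos(angle) * np.eye(4) + 1j * np.sin(angle) * O   # O^2 = I

def sample_state(psi0):
    events = []
    for c, O in sampled:
        m = rng.poisson(t * c / np.sin(tau))
        events += [(s, O) for s in rng.uniform(0, t, size=m)]
    psi, prev = psi0.copy(), 0.0
    for s, O in sorted(events, key=lambda e: e[0]):
        psi = rotation(O, tau) @ (background(s - prev) @ psi)   # background between rotations
        prev = s
    return background(t - prev) @ psi            # background up to the final time t

psi0 = np.zeros(4, dtype=complex); psi0[0] = 1.0
print(np.linalg.norm(sample_state(psi0)))        # unitary circuit: norm stays 1
```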
Comparison with previous work
After completion and online publication of this work, the recent paper68 was brought to our attention, in which a simulation algorithm called URCC was introduced. It uses a completely different decomposition and sampling strategy, based on a Dyson expansion and leading-order rotations, and is distinct from ours. However, it has a gate count per circuit \({\mathcal{O}}({t}^{2}{\mu }^{2})\) independent of the precision, like the present work.
Let us perform a comparison of runtime and gate count between this algorithm and ours68. The URCC algorithm takes as a parameter a number of segments N_seg. The variance of the measured expectation value after M shots is
with a function f(x) that behaves at small x as \(f(x)=1+{x}^{2}+{\mathcal{O}}({x}^{3})\). To ensure that the variance remains of order \({\mathcal{O}}(1)\) at large tμ, one needs to choose \({N}_{{\rm{seg}}}={\mathcal{O}}({t}^{2}{\mu }^{2})\). The random circuits generated always contain \({{\mathcal{N}}}_{{\rm{URCC}}}=2{N}_{{\rm{seg}}}\) rotations (possibly of angle π/2). In our case, taking τ_n = τ independent of n, the variance of the measured expectation value is
The number of rotations in our case is a random variable that is unbounded, whose average is \({{\mathcal{N}}}_{{\rm{ours}}}=\frac{2t\mu }{\sin \tau }\). To ensure that the variance remains of order \({\mathcal{O}}(1)\), one needs to choose \(\tau ={\mathcal{O}}({(t\mu )}^{-1})\). Choosing τ so as to equate the variances V_ours = V_URCC at large tμ gives τ = 2tμ/N_seg. Then one obtains an average number of gates per circuit \({{\mathcal{N}}}_{{\rm{ours}}}={{\mathcal{N}}}_{{\rm{URCC}}}/2\). Although the gate count is a random variable that is unbounded in our case, the average gate count is half as large in our algorithm, at the same overhead in terms of shots. The interpretation of this difference is that our algorithm allows for optimizing the gate angles so as to minimize the average gate count. Although the decomposition in terms of rotations and Pauli strings (which are rotations with angle π/2) used in the URCC algorithm cleverly guarantees a bounded gate count, it is suboptimal in terms of average gate count.
Numerical tests
We now show numerical simulations of our algorithm. We start with a noisy implementation on a non-sparse Hamiltonian as routinely encountered in quantum chemistry. Hamiltonians describing molecular electronic structure problems are typically decomposed in a basis set of fermionic orbitals, where they read
with the c's canonical fermionic operators and N the number of orbitals considered. Written in terms of Pauli gates through a Jordan-Wigner transformation, these molecular Hamiltonians thus involve \({\mathcal{O}}({N}^{4})\) terms, each with \({\mathcal{O}}(N)\) Pauli gates, incurring a prohibitively large gate count per circuit for any Trotter implementation of the dynamics on near-term hardware. However, the sum of the coefficients of the Hamiltonian when written in terms of Pauli strings has a lower scaling with N, which makes our algorithm attractive69,70,71. We consider an H2O water molecule in a STO-6G basis with 14 orbitals, and use our algorithm to study the Loschmidt amplitude
where \(\vert HF\rangle\) denotes the Hartree-Fock ground state. We take a geometry with an H-O-H angle of 105° and with an elongated H-O distance of 2.2 Å, to have a significant difference between the exact ground state and \(\vert HF\rangle\). On noisy hardware, we propose to compute the ratio
with ℜ and ℑ denoting the real and imaginary parts. This ratio is not sensitive to the depolarizing part of the hardware noise, but still contains information about the energy levels in the oscillation frequencies. In panel (a) of Fig. 2 we show a noisy simulation of our algorithm that includes a depolarizing channel with probability 0.002 after each 2-qubit gate, and observe very good agreement.
Next, we show numerical results for the time-dependent case of our algorithm. For this case, we consider the 2D square lattice Ising model in a transverse field h, whose Hamiltonian is
with periodic boundary conditions. We implement the adiabatic preparation of the ground state at h_f = 2.5, starting from h = 0. To connect these magnetic field values adiabatically we use the schedule \(h(t)={h}_{{\rm{f}}}\sin {(\frac{\pi }{2}\sin {(\frac{\pi t}{2{T}_{{\rm{f}}}})}^{2})}^{2}\) for 0 ≤ t ≤ T_f. In panel (b) of Fig. 2 we show the final energy per site obtained at t = T_f, as a function of T_f, together with the error bars for a fixed number of shots. We see the agreement with the exact values, as well as the broadening of the error bars as T_f grows at fixed τ, as imposed by (13).
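For reference, the term bookkeeping used in such a simulation can be set up as in the sketch below, together with the quoted schedule h(t). Since the explicit normalization of (49) is not reproduced above, unit ZZ couplings on a small L × L periodic lattice are an assumption made here for illustration:

```python
import numpy as np

L = 4                                     # linear size of the periodic square lattice
sites = [(x, y) for x in range(L) for y in range(L)]
idx = {s: i for i, s in enumerate(sites)}
# nearest-neighbour bonds with periodic boundary conditions
bonds = [(idx[(x, y)], idx[((x + 1) % L, y)]) for x, y in sites] + \
        [(idx[(x, y)], idx[(x, (y + 1) % L)]) for x, y in sites]

h_f, T_f = 2.5, 10.0
def h(t):
    """Adiabatic schedule h(t) = h_f sin^2((pi/2) sin^2(pi t / (2 T_f)))."""
    return h_f * np.sin(0.5 * np.pi * np.sin(0.5 * np.pi * t / T_f) ** 2) ** 2

# One rotation type per term: ZZ on each bond (assumed unit coefficient) and
# X on each site with time-dependent coefficient h(t)
print(len(bonds), "ZZ terms,", len(sites), "X terms, h(T_f) =", h(T_f))
```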
Finally, we compare our algorithm to first-order Trotter and qDRIFT in the case of the 2D square lattice Ising model (49) with magnetic field h = 2, which is a typical model studied in condensed matter. In Fig. 3 we plot the Loschmidt amplitude \(\langle 0| {e}^{iHt}| 0\rangle\) starting from the Z = +1 product state \(\vert 0\rangle\), using the same gate angle τ = 0.1 for our algorithm and for qDRIFT, and the same time step of 0.1 for Trotter. We take 10⁵ shots per point (which is necessary to have a precision < 0.01 with high confidence), and distribute them over 10³ random circuits with 10² shots per circuit for our case and for qDRIFT. On the time points in Fig. 3, the average difference in absolute value between the exact and simulated expectation values is 0.064 for Trotter, 0.135 for qDRIFT and 0.009 for our method, at equal runtime. We see that our algorithm performs better than both qDRIFT and Trotter. This demonstrates that our algorithm can perform well on problems of scientific relevance.
Between quantum evolution and path integral representation
In formula (20), the gate angle τ of the rotations is a free parameter that does not modify the precision of the result, but that modifies the attenuation factor Λ. Let us comment on two interesting limits of this gate angle τ, namely τ → 0 and τ = π/2. When τ → 0, the different rotations \({e}^{i\tau {O}_{n}}\) commute at order τ and there are increasingly more rotations drawn from the Poisson process. With probability 1 in this limit, a single configuration \(\vert {\psi }_{T}\rangle\) is equal to the exact time evolution \({e}^{iHt}\vert 0\rangle\), and the attenuation factor Λ is equal to 1. This corresponds to a "purely quantum" limit of (20), where no averaging over circuits is required and where the exact quantum state is prepared.
On the other hand, when τ = π/2, we have \({e}^{i\tau {O}_{n}}=i{O}_{n}\), and so when O_n is a string of Pauli operators, the rotations can be implemented with only one-qubit gates. Each configuration \(\vert {\psi }_{T}\rangle\) thus remains a product state in the Z basis and describes a classical trajectory in discrete time. It is thus a "purely classical" limit. For that value of τ, formula (20) thus becomes akin to a path integral representation, in the sense that a quantum expectation value is expressed as an averaging over exponentially many classical trajectories.
These two limits thus provide an interesting interpretation of our algorithm as an interpolation between a single quantum trajectory for τ = 0, and exponentially many classical trajectories for τ = π/2. Intermediate values of τ near π/2 allow one to decrease the entanglement present in each configuration \(\vert {\psi }_{T}\rangle\) compared to the exact time-evolved state \({e}^{iHt}\vert 0\rangle\), but at the cost of increasing the number of samples required through the attenuation factor (13). This could find applications in the classical simulation of quantum dynamics.
Simulation of Clifford circuits + T gates
It is well-known that Clifford circuits (i.e. circuits built out of S, H and CNOT gates) can be simulated in polynomial time72. These circuits are however not universal, in the sense that it is not possible to approximate arbitrary unitary operators with arbitrary precision with Clifford circuits73. To obtain a universal gate set, one has to include as well T gates, defined as \(T=\left(\begin{array}{cc}1&0\\ 0&{e}^{i\pi /4}\end{array}\right)\). It is then natural to ask about the complexity of simulating circuits that contain Clifford gates and t-many T gates.
The basic mechanism of gate angle reduction in (1) can be applied as well here. Let us consider a unitary U that we decompose as
where the T gates are applied on some qubits (whose index we omit for readability), and each unitary U_j can be implemented with only Clifford gates. We would like to evaluate the runtime for computing \(\langle 0| U| 0\rangle\) classically. Writing
we can express
with n the number of times S is chosen in the sum. Each circuit U_1V_1U_2⋯U_tV_tU_{t+1} is a Clifford circuit and so can be simulated in polynomial time. To compute the sum exactly, one has to evaluate 2^t such Clifford circuits. However, one can also approximate probabilistically the sum with the 1/2^t prefactor by drawing each term I or S with probability 1/2. To obtain precision ϵ on the amplitude \(\langle 0| U| 0\rangle\), because of the factor \(1/\cos {(\pi /8)}^{t}\), one will have to include of order \(1/({\epsilon }^{2}\cos {(\pi /8)}^{2t})\) terms. Hence the cost of simulating the Clifford circuit with t-many T gates up to precision ϵ is exponential in t, with exponent \({e}^{2| \log \cos (\pi /8)| t}\). The exponent turns out to match the currently best known exponent74.
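The sketch below checks one decomposition of T into I and S that is consistent with the stated sampling with probability 1/2 and \(1/\cos {(\pi /8)}^{t}\) prefactor (the explicit decomposition written above is not reproduced in this text, so the particular phase convention used here is an assumption), and uses it to estimate \(\langle 0| U| 0\rangle\) for a single-T-gate circuit by Monte Carlo:

```python
import numpy as np

rng = np.random.default_rng(3)

I2 = np.eye(2, dtype=complex)
S = np.diag([1, 1j])
T = np.diag([1, np.exp(1j * np.pi / 4)])

# Assumed decomposition: T = (1 / (2 cos(pi/8))) (e^{i pi/8} I + e^{-i pi/8} S)
a = np.exp(1j * np.pi / 8) / (2 * np.cos(np.pi / 8))
b = np.exp(-1j * np.pi / 8) / (2 * np.cos(np.pi / 8))
print(np.allclose(T, a * I2 + b * S))                  # True

# Monte-Carlo estimate of <0|U|0> for U = Had T Had (a single T gate)
Had = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)
e0 = np.array([1, 0], dtype=complex)
exact = e0 @ (Had @ T @ Had) @ e0

samples = []
for _ in range(100_000):
    use_S = rng.random() < 0.5                         # draw I or S with probability 1/2
    V, weight = (S, 2 * b) if use_S else (I2, 2 * a)   # factor 2 compensates the 1/2
    samples.append(weight * (e0 @ (Had @ V @ Had) @ e0))
print(np.mean(samples), exact)                         # agree up to statistical error
```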
Discussion
We presented a quantum algorithm to compute expectation values of time-evolved observables that requires an average circuit depth that is independent of the desired precision ϵ, contrary to many algorithms that require an infinite depth when ϵ → 0. We showed that by randomly applying gates to the initial state according to a well-chosen distribution, and multiplying the result by a known amplification factor, one achieves zero discretization error while keeping the average circuit depth bounded. This amplification factor increases the number of shots required to reach a given precision. The algorithm is particularly adapted to non-sparse Hamiltonians, as the gate count per circuit depends only on the sum of the coefficients of the Hamiltonian and not on the number of terms. The gate count per circuit for simulation time t is \({\mathcal{O}}({t}^{2}{\mu }^{2})\), with a shot cost \({\mathcal{O}}({\epsilon }^{-2})\) to reach precision ϵ. We also showed that the algorithm generalizes to time-dependent Hamiltonians, again achieving no time-discretization or Trotter error with a finite gate count per circuit. We finally showed that the same mechanism can be used to classically simulate Clifford circuits with T gates, with a runtime as good as the best known simulation algorithms.
We expect this algorithm for implementing time evolution on a quantum computer to have particular impact on certain applications. For example, the absence of discretization error is particularly adapted to quantum chemistry applications where extreme precision is required. It is also particularly relevant for the quantum adiabatic algorithm due to the absence of time discretization errors. One shortcoming of the approach is that it is not adapted to performing sampling (instead of computing expectation values), as occurs for example when using quantum computers to solve classical optimization problems. It would be valuable to see if an algorithm with similar characteristics could be devised for performing sampling. We will study these applications in more depth in future work.
Data availability
Numerical data is available in the Supplementary Information.
Code availability
Numerical codes are available upon request.
References
Feynman, R. P. Simulating physics with computers. In Feynman and computation, 133–153 (CRC Press, 2018).
Lloyd, S. Universal quantum simulators. Science 273, 1073–1078 (1996).
Abrams, D. S. & Lloyd, S. Quantum algorithm providing exponential speed increase for finding eigenvalues and eigenvectors. Phys. Rev. Lett. 83, 5162 (1999).
Harrow, A. W., Hassidim, A. & Lloyd, S. Quantum algorithm for linear systems of equations. Phys. Rev. Lett. 103, 150502 (2009).
Farhi, E., Goldstone, J., Gutmann, S. & Sipser, M. Quantum computation by adiabatic evolution. quant-ph/0001106 (2000).
Yang, Y. et al. Simulating prethermalization using near-term quantum computers. PRX Quantum 4, 030320 (2023).
Lu, S., Banuls, M. C. & Cirac, J. I. Algorithms for quantum simulation at finite energies. PRX Quantum 2, 020321 (2021).
Schuckert, A., Bohrdt, A., Crane, E. & Knap, M. Probing finite-temperature observables in quantum simulators of spin systems with short-time dynamics. Phys. Rev. B 107, L140410 (2023).
Suzuki, M. Fractal decomposition of exponential operators with applications to many-body theories and Monte Carlo simulations. Phys. Lett. A 146, 319–323 (1990).
Suzuki, M. General theory of fractal path integrals with applications to many-body theories and statistical physics. J. Math. Phys. 32, 400–407 (1991).
Berry, D. W., Ahokas, G., Cleve, R. & Sanders, B. C. Efficient quantum algorithms for simulating sparse hamiltonians. Comm. Math. Phys. 270, 359–371 (2007).
Childs, A. M. & Su, Y. Nearly optimal lattice simulation by product formulas. Phys. Rev. Lett. 123, 050503 (2019).
Childs, A. M., Su, Y., Tran, M. C., Wiebe, N. & Zhu, S. Theory of trotter error with commutator scaling. Phys. Rev. X 11, 011020 (2021).
Low, G. H., Su, Y., Tong, Y. & Tran, M. C. Complexity of implementing trotter steps. PRX Quantum 4, 020323 (2023).
Zeng, P., Sun, J., Jiang, L. & Zhao, Q. Simple and high-precision hamiltonian simulation by compensating trotter error with linear combination of unitary operations. Preprint at https://arxiv.org/abs/2212.04566 (2022).
Cho, C.-H., Berry, D. W. & Hsieh, M.-H. Doubling the order of approximation via the randomized product formula. Phys. Rev. A 109, 062431 (2024).
Tomesh, T. et al. Optimized quantum program execution ordering to mitigate errors in simulations of quantum systems. In 2021 International Conference on Rebooting Computing (ICRC), 1–13 (IEEE, 2021).
Aharonov, D. & Ta-Shma, A. Adiabatic quantum state generation and statistical zero knowledge. In Proc. ACM Symp. Th. Comp., 20–29 (2003).
Childs, A. M. & Wiebe, N. Hamiltonian simulation using linear combinations of unitary operations. Preprint at https://arxiv.org/abs/1202.5822 (2012).
Haah, J., Hastings, M. B., Kothari, R. & Low, G. H. Quantum algorithm for simulating real time evolution of lattice hamiltonians. SIAM Journal on Computing FOCS18–250 (2021).
Leadbeater, C. N., Fitzpatrick, N., Muñoz Ramo, D. & Thom, A. Non-unitary trotter circuits for imaginary time evolution. Quantum Science and Technology (2023).
Fitzpatrick, N., Apel, H. & Ramo, D. M. Evaluating low-depth quantum algorithms for time evolution on fermion-boson systems. Preprint at https://arxiv.org/abs/2106.03985 (2021).
Ostmeyer, J. Optimised trotter decompositions for classical and quantum computing. J. Phys. A (2022).
Mc Keever, C. & Lubasch, M. Classically optimized hamiltonian simulation. Phys. Rev. Res. 5, 023146 (2023).
Campbell, E. Shorter gate sequences for quantum computing by mixing unitaries. Phys. Rev. A 95, 042306 (2017).
Childs, A. M., Ostrander, A. & Su, Y. Faster quantum simulation by randomization. Quantum 3, 182 (2019).
Faehrmann, P. K., Steudtner, M., Kueng, R., Kieferová, M. & Eisert, J. Randomizing multi-product formulas for hamiltonian simulation. Quantum 6, 806 (2022).
Wan, K., Berta, M. & Campbell, E. T. Randomized quantum algorithm for statistical phase estimation. Phys. Rev. Lett. 129, 030503 (2022).
Wang, S., McArdle, S. & Berta, M. Qubit-efficient randomized quantum algorithms for linear algebra. PRX Quantum 5, 020324 (2024).
Gong, W., Kharkov, Y., Tran, M. C., Bienias, P. & Gorshkov, A. V. Improved digital quantum simulation by non-unitary channels. Preprint at https://arxiv.org/abs/2307.13028 (2023).
Campbell, E. Random compiler for fast hamiltonian simulation. Phys. Rev. Lett. 123, 070503 (2019).
Ouyang, Y., White, D. R. & Campbell, E. T. Compilation by stochastic hamiltonian sparsification. Quantum 4, 235 (2020).
Chen, C.-F., Huang, H.-Y., Kueng, R. & Tropp, J. A. Concentration for random product formulas. PRX Quantum 2, 040305 (2021).
Nakaji, K., Bagherimehrab, M. & Aspuru-Guzik, A. High-order randomized compiler for hamiltonian simulation. PRX Quantum 5, 020330 (2024).
Kiss, O., Grossi, M. & Roggero, A. Importance sampling for stochastic quantum simulations. Quantum 7, 977 (2023).
Hagan, M. & Wiebe, N. Composite quantum simulations. Quantum 7, 1181 (2023).
Huggins, W. J. et al. Virtual distillation for quantum error mitigation. Phys. Rev. X 11, 041036 (2021).
Berry, D. W., Childs, A. M., Su, Y., Wang, X. & Wiebe, N. Time-dependent hamiltonian simulation with l1-norm scaling. Quantum 4, 254 (2020).
Pocrnic, M., Hagan, M., Carrasquilla, J., Segal, D. & Wiebe, N. Composite qdrift-product formulas for quantum and classical simulations in real and imaginary time. Phys. Rev. Res. 6, 013224 (2024).
Aspuru-Guzik, A., Dutoi, A. D., Love, P. J. & Head-Gordon, M. Simulated quantum computation of molecular energies. Science 309, 1704–1707 (2005).
Albash, T. & Lidar, D. A. Adiabatic quantum computation. Rev. Mod. Phys. 90, 015002 (2018).
Su, Y., Berry, D. W., Wiebe, N., Rubin, N. & Babbush, R. Fault-tolerant quantum simulations of chemistry in first quantization. PRX Quantum 2, 040332 (2021).
Reiher, M., Wiebe, N., Svore, K. M., Wecker, D. & Troyer, M. Elucidating reaction mechanisms on quantum computers. Proc. Nat. Ac. Sc. 114, 7555–7560 (2017).
Bauer, B., Bravyi, S., Motta, M. & Chan, G. K.-L. Quantum algorithms for quantum chemistry and quantum materials science. Chem. Rev. 120, 12685–12717 (2020).
McArdle, S., Endo, S., Aspuru-Guzik, A., Benjamin, S. C. & Yuan, X. Quantum computational chemistry. Rev. Mod. Phys. 92, 015003 (2020).
Motta, M. & Rice, J. E. Emerging quantum computing algorithms for quantum chemistry. Wiley Interdisc. Rev.: Comp. Mol. Sc. 12, e1580 (2022).
Kim, I. H. et al. Fault-tolerant resource estimate for quantum chemical simulations: Case study on li-ion battery electrolyte molecules. Phys. Rev. Res. 4, 023019 (2022).
Berry, D. W., Childs, A. M., Cleve, R., Kothari, R. & Somma, R. D. Simulating hamiltonian dynamics with a truncated taylor series. Phys. Rev. Lett. 114, 090502 (2015).
Berry, D. W., Childs, A. M., Cleve, R., Kothari, R. & Somma, R. D. Exponential improvement in precision for simulating sparse hamiltonians. In Proc. ACM Symp. Th. Comp., 283–292 (2014).
Low, G. H. & Chuang, I. L. Optimal hamiltonian simulation by quantum signal processing. Phys. Rev. Lett. 118, 010501 (2017).
Low, G. H. & Chuang, I. L. Hamiltonian simulation by qubitization. Quantum 3, 163 (2019).
Low, G. H. & Wiebe, N. Hamiltonian simulation in the interaction picture. Preprint at https://arxiv.org/abs/1805.00675 (2018).
Low, G. H., Yoder, T. J. & Chuang, I. L. Methodology of resonant equiangular composite quantum gates. Phys. Rev. X 6, 041067 (2016).
Kikuchi, Y., Mc Keever, C., Coopmans, L., Lubasch, M. & Benedetti, M. Realization of quantum signal processing on a noisy quantum computer. npj Quantum Inf. 9, 93 (2023).
Berry, D. W. & Childs, A. M. Black-box Hamiltonian simulation and unitary implementation. Quant. Inf. Comput. 12, 0029–0062 (2012).
Childs, A. M. On the relationship between continuous- and discrete-time quantum walk. Comm. Math. Phys. 294, 581–603 (2010).
Buluta, I. & Nori, F. Quantum simulators. Science 326, 108–111 (2009).
Blatt, R. & Roos, C. F. Quantum simulations with trapped ions. Nat. Phys. 8, 277–284 (2012).
Aspuru-Guzik, A. & Walther, P. Photonic quantum simulators. Nat. Phys. 8, 285–291 (2012).
Bloch, I., Dalibard, J. & Nascimbene, S. Quantum simulations with ultracold quantum gases. Nat. Phys. 8, 267–276 (2012).
Georgescu, I. M., Ashhab, S. & Nori, F. Quantum simulation. Rev. Mod. Phys. 86, 153 (2014).
Koczor, B., Morton, J. J. & Benjamin, S. C. Probabilistic interpolation of quantum rotation angles. Phys. Rev. Lett. 132, 130602 (2024).
Hémery, K. et al. Measuring the loschmidt amplitude for finite-energy properties of the fermi-hubbard model on an ion-trap quantum computer (2023).
Temme, K., Bravyi, S. & Gambetta, J. M. Error mitigation for short-depth quantum circuits. Phys. Rev. Lett. 119, 180509 (2017).
Self, C. N., Benedetti, M. & Amaro, D. Protecting expressive circuits with a quantum error detection code. Nat. Phys. 20, 219–224 (2024).
Moses, S. A. et al. A race-track trapped-ion quantum processor. Phys. Rev. X 13, 041052 (2023).
Pollard, D. MiniEmpirical. Preprint at http://www.stat.yale.edu/~pollard/Books/Mini/. See also https://github.com/ccanonne/probabilitydistributiontoolbox/blob/master/poissonconcentration.pdf (2015).
Zhang, X.-M., Huo, Z., Liu, K., Li, Y. & Yuan, X. Unbiased random circuit compiler for time-dependent hamiltonian simulation. Preprint at https://arxiv.org/abs/2212.09445 (2022).
Rubin, N. C., Babbush, R. & McClean, J. Application of fermionic marginal constraints to hybrid quantum algorithms. N. J. Phys. 20, 053020 (2018).
Lee, J. et al. Even more efficient quantum computations of chemistry through tensor hypercontraction. PRX Quantum 2, 030305 (2021).
Koridon, E. et al. Orbital transformations to reduce the 1-norm of the electronic structure hamiltonian for quantum computing applications. Phys. Rev. Res. 3, 033127 (2021).
Gottesman, D. The Heisenberg Representation of Quantum Computers. In Group22: Proceedings of the XXII International Colloquium on Group Theoretical Methods in Physics (eds. Corney S. P. et al.) 32–43 (MA, International Press, Cambridge, 1999).
Nielsen, M. A. & Chuang, I. L. Quantum computation and quantum information (Cambridge University Press, 2010).
Bravyi, S. & Gosset, D. Improved classical simulation of quantum circuits dominated by clifford gates. Phys. Rev. Lett. 116, 250501 (2016).
Acknowledgements
We thank David Zsolt Manrique for helpful comments on the draft. E.G. acknowledges support by the Bavarian Ministry of Economic Affairs, Regional Development and Energy (StMWi) under project Bench-QC (DIK0425/01).
Author information
Authors and Affiliations
Contributions
The original idea (1) as well as its convergence to a Poisson process was conceived by E.G. E.G. also had the idea of Clifford + T simulations, and found the optimal angle on both noisy and noiseless quantum computers. Ideas for the applications (Loschmidt amplitudes, spin and electronic structure models, adiabatic state preparations) and comparisons with Trotter arose from discussions with both authors. The manuscript contains contributions from both authors, with E.G. providing the majority of the text.
Corresponding author
Ethics declarations
Competing interests
H.D. is a shareholder of Quantinuum. E.G. is a small shareholder of Quantinuum. The Authors declare no other Competing Financial or Non-Financial Interests.
Additional information
Publisher's note Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Supplementary information
Rights and permissions
Open Access This article is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License, which permits any non-commercial use, sharing, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if you modified the licensed material. You do not have permission under this licence to share adapted material derived from this article or parts of it. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by-nc-nd/4.0/.
About this article
Cite this article
Granet, E., Dreyer, H. Hamiltonian dynamics on digital quantum computers without discretization error. npj Quantum Inf 10, 82 (2024). https://doi.org/10.1038/s41534-024-00877-y
Received:
Accepted:
Published:
DOI: https://doi.org/10.1038/s41534-024-00877-y