  • Open Access

Cross-Verification of Independent Quantum Devices

C. Greganti, T. F. Demarie, M. Ringbauer, J. A. Jones, V. Saggio, I. Alonso Calafell, L. A. Rozema, A. Erhard, M. Meth, L. Postler, R. Stricker, P. Schindler, R. Blatt, T. Monz, P. Walther, and J. F. Fitzsimons
Phys. Rev. X 11, 031049 – Published 2 September 2021

Abstract

Quantum computers are on the brink of surpassing the capabilities of even the most powerful classical computers, which naturally raises the question of how one can trust the results of a quantum computer when they cannot be compared to classical simulation. Here, we present a cross-verification technique that exploits the principles of measurement-based quantum computation to link quantum circuits of different input size, depth, and structure. Our technique enables consistency checks of quantum computations between independent devices, as well as within a single device. We showcase our protocol by applying it to five state-of-the-art quantum processors, based on four distinct physical architectures: nuclear magnetic resonance, superconducting circuits, trapped ions, and photonics, with up to six qubits and up to 200 distinct circuits.

  • Received 17 November 2020
  • Revised 22 April 2021
  • Accepted 6 July 2021

DOI: https://doi.org/10.1103/PhysRevX.11.031049

Published by the American Physical Society under the terms of the Creative Commons Attribution 4.0 International license. Further distribution of this work must maintain attribution to the author(s) and the published article’s title, journal citation, and DOI.


Physics Subject Headings (PhySH)

Quantum Information, Science & Technology; Atomic, Molecular & Optical

Authors & Affiliations

C. Greganti1,2,*, T. F. Demarie3,4,5,*, M. Ringbauer6,†, J. A. Jones7, V. Saggio1, I. Alonso Calafell1, L. A. Rozema1, A. Erhard6, M. Meth6, L. Postler6, R. Stricker6, P. Schindler6, R. Blatt6,8, T. Monz6,9, P. Walther1,10, and J. F. Fitzsimons4,5,11,‡

  • 1Faculty of Physics, Vienna Center for Quantum Science and Technology (VCQ), University of Vienna, Boltzmanngasse 5, 1090 Vienna, Austria
  • 2VitreaLab GmbH, Gutheil-Schoder-Gasse 17, 1230 Vienna, Austria
  • 3Entropica Labs, 186b Telok Ayer Street, Singapore 068632
  • 4Centre for Quantum Technologies, National University of Singapore, 3 Science Drive 2, Singapore 117543
  • 5Singapore University of Technology and Design, 8 Somapah Road, Singapore 487372
  • 6Institut für Experimentalphysik, Universität Innsbruck, Technikerstrasse 25, 6020 Innsbruck, Austria
  • 7Clarendon Laboratory, Department of Physics, University of Oxford, Parks Road, Oxford OX1 3PU, United Kingdom
  • 8Institut für Quantenoptik und Quanteninformation, Österreichische Akademie der Wissenschaften, Otto-Hittmair-Platz 1, 6020 Innsbruck, Austria
  • 9Alpine Quantum Technologies GmbH, 6020 Innsbruck, Austria
  • 10Christian Doppler Laboratory for Photonic Quantum Computing, Faculty of Physics, University of Vienna, 1090 Vienna, Austria
  • 11Horizon Quantum Computing, 79 Ayer Rajah Crescent, #03-01 BASH, Singapore 139955

  • *These authors contributed equally to this work.
  • †martin.ringbauer@uibk.ac.at
  • ‡joe@horizonquantum.com

Popular Summary

Quantum computers are advancing at a rapid pace and are already starting to outperform the world’s largest supercomputers. Yet, these devices are prone to errors in a way that their classical counterparts are not. To use them in applications, we thus need to verify that they perform as intended, even when we cannot check them with our trusted classical computers. Here, we introduce a technique that allows cross-checking one noisy quantum device against another in such a way that the devices can pass the test only if they produce close-to-correct results. This procedure does not rely on classical computers and can therefore be applied even to the next generation of quantum devices.

Our technique constructs a hidden connection between two different and seemingly random quantum computations that are executed on two independent devices. Yet, because of their hidden connection, the two devices must agree on certain outcomes of their random computations. Since the two computations are so different, a simple error in one would correspond to a complicated sequence of errors in the other. Hence, as long as the two devices act honestly, the only way that their outputs can agree is if they perform the correct computation.

This simple cross-check procedure establishes full system performance even once the quantum devices surpass the capabilities of classical computers. With more and more quantum computers accessible through the cloud, scalable verification procedures that require no overhead or hardware access will be crucial for widespread use of these devices.
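
To make the comparison step concrete: the devices are cross-checked by estimating the outcome probabilities of linked circuits on each device and computing the squared $\ell_2$ distance between the resulting distributions. The following Python sketch is illustrative only; the shot counts and device labels are hypothetical placeholders, not data from the paper.

    import numpy as np

    def probabilities(counts, num_outcomes):
        """Convert raw shot counts (outcome index -> shots) into an
        estimated probability distribution over all outcomes."""
        shots = sum(counts.values())
        p = np.zeros(num_outcomes)
        for outcome, n in counts.items():
            p[outcome] = n / shots
        return p

    def squared_l2_distance(p1, p2):
        """Squared Euclidean distance ||p1 - p2||^2 between two estimated
        distributions: zero for perfect agreement, small when the devices
        sample from nearly the same distribution."""
        return float(np.sum((p1 - p2) ** 2))

    # Hypothetical shot counts for one circuit instance on two independent
    # devices (placeholder numbers for illustration).
    counts_device_a = {0b00: 480, 0b01: 20, 0b10: 15, 0b11: 485}
    counts_device_b = {0b00: 470, 0b01: 30, 0b10: 25, 0b11: 475}

    p_a = probabilities(counts_device_a, num_outcomes=4)
    p_b = probabilities(counts_device_b, num_outcomes=4)
    print(f"squared l2 distance: {squared_l2_distance(p_a, p_b):.4f}")

In the experiments reported below, this distance is averaged over many independently generated circuit instances (34 in Fig. 3); a single instance, as in this sketch, gives only a noisy point estimate.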

Issue

Vol. 11, Iss. 3 — July–September 2021



Images

  • Figure 1

    Cartoon representation of the quantum processing architectures used in this work. (a) NMR device at the University of Oxford [7], (b) superconducting circuits at IBM [31] and Rigetti Computing [32], (c) a trapped-ion quantum processor at the University of Innsbruck [8], and (d) a photonic quantum processor at the University of Vienna [33].

  • Figure 2

    Schematic representation of equivalent computations in MBQC (left) and in the circuit model (right). The underlying MBQC is based on a six-qubit H-shaped graph state with two different g-flows, consisting of (a) two and (b) three input/output qubits (other choices are possible). The direction of the flow is indicated by the arrows on the graph, with edges not involved in the flow shown as dashed lines. The qubits are measured according to the order of the labeling numbers. In panel (a), the input state of the circuit $C_a$ is $|++\rangle_{12}$, associated with qubits 1 and 2 of the cluster state, whereas in panel (b), the input state of the circuit $C_b$ is $|+++\rangle_{251}$, associated with qubits 1, 2, and 5 of the cluster state. The qubit ordering in the circuit is chosen according to the output labels, and the detailed procedure for going from the graph and flow to the circuit can be found in the Appendix. Note that both quantum circuits on the right correspond to the same MBQC graph state on the left, albeit with a different flow. The basic single-qubit gate $\hat{J}(\alpha_i) = \hat{H}\hat{R}_z(\alpha_i)$ [for brevity, $\hat{J}_i$ with $i = 1, \ldots, 6$] can be decomposed into a Hadamard gate $\hat{H}$ and a rotation $\hat{R}_z(\alpha)$ around the z axis of the Bloch sphere. (A minimal numerical sketch of this building block is given after the figure list.)

  • Figure 3

    Experimental comparison of quantum devices. (a) Experimental squared $\ell_2$ distances between pairs of independent quantum devices, averaged over 34 instances. Only computations using different numbers of physical qubits are compared, so the implementations represent fundamentally different sampling problems. Uncertainties in parentheses correspond to 1 standard deviation of statistical noise. (b) Squared $\ell_2$ distances for each quantum device averaged over comparisons to all other devices (blue). To illustrate that this metric qualitatively captures the performance of the individual devices, we also show the squared $\ell_2$ distances of each device against the (nonscalable) theory prediction (red). These two quantities are not expected to coincide. However, arranging devices according to either metric yields the same order in our experiments.

  • Figure 4

    Experimental comparison between outputs of computations for pairs of different devices. Data points represent the (rescaled) outcome probabilities of one device (horizontal axis, bottom label) versus another (vertical axis, top label), for MBQC-related instances. We compare all pairs of independent devices, implementing different-size circuits and based on different physical systems. In the ideal case, all points should, up to unavoidable shot noise, lie on the diagonal. For each data set containing 136 outcome probabilities (34 circuits and four outcome combinations each), we perform linear total least-squares regression (solid blue line) to quantify the deviation from the ideal correlation (red dashed line), yielding regression slopes with 1σ uncertainties of (top, left to right) 1.04(2), 0.88(3), 0.61(2), 0.84(3), (bottom, left to right) 1.05(2), 0.80(3), 0.56(2), and 0.58(2), respectively. Experimental error bars correspond to 1σ statistical uncertainty associated with the data points, and the blue shaded bands represent 3σ mean prediction intervals for the regression. (A minimal sketch of such a total least-squares fit is given after the figure list.)

  • Figure 5

    Comparison between experimental outcome probabilities and theoretical expected values per single device. From left to right: Oxford, Vienna, IBM, Rigetti, and Innsbruck. As in Fig. 4, data points represent the (rescaled) outcome probabilities obtained from the respective device (Experiment), against the corresponding theory value (Theory) obtained from direct circuit simulation. For each data set, we perform linear least-squares regression (blue line) to quantify the deviation from the ideal correlation (red dashed line), yielding regression slopes with 1σ uncertainties of 0.869(6), 0.85(3), 0.70(3), 0.50(2), and 0.94(1), respectively. Experimental error bars correspond to 1σ statistical uncertainty, and the blue shaded bands represent 3σ mean prediction intervals for the regression.

  • Figure 6

    Comparison between experimental outcome probabilities for the circuits Ca and Cb on the Innsbruck device. Linear total least-squares regression (solid blue line) yields a regression slope of 1.09(2) compared to the ideal value of 1 (dashed red line). Experimental error bars correspond to 1σ statistical uncertainty, and the blue shaded bands represent 3σ mean prediction intervals for the regression.

  • Figure 7

    Expectation values of the measured stabilizer operators for the two identity products related to the six-qubit H-shaped cluster state.

  • Figure 8

    Box cluster 2×5 with two choices of g-flow. In diagram (a), the “left-to-right” flow maps to the two-qubit, depth-5 circuit $C_{2\times 5}$, whereas in diagram (b), the “top-to-bottom” flow maps to a five-qubit, depth-2 circuit $C_{5\times 2}$. The construction for the 2×4 box cluster is identical, with the elements shown in dashed borders removed.

  • Figure 9

    Averaged squared $\ell_2$ distances for different numbers of averaged instances. We show data for the Innsbruck self-verification (blue), the cross-verification between Innsbruck and Oxford (orange), and the cross-verification between Innsbruck and IBM (green). The solid lines correspond to the mean values over all samples of a fixed size, and the shaded regions depict the 1σ spread of these values. Statistical uncertainties are not taken into account in this analysis.

  • Figure 10

    Examples of the required error correlations between the two MBQC-derived circuits for the protocol to fail. The left circuit corresponds to a horizontal flow in a 3×4 lattice cluster, where either a (a) single-qubit or a (b) two-qubit error occurs. The effect of either error in the underlying MBQC pattern is shown in the middle. Translating to the circuit picture for a vertical flow, the right circuit then shows the error that would be required on the other device (correlated with the first) for the two circuits to still produce consistent results.

  • Figure 11

    Data for cross-verification using the box clusters 2×4 (left) and 2×5 (right). (a) Scatter plots of outcome probabilities compared to the theoretical expectation for the following (from left to right): Oxford, Innsbruck, and IBM for the 2×4 cluster, and Oxford and IBM2 for the 2×5 cluster. The squared $\ell_2$ distances $\|p_1 - p_2\|^2$ for these theory comparisons are 0.00200(3), 0.0578(5), 0.0123(6), 0.00400(8), and 0.145(1). This is in good agreement with the trends seen from a linear least-squares regression (blue line) quantifying the deviation from the ideal correlation (red dashed line), with the resulting regression slopes R with 1σ uncertainties given in the top-left corner of the respective figure panel. (b) Scatter plots of the outcome probabilities for 2×2 cross-check verification between Oxford-IBM, Innsbruck-IBM, and Innsbruck-Oxford for the 2×4 cluster, and Oxford-IBM2 for the 2×5 cluster. The squared $\ell_2$ distances $\|p_1 - p_2\|^2$ for the cross-validations are 0.0504(6), 0.060(2), 0.0115(6), and 0.118(1). This is in good agreement with the regression coefficients R obtained from linear total least-squares regression (blue line) given in the top-left corner of the respective figure panel. Experimental error bars correspond to 1σ statistical uncertainty, and the blue shaded bands represent 3σ mean prediction intervals for the regression.

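As referenced in the caption of Fig. 2, the circuits obtained from an MBQC pattern are built from the single-qubit gate $\hat{J}(\alpha) = \hat{H}\hat{R}_z(\alpha)$, applied between the entangling operations defined by the graph edges. The numpy sketch below only makes that building block explicit; the angles, the two-qubit wiring, and the use of a controlled-Z entangler are illustrative assumptions, not the circuits $C_a$ and $C_b$ of the paper.

    import numpy as np

    H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)    # Hadamard gate
    CZ = np.diag([1, 1, 1, -1]).astype(complex)     # controlled-Z entangling gate

    def Rz(alpha):
        """Rotation by angle alpha around the z axis of the Bloch sphere."""
        return np.array([[np.exp(-1j * alpha / 2), 0],
                         [0, np.exp(1j * alpha / 2)]])

    def J(alpha):
        """Basic single-qubit gate J(alpha) = H Rz(alpha) from the Fig. 2 caption."""
        return H @ Rz(alpha)

    # Toy two-qubit circuit: a layer of J gates, a CZ, and a second J layer.
    # The angles stand in for the measurement angles of an MBQC pattern and
    # are arbitrary placeholders.
    a1, a2, a3, a4 = 0.3, 1.1, -0.7, 0.2
    U = np.kron(J(a3), J(a4)) @ CZ @ np.kron(J(a1), J(a2))

    plus_plus = np.ones(4) / 2.0                    # |++> input state
    print(np.round(np.abs(U @ plus_plus) ** 2, 4))  # output probabilities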
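
The regression slopes quoted in the captions of Figs. 4–6 and 11 come from linear total least-squares (orthogonal) fits of one set of outcome probabilities against another. A minimal sketch of such a fit is given below, using synthetic data in place of experimental probabilities; the noise model, uncertainty weighting, and error analysis of the paper are not reproduced here.

    import numpy as np

    def tls_slope(x, y):
        """Slope of a total least-squares (orthogonal) straight-line fit,
        taken from the leading principal direction of the centered data."""
        data = np.column_stack([x - x.mean(), y - y.mean()])
        _, _, vt = np.linalg.svd(data, full_matrices=False)
        direction = vt[0]                  # direction of largest variance
        return direction[1] / direction[0]

    rng = np.random.default_rng(0)

    # Synthetic outcome probabilities: 34 circuits with 4 outcomes each,
    # mimicking the 136 data points per panel of Fig. 4. Device B is assumed
    # noisier, which compresses its probabilities towards uniform and pulls
    # the fitted slope below the ideal value of 1.
    p_ideal = rng.dirichlet(np.ones(4), size=34).ravel()
    p_device_a = 0.95 * p_ideal + 0.05 * 0.25
    p_device_b = 0.80 * p_ideal + 0.20 * 0.25

    print(f"TLS slope: {tls_slope(p_device_a, p_device_b):.3f}")  # ~0.84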