
The Effect of Penalty Factors of Constrained Hamiltonians on the Eigenspectrum in Quantum Annealing

Published: 02 March 2023

Abstract

Constrained optimization problems are usually translated to (naturally unconstrained) Ising formulations by introducing soft penalty terms for the previously hard constraints. In this work, we empirically demonstrate that assigning the appropriate weight to these penalty terms enlarges the minimum spectral gap in the corresponding eigenspectrum, which in turn leads to a better solution quality on actual quantum annealing hardware. We apply machine learning methods to analyze the correlations between the penalty factors and the minimum spectral gap for six selected constrained optimization problems and show that regression using a neural network allows us to predict the best penalty factors in our settings for various problem instances. Additionally, we observe that problem instances with a single global optimum are easier to optimize than ones with multiple global optima.

1 Introduction

Quantum computing hardware has matured considerably in the past years, and it is now possible to tackle highly complex problems in a completely different way than with classical solution methods. In particular, the computing devices of D-Wave Systems, which implement a quantum annealing algorithm in hardware, have been extensively studied for solving diverse kinds of optimization problems [8, 13, 15, 24, 30, 31].
The solution quality when using quantum hardware for highly complex problems depends on both the hardware’s maturity and the problem formulation (i.e., the quantum algorithm to be executed). For the latter, much research has been put into the development of Ising spin glass model formulations for various problem classes [4, 16, 22], which is the input type for D-Wave Systems’ hardware. However, as is typical of NISQ (noisy intermediate-scale quantum) devices, many hardware- and software-related hyperparameters can be tuned to achieve a potentially better solution quality [18, 23, 27, 32]. Since applied quantum computing is still in its infancy, not much is known about the hyperparameters of the Ising model and how to adjust them to execute the problem formulations efficiently on hardware.
One set of these parameters comprises the penalty factors, or weights, of constrained Ising Hamiltonians [19, 22]. Those penalty factors ensure that the constraints of the corresponding optimization problem are satisfied: when a constraint is violated, a penalty value is added to the solution energy that is to be minimized within the quantum annealing process. Lucas [22] presents many Ising Hamiltonians of constrained optimization problems. However, so far there is no rule of thumb on how to set the penalty factors efficiently (i.e., without trial and error or guessing them). In previous work, we showed that optimizing those parameters with cross entropy scales the minimum spectral gap such that the solution quality of three problem classes increased when using D-Wave Systems’ quantum annealing algorithm [26]. However, since we did not see a correlation between the optimized penalty factors of the problem instances, one would have to apply the time-consuming cross-entropy method for each instance individually.
This motivates our investigation of the effect of penalty factors on the minimum spectral gap of six selected constrained Hamiltonians using Machine Learning (ML) techniques. We aim to remedy the need for intensive parameter testing by using a Neural Network (NN)-based regression model to predict useful penalty factors, and we investigate correlations between the minimum spectral gap and the penalty factors to give guidelines on how to set them for the investigated problem classes. The results show that predicting the best penalty factors in our setting is indeed possible and that there are correlations with the minimum spectral gap between and within the corresponding problem classes.
The article is structured as follows. Section 2 provides background information on quantum annealing, as well as Ising-type Hamiltonians and their eigenspectra. In addition, the investigated constrained Hamiltonians are stated. Section 3 discusses related work and previous investigations on constrained Hamiltonians. Section 4 explains the experimental setup, which includes the data generation and applied ML evaluation methods. In Section 5, the results are shown and discussed, and finally, in Section 6, a summary and future work are presented.

2 Background

2.1 Quantum Annealing

Quantum annealing is a meta-heuristic most commonly known for solving optimization and decision problems [19, 21, 28]. Although this meta-heuristic can also be simulated classically, it has been implemented in quantum hardware by companies such as D-Wave Systems. Those quantum annealers are designed to minimize a spin glass system, described by an Ising Hamiltonian in the following form:
\begin{equation} \mathcal {H} = \sum _{i} h_{i}\sigma _z^{(i)} + \sum _{i\gt j} J_{i,j}\sigma _z^{(i)}\sigma _z^{(j)}, \end{equation}
(1)
where \(\sigma _z^{(i)}\) is the Pauli z-matrix operating on qubit i, \(h_{i}\) is the independent energy or bias of qubit \(i,\) and \(J_{ij}\) are the interaction energies or couplings of qubits i and j.
Within the fundamental process of quantum annealing, an initial Hamiltonian \(\mathcal {H}_{I}\) with an easy-to-prepare minimal energy configuration (or ground state) is physically interpolated to a problem Hamiltonian \(\mathcal {H}_{P}\) whose minimal energy configuration is sought (see Equation (2)). The minimal energy configuration of the problem Hamiltonian corresponds to the best solution of the defined problem. The physical principle on which the D-Wave computation process is based can be described by a time-dependent Hamiltonian as follows:
\begin{equation} \mathcal {H}(t) = \underbrace{-\frac{\mathcal {A}(t)}{2}\left(\sum _{i}\sigma _x^{(i)}\right)}_{\mathcal {H}_I} + \underbrace{\frac{\mathcal {B}(t)}{2}\left(\sum _{i}h_{i}\sigma _z^{(i)} + \sum _{i\gt j}J_{i,j}\sigma _z^{(i)}\sigma _z^{(j)}\right)}_{\mathcal {H}_P}. \end{equation}
(2)
\(\mathcal {A}(t)\) and \(\mathcal {B}(t)\) are the anneal functions of D-Wave machines, with \(\mathcal {A}(t)\) stating the tunneling energy and \(\mathcal {B}(t)\) being the energy of the problem Hamiltonian at time t in units of joules. The anneal functions must satisfy \({\mathcal {B}(t = 0) = 0}\) and \({\mathcal {A}(t = \tau) = 0}\), with \(\tau\) being the total evolution time. As the state evolution changes from \(t=0\) to \(t=\tau\), the annealing process, described by \(\mathcal {H}(t),\) leads to the final form of the Hamiltonian corresponding to the objective Ising problem that needs to be minimized. Therefore, the ground state of the initial Hamiltonian \(\mathcal {H}(0) = \mathcal {H}_I\) evolves to the ground state of the problem Hamiltonian \({\mathcal {H}(\tau) = \mathcal {H}_P}\). The measurements performed at time \(\tau\) deliver low energy states of the Ising Hamiltonian as stated in Equation (1).
According to the adiabatic theorem [2], if this process is executed sufficiently slowly and smoothly (i.e., \(\tau\) is large) and the coherence is preserved long enough, the probability of obtaining the ground state of the problem Hamiltonian is close to 1 [1]. However, since no real-world computation can run in perfect isolation, the annealing process can suffer from non-adiabatic effects (i.e., thermal fluctuations), which can cause the system to jump from the ground state to an excited state. The minimum distance between the ground state and the first excited state—the one with the lowest energy apart from the ground state—at any point in the anneal process is called the minimum spectral gap \(g_\text{min}\) of \(\mathcal {H}(t)\) and is defined as
\begin{equation} g_\text{min} = \min \limits _{0 \le t \le \tau ; \text{ } j \ne 0} \;\; [E_j(t) - E_0(t)], \end{equation}
(3)
where \(E_j(t)\) is the energy of any excited state and \(E_0(t)\) is the energy of the ground state at time t [17]. By computing all those energy states and their corresponding eigenenergies, one can analyze the eigenspectrum of a (relatively small) Hamiltonian and assess its minimum spectral gap (Figure 1 presents an example of an eigenspectrum). However, every problem that one can specify has a different Hamiltonian and therefore a different corresponding eigenspectrum. According to D-Wave Systems, the most difficult problems in terms of quantum annealing are generally those with the smallest spectral gaps [11]. For completeness, it should be noted that there is an alternative formulation to the Ising spin glass system that is used frequently. The so-called Quadratic Unconstrained Binary Optimization (QUBO) formulation is mathematically equivalent to the Ising model and replaces each Pauli z-operator \(\sigma _z^{(i)}\) with a Boolean variable \(x_i\); the conversion is as simple as setting \(\sigma _z^{(i)} \rightarrow 2x_i - 1\) [3, 29]. The D-Wave Systems annealer is also able to minimize the functional form of the QUBO formulation \(x^TQx\), with \(x \in \lbrace 0,1\rbrace ^n\) being a vector of size n of binary variables and \(Q \in \mathbb {R}^{n \times n}\) being a symmetric \(n \times n\) real-valued matrix describing the interactions between the variables. Given matrix Q, the annealing process tries to find binary variable assignments x that minimize the objective function.
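For concreteness, the following sketch performs this conversion in the QUBO-to-Ising direction by substituting \(x_i = (s_i + 1)/2\) (the inverse of \(\sigma _z^{(i)} \rightarrow 2x_i - 1\)); the function name and the handling of the constant offset are our own illustrative choices, not part of the article.

```python
import numpy as np

def qubo_to_ising(Q):
    """Convert a QUBO matrix Q (objective x^T Q x, x in {0,1}^n) into Ising
    coefficients h_i, J_ij and a constant offset, using x_i = (s_i + 1) / 2
    with spin variables s_i in {-1, +1}."""
    Q = np.asarray(Q, dtype=float)
    n = Q.shape[0]
    h = np.zeros(n)
    J = np.zeros((n, n))
    offset = 0.0
    for i in range(n):
        # diagonal term: Q_ii * x_i = Q_ii * (s_i + 1) / 2
        h[i] += Q[i, i] / 2.0
        offset += Q[i, i] / 2.0
        for j in range(i + 1, n):
            q = Q[i, j] + Q[j, i]  # combine both off-diagonal entries
            # q * x_i * x_j = q / 4 * (s_i s_j + s_i + s_j + 1)
            J[i, j] += q / 4.0
            h[i] += q / 4.0
            h[j] += q / 4.0
            offset += q / 4.0
    return h, J, offset
```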
Fig. 1.
Fig. 1. Simplified representation of an eigenspectrum, where the ground state is the blue line at the bottom and the excited states are the ones above. The red circle marks the minimum spectral gap \(g_\text{min}\). Adapted from D-Wave Systems Inc. [11].

2.2 Constrained Hamiltonians

In this section, the six investigated problem Hamiltonians with constraints are stated. The constraints are weighted with the penalty factors to ensure valid solutions for the corresponding problem.

2.2.1 Minimum Exact Cover Problem.

Within the Minimum Exact Cover Problem (MECP), a set \(U = \lbrace 1,\ldots , n\rbrace\) and subsets \(V_i \subseteq U\) with \(i = 1,\ldots , N\) are given such that \(U = \bigcup \limits _{i} V_i\). The task is to find the smallest subset of the set of sets \(V_i\), called R, for which the elements of R are disjoint sets and the union of the elements of R is U. The QUBO Hamiltonian is stated in the work of Lucas [22].
\begin{equation} \text{min } B \sum _{i} x_i + A \sum _{\alpha =1}^{n} \left(1 - \sum _{i:\alpha \in V_i} x_i \right)^2 \end{equation}
(4)
Here, \(\alpha\) denotes the elements of U, whereas i denotes the subsets \(V_i\). The second term, which represents the constraint, equals 0 if every element of U is included exactly once, which implies that the subsets \(V_i\) of R are disjoint but their union includes every element of U. With the first term, representing the objective function, the smallest number of subsets is sought. The eigenvalue of the ground state of this Hamiltonian will be \(m \cdot B\), where m is the smallest number of subsets required for the complete union. The ratio of the penalty factors A and B can be determined by considering the worst-case scenario—that is, a small number of subsets with a single common element and whose union is U. To ensure this does not happen, one can set
\begin{equation} n \cdot B \lt A. \end{equation}
(5)
The number of variables, or logical qubits, needed for the Hamiltonian scales linearly with the number of available subsets N.
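To make the construction above concrete, the following sketch assembles the QUBO matrix of Equation (4) for a toy MECP instance; the helper name, the example instance, and the specific value of A (set just above the bound of Equation (5)) are our own illustrative assumptions.

```python
import numpy as np

def mecp_qubo(U, subsets, A, B):
    """QUBO matrix for the MECP Hamiltonian of Equation (4):
    min B * sum_i x_i + A * sum_alpha (1 - sum_{i: alpha in V_i} x_i)^2."""
    N = len(subsets)
    Q = np.zeros((N, N))
    for i in range(N):
        Q[i, i] += B  # objective: count the selected subsets
    for alpha in U:
        members = [i for i, V in enumerate(subsets) if alpha in V]
        # (1 - sum x_i)^2 = 1 - sum x_i + 2 * sum_{i<j} x_i x_j  (since x_i^2 = x_i)
        for i in members:
            Q[i, i] -= A
        for a, i in enumerate(members):
            for j in members[a + 1:]:
                Q[i, j] += 2 * A
    return Q  # constant offset A * |U| is dropped

# Toy instance: U = {1, 2, 3}, V_1 = {1, 2}, V_2 = {3}, V_3 = {1, 2, 3}
U = {1, 2, 3}
subsets = [{1, 2}, {3}, {1, 2, 3}]
B = 1.0
A = B * len(U) + 0.1  # just above the bound n * B < A of Equation (5)
Q = mecp_qubo(U, subsets, A, B)
```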

2.2.2 Set Packing Problem.

Within the Set Packing Problem (SPP), a set \(U = \lbrace 1,\ldots , n\rbrace\) and subsets \(V_i \subseteq U\) with \(i = 1,\ldots , N\) are given. The task is to find the maximum number of subsets \(V_i\) that are pairwise disjoint. Lucas [22] gives the following QUBO Hamiltonian.
\begin{equation} \text{min } -B \sum _{i} x_i + A \sum _{i,j:V_i \cap V_j \ne \emptyset } x_ix_j \end{equation}
(6)
The second term is minimized only when all subsets are disjoint, whereas the first term simply counts the number of included sets. Choosing the penalty factors
\begin{equation} B \lt A \end{equation}
(7)
ensures that it is never favorable to violate the constraint, represented by the second term. Note that there will always be a penalty of at least A per extra set included. Just like the MECP, the SPP requires N logical qubits.

2.2.3 Minimum Vertex Cover Problem.

The Minimum Vertex Cover Problem (MVCP) is defined as finding the minimal set of vertices that includes at least one endpoint of every edge of a graph. Given a graph \(G=(V,E)\) with a set of vertices \(V =\lbrace v_0,\ldots ,v_n\rbrace\) and their respective edges E, we set \(x_i = 1\) iff \(v_i\) is in the desired minimal set of vertices and \(x_i = 0\) otherwise. Following Glover et al. [16], the QUBO formulation is given as
\begin{equation} \text{min } B \sum _{i=0}^{n} x_i + A \sum _{(i,j)\in E} \left(1-x_i-x_j+x_ix_j\right). \end{equation}
(8)
The first term represents the objective function (i.e., it counts the number of vertices in the solution), whereas the second term enforces the constraint that every edge in the graph is connected to at least one vertex of the selected set. The number of variables, or logical qubits, needed scales linearly with \(|V|\). To ensure valid solutions, the penalty factors must be chosen accordingly:
\begin{equation} B \lt A. \end{equation}
(9)

2.2.4 Maximum Clique Problem.

A clique is a subset of the vertices of a graph that are all connected to each other. The Maximum Clique Problem (MCP) is defined as finding the clique of a graph \(G=(V,E)\) that has the largest number of vertices of all cliques. Following Chapuis et al. [5], the QUBO formulation is given as
\begin{equation} \text{min } -B \sum _{i=0}^{n} x_i + A \sum _{(i,j)\in \bar{E}} x_ix_j. \end{equation}
(10)
Here, \(x_i = 1\) iff vertex \(v_i\) is included in the clique and \(x_i = 0\) otherwise; n is the order of the graph, and E denotes the set of edges of the graph. The first term represents the objective function (i.e., it counts the number of vertices in the solution), whereas the second term enforces the constraint that no edge of the complement edge set \(\bar{E}\) connects two selected vertices. The number of variables, or logical qubits, needed scales linearly with the number of vertices \(|V|\) in the graph. To guarantee valid solutions, the penalty factors must be chosen accordingly:
\begin{equation} B \lt A. \end{equation}
(11)

2.2.5 Knapsack Problem.

In the Knapsack Problem (KP), n items are given, each having a certain weight \(w_i\) and a certain value \(c_i\). The items must be picked in a way that the total weight of the items is less than or equal to the knapsack capacity W and the sum of the corresponding item values is maximized. The QUBO Hamiltonian is stated in the work of Lucas [22].
\begin{equation} \text{min } - B \sum _{i = 0}^{n} c_i x_i + A \left(1 - \sum _{j=0}^{W} y_j \right)^2 + A \left(\sum _{j=0}^{W} j y_j - \sum _{i = 0}^{n} w_i x_i \right)^2 \end{equation}
(12)
Here, \(y_j\) for \(1 \le j \le W\) is a binary variable, which is set to 1 iff the final weight of the knapsack is j and 0 otherwise. In addition, the binary variable \(x_i\) is 1 iff item i is part of the solution and 0 otherwise. The second and third terms enforce that the weight can only take exactly one value and that the weight of the items in the knapsack equals the value we claimed it did. The first term sums the values of the items in the knapsack. The penalty parameters \(A,B\) are chosen according to
\begin{equation} B \cdot \sum _{i} c_i \lt A \end{equation}
(13)
to penalize violations of the weight constraint. Note that the penalty parameter range for the KP given here differs from the one in Lucas [22]; it was revised in the work of Quintero and Zuluaga [25]. The number of variables required is n + W.
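Because of the auxiliary one-hot weight variables \(y_j\), the KP Hamiltonian is the most involved of the six; the following sketch expands Equation (12) into a QUBO matrix. The variable ordering (items first, then \(y_1,\ldots ,y_W\)) and the helper names are our own assumptions.

```python
import numpy as np

def knapsack_qubo(values, weights, W, A, B):
    """QUBO for the KP Hamiltonian of Equation (12). Variable ordering:
    x_0 .. x_{n-1} (items), followed by y_1 .. y_W (one-hot total weight)."""
    n = len(values)
    size = n + W
    Q = np.zeros((size, size))

    def add(i, j, coeff):
        # accumulate coefficients in the upper triangle
        Q[min(i, j), max(i, j)] += coeff

    # objective: -B * sum_i c_i x_i
    for i, c in enumerate(values):
        add(i, i, -B * c)

    # A * (1 - sum_j y_j)^2  ->  -A on each y_j diagonal, +2A between pairs
    for j in range(1, W + 1):
        add(n + j - 1, n + j - 1, -A)
        for k in range(j + 1, W + 1):
            add(n + j - 1, n + k - 1, 2 * A)

    # A * (sum_j j*y_j - sum_i w_i x_i)^2, expanded using v^2 = v for binaries
    terms = [(-w, i) for i, w in enumerate(weights)]
    terms += [(j, n + j - 1) for j in range(1, W + 1)]
    for a, (ca, ia) in enumerate(terms):
        add(ia, ia, A * ca * ca)
        for cb, ib in terms[a + 1:]:
            add(ia, ib, 2 * A * ca * cb)
    return Q  # constant offset A is dropped
```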

2.2.6 Binary Integer Linear Problem.

Within the Binary Integer Linear Problem (BILP), a vector x of binary variables \(x = (x_1,\ldots ,x_N)\) is given. BILP tries to find the largest value of \(c \cdot x\), for some integer vector c, given the constraint \(S\cdot x = b\), with S being an \(m \times N\) matrix and b being a vector with m components. The corresponding QUBO Hamiltonian is given as in the work of Lucas [22]:
\begin{equation} \text{min } -B \sum _{i=1}^{N} c_ix_i + A \sum _{j=1}^{m} \left(b_j - \sum _{i=1}^{N} S_{ji}x_i \right)^2. \end{equation}
(14)
The second term enforces the constraint \(S \cdot x = b\), whereas the first term maximizes the scalar product of the vectors c and x. When the coefficients \(S_{ij}\) and \(c_i\) are integers, the penalty factors must be chosen accordingly:
\begin{equation} B\cdot N \lt A. \end{equation}
(15)
The number of binary variables, respectively logical qubits needed, scales linearly with the size of the vectors c and x. For more details, see the work of Lucas [22].

3 Related Work

Coffey [7] came up with an adiabatic quantum computing framework to study the KP, in which he transformed the optimization problem to an Ising Hamiltonian and used small problem instances to assess the approach. He concluded that numerical and theoretical analysis of the minimum spectral gap along the anneal path is of great importance to improve adiabatic quantum computing [7].
Later, Choi [6] theoretically showed that by adjusting the energy penalty value of the Maximum Weighted Independent Set (MWIS) Ising Hamiltonian, one may change the quantum evolution from one that has an anti-crossing to one that does not have one, or vice versa, and therefore significantly influence the minimum spectral gap.
Following these insights, Roch et al. [26] proposed a cross-entropy optimization method for adjusting the penalty factors of three constrained Hamiltonians (KP, MECP, SPP) to scale their minimum spectral gaps. By doing so, the solution quality on D-Wave quantum annealers improved (i.e., the probability of measuring the global optimum increased). However, the authors did not observe any correlations between the optimized penalty factors of individual problem instances, meaning that one has to apply the time-consuming cross-entropy method for each instance separately. That is the main motivation for us to analyze the energy spectrum, in particular the minimum spectral gap, of different constrained problem Hamiltonian classes with ML techniques in order to find patterns and guidelines on how to set the corresponding penalty factors such that an overall improvement in solution quality can be achieved.

4 Experimental Setup

4.1 Data Preparation

In this section, we describe the procedure for generating the data and computing the necessary information—that is, the penalty factor ratio, the minimum spectral gap, and its location in time w.r.t. the anneal path—as input for the ML analysis methods. As mentioned in Section 2.2, each constrained Hamiltonian has a certain valid half-open interval for its penalty factor A. We fixed the penalty factor \(B=1\), and the sampling interval of penalty factor A was computed according to the problem-specific constraint (see Equations (5), (7)–(15)).
Using the MECP as an example, the sampling interval for A was set to \(\left[B\cdot n+0.1, B\cdot n+5.0\right]\), with n being the number of sets of the problem instance, and the MECP penalty factor ratio was calculated via \(\frac{B\cdot n}{A}\). Note that we thereby restricted the generally half-open interval of A to this finite range. Although one could theoretically use a larger sampling range, one has to consider that D-Wave Systems’ auto-scaling feature scales every Ising model weight and bias to a hardware-solver-dependent range of \(h_{i}\) and \(J_{ij}\) [12]. This means that the largest value in the Ising model (typically determined by the constraint penalty factor A) is scaled to the upper bound of the range and everything else proportionally smaller. In conclusion, to get good and meaningful solutions, one should set the Ising model coefficients of the constraints large enough to ensure valid solutions but small enough to maintain the importance of the individual Ising terms.
For each problem instance, we drew 50 evenly spaced values of the penalty factor A from this interval to capture the rate of change of the minimum spectral gap and of its location in the anneal path. The minimum spectral gap of each constrained Hamiltonian with its sampled penalty factors was then calculated according to D-Wave Systems’ anneal functions \(\mathcal {A}(t)\) and \(\mathcal {B}(t)\) of their Advantage 4.1 system [10]. Note that we also incorporated D-Wave Systems’ auto-scaling feature during the generation of the data, before computing the minimum spectral gap, so that the experiments match the behavior of the D-Wave hardware [12].
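As a rough illustration of this computation, the sketch below diagonalizes the time-dependent Hamiltonian of Equation (2) on a discretized anneal path and returns the minimum spectral gap together with its location. The linear anneal schedule used here is only a placeholder; the actual experiments use the published Advantage 4.1 anneal functions and the auto-scaling feature, which are not reproduced in this sketch.

```python
import numpy as np

def min_spectral_gap(h, J, steps=101):
    """Diagonalize H(s) = -A(s)/2 * sum_i sigma_x^(i) + B(s)/2 * H_P on a
    discretized anneal path and return (gap size, location index).
    A(s), B(s) are placeholder linear schedules, not the D-Wave ones."""
    n = len(h)
    sx = np.array([[0.0, 1.0], [1.0, 0.0]])
    sz = np.array([[1.0, 0.0], [0.0, -1.0]])
    eye = np.eye(2)

    def single(op, i):
        # tensor product placing `op` on qubit i and identity elsewhere
        out = np.array([[1.0]])
        for k in range(n):
            out = np.kron(out, op if k == i else eye)
        return out

    H_driver = sum(single(sx, i) for i in range(n))
    H_problem = sum(h[i] * single(sz, i) for i in range(n))
    for i in range(n):
        for j in range(i + 1, n):
            if J[i][j] != 0.0:
                H_problem = H_problem + J[i][j] * (single(sz, i) @ single(sz, j))

    g_min, loc = np.inf, 0
    for step in range(steps):
        s = step / (steps - 1)
        A_s, B_s = 1.0 - s, s  # placeholder anneal schedule
        H = -A_s / 2.0 * H_driver + B_s / 2.0 * H_problem
        energies = np.linalg.eigvalsh(H)  # eigenvalues sorted ascending
        gap = energies[1] - energies[0]  # (near) zero for degenerate ground states
        if gap < g_min:
            g_min, loc = gap, step
    return g_min, loc
```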
In Figure 2, the corresponding preliminary raw plots of 25 random instances for each problem class are visualized to aid the understanding of the experiments in Section 5. However, for analyzing the trends of the minimum spectral gap, its location in the anneal path, and the penalty factor ratio with ML methods, we generated 1,000 unique problem instances per problem class. Note that we restricted the size of the instances to eight variables, or logical qubits, since the numerical computation of the eigenspectrum of each instance is quite time consuming and its cost grows with the number of variables.
Fig. 2.
Fig. 2. Visualization of the relation of the minimum spectral gap and its location in the anneal path to the penalty factor ratio of 25 random instances per problem class. For each problem instance (represented by one color), 50 evenly spaced penalty factor pairs (represented by the dots)1 were sampled. Note that within the plots, some trajectories might be overlapping. The x-axis shows the penalty factor ratio, which is calculated problem-specifically as stated in Section 4.1. The anneal path and the location of the minimum spectral gap are discretized to range from 0 to 100, with 0 being the start time of the annealing, as given on the y-axis. The size of the minimum spectral gap is stated on the z-axis. As explained in Section 4.2.3, the larger the minimum spectral gap, the higher the probability to stay in the ground state during the annealing.
Note that to generate such large datasets of Ising problem instances and their minimum spectral gaps for training ML methods, we computed the eigenspectra of the comparatively smaller logical Ising problem instances, which differ to some extent from the hardware embedded ones. We address this topic in Appendix A.

4.2 Evaluation Methods

4.2.1 Clustering.

To find patterns and determine similarities between and within problem classes in the generated minimum spectral gap trajectories, we used DBSCAN, a density-based clustering method that identifies arbitrarily shaped clusters and noise in large, high-dimensional databases [14]. DBSCAN clusters dense regions of points in the data space that are separated by regions of low density. The two main hyperparameters of DBSCAN are \(\varepsilon\) and MinPts; for a detailed description of the DBSCAN procedure, see the work of Khan et al. [20]. In a preprocessing step, the data was normalized to the range [0, 1], and the pairwise distance matrix of the three-dimensional trajectories (size of the minimum spectral gap, penalty factor ratio, and location) was given as input to DBSCAN. We used the Pearson Correlation Coefficient (PCC) between the pairwise distance matrix and the DBSCAN labeling as the clustering metric.
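A condensed sketch of this clustering step is given below, assuming scikit-learn's DBSCAN on a precomputed distance matrix. The article does not detail how the PCC between the distance matrix and the labeling is computed; correlating the pairwise distances with a pairwise "different cluster" indicator, as done here, is our own reading.

```python
import numpy as np
from scipy.spatial.distance import pdist, squareform
from scipy.stats import pearsonr
from sklearn.cluster import DBSCAN

def cluster_trajectories(trajectories, eps, min_pts):
    """Cluster minimum-spectral-gap trajectories with DBSCAN on a
    precomputed pairwise distance matrix. `trajectories` has shape
    (instances, samples, 3): gap size, penalty factor ratio, location."""
    X = np.asarray(trajectories, dtype=float)
    # normalize each of the three dimensions to [0, 1]
    mins = X.min(axis=(0, 1), keepdims=True)
    maxs = X.max(axis=(0, 1), keepdims=True)
    X = (X - mins) / (maxs - mins + 1e-12)

    # pairwise Euclidean distances between whole (flattened) trajectories
    D = squareform(pdist(X.reshape(X.shape[0], -1)))
    labels = DBSCAN(eps=eps, min_samples=min_pts, metric="precomputed").fit(D).labels_

    # one plausible reading of the PCC metric: correlate pairwise distances
    # with a pairwise "different cluster" indicator derived from the labels
    iu = np.triu_indices(len(labels), k=1)
    different = (labels[iu[0]] != labels[iu[1]]).astype(float)
    pcc, _ = pearsonr(D[iu], different)
    return labels, pcc
```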

4.2.2 Regression.

For predicting the penalty factors associated with the largest minimum spectral gap of a given problem instance, NNs with dense layers were used. The architecture was kept simple, with no hidden layer and ReLU as the activation function between the input and output layers. For the input layer, each problem instance was flattened and normalized to the range \([-1;+1]\). Since problem instances differ in size (e.g., different numbers of sets of different sizes, resulting in differing vector lengths when flattened), all instances were zero-padded to the size of the largest generated instance. The number of output neurons of the regression network was set to 2, representing the two penalty factors A and B. We selected a stochastic gradient descent method (Adam) for optimizing the weights of the NN. The best results were obtained with a learning rate of 0.001 for MECP, MCP, MVCP, BILP, and KP and 0.0005 for SPP. For training, a batch size of 50 was used for all problem classes except SPP, for which the batch size was set to 32. The dataset was split into \(90\%\) training data and \(10\%\) test data. To evaluate the regression model, Root Mean Square Error (RMSE) and R-squared (\(R^2\)) were used as performance indicators, telling us how well the model predicts the best penalty factors in absolute terms and how well it explains the variance of the response variable, respectively.
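A minimal Keras-style sketch of this regression setup is shown below. The single dense output layer, ReLU activation, Adam optimizer, learning rates, and batch sizes follow the description above, whereas the input dimension, number of epochs, and the exact training loop are illustrative assumptions.

```python
import tensorflow as tf

def build_penalty_regressor(input_dim, learning_rate=0.001):
    """Single dense output layer (no hidden layer) with ReLU, mapping a
    flattened, zero-padded problem instance to the penalty factors A and B."""
    model = tf.keras.Sequential([
        tf.keras.Input(shape=(input_dim,)),
        tf.keras.layers.Dense(2, activation="relu"),
    ])
    model.compile(
        optimizer=tf.keras.optimizers.Adam(learning_rate=learning_rate),
        loss="mse",
        metrics=[tf.keras.metrics.RootMeanSquaredError()],
    )
    return model

# X: flattened instances scaled to [-1, 1] and zero-padded to a common length
# y: the best penalty factor pair (A, B) per instance from the generated data
# model = build_penalty_regressor(X.shape[1])             # lr 0.0005 for SPP
# model.fit(X_train, y_train, batch_size=50, epochs=200)  # batch size 32 for SPP
```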

4.2.3 Quantum Annealing Hardware.

For assessing the solution quality of the optimized penalty factors on real hardware, D-Wave’s Advantage 4.1 system was used. The solution quality is associated with the approximation ratio, which is calculated as \({\it Approx. ratio} = \frac{{\it \#BKS}}{{\it \#Measurements}}\), with \({\it \#BKS}\) being the number of times the Best Known Solution (BKS) was measured and \({\it \#Measurements}\) being the total number of measurements (default 100). We used a fully connected graph embedding of size 8 on the D-Wave hardware and mapped the logical Ising problems to it. Since all randomly generated instances are of the same size (eight variables), each one fits into the fully connected hardware embedding. The only difference might be that couplers of the hardware are not used if the logical Ising problem is not fully connected; in that case, the hardware coupler is set to 0.0. Since previous work [26] showed that the embedding also influences the solution quality, we reused this same hardware embedding (with the same qubits and couplers) for each problem class to make the classes comparable regarding the approximation ratio depending on the quality/noise of the hardware qubits. Furthermore, following the D-Wave Systems’ guidelines, the chain strength parameter was set to \(\max (\text{Ising model coefficients}) + 1\) to avoid broken qubit chains [9].
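The two hardware-related quantities above translate directly into code; the following helpers are a sketch (taking coefficient magnitudes for the chain strength is our interpretation), and the Ocean SDK call in the trailing comment is only indicative, since it requires solver access and the fixed embedding.

```python
def approximation_ratio(sample_energies, best_known_energy, tol=1e-9):
    """Approx. ratio = #BKS / #Measurements: the fraction of the (by default
    100) measurements whose energy matches the best known solution."""
    hits = sum(1 for e in sample_energies if abs(e - best_known_energy) <= tol)
    return hits / len(sample_energies)

def chain_strength(h, J):
    """Chain strength rule used above: max Ising model coefficient + 1
    (we take coefficient magnitudes here)."""
    coeffs = list(h.values()) + list(J.values())
    return max(abs(c) for c in coeffs) + 1

# The sample energies would come from an Ocean SDK call along the lines of
#   FixedEmbeddingComposite(DWaveSampler(), embedding).sample_ising(
#       h, J, num_reads=100, chain_strength=chain_strength(h, J))
# which requires solver access and the fixed size-8 embedding described above.
```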

5 Evaluation and Discussion

This section provides insight into our empirical evidence showing that optimizing the penalty factors of Ising model formulations leads to a larger minimum spectral gap and furthermore causes a better approximation ratio on the D-Wave Advantage system.

5.1 Clustering Analysis of Penalty Factor Ratio and Minimum Spectral Gap

Since it is hard to visually find trends in the 1,000 generated problem instances of each problem class, DBSCAN was used to cluster the generated trajectories as described in Section 4. In Figure 3, the DBSCAN clustering results of the minimum spectral gap and its location over the set of penalty factor ratios are plotted. The axis labeling is exactly as in Figure 2. In each subplot, the mean trajectory and the \(95\%\) confidence interval, as error bars, of each found cluster are plotted. In addition, the cluster label is annotated in text form. Table 1 shows the associated best found DBSCAN hyperparameters and the achieved PCC for each problem class, which was used as metric for the clustering.
Fig. 3.
Fig. 3. DBSCAN clustering results of the minimum spectral gap and its location over a set of penalty factor ratios. The labeling of the clustering is used to color the original minimum spectral gap trajectories and is additionally annotated to each cluster in text form. To visualize the 1,000 problem instances, in each subplot the mean trajectory and the \(95\%\) confidence interval, as error bars, of each cluster are shown.
Table 1.
Problem Class | \(\varepsilon\) | MinPts | PCC
MCP | 1.17 | 6 | –0.962
MVCP | 0.34 | 6 | –0.929
KP | 0.55 | 48 | –0.994
SPP | 0.37 | 24 | –0.889
BILP | 0.46 | 12 | –0.919
MECP | 0.22 | 24 | 0.911
Table 1. Best Found DBSCAN Hyperparameters for Each Problem Class with the PCC Between the Pairwise Distance Matrix of the Trajectories and the Cluster Labels as the Metric of the Clustering
In general, the clusterings of MVCP, MCP, and SPP are similar in that they all exhibit multiple low-lying clusters w.r.t. the size of the minimum spectral gap (trajectories at the very bottom of each plot) with a location of around 60 to 90 in the discretized anneal path ranging from 0 to 100 (see the y-axis). Additionally, they all have a cluster growing in terms of the size of the minimum spectral gap with increasing penalty factor ratio and a comparatively early gap location of 10 to 20 (cf. Figure 3(b), (c), and (e)). This means that in theory the problem instances of these clusters can be optimized w.r.t. the minimum spectral gap size by choosing a certain penalty factor ratio. Moreover, we assume that the early spectral gap of those instances might be favorable, since the coherence time of NISQ computers, like those of D-Wave Systems, is still limited. With an early minimum spectral gap, the probability to jump from the ground to an excited state due to natural quantum decoherence might be decreased. Note that for MCP, there is one cluster (label \(-1\)) of problem instances with a comparatively large minimum spectral gap trajectory. Investigation showed that those MCP instances represent fully connected graphs/cliques, which constitute trivial MCPs whose constraint term, and thus penalty factor \(A,\) vanishes in the Ising formulation (see Equation (10)). Therefore, these instances cannot be optimized with a certain penalty factor ratio and are visualized as a horizontal trajectory in Figure 3(c).
MECP, KP, and BILP all show different clusterings. As already seen in the preliminary raw data, MECP has instances with different trends in the clustering (cf. Figures 2(a) and 3(a)). There is one cluster (label \(-1\)) with no slope and three clusters with upward (labels 0 and 1) and downward (label 2) trends for an increasing penalty factor ratio. KP shows two clusters, both having a small upward trend with an increasing penalty factor ratio (cf. Figure 3(d)). Within the BILP clustering, all found clusters retain the same minimum spectral gap size despite varying penalty factor ratios. Three clusters (labels \(-1\), 7, and 8) have a relatively small minimum spectral gap and a late location (80–100) in the anneal path, whereas the remaining clusters gained larger minimum spectral gaps at an early stage (20–40) of the anneal path (cf. Figure 3(f)). Investigation showed that BILP instances do not differ in the on- and off-diagonal values of the Ising formulation (i.e., the \(h_i\) and \(J_{ij}\) values of the Ising model). Therefore, no change in the ratio of penalty factors is possible (see Equation (15)).

5.2 Regression Analysis of Penalty Factor Ratio and Minimum Spectral Gap

In the next step, we investigate whether an NN is able to predict the penalty factor ratio associated with the largest minimum spectral gap, given the problem instance as input. Figure 4 shows the resulting RMSE and \(R^2\) coefficients while training our model, including the \(95\%\) confidence interval over 10 runs. Trivially, for KP, MVCP, MCP, and SPP, the \(R^2\) coefficient reaches around \(0.981 \pm 0.002\) to \(1.0 \pm 0.0\) at the end of training, since for these problems the best penalty ratio to choose is the maximal one (i.e., where factor A is set to the lowest possible value in our setting). Additionally, the RMSE converges to 0.0, which tells us that the distance between the penalty factors predicted by our regression model and the actual best penalty factors in our setting is very small or zero. Interestingly, training a good NN model for MECP and BILP is more difficult. The best achieved \(R^2\) coefficient was \(0.918 \pm 0.048\) and \(0.959 \pm 0.012\), respectively. This is also reflected in a comparatively worse RMSE of \(0.091 \pm 0.048\) and \(0.072 \pm 0.001\). A possible reason for this can be observed in the corresponding clustering (see Figure 3). Whereas KP, MVCP, MCP, and SPP have only flat or upward trends with an increasing penalty factor ratio, MECP contains both upward and downward trends, which leads to different penalty factors being best. We presume that this causes a slightly worse performance in predicting the best penalty factors in our setting. Regarding BILP, no trajectory cluster has a slope, which makes all penalty factor ratios equally good w.r.t. the size of the minimum spectral gap. We assume that this results in a behavior of the NN model similar to that for MECP.
Fig. 4.
Fig. 4. \(R^2\) (a) and RMSE (b) coefficients while training our model including the \(95\%\) confidence interval over 10 runs, for the six problem types investigated. A large \(R^2\) and a small RMSE coefficient are preferred.

5.3 Application of Predicted Against Random Penalty Factors on Quantum Annealing Hardware

Subsequently, we test whether choosing the best penalty factor ratio in our predefined specific range influences the approximation ratio on the D-Wave Advantage 4.1 system and whether using an NN for predicting the penalty factors is beneficial compared to choosing them arbitrarily from the valid penalty factor range.
For each given problem class, a set of 100 random problem instances is generated and a penalty ratio is inferred from the NN model. Next, the Ising model of those 100 problem instances is formulated (using the predicted penalty ratio) and run on the D-Wave machine. In Figure 5, the resulting approximation ratio of the model is shown as the red line, including the \(95\%\) confidence interval over the 100 problem instances. Then, the following experiment is conducted: for 50 iterations, a random penalty ratio is sampled from the predefined range and the Ising model is formulated and sent to the D-Wave machine. The running best approximation ratio is kept (green line) with a \(95\%\) confidence interval. Note that even though the approximation ratio was calculated as stated in Section 4.2.3, we used the average approximation ratio of the model as the baseline and therefore set it to 1.0, whereas the approximation ratio of the randomly sampled penalty factors was set relative to that of the model, to better visualize at what point the random sampling process reaches or surpasses the baseline.
Fig. 5.
Fig. 5. Iterative process of randomly sampling penalty factors (for 50 iterations) and keeping the rolling best (green) versus the model (red) prediction, including the \(95\%\) confidence interval over 100 problem instances. Note that the average approximation ratio of the model was used as baseline and set to 1.0, whereas the approximation ratio of the randomly sampled penalty factors was set relative to that of the model.
For MVCP, MCP, and SPP, the NN model performs comparatively very well. In the case of MVCP, it takes 43 iterations for the random process to reach the model on average. In the case of SPP, it takes about 50 iterations to nearly reach the model, whereas in the case of MCP, the random process never reaches the model in the 50 samples that we generated. Of course, these three well-performing problems are also the ones for which inference is easy (it consists of predicting the highest penalty factor ratio). Regarding the other problem classes (MECP, KP, BILP), it takes on average just 5 iterations for the random process to outperform the model. However, it should be noted that the confidence interval of the model performance in these cases completely overlaps with the approximation ratio of the random process. Possible reasons can be found in the clustering results. Even though the trends of the minimum spectral gap trajectories in the KP clustering are similar to those of MVCP, MCP, and SPP, the performance of the predicted optimal penalty factors is comparatively bad on real hardware (cf. Figures 3 and 5). We believe this is due to the small range of minimum spectral gaps of KP problems, which lie between 0.0 and 0.05 (cf. Figure 2(d)). We assume that those small differences in the minimum spectral gap size of KP Hamiltonians have no effect when executing them on D-Wave’s hardware. Regarding MECP, a similar problem can be observed. Although its minimum spectral gap range is larger by a factor of 10 than that of KP (cf. Figure 2(a)), the difference in gap size between the worst and best penalty factor ratio of a problem instance is very small (low slope), which leads to the same assumption as for KP. Since no slope is present in the BILP clustering, no optimization can be achieved here, neither in theory nor in practice on real hardware.
Since in this experiment the running best approximation ratio is kept for the random process, we now compare the average performance of the NN model prediction against the average of all 50 random samples from the iterative process over the 100 problem instances per problem class, as presented in the previous paragraph. As a metric, the approximation ratio is used again. Table 2 shows the corresponding results. The model on average outperforms the random process by at least \(12.9\% \pm 7.7\%\) and up to \(167.1\% \pm 29.8\%\). Thus, from this perspective, the model should always be used for the six problems we analyzed here.
Table 2.
Problem Class | Avg. Superiority of the Model vs. the Random Sampling Process
MCP | \(167.1\% \pm 29.8\%\)
MVCP | \(97.4\% \pm 26.2\%\)
KP | \(82.8\% \pm 55.7\%\)
SPP | \(87.3\% \pm 32.9\%\)
BILP | \(12.9\% \pm 7.7\%\)
MECP | \(18.4\% \pm 11.2\%\)
Table 2. Average Superiority of the Model Against the Average of All 50 Random Samples from the Iterative Sampling Process over the 100 Problem Instances Per Problem Class in Terms of Approximation Ratio

5.4 Correlation of Trajectory Clusters to the Number of Global Optima and to the Sparseness of the Problem Instances

Furthermore, an interesting observation in terms of the number of optimal solutions could be made. We took the DBSCAN clustering found previously, merged the clusters with no slope around a gap size of 0.0 (cf. Figure 3), and correlated this grouping with whether a problem instance has exactly one optimal solution or multiple; the resulting PCC shows, in general, very high correlations. As seen in Table 3, MCP, MVCP, SPP, and BILP all have extremely strong (either positive or negative) correlations, with MECP still having a rather strong negative correlation. Only KP is an exception, with a comparatively lower correlation with the clustering. It is quite interesting that having exactly one optimal solution leads to minimum spectral gap trajectories that can be optimized well, whereas having two or more optimal solutions results in a flat minimum spectral gap trajectory with a gap size of around 0.0. This is the case for all problems with high correlations. It should be noted that correlating the clustering with the exact number of optimal solutions (e.g., with exactly three optimal solutions) reduces the overall PCC for all problems. This in turn shows that the exact number of optimal solutions is comparatively insignificant for the correlation.
Table 3.
Problem Class | PCC | Merged Clusters
MCP | 1.0 | {1, –1}, {0}
MVCP | 0.990 | {1}, {0, 1, 2}
KP | 0.419 | {0}, {–1}
SPP | 0.972 | {0}, {1, –1}
BILP | –0.958 | {0, 1, 2, 4, 5, 6}, {–1, 3, 8, 7}
MECP | –0.799 | {0, 1, 2}, {–1}
Table 3. Correlation Results of the Merged DBSCAN Trajectory Clusters, Having No Slope and Being Located Around the Gap Size of 0.0, of Figure 3 to the Problem Instances Having Either Exactly One or Multiple Optimal Solutions
The PCC is shown. The numbers in the braces of the last table column represent the corresponding annotated cluster labels of Figure 3.
In addition, we investigated the sparseness of the problem instances to check whether the trajectory clusters with their different trends also correlate with the sparseness of the problems. The sparseness of a problem instance (Ising matrix) was calculated as \(Sparseness=\frac{\#Elements-\#NonZeroElements}{\#Elements}\). Even though sparse problems are easier to embed into the sparse connectivity of the D-Wave hardware, and should therefore be easier to solve since smaller (embedded) problems are produced, the minimum spectral gap trajectories and their trends did not correlate with the sparseness of the problems. The PCC was around 0 for each problem class. In general, each cluster (with upward, downward, and also flat trends) of the problem classes contained problem instances of different sparseness.
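For concreteness, a minimal sketch of this sparseness measure applied to an Ising/QUBO matrix (the function name is our own):

```python
import numpy as np

def sparseness(ising_matrix):
    """Sparseness = (#elements - #non-zero elements) / #elements."""
    M = np.asarray(ising_matrix)
    return 1.0 - np.count_nonzero(M) / M.size
```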

5.5 Correlation of Scaled Problem Instances to the Minimum Spectral Gap

Lastly, we investigate the minimum spectral gap trajectories when scaling the problem instances up and down in terms of the number of variables. Since for all previous experiments the problem instances were restricted to 8 variables (due to the time-consuming computation of the eigenspectra), we now look into instances with 6, 8, and 10 variables for the comparatively well-performing problem classes MVCP, MCP, and SPP only. Note that for problem instances of size 6 and 8, we generated 1,000 instances per problem class, and for size 10, we generated only 500 instances per problem class. Figure 6 shows the composed DBSCAN clustering of problem instances of different sizes. Clusters of the same problem size are identically colored. The achieved PCC (with optimized DBSCAN hyperparameters) was \(-0.900\), \(-0.992\), and \(-0.901\) for MVCP, MCP, and SPP, respectively. One can clearly see that the same trajectory trends occur regardless of the problem size. We therefore assume that the observed trends are size independent and will also hold for other problem sizes.
Fig. 6.
Fig. 6. DBSCAN clustering of the minimum spectral gap and its location over a set of penalty factor ratios. The problem instances of each problem class differ in size of 6 (blue), 8 (red), and 10 (yellow) variables. To visualize the 2,500 problem instances, in each subplot the mean and the \(95\%\) confidence interval, as error bars, of each cluster are plotted.

6 Conclusion

In this work, we analyzed the effect of the penalty factors on the minimum spectral gap of different constrained Hamiltonians. We showed that specific penalty factor ratios can enlarge the minimum spectral gap, which in turn is reflected in an improved approximation ratio on real quantum hardware. We were able to train regression models to predict the most suitable penalty factors in our setting, which on average performed at least \(12.9\% \pm 7.7\%\) and up to \(167.1\% \pm 29.8\%\) better than the random sampling process. This leads us to the conclusion that such learned models should always be used for the investigated problem classes to reach a better solution quality for the corresponding Hamiltonians.
An additional finding was the high Pearson correlation of the number of optimal solutions with the clusters found with DBSCAN. The results showed that problem instances with exactly one optimal solution can be optimized well using a fitting penalty factor ratio, whereas those with multiple equivalent optimal solutions cannot. Besides that, the instances that can be optimized well tend to have their minimum spectral gap early in the anneal path, compared to the ones that cannot be optimized.
Since in this work we restricted ourselves to constrained Hamiltonians with only one constraint, optimization problems with multiple constraints (as they more commonly appear in real-world applications) are of interest for future work. Moreover, w.r.t. the data generation step, efficient methods for preselecting promising penalty factor ratios need to be investigated, since the number of possible penalty factor ratio combinations increases (in the worst case) exponentially with the number of factors. Another interesting aspect for future work would be to investigate the qubit precision capabilities of D-Wave Systems’ quantum annealing hardware from a practical point of view. Since the experiments showed that for some problem classes (MVCP, MCP, SPP) the best penalty factor ratio was the largest one (i.e., where the factor A was set to the minimal possible value in our setting), it would be interesting to see the rate of change in solution quality before quantization errors of the digital-to-analog converters occur [10] and no further improvement is possible.

Footnote

1. Note that in omitted experiments, we successfully double-checked with randomly and uniformly sampled factors to ensure that the same results/trajectory trends are obtained.

A Comparison of Minimum Spectral Gaps of Logical and Embedded Instances

Since, due to computational limitations (embedded problems might require more variables/qubits), we computed the minimum spectral gaps of the non-embedded Ising problems, we analyzed whether the eigenspectra of logical and embedded instances differ from each other on a small subset of problem instances with six logical variables for SPP, MCP, and MVCP (the problem classes that worked well in our evaluation).
As already mentioned in Section 4.1, the D-Wave scaling function, which scales the logical Ising coefficients to the precision range of the hardware qubits, and the D-Wave anneal functions of the corresponding hardware were used. Thus, the only difference between the logical Ising problems and the embedded ones is the physical qubit chains, which occur when a logical problem is not directly embeddable in hardware.
To analyze the difference between the logical and embedded instances, we read out the actual embedding of the D-Wave hardware graph. The six logical variables of the Ising problem instances led to eight physical qubits on the hardware. Afterward, the minimum spectral gaps (for different penalty factors) of the now eight-variable Ising problems were computed with the same method as for the logical Ising problems. In Figure 7, the corresponding minimum spectral gap trajectories are plotted for 25 instances of each of the three problem classes. The results show that there is a difference in the overall size of the minimum spectral gap; however, the trends of the trajectories seem to stay the same as for the original (non-embedded) Ising problem instances of Figure 2. We assume that the qubit chains (at least for such small problem sizes) should not alter the distribution of eigenvalues of the corresponding eigenspectrum.
Fig. 7.
Fig. 7. Visualization of the relation of the minimum spectral gaps and its location in the anneal path to the penalty factor ratio of 25 random instances per problem class. For each simulated embedded problem instance (represented by one color), 50 evenly spaced penalty factor pairs (represented by the dots) were sampled. Note that within the plots, some trajectories might be overlapping.

References

[1]
Tameem Albash and Daniel A. Lidar. 2018. Adiabatic quantum computation. Reviews of Modern Physics 90, 1 (Jan. 2018), 015002.
[2]
M. Born and V. Fock. 1928. Beweis des Adiabatensatzes. Zeitschrift für Physik 51, 3–4 (March 1928), 165–180.
[3]
Endre Boros, Peter L. Hammer, and Gabriel Tavares. 2007. Local search heuristics for quadratic unconstrained binary optimization (QUBO). Journal of Heuristics 13, 2 (April 2007), 99–132.
[4]
Michael Brusco, Clintin P. Davis-Stober, and Douglas Steinley. 2021. Ising formulations of some graph-theoretic problems in psychological research: Models and methods. Journal of Mathematical Psychology 102 (2021), 102536.
[5]
Guillaume Chapuis, Hristo Djidjev, Georg Hahn, and Guillaume Rizk. 2018. Finding maximum cliques on the D-Wave quantum annealer. Journal of Signal Processing Systems 91, 3–4 (May 2018), 363–377.
[6]
Vicky Choi. 2020. The effects of the problem Hamiltonian parameters on the minimum spectral gap in adiabatic quantum optimization. Quantum Information Processing 19, 3 (Jan. 2020), 1–25.
[7]
Mark W. Coffey. 2017. Adiabatic quantum computing solution of the knapsack problem. arXiv preprint arXiv:1701.05584 (2017).
[8]
William Cruz-Santos, Salvador E. Venegas-Andraca, and Marco Lanzagorta. 2019. A QUBO formulation of minimum multicut problem instances in trees for D-Wave quantum annealers. Scientific Reports 9, 1 (Nov. 2019), 17216.
[9]
D-Wave Systems Inc. 2020. Programming the D-Wave QPU: Setting the Chain Strength. Technical Report. Burnaby, BC, Canada. https://www.dwavesys.com/media/vsufwv1d/14-1041a-a_setting_the_chain_strength.pdf.
[10]
D-Wave Systems Inc. 2020. QPU-Specific Characteristics. Retrieved July 10, 2022 from https://docs.dwavesys.com/docs/latest/doc_physical_properties.html.
[11]
D-Wave Systems Inc. 2020. Annealing in Low-Energy States. Retrieved July 10, 2022 from https://docs.dwavesys.com/docs/latest/c_gs_2.html#annealing-in-low-energy-states.
[12]
D-Wave Systems Inc. 2021. D-Wave Solver Properties and Parameters Reference. Technical Report. Burnaby, BC, Canada. https://docs.dwavesys.com/docs/latest/c_solver_parameters.html#auto-scale.
[13]
Michael J. Dinneen, Anuradha Mahasinghe, and Kai Liu. 2019. Finding the chromatic sums of graphs using a D-Wave quantum computer. Journal of Supercomputing 75, 8 (Feb. 2019), 4811–4828.
[14]
Martin Ester, Hans-Peter Kriegel, Jörg Sander, and Xiaowei Xu. 1996. A density-based algorithm for discovering clusters in large spatial databases with noise. In Proceedings of the 2nd International Conference on Knowledge Discovery and Data Mining (KDD’96). 226–231.
[15]
Sebastian Feld, Christoph Roch, Thomas Gabor, Christian Seidel, Florian Neukart, Isabella Galter, Wolfgang Mauerer, and Claudia Linnhoff-Popien. 2019. A hybrid solution method for the capacitated vehicle routing problem using a quantum annealer. Frontiers in ICT 6 (2019), 1–13.
[16]
Fred Glover, Gary Kochenberger, and Yu Du. 2019. Quantum bridge analytics I: A tutorial on formulating and using QUBO models. 4OR 17, 4 (2019), 335–371.
[17]
Erica K. Grant and Travis S. Humble. 2020. Adiabatic Quantum Computing and Quantum Annealing. Springer.
[18]
Pratibha Raghupati Hegde, Gianluca Passarelli, Annarita Scocco, and Procolo Lucignano. 2022. Genetic optimization of quantum annealing. Physical Review A 105, 1 (Jan. 2022), 012612.
[19]
Tadashi Kadowaki and Hidetoshi Nishimori. 1998. Quantum annealing in the transverse Ising model. Physical Review E 58, 5 (Nov. 1998), 5355–5363.
[20]
Kamran Khan, Saif Ur Rehman, Kamran Aziz, Simon Fong, and S. Sarasvady. 2014. DBSCAN: Past, present and future. In Proceedings of the 5th International Conference on the Applications of Digital Information and Web Technologies (ICADIWT’14). 232–238.
[21]
C. R. Laumann, R. Moessner, A. Scardicchio, and S. L. Sondhi. 2015. Quantum annealing: The fastest route to quantum computation? European Physical Journal Special Topics 224, 1 (Feb. 2015), 75–88.
[22]
Andrew Lucas. 2014. Ising formulations of many NP problems. Frontiers in Physics 2 (2014), 5.
[23]
Charles Moussa, Jan N. van Rijn, Thomas Bäck, and Vedran Dunjko. 2022. Hyperparameter importance of quantum neural networks across small datasets. arXiv preprint arXiv:2206.09992 (2022).
[24]
Florian Neukart, Gabriele Compostella, Christian Seidel, David von Dollen, Sheir Yarkoni, and Bob Parney. 2017. Traffic flow optimization using a quantum annealer. Frontiers in ICT 4 (2017), 1–6.
[25]
Rodolfo A. Quintero and Luis F. Zuluaga. 2021. Characterizing and Benchmarking QUBO Reformulations of the Knapsack Problem. Technical Report. Department of Industrial and Systems Engineering, Lehigh University, Bethlehem, PA.
[26]
Christoph Roch, Alexander Impertro, and Claudia Linnhoff-Popien. 2021. Cross entropy optimization of constrained problem Hamiltonians for quantum annealing. In Computational Science—ICCS 2021. Lecture Notes in Computer Science, Vol. 12747. Springer, 60–73.
[27]
Christoph Roch, Alexander Impertro, Thomy Phan, Thomas Gabor, Sebastian Feld, and Claudia Linnhoff-Popien. 2020. Cross entropy hyperparameter optimization for constrained problem Hamiltonians applied to QAOA. In Proceedings of the 2020 International Conference on Rebooting Computing (ICRC’20). 50–57.
[28]
Juexiao Su, Tianheng Tu, and Lei He. 2016. A quantum annealing approach for Boolean satisfiability problem. In Proceedings of the 2016 53rd ACM/EDAC/IEEE Design Automation Conference (DAC’16). 1–6.
[29]
Juexiao Su, Tianheng Tu, and Lei He. 2016. A quantum annealing approach for Boolean satisfiability problem. In Proceedings of the 2016 53rd ACM/EDAC/IEEE Design Automation Conference (DAC’16). 1–6.
[30]
Hayato Ushijima-Mwesigwa, Christian F. A. Negre, and Susan M. Mniszewski. 2017. Graph partitioning using quantum annealing on the D-Wave system. In Proceedings of the 2nd International Workshop on Post Moores Era Supercomputing. ACM, New York, NY.
[31]
Sheir Yarkoni, Aske Plaat, and Thomas Back. 2018. First results solving arbitrarily structured maximum independent set problems using quantum annealing. In Proceedings of the 2018 IEEE Congress on Evolutionary Computation (CEC’18). 1–6.
[32]
Sheir Yarkoni, Hao Wang, Aske Plaat, and Thomas Bäck. 2019. Boosting quantum annealing performance using evolution strategies for annealing offsets tuning. In Quantum Technology and Optimization Problems. Springer International Publishing, Cham, Switzerland, 157–168.


