Article

Optimisation Challenge for a Superconducting Adiabatic Neural Network That Implements XOR and OR Boolean Functions

by
Dmitrii S. Pashin
1,
Marina V. Bastrakova
1,2,
Dmitrii A. Rybin
1,
Igor I. Soloviev
2,3,4,*,
Nikolay V. Klenov
4,5 and
Andrey E. Schegolev
3,6
1
Faculty of Physics, Lobachevsky State University of Nizhni Novgorod, 603022 Nizhny Novgorod, Russia
2
Russian Quantum Centre, 143025 Moscow, Russia
3
Skobeltsyn Institute of Nuclear Physics, Lomonosov Moscow State University, 119991 Moscow, Russia
4
National University of Science and Technology MISIS, 119049 Moscow, Russia
5
Faculty of Physics, Lomonosov Moscow State University, 119991 Moscow, Russia
6
Science Department, Moscow Technical University of Communication and Informatics (MTUCI), 111024 Moscow, Russia
*
Author to whom correspondence should be addressed.
Nanomaterials 2024, 14(10), 854; https://doi.org/10.3390/nano14100854
Submission received: 5 March 2024 / Revised: 1 April 2024 / Accepted: 11 May 2024 / Published: 14 May 2024
(This article belongs to the Special Issue Neuromorphic Devices: Materials, Structures and Bionic Applications)

Abstract

In this article, we consider designs of simple analog artificial neural networks based on adiabatic Josephson cells with a sigmoid activation function. A new approach based on the gradient descent method is developed to adjust the circuit parameters, allowing efficient signal transmission between the network layers. The proposed solution is demonstrated using the example of a system that implements the XOR and OR logical operations.

1. Introduction

A distinctive feature of the current era of information technology is the widespread development and implementation of artificial intelligence (AI) [1,2,3,4,5,6]. Effectively solving a number of tasks requires specialised hardware implementations of AI systems [7,8]. The most popular and exciting of these at the moment are the so-called neuromorphic chips, or neuromorphic processors; in this field, giants such as Intel (Loihi 1 and Loihi 2) and IBM (TrueNorth, NorthPole) have made their mark. In addition to neuromorphic processors, there are machine learning processors (Intel Movidius Myriad 2, Mobileye EyeQ) designed to accelerate data processing (video, machine vision, etc.) and tensor processors (Google TPU, Huawei Ascend, Intel Nervana NNP) designed to accelerate arithmetic operations. While the latter two types have been successfully deployed in modern hardware platforms (smartphones, cloud computing, etc.), neuromorphic processors, despite their potential, are not yet widespread and remain mostly at the laboratory production and testing stage [9,10,11,12,13,14,15,16,17,18].
There are a number of post-Moore technology platforms that enable the realisation of AI technologies at the hardware level, promising advances in performance and/or energy efficiency. Optical neuromorphic networks are an excellent example [19,20] of energy-efficient systems with high performance. Photonic-superconducting interfaces [21,22,23,24] and other hybrid optical-superconducting neural networks [25,26,27,28] marked a major milestone in the development of this field of applied science. These systems use light pulses to transmit signals and superconducting circuits based on quantum interferometers to process and store information. Superconducting elements are known for their high energy efficiency [29,30,31,32,33,34,35,36,37]; in the context of modern data centres that require massive cooling, superconductor-based hybrid computers may become quite competitive players. It is also worth noting that quantum computers [38,39,40] are now being developed on the basis of superconductor technology. Therefore, the creation of superconducting neuromorphic chips capable of hybridisation with quantum computers (QCs) seems very reasonable; examples might be qubit-spectrum detection of a QC's output signal or a QC's calculation of synaptic weights for an externally tunable artificial neural network. This study focuses on the optimisation of superconducting basic elements and their interconnections, specified for superconducting logic gates in neuromorphic systems (Figure 1).
It is also necessary to mention here the imitation of the neural activity of living tissues by superconducting electronics based on Josephson junctions [28,29,30,31,32,34,37,41,42,43,44,45,46]. These works demonstrate the operation of bio-inspired neurons (capable of reproducing basic biological patterns of nervous activity, such as excitability, spiking, and bursting) and synapses, as well as of simple neural networks. The use of Josephson circuits for modelling and simulating the operation of neurons and tissues, as well as for more applied tasks (e.g., recognition), promises a new level of performance (computational and modelling speed, energy efficiency) in spiking neural networks.
Previously, we presented the concept of an adiabatic interferometer-based superconducting neuron [47,48,49], capable of operating in classical and quantum modes with ultra-low energy dissipation per operation (in the zJ range) [50,51,52,53,54,55,56,57]. The development of an adiabatic perceptron requires the realisation of a large number of connections between neurons via superconducting synapses [47]. Good synapses for perceptron-type networks should have the following important properties: a wide range of weights (both negative and positive, as well as zero), low noise, signal-type preservation (high linearity), and circuit simplicity (as few components as possible). Based on these requirements, we used the synapse scheme first presented in [58].
Combining these elements into an analog network implies the generally difficult task of studying the complex nonlinear dynamics of the system. We propose a solution to this problem and demonstrate the results on the example of a three-neuron network simulating XOR and OR logic gates.

2. The Model for Two Coupled Adiabatic Neurons

Before simulating the superconducting logic element, we consider a system of two coupled $S_c$-neurons with a sigmoid activation function. These basic elements are superconducting interferometers connected by an inductive synapse (see Figure 2). The formation of the activation functions (flux-to-flux transformations) of individual $S_c$-neurons has been studied in detail previously, in both the classical [47,48,49] and quantum [55,56] modes. Here we consider the interaction between the different parts of the system. We choose an inductive synapse instead of a Josephson one [59] because of the absolute linearity of its transfer characteristic and its wide dynamic range [58,60].
The $S_c$-neurons (areas outlined by the cyan and navy blue dashed lines in Figure 2) are designed according to the integrating-and-processing principle: the integrating part (IP) collects, or integrates, all input signals, while the processing part (PP) processes this input and generates an output signal. Generally, the processing part of an $S_c$-neuron consists of three branches: two of them (the branch with the inductance $l_{out1,2}$ and the branch with the Josephson junction and the inductance $l_{1,2}$) form the circuit of the so-called quantron; the third branch, with a single inductance $l_{a1,2}$, shunts the quantron circuit. In Figure 2, the IP of the output neuron is highlighted by a light-yellow box and is formed by a so-called coupler, an inductive ring ($l_{t1,2,3,4}$) that collects the output flux from the input neuron(s) (the IP of the input neuron is not shown in Figure 2). The signal, in the form of magnetic flux, flows from the neuron's IP to its PP through the inductances $l_{1,2}$ and $l_{a1,2}$. The inductance $l_{out1,2}$ is used to transmit the magnetic flux from the $S_c$-neuron to the subsequent element (in our case, from the input neuron to the inductive synapse).
The inductive synapse (green box in Figure 2) likewise has three branches: the input branch (containing the inductance $l_{in}$) is responsible for signal reception, while the branches containing the tunable kinetic inductances $l_{s1}$ and $l_{s2}$ [58,61,62,63] provide its further transmission. The synapse is adjusted by an external magnetic or spin-current influence (not shown in Figure 2). By changing the values of the inductances $l_{s1}$ and $l_{s2}$, one can vary the weight of the synapse.
In the following, all inductances are normalised to the characteristic Josephson inductance of the output neuron's Josephson junction, $\Phi_0/2\pi I_{C2}$, where $I_{C2}$ is the critical current of this junction. Magnetic fluxes are normalised to the magnetic flux quantum: $\varphi = 2\pi\Phi/\Phi_0$, with $\Phi_0 = h/2e$.
The input signal $\varphi_{in}$ is set in the form of a smoothed trapezoid, which makes it possible to take into account both the rising (rise-time) and falling (fall-time) phases of the signal; the duration of the plateau can also be controlled:
$$\varphi_{in}(t) = A_{in}\left(\frac{1}{1+\exp\left(-2D(t-t_1)\right)} + \frac{1}{1+\exp\left(2D(t-t_2)\right)}\right) - A_{in}. \tag{1}$$
The parameters $A_{in}$ and $D$ set the level and the rise/fall rate of the input magnetic flux, respectively. As shown in [48], an input signal of the form (1) allows one to obtain the sigmoid transfer function of the $S_c$-neuron for suitable values of the inductances.
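As a quick illustration of the drive signal, the following minimal Python sketch implements Equation (1); the parameter values a_in, d, t1, and t2 are illustrative placeholders, not values taken from the paper. Larger D sharpens the rise and fall edges towards an ideal trapezoid.

```python
import numpy as np

def phi_in(t, a_in=1.0, d=0.5, t1=20.0, t2=80.0):
    """Smoothed trapezoidal input flux of Equation (1): the signal
    rises around t1, holds a plateau near a_in, and falls around t2."""
    rise = 1.0 / (1.0 + np.exp(-2.0 * d * (t - t1)))
    fall = 1.0 / (1.0 + np.exp(2.0 * d * (t - t2)))
    return a_in * (rise + fall) - a_in
```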
The circuit shown in Figure 2 is described by the following system of equations:
$$\begin{aligned}
& i_{a1} + i_1 + i_{out1} = 0,\\
& \varphi_1 - \tfrac{\varphi_{in}}{2} + i_1 l_1 = i_{out1} l_{out1} + m_1 i_{in},\\
& \varphi_1 - \tfrac{\varphi_{in}}{2} + i_1 l_1 = i_{a1} l_{a1} + \tfrac{\varphi_{in}}{2},\\
& i_{in} + i_{s1} + i_{s2} = 0,\\
& i_{s1}(l_{s1} + l_p) + m_2 i_{circ} = i_{s2}(l_{s2} + l_p) - m_2^{*} i_{circ},\\
& i_{s1}(l_{s1} + l_p) + m_2 i_{circ} = i_{in}(l_{in} + l_p) + m_1 i_{out1},\\
& i_{circ}\sum_{j=1}^{4} l_{tj} = i_2 m_3 - i_{a2} m_3^{*} - i_{s1} m_2 + i_{s2} m_2^{*},\\
& i_2 + i_{a2} + i_{out2} = 0,\\
& \varphi_2 - m_3 i_{circ} + i_2 l_2 = i_{out2} l_{out2},\\
& \varphi_2 - m_3 i_{circ} + i_2 l_2 = i_{a2} l_{a2} + m_3^{*} i_{circ}.
\end{aligned} \tag{2}$$
Here, $\varphi_{1,2}$ are the superconducting phase drops at the Josephson junctions of the input and output neurons, while $l_p$ is an additional non-adjustable (parasitic) inductance in this circuit, which is not explicitly shown in Figure 2 but is taken into account in our calculations. The currents $i_{a1}$, $i_1$, and $i_{out1}$ flow through the corresponding inductances $l_{a1}$, $l_1$, and $l_{out1}$ of the input neuron. The currents $i_{in}$, $i_{s1}$, and $i_{s2}$ are the currents in the synapse, flowing through $l_{in}$, $l_{s1}$, and $l_{s2}$. The currents $i_{s1}$ and $i_{s2}$ induce the circulating current $i_{circ}$ in the integrating part of the output neuron. The circulating current in turn induces the currents $i_2$, $i_{a2}$, and $i_{out2}$ in the processing part of the output neuron, which flow through the inductances $l_2$, $l_{a2}$, and $l_{out2}$. All currents in (2) are normalised by $I_{C2}$. The parameters $m_k$ and $m_k^{*}$ are the mutual inductance coefficients in the transformer elements ($k = 1, 2, 3$), which are taken equal to the average values of the inductances that constitute the corresponding transformers.
It can be shown that the currents in the proposed circuit (Figure 2) have a simple relationship with the phases of the Josephson junctions and the external flux:
$$i_\gamma = -\kappa_\gamma^{(1)}\varphi_1 - \kappa_\gamma^{(2)}\varphi_2 - \kappa_\gamma^{(in)}\varphi_{in}. \tag{3}$$
Here, all coefficients $\kappa_\gamma^{(1)}$, $\kappa_\gamma^{(2)}$, and $\kappa_\gamma^{(in)}$ are obtained from the system in Equation (2) and expressed in terms of the inductances according to Figure 2. The subscripts $\gamma = 1, 2, in, s, a, out$ of the coefficients indicate the currents ($i_1$, $i_2$, $i_{in}$, and $\Delta i_s = i_{s1} - i_{s2}$, respectively) to which they belong. The superscript in Formula (3) takes the values of the corresponding junction phases or of the input magnetic flux. The analytical expressions for these coefficients are bulky, so they are given in Appendix A.
Note that, due to the complex dependence of $\varphi_1$, $\varphi_2$ on $\varphi_{in}$, the currents $i_\gamma$ are not, in fact, linear in any of them. The non-linearity of the system comes from the Josephson junctions, whose currents $I_n$ (where the subscript $n = 1, 2$ is the index of the junction) can be written as
$$I_n = \frac{\hbar C_n}{2e}\ddot{\varphi}_n + \frac{\hbar}{2e R_n}\dot{\varphi}_n + I_{Cn}\sin\varphi_n \tag{4}$$
within the framework of the well-known resistively shunted junction model with capacitance (RSJC) [64].
Here, we consider an energy-efficient circuit consisting of tunnel superconductor–insulator–superconductor (SIS) Josephson junctions with a high normal-state resistance $R_n$, so that the second term in Equation (4) becomes negligibly small; as our modelling shows, it does not contribute significantly to the overall dynamics of the system and can therefore be safely omitted.
After normalisation of (4) by $I_{C2}$, the equations take the following form:
$$c_n\ddot{\varphi}_n + i_{Cn}\sin\varphi_n = i_n, \tag{5}$$
where $i_{Cn} = I_{Cn}/I_{C2}$ is the dimensionless critical current; $t_C = \sqrt{\hbar C_2/2e I_{C2}}$ is the characteristic time and $\tau = t/t_C$ is the dimensionless time; and $c_n = C_n/C_2$ is the dimensionless capacitance. Note that such systems of interacting neurons can also be considered within the framework of the Hamiltonian formalism. As an example, in Appendix B we present the derivation of the Hamiltonian of the system shown in Figure 2. This approach is quite simple and convenient when scaling the circuit to a larger number of layers in a neural network, as well as for numerical modelling of the nonlinear dynamics and for further study of the quantum mode of operation of the circuit [55,56], including the influence of the environment.
Solution of the system in Equation (5) gives the transfer characteristics of the input and output neurons as a response to the input magnetic flux (1). Previous studies of single $S_c$-neurons [48] have shown that the sigmoid activation function can be realised under the following conditions: $l_n < \sqrt{l_{out\,n}^2 + 1} - l_{out\,n} \equiv l_n^*$ and $l_{an} = l_n + 1$. Hence, as in the single-neuron case, we consider values of the inductances $l_n < l_n^*$ at which there are no plasma oscillations in the output characteristics of the first (input) and second (output) neurons.
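To make this procedure concrete, below is a hedged numerical sketch of how the dynamics (5) can be integrated: the two normalised junction equations are driven by currents $i_n$ taken from the linear relations (3), and phi_in() is the function from the sketch above. The $\kappa$ values in K are placeholder numbers chosen for illustration only; the actual coefficients are the inductance expressions of Appendix A.

```python
from scipy.integrate import solve_ivp

# Placeholder kappa coefficients (the real ones follow from Appendix A).
K = {"k11": 0.8, "k12": 0.05, "k1in": 0.4, "k22": 0.8, "k2in": 0.02}
IC1, IC2, C1, C2 = 1.0, 1.0, 1.0, 1.0   # normalised i_Cn and c_n

def rhs(t, y):
    """Equation (5) for n = 1, 2 with the currents i_n from relations (3)."""
    p1, v1, p2, v2 = y                  # junction phases and their derivatives
    phi = phi_in(t)                     # input flux, Equation (1)
    i1 = -(K["k11"] * p1 + K["k12"] * p2 + K["k1in"] * phi)
    i2 = -(K["k12"] * p1 + K["k22"] * p2 + K["k2in"] * phi)
    return [v1, (i1 - IC1 * np.sin(p1)) / C1,
            v2, (i2 - IC2 * np.sin(p2)) / C2]

sol = solve_ivp(rhs, (0.0, 120.0), [0.0, 0.0, 0.0, 0.0],
                max_step=0.05, rtol=1e-8)
# sol.y[0] and sol.y[2] trace phi_1(t) and phi_2(t), from which the
# transfer characteristics i_out1,2(phi_in) can be reconstructed via (3).
```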
In the first step of the analysis, we assume that the coupler inductances are pairwise equal: $l_{t1} = l_{t2}$ and $l_{t3} = l_{t4}$. Figure 3 illustrates the formation of the sigmoid activation functions of the input and output neurons under this assumption. It is seen that the current at the output neuron (blue curve in Figure 3) drops by two orders of magnitude; this drawback complicates a practical implementation of the system. It is also necessary to obtain synapse weights that span at least the range from −1 to +1, which turns out to be impossible in some situations (see Figure 4).
The above issues imply the need for parameter optimisation. In the next part of the paper, we propose such a procedure, which can be generalised to the case of large computing systems.

3. Formulation and Solution of the Optimisation Problem

We consider the optimisation problem for the system of two coupled neurons from the point of view of the two problems mentioned above: the synapse weights and the neuron response magnitudes. A closer look reveals that these problems are closely related: achieving higher values of the weights can potentially increase the response magnitude of the output. Therefore, our further steps are aimed at finding a functional that describes the synapse weight as a function of the system parameters and at finding its extrema using the gradient descent method. As such a functional, we consider the slope angle $\alpha$ of the synapse characteristic, which can be expressed analytically in the following form:
$$\tan\alpha = \frac{d\Delta i_s}{dt}\Big/\frac{di_{in}}{dt} = \frac{\kappa_s^{(1)}\dot{\varphi}_1 + \kappa_s^{(2)}\dot{\varphi}_2 + \kappa_s^{(in)}\dot{\varphi}_{in}}{\kappa_{in}^{(1)}\dot{\varphi}_1 + \kappa_{in}^{(2)}\dot{\varphi}_2 + \kappa_{in}^{(in)}\dot{\varphi}_{in}}. \tag{6}$$
When using the gradient descent method, a system of differential equations (Equation (5)) must be solved at each step; owing to the large number of varied system parameters, this constitutes the main computational burden. To overcome this difficulty, we propose several simplifications.
Since the dynamic processes in the system are associated with changes in the input flux and take place precisely during the rise/fall intervals, and since the dependence $\Delta i_s(i_{in})$ is linear, it is sufficient to determine the value of the angle in (6) at the inflection point $t_1$, where $\ddot{\varphi}_{in}(t_1) = 0$. Additionally, $\ddot{\varphi}_1(t_1) = \ddot{\varphi}_2(t_1) = 0$ owing to the sigmoid activation function. Using this approximation, we obtain the following system of equations for $\dot{\varphi}_1(t_1)$ and $\dot{\varphi}_2(t_1)$:
$$\begin{aligned}
\dot{\varphi}_1(t_1) &= \dot{\varphi}_{in}\,\eta^{-1}\left(\kappa\,\kappa_2^{(in)} - \kappa_1^{(in)}\kappa_2^{(2)} - \kappa_1^{(in)} i_{C2}\cos\varphi_2(t_1)\right),\\
\dot{\varphi}_2(t_1) &= \dot{\varphi}_{in}\,\eta^{-1}\left(\kappa\,\kappa_1^{(in)} - \kappa_2^{(in)}\kappa_1^{(1)} - \kappa_2^{(in)} i_{C1}\cos\varphi_1(t_1)\right),
\end{aligned} \tag{7}$$
where $\eta \equiv -\kappa^2 + \left(i_{C1}\cos\varphi_1(t_1) + \kappa_1^{(1)}\right)\left(i_{C2}\cos\varphi_2(t_1) + \kappa_2^{(2)}\right)$, recalling that $\kappa \equiv \kappa_1^{(2)} = \kappa_2^{(1)}$, and the values of $\varphi_1(t_1)$ and $\varphi_2(t_1)$ can be found from
$$\begin{aligned}
i_{C1}\sin\varphi_1(t_1) &= -\left(\kappa\,\varphi_2(t_1) + \kappa_1^{(1)}\varphi_1(t_1) + \kappa_1^{(in)}\varphi_{in}(t_1)\right),\\
i_{C2}\sin\varphi_2(t_1) &= -\left(\kappa\,\varphi_1(t_1) + \kappa_2^{(2)}\varphi_2(t_1) + \kappa_2^{(in)}\varphi_{in}(t_1)\right).
\end{aligned} \tag{8}$$
By substituting the obtained values of $\dot{\varphi}_1(t_1)$ and $\dot{\varphi}_2(t_1)$ into the expression in (6), we obtain an explicit form of $\alpha$ that depends on all system parameters. This allows us to apply the gradient descent method to maximise the angle $\alpha$ without directly calculating the dynamics (5). A similar approach allows us to quickly optimise the parameters that maximise the current at the output neuron by using (3).
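A minimal sketch of this shortcut is given below: the static system (8) is solved for the junction phases at the inflection point, $\tan\alpha$ is evaluated via (6) and (7), and a naive finite-difference gradient ascent maximises it. For compactness, the $\kappa$ coefficients themselves (placeholder values again) are treated as the optimisation variables; in the actual procedure they are functions of the circuit inductances, so the gradient would be taken in inductance space.

```python
from scipy.optimize import fsolve

def tan_alpha(k, phi_in_t1=0.5, dphi_in=1.0, ic1=1.0, ic2=1.0):
    """tan(alpha) from Equations (6)-(8) evaluated at the inflection point t1."""
    # Static phases from Equation (8).
    def eqs(p):
        p1, p2 = p
        return [ic1 * np.sin(p1) + k["k"] * p2 + k["k11"] * p1 + k["k1in"] * phi_in_t1,
                ic2 * np.sin(p2) + k["k"] * p1 + k["k22"] * p2 + k["k2in"] * phi_in_t1]
    p1, p2 = fsolve(eqs, [0.0, 0.0])
    # Phase velocities from Equation (7).
    eta = -k["k"] ** 2 + (ic1 * np.cos(p1) + k["k11"]) * (ic2 * np.cos(p2) + k["k22"])
    v1 = dphi_in / eta * (k["k"] * k["k2in"] - k["k1in"] * k["k22"]
                          - k["k1in"] * ic2 * np.cos(p2))
    v2 = dphi_in / eta * (k["k"] * k["k1in"] - k["k2in"] * k["k11"]
                          - k["k2in"] * ic1 * np.cos(p1))
    # Slope of the synapse characteristic, Equation (6).
    num = k["ks1"] * v1 + k["ks2"] * v2 + k["ksin"] * dphi_in
    den = k["kin1"] * v1 + k["kin2"] * v2 + k["kinin"] * dphi_in
    return num / den

k = {"k": 0.05, "k11": 0.8, "k22": 0.8, "k1in": 0.4, "k2in": 0.02,
     "ks1": 0.1, "ks2": 0.1, "ksin": 0.05, "kin1": 0.3, "kin2": 0.05, "kinin": 0.2}
lr, eps = 1e-2, 1e-6
for _ in range(200):                     # naive gradient ascent on tan(alpha)
    for name in ("ks1", "ks2", "k1in"):  # illustrative subset of parameters
        kp = dict(k); kp[name] += eps
        k[name] += lr * (tan_alpha(kp) - tan_alpha(k)) / eps
```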
A visualisation of this method for different initial parameters is shown in Figure 5. We selected several initial sets of the system inductances, for which $\alpha$ was calculated using (6) and maximised by the gradient descent method. The maximised angle $\max\alpha$ is non-monotonic with respect to the system parameters and has several local maxima. In Figure 5, we show a section of several trajectories along which $\max\alpha$ is maximised in the subspace of the inductances $l_{in}$ and $l_{t1} = l_{t2}$, where the arrows indicate the paths from the initial values to the optimal ones. It is seen that all curves converge at $l_{t1,2} \to 2$ (the value chosen as an upper boundary for the inductances $l_{t1-4}$) and $l_{in} \to 0.3$, where a certain local maximum of $\max\alpha$, and therefore of the achievable synapse weights in our system, is reached.
Figure 6a shows the dependence of $\tan\alpha$ on the inductance difference $\Delta l_s$ for the optimal system parameters found by the gradient descent method. The good agreement between the results of the exact calculation of Equation (5) (red circles) and those obtained using Equations (6)-(8) (blue line) indicates the validity of the approximations used. Dependencies of the synapse output current $\Delta i_s$ on the input current $|i_{in}|$ for different values of $\Delta l_s$ are shown in Figure 6b.
The proposed method allows one to avoid solving the Hamiltonian system, which is a time-consuming computational task: the optimisation problem is reduced to solving a set of algebraic equations, which significantly reduces the computation time. This approach is promising from the point of view of scaling neural networks and calculating their optimal configuration parameters.
The obtained results demonstrate that the gradient descent method can be used to optimise the parameters of a synapse connecting two neurons. Extending the method to more complex systems consisting of a larger number of neurons and synapses is also possible, but may require additional assumptions about the mutual influence of neurons on one another (a localisation approximation). Hence, the challenge of optimising the parameters of a large neural network is reduced to solving local problems of finding functionals similar to Equation (6) and then fine-tuning the found solutions by gradient descent in a multi-parameter space.

4. Circuit Structure Optimisation

The performed parameter optimisation does not eliminate the signal-level drop at the output neuron in the considered circuit design (Figure 2). To overcome this problem, we develop a modification of the circuit in which the magnetic connection between the input neuron and the synapse is replaced by a galvanic one (see Figure 7).
Within the framework of the proposed approach, gradient descent was applied to the modified scheme to solve the optimisation problems. The analysis of the system showed that the main parameters responsible for the current at the output neuron are the coupler inductances $l_{tj}$ ($j = 1, 2, 3, 4$). Figure 8a shows that the inductance $l_{t4}$, which connects the coupler to the Josephson arm of the output neuron, has to be minimised. The direction of the arrows shows the path of each trajectory (from the initial value to the optimal one) maximising the angle $\max\alpha$ during the gradient descent. Figure 8b shows the calculation for the optimisation of the remaining coupler inductances: all trajectories tend to the values $l_{t3} \to 2$ and $l_{t1,2} \to 0.7$. Using these values, we calculate the activation functions of the neurons shown in the inset of Figure 8b. By applying the optimisation approach, we are able to significantly increase the current at the output neuron, which is important for the practical implementation of such systems.
After re-optimisation of the parameters, we re-examine the synaptic weights. We analyse the dependence of the current ratio $i_{out2}/i_{out1}$ on $\Delta l_s$ at the moment when the input flux reaches its plateau, $t = (t_1 + t_2)/2$ (see Figure 9a). It can be seen that the sum of the inductances can be adjusted such that the output currents of the input and output neurons coincide (see Figure 9b). Note that the output current of the output neuron can even exceed that of the first neuron for small values of $l_{t4}$. Thus, depending on the technological limitations, it is possible to obtain the maximum response at the output layer of neurons.

5. Analog Implementation of the XOR and OR Logic Elements

The classical XOR element (logical inequality operator) has two inputs and one output: if the input signals do not match, the output is "1", and "0" otherwise. The basic neural network implementing XOR consists of three neurons (two input neurons and one output neuron). The inputs of the neural network are fed either with a signal in the form of a smoothed trapezoid ("1") or with no signal ("0"); see Figure 10a. The optimisation problem is reduced to finding system parameters for which the output-layer neuron activates according to the XOR truth table. Similar considerations hold for obtaining a neural network operating according to the OR gate principle.
The discussion of the neural XOR/OR superconducting circuit based on three adiabatic neurons (shown in Figure 10b) begins with writing down the corresponding system of equations:
$$\begin{aligned}
& (i_1)_{in1} + (i_{a1})_{in1} + (i_{out1})_{in1} = 0,\\
& (i_1)_{in2} + (i_{a1})_{in2} + (i_{out1})_{in2} = 0,\\
& (\varphi_1)_{in1} - \tfrac{\varphi_{in1}}{2} + (i_1 l_1)_{in1} = (i_{out1} l_{out1})_{in1} + (i_{s1} l_{s1})_{in1} + (m_2)_{in1}\, i_{circ},\\
& (\varphi_1)_{in2} - \tfrac{\varphi_{in2}}{2} + (i_1 l_1)_{in2} = (i_{out1} l_{out1})_{in2} + (i_{s1} l_{s1})_{in2} + (m_2)_{in2}\, i_{circ},\\
& (\varphi_1)_{in1} - \tfrac{\varphi_{in1}}{2} + (i_1 l_1)_{in1} = (i_{out1} l_{out1})_{in1} + (i_{s2} l_{s2})_{in1} - (m_2^{*})_{in1}\, i_{circ},\\
& (\varphi_1)_{in2} - \tfrac{\varphi_{in2}}{2} + (i_1 l_1)_{in2} = (i_{out1} l_{out1})_{in2} + (i_{s2} l_{s2})_{in2} - (m_2^{*})_{in2}\, i_{circ},\\
& (\varphi_1)_{in1} - \tfrac{\varphi_{in1}}{2} + (i_1 l_1)_{in1} = (i_{a1} l_a)_{in1} + \tfrac{\varphi_{in1}}{2},\\
& (\varphi_1)_{in2} - \tfrac{\varphi_{in2}}{2} + (i_1 l_1)_{in2} = (i_{a1} l_a)_{in2} + \tfrac{\varphi_{in2}}{2},\\
& (i_{out1})_{in1} - (i_{s1})_{in1} - (i_{s2})_{in1} = 0,\\
& (i_{out1})_{in2} - (i_{s1})_{in2} - (i_{s2})_{in2} = 0,\\
& \big((l_{t1})_{in1} + (l_{t2})_{in1} + (l_{t1})_{in2} + (l_{t2})_{in2} + (l_{t3})_{out1} + (l_{t4})_{out1}\big)\, i_{circ} =\\
&\qquad -(i_{s1} m_2)_{in1} - (i_{s1} m_2)_{in2} + (i_2 m_3)_{out1} + (m_2^{*} i_{s2})_{in1} + (m_2^{*} i_{s2})_{in2} - (m_3^{*} i_{a2})_{out1},\\
& (i_2)_{out1} + (i_{a2})_{out1} + (i_{out2})_{out1} = 0,\\
& (\varphi_2)_{out1} - (m_3)_{out1}\, i_{circ} + (i_2 l_2)_{out1} = (i_{out2} l_{out2})_{out1},\\
& (\varphi_2)_{out1} - (m_3)_{out1}\, i_{circ} + (i_2 l_2)_{out1} = (i_{a2} l_{a2})_{out1} + (m_3^{*})_{out1}\, i_{circ},
\end{aligned} \tag{9}$$
where we preserve the notation of Figure 7, with additional subscripts for the neurons in the input ($in1,2$) and output ($out1$) layers (a more detailed scheme with all designations can be found in Appendix C). The input signals, defined by expression (1), are denoted accordingly as $\varphi_{in1,2}$.
Solving the optimisation problem for the system in Equation (9) makes it possible to configure the neural network to operate either as an XOR or as an OR logic element, which is quite expected. The natural handle for such a configuration is the choice of the weight coefficients: they should be asymmetric for XOR and, on the contrary, symmetric for the OR implementation. By solving the system in Equation (9), which describes the circuit shown in Figure 10, the truth tables for the XOR/OR network implementations were obtained; they are presented in Figure 11. The case with no signal at either input is not shown: if there is no signal at both inputs of the circuit, there is no signal at the output either.
One point is worth mentioning regarding the proposed implementations of the neural networks. Here, the XOR output can be of either positive or negative polarity (see Figure 11), and the OR output for inputs "1" + "1" is twice as large as for inputs "1" + "0" or "0" + "1" (see Figure 11). This is in contrast to digital implementations, where the output can only be "0" or "1".
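This polarity and amplitude behaviour can be reproduced by a toy linear read-out; the weights w1 and w2 below are hypothetical stand-ins for the synaptic couplings and do not simulate the circuit of Figure 10:

```python
def readout(x1: float, x2: float, w1: float, w2: float) -> float:
    """Toy linear combination of two binary inputs."""
    return w1 * x1 + w2 * x2

for x1, x2 in [(0, 1), (1, 0), (1, 1)]:
    xor_like = readout(x1, x2, w1=1.0, w2=-1.0)  # asymmetric weights -> XOR-like
    or_like = readout(x1, x2, w1=1.0, w2=1.0)    # symmetric weights  -> OR-like
    print(f"inputs {x1},{x2}: XOR-like {xor_like:+.1f}, OR-like {or_like:.1f}")
```

The asymmetric case yields a nonzero output of either sign only when the inputs differ, while the symmetric case yields a doubled response for "1" + "1", matching the behaviour described above.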

6. Conclusions

In this paper, we demonstrate an optimisation algorithm for the parameters of adiabatic neural networks. The algorithm allowed us to find the optimal values for the operation of circuits with different combinations of synapses and neurons, including ones mimicking the logical XOR and OR elements. In addition, a generalisation of this algorithm to neural networks of higher dimensionality, consisting of superconducting $S_c$-neurons and synapses, was discussed.
It should be noted that, even in the development of such simple neural networks, we faced a significant signal-decay problem. For larger neural networks, the solution may involve adding magnetic flux amplifiers (boosters), well known in adiabatic superconducting logic [57]. The use of an analogue-digital (and, possibly, optical-superconducting) approach to the network implementation is another option.
Regarding the experimental feasibility of the presented schemes, a number of experimental works [49,65,66,67] use a similar technique for the fabrication of Josephson junctions and demonstrate critical currents in the range of 50 to 150 μA, corresponding to characteristic inductances at the level of 2.2–6.6 pH. This confirms the experimental feasibility of the design considerations presented here.

Author Contributions

Conceptualisation, A.E.S., D.S.P., N.V.K.; Data curation, A.E.S., N.V.K., M.V.B.; Formal analysis, A.E.S., N.V.K.; Methodology, D.S.P., M.V.B.; Software, D.S.P., D.A.R., M.V.B.; Supervision, A.E.S.; Validation, I.I.S.; Visualisation, A.E.S.; Writing—original draft, D.S.P., N.V.K., M.V.B.; Writing—review and editing, I.I.S., with contribution from the coauthors. All authors have read and agreed to the published version of the manuscript.

Funding

The development of the main concept was carried out with the financial support of the Strategic Academic Leadership Program “Priority-2030” (grant from NITU “MISIS” No. K2-2022-029). The development of the method of analysis for the evolution of the adiabatic logic cells was carried out with the support of the Grant of the Russian Science Foundation No. 22-72-10075. A.S. is grateful to grant 22-1-3-16-1 from the Foundation for the Advancement of Theoretical Physics and Mathematics “BASIS”. The work of M.B. and I.S. was supported by Rosatom in the framework of the Roadmap for Quantum computing (Contract No. 868-1.3-15/15-2021, dated 5 October 2021).

Institutional Review Board Statement

Not applicable.

Data Availability Statement

All relevant data are included in the article.

Conflicts of Interest

The authors declare no conflicts of interest. The funders had no role in the design of the study; in the collection, analyses, or interpretation of data; in the writing of the manuscript; or in the decision to publish the results.

Appendix A. Analytical Expressions for Coefficients of “Motion Equations”

The analytical representation of the coefficients for the “equations of motion” of the composite system (2) is given below:
$$\kappa_1^{(2)} = \kappa_2^{(1)} = \frac{1}{l_1 l_{a2} l_o^2 l_{ps2} m_1}\Big[ l_{a2} m_{2p}\big(l_{ao1} l_{ps2} m_2 + l_{ao1} l_{inp} m_{2p} - l_{r1} m_{2p}\big)\big(l_k l_o l_{out1} + l_{ao2}(l_{ps2} m_2 - l_{ps1} m_2^{*})(m_1^2 m_{2p} + l_{out1}(2 l_p m_2 + l_{s2} m_2 + l_{in} m_{2p} + l_p m_2^{*}))\big)\big(l_{a2} l_{at2} m_3 + (l_{ao2} - l_2 l_{at2}) m_3^{*}\big) + l_o l_{ps2}\big(m_1^2 m_{2p} - l_{out1}(l_{s2} m_2 + l_{in} m_{2p} + l_p(2 m_2 + m_2^{*}))\big)\big(l_{ao2} m_3^{*} + (l_{a2} m_3 - l_2 m_3^{*})(l_{out2} + l_{a2}(l_{a2} m_3 - l_2 m_3^{*}))\big)\Big],$$
$$\kappa_1^{(in)} = \frac{l_w - (l_1 - l_{a1})\, l_{a2}\Big(m_1^2 m_{2p}\big(l_{ps2} l_x + l_y + l_{ao2} m_{2p} m_2^{*}\big) + l_{out1}\big(l_{ps1} l_{s2} l_x m_2 + l_{ps2} l_y m_2 - l_{is} l_{s2} l_x m_{2p} + l_{inp} l_y m_{2p} + l_p l_x (l_{ps1} m_2 - l_{is} m_{2p}) + l_{ao2} m_{2p}(l_{ps1} m_2 + l_{is} m_{2p}) m_2^{*}\big)\Big)}{2 l_1 l_w},$$
$$\kappa_2^{(1)} = -\frac{l_{a1} l_q m_1 m_{2p}(l_{ps2} m_2 - l_{ps1} m_2^{*})(l_{a2} m_3 + l_{out2} m_{3p})}{\big(l_{r1}(l_q - l_{a2} l_y) m_{2p} + l_{ao1}(l_{ps1} l_q m_2 + l_{a2} l_{ps2} l_y m_2 - l_{is} l_q m_{2p} + l_{a2} l_{inp} l_y m_{2p})\big)\big(l_2 l_{at2} l_f - l_{a2} l_{out2} l_{ps2} l_t + l_{a2} l_{out2} m_{2p} m_2^{*} + l_{a2} l_{ps2} m_3^2 + l_{out2} l_{ps2} m_{3p}^2 + l_2 l_{ps2} m_3^{*2}\big)},$$
$$\kappa_2^{(in)} = -\frac{(l_1 - l_{a1})\, l_q m_1 m_{2p}(l_{ps2} m_2 - l_{ps1} m_2^{*})(l_{a2} m_3 + l_{out2} m_{3p})}{2\big(l_{r1}(l_q - l_{a2} l_y) m_{2p} + l_{ao1}(l_{ps1} l_q m_2 + l_{a2} l_{ps2} l_y m_2 - l_{is} l_q m_{2p} + l_{a2} l_{inp} l_y m_{2p})\big)\big(l_2 l_{at2} l_f - l_{a2} l_{out2} l_{ps2} l_t + l_{a2} l_{out2} m_{2p} m_2^{*} + l_{a2} l_{ps2} m_3^2 + l_{out2} l_{ps2} m_{3p}^2 + l_2 l_{ps2} m_3^{*2}\big)},$$
$$\kappa_{in}^{(1)} = \frac{l_{a1} m_1\big(l_2 l_{at2}(l_{ps} l_t + m_{2p}^2) + l_{a2} l_{out2}(l_{ps} l_t + m_{2p}^2) + l_{a2} l_{ps} m_3^2 + l_{out2} l_{ps} m_{3p}^2 + l_2 l_{ps} m_3^{*2}\big)}{l_g},$$
$$\kappa_{in}^{(2)} = -\frac{l_{ao1}(l_{ps2} m_2 - l_{ps1} m_2^{*})(l_{a2} m_3 + l_{out2} m_{3p})}{l_g},$$
$$\kappa_{in}^{(in)} = -\frac{(l_1 - l_{a1})\, m_1\big(l_{a2}(l_{out2} l_{ps} l_t - l_{out2} m_{2p}^2 - l_{ps} m_3^2) - l_{out2} l_{ps} m_{3p}^2 + l_2(l_{at2} l_{ps} l_t - l_{at2} m_{2p}^2 - l_{ps} m_3^{*2})\big)}{l_d},$$
$$\kappa_s^{(1)} = \frac{l_{a1} m_1\big(l_{a2} l_{out2}(l_{s1} l_t + l_{s2} l_t + m_2 m_{2p} - m_{2p} m_2^{*}) + l_2 l_{at2}(l_{s1} l_t + l_{s2} l_t + m_2^2 - m_2^{*2}) + l_{a2}(l_{s1} - l_{s2}) m_3^2 + l_{out2}(l_{s1} - l_{s2}) m_{3p}^2 + l_2(l_{s1} - l_{s2}) m_3^{*2}\big)}{l_g},$$
$$\kappa_s^{(2)} = \frac{\big(l_{ao1}(2 l_{in} + 3 l_p + l_{s2}) m_2 - 2 l_{r1} m_{2p} + l_{ao1}(2 l_{in} + 3 l_p + l_{s1}) m_2^{*}\big)(l_{a2} m_3 + l_{out2} m_{3p})}{l_g},$$
$$\kappa_s^{(in)} = -\frac{(l_1 - l_{a1})\, m_1\big(l_2 l_{at2}(l_{s1} l_t - l_{s2} l_t - m_2^2 + m_2^{*2}) + l_{a2} l_{out2}(l_{s1} l_t - l_{s2} l_t - m_2^2 + m_2^{*2}) + l_{a2}(l_{s1} + l_{s2}) m_3^2 + l_{out2}(l_{s1} + l_{s2}) m_{3p}^2 + l_2(l_{s1} + l_{s2}) m_3^{*2}\big)}{l_d}.$$
The following is a list of the notations used in the given coefficients:
$l_t = l_{t1} + l_{t2} + l_{t3} + l_{t4}$,
$l_{ao1} = l_1 l_{at1} + l_{a1} l_{out1}$, $l_{ao2} = l_2 l_{at2} + l_{a2} l_{out2}$,
$l_{r1} = (l_1 + l_{a1})\, m_1^2$, $l_{ps1} = l_p + l_{s1}$, $l_{ps2} = l_p + l_{s2}$,
$m_{2p} = m_2 + m_2^{*}$, $m_{3p} = m_3 + m_3^{*}$,
$l_x = l_2 l_{at2} l_t + l_{a2} l_{out2} l_t - l_{a2} m_3^2 - l_{out2} m_{3p}^2 - l_2 m_3^{*2}$,
$l_y = l_{ao2} m_2 m_{2p} + l_{a2} l_{ps1}(l_{out2} l_t + m_3^2) + l_{out2} l_{ps1} m_{3p}^2 + l_2 l_{ps1}(l_{at2} l_t + m_3^{*2})$,
$l_{inp} = l_{in} + l_p$, $l_{is} = l_{in} + 2 l_p + l_{s1}$,
$l_q = l_{a2}(l_{ps2} l_x - l_{ao2} m_{2p} m_2^{*})$,
$l_w = l_{r1}(l_q + l_{a2} l_y) m_{2p} - l_{ao1}(l_{ps1} l_q m_2 + l_{a2} l_{ps2} l_y m_2 - l_{is} l_q m_{2p} + l_{a2} l_{inp} l_y m_{2p})$,
$l_j = l_{out12}\, l_p$, $l_{ps} = 2 l_p + l_{s1} + l_{s2}$,
$l_k = 3 l_p^2 + l_{in} l_{ps} + 2 l_p l_{s12} + l_{s1} l_{s2}$,
$l_b = 2 l_j + l_{a2} l_{out1} l_{ps} + l_{out12} l_{s12}$,
$l_{s12} = l_{s1} + l_{s2}$, $l_{at1} = l_{a1} + l_{out1}$, $l_{at2} = l_{a2} + l_{out2}$,
$l_{out12} = l_{out1} l_{out2}$, $l_{ps12} = l_{ps1} + l_{ps2}$,
$l_u = l_{a2}\big(2 l_j(l_{s12} l_t + m_2^2 + m_2 m_2^{*} + m_2^{*2}) + l_{out12}(3 l_p^2 l_t - l_{in} l_{ps} l_t - l_{s1} l_{s2} l_t + l_{s2} m_2^2 + l_{in} m_{2p}^2 + l_{s1} m_2^{*2}) + l_k l_{out1} m_3^2 + m_1^2(l_{out2} l_{ps} l_t - l_{out2} m_{2p}^2 - l_{ps} m_3^2)\big) + l_{out2}(l_k l_{out1} - l_{ps} m_1^2) m_{3p}^2 + l_2\big(l_b l_{in} l_t - l_t(3 l_{out12} l_p^2 + 2 l_j l_{s12} + l_{out12} l_{s1} l_{s2} - l_{out2} l_{ps} m_1^2) + (2 l_j + l_{out12} l_{s2} - l_{out2} m_1^2) m_2^2 + l_{in} l_{out12} m_{2p}^2 + m_2^{*}(2 l_j m_{2p} + l_{out12} l_{s1} m_2^{*} - l_{out2} m_1^2(2 m_2 + m_2^{*})) + l_{a2}\big(m_1^2(l_{ps} l_t - m_{2p}^2) + l_{out1}(3 l_p^2 l_t - l_{s1} l_{s2} l_t + l_{s2} m_2^2 + l_{in} m_{2p}^2 + l_{s1} m_2^{*2} + 2 l_p(l_{s12} l_t + m_2^2 + m_2 m_2^{*} + m_2^{*2}))\big) + (l_k l_{out1} - l_{ps} m_1^2) m_3^{*2}\big)$,
$l_e = l_{out2} m_1^2(2 l_p l_t + l_{s12} l_t - m_{2p}^2) + 2 l_j(l_{s12} l_t + m_2^2 + m_2 m_2^{*} + m_2^{*2}) + l_{out12}(3 l_p^2 l_t + l_{s2} m_2^2 + l_{s1}(l_{s2} l_t + m_2^{*2})) + l_{a1} l_{out2}\big(3 l_p^2 l_t + l_{s2} m_2^2 + l_{s1}(l_{s2} l_t + m_2^{*2}) + 2 l_p(l_{s12} l_t + m_2^2 + m_2 m_2^{*} + m_2^{*2})\big) + \big(l_{at1}(3 l_p^2 + 2 l_p l_{s12} + l_{s1} l_{s2}) - l_{ps} m_1^2\big) m_3^2 + l_{in}\big(2 l_j l_t - l_{out12} l_{s12} l_t + l_{out12} m_{2p}^2 + l_{a1} l_{out2}(l_{ps} l_t + m_{2p}^2) + l_{at1} l_{ps} m_3^2\big)$,
$l_h = l_b l_{in} l_t - l_{a2} l_{out1}(3 l_p^2 + 2 l_p l_{s12} + l_{s1} l_{s2}) l_t + l_t(3 l_{out12} l_p^2 + 2 l_j l_{s12} + l_{out12} l_{s1} l_{s2} - l_{at2} l_{ps} m_1^2) + (2 l_j + l_{out12} l_{s2} + l_{a2} l_{out1}(2 l_p + l_{s2}) - l_{at2} m_1^2) m_2^2 + l_{in}(l_{a2} l_{out1} + l_{out12}) m_{2p}^2 - 2(l_j + l_{a2} l_{out1} l_p - l_{at2} m_1^2) m_2 m_2^{*} + (2 l_j + l_{out12} l_{s1} + l_{a2} l_{out1}(2 l_p + l_{s1}) - l_{at2} m_1^2) m_2^{*2} + l_{a1} l_{at2}\big(3 l_p^2 l_t + l_{in} l_{ps} l_t - l_{s1} l_{s2} l_t + l_{s2} m_2^2 - l_{in} m_{2p}^2 + l_{s1} m_2^{*2} + 2 l_p(l_{s12} l_t + m_2^2 + m_2 m_2^{*} + m_2^{*2})\big) + l_{a1} l_k m_3^{*2} + (l_k l_{out1} - l_{ps} m_1^2) m_3^{*2}$,
$l_g = l_{a1} l_u + l_1\Big(l_{a2} l_e + l_2 l_h + \big(2 l_{in} l_j + 3(l_{out12} + l_{a1} l_{out2}) l_p^2 + l_{a1} l_{in} l_{out2} l_{ps} + 2 l_j l_{s12} + l_{in} l_{out12} l_{s12} + 2 l_{a1} l_{out2} l_p l_{s12} + l_{out12} l_{s1} l_{s2} + l_{a1} l_{out2} l_{s1} l_{s2} - l_{out2} l_{ps} m_1^2\big) m_3^2 + 2 l_{out2}(l_{at1} l_k - l_{ps} m_1^2) m_3 m_3^{*} + l_{out2}(l_{at1} l_k - l_{ps} m_1^2) m_3^{*2}\Big)$,
$l_d = 2\Big(l_{a1} l_u + l_1\Big(l_2 l_h + \big(2 l_{in} l_j + 3(l_{out12} + l_{a1} l_{out2}) l_p^2 + 2 l_j l_{s12} + l_{in} l_{out12} l_{s12} + l_{out12} l_{s1} l_{s2} + l_{a1} l_{out2}(l_{in} l_{ps} + 2 l_p l_{s12} + l_{s1} l_{s2}) - l_{out2} l_{ps} m_1^2\big) m_3^2 - l_{a2}\big(l_{in}(2 l_j + l_{a1} l_{out2} l_{ps} + l_{out12} l_{s12}) l_t + 2 l_j(l_{s12} l_t - m_2^2) + l_{out2} m_1^2(l_{ps} l_t + m_2^2) - l_{in}(l_{out12} + l_{a1} l_{out2}) m_{2p}^2 + l_{out12}(3 l_p^2 l_t + l_{s1} l_{s2} l_t - l_{s2} m_2^2 - l_{s1} m_2^{*2}) + l_{a1} l_{out2}(3 l_p^2 l_t + 2 l_p l_{s12} l_t + l_{s1} l_{s2} l_t - l_{s2} m_2^2 - 2 l_p m_2 m_{2p} - (2 l_p + l_{s1}) m_2^{*2}) - m_2^{*}(2 l_j(m_2 - m_2^{*}) + l_{out2} m_1^2(2 m_2 + m_2^{*})) - l_{at1} l_k m_3^2 + l_{ps} m_1^2 m_3^2\big) + 2 l_{out2}(l_{at1} l_k - l_{ps} m_1^2) m_3 m_3^{*} + l_{out2}(l_{at1} l_k - l_{ps} m_1^2) m_3^{*2}\Big)\Big)$,
$l_f = l_{ps2} l_t - m_{2p} m_2^{*}$,
$l_o = l_2 l_{at2} l_f + l_{a2} l_{out2} l_{ps2} l_t - l_{a2} l_{out2} m_{2p} m_2^{*} - l_{a2} l_{ps2} m_3^2 - l_{out2} l_{ps2} m_{3p}^2 - l_2 l_{ps2} m_3^{*2}$.

Appendix B. Hamiltonian Formalism for Two Coupled Neurons

The study of the dynamic properties of the connected neurons can also be carried out within the framework of the Hamiltonian formalism. Here, our system can be represented as two coupled particles with momenta $p_n = c_n\dot{\varphi}_n$, whose motion obeys the classical Hamilton equations:
$$\dot{\varphi}_n = \frac{\partial H}{\partial p_n}, \qquad \dot{p}_n = -\frac{\partial H}{\partial \varphi_n}, \tag{A1}$$
where $H$ is the Hamiltonian (an integral of motion).
By integrating the system (A1), we can find the momentum parts of the Hamiltonian, which have the standard form $p_n^2/2c_n$. From Equation (5) it can also be seen that $\dot{p}_n = i_n - i_{Cn}\sin\varphi_n$; therefore, using the symmetry of the Hamiltonian, $\kappa \equiv \kappa_1^{(2)} = \kappa_2^{(1)}$, one obtains:
$$H = \sum_{n=1,2}\left(H_{\omega n} - i_{Cn}\cos\varphi_n\right) + \kappa\,\varphi_1\varphi_2 - \left(\frac{\kappa_1^{(in)\,2}}{2\kappa_1^{(1)}} + \frac{\kappa_2^{(in)\,2}}{2\kappa_2^{(2)}}\right)\varphi_{in}^2. \tag{A2}$$
Here the first term contains the Hamiltonians of the individual neurons,
$$H_{\omega n} = \frac{p_n^2}{2c_n} + \frac{\omega_n^2}{2}\left(\varphi_n + \frac{\kappa_n^{(in)}}{\kappa_n^{(n)}}\varphi_{in}\right)^2, \qquad \omega_n^2 = \kappa_n^{(n)} \quad (n = 1, 2). \tag{A3}$$
The second term in Equation (A2) defines the interaction between the neurons, and the last term is responsible for the dynamic control of the neurons by the external magnetic flux $\varphi_{in}$ according to (1).
The example of two interacting neurons treated within the Hamiltonian formalism clearly shows that the dynamic processes in the system resemble those of two interacting nonlinear oscillators whose coupling is linear in the phases at each of the Josephson junctions. With this approach, the numerical analysis of the system is reduced to solving coupled first-order differential equations (in contrast to (5), where a system of second-order differential equations must be solved), which greatly simplifies the further study of more complex systems of coupled neurons and synapses.
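For illustration, here is the first-order form of Equations (A1)-(A3) in the same hedged numerical setting as the sketches in Sections 2 and 3 (placeholder K, IC1,2, C1,2, and phi_in()); by construction, the momentum equation reduces to $\dot{p}_n = i_n - i_{Cn}\sin\varphi_n$:

```python
def hamilton_rhs(t, y):
    """Hamilton equations (A1) for the Hamiltonian (A2)-(A3):
    dphi_n/dt = p_n / c_n and dp_n/dt = -dH/dphi_n = i_n - i_Cn*sin(phi_n)."""
    q1, p1, q2, p2 = y
    phi = phi_in(t)
    dp1 = -(K["k11"] * q1 + K["k12"] * q2 + K["k1in"] * phi) - IC1 * np.sin(q1)
    dp2 = -(K["k22"] * q2 + K["k12"] * q1 + K["k2in"] * phi) - IC2 * np.sin(q2)
    return [p1 / C1, dp1, p2 / C2, dp2]

sol_h = solve_ivp(hamilton_rhs, (0.0, 120.0), [0.0, 0.0, 0.0, 0.0],
                  max_step=0.05, rtol=1e-8)
```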

Appendix C. Superconducting XOR/OR Network Scheme with Notations

Here is a more detailed scheme of the XOR/OR network with all the notations used in the system of Equation (9)—Figure A1.
Figure A1. Schematic representation of the 3-neuron XOR/OR network in its superconducting implementation.

References

  1. Zbontar, J.; LeCun, Y. Stereo matching by training a convolutional neural network to compare image patches. J. Mach. Learn. Res. 2016, 17, 2287–2318. [Google Scholar]
  2. Tolosana, R.; Vera-Rodriguez, R.; Fierrez, J.; Ortega-Garcia, J. Exploring recurrent neural networks for on-line handwritten signature biometrics. IEEE Access 2018, 6, 5128–5138. [Google Scholar] [CrossRef]
  3. Kaya, M.; Bilge, H.Ş. Deep metric learning: A survey. Symmetry 2019, 11, 1066. [Google Scholar] [CrossRef]
  4. Ruiz, V.; Linares, I.; Sanchez, A.; Velez, J.F. Off-line handwritten signature verification using compositional synthetic generation of signatures and Siamese Neural Networks. Neurocomputing 2020, 374, 30–41. [Google Scholar] [CrossRef]
  5. Wang, M.; Deng, W. Deep face recognition: A survey. Neurocomputing 2021, 429, 215–244. [Google Scholar] [CrossRef]
  6. Ilina, O.; Ziyadinov, V.; Klenov, N.; Tereshonok, M. A survey on symmetrical neural network architectures and applications. Symmetry 2022, 14, 1391. [Google Scholar] [CrossRef]
  7. Le Gallo, M.; Khaddam-Aljameh, R.; Stanisavljevic, M.; Vasilopoulos, A.; Kersting, B.; Dazzi, M.; Karunaratne, G.; Brändli, M.; Singh, A.; Mueller, S.M.; et al. A 64-core mixed-signal in-memory compute chip based on phase-change memory for deep neural network inference. Nat. Electron. 2023, 1–14. [Google Scholar] [CrossRef]
  8. Modha, D.S.; Akopyan, F.; Andreopoulos, A.; Appuswamy, R.; Arthur, J.V.; Cassidy, A.S.; Datta, P.; DeBole, M.V.; Esser, S.K.; Otero, C.O.; et al. Neural inference at the frontier of energy, space, and time. Science 2023, 382, 329–335. [Google Scholar] [CrossRef] [PubMed]
  9. Kumar, S. Introducing qualcomm zeroth processors: Brain-inspired computing. Qualcomm OnQ Blog, 14 October 2013; 1. [Google Scholar]
  10. Prezioso, M.; Merrikh-Bayat, F.; Hoskins, B.; Adam, G.C.; Likharev, K.K.; Strukov, D.B. Training and operation of an integrated neuromorphic network based on metal-oxide memristors. Nature 2015, 521, 61. [Google Scholar] [CrossRef]
  11. Bose, S.K.; Mallinson, J.B.; Gazoni, R.M.; Brown, S.A. Stable self-assembled atomic-switch networks for neuromorphic applications. IEEE Trans. Electron. Devices 2017, 64, 5194–5201. [Google Scholar] [CrossRef]
  12. Davies, M.; Srinivasa, N.; Lin, T.H.; Chinya, G.; Cao, Y.; Choday, S.H.; Dimou, G.; Joshi, P.; Imam, N.; Jain, S.; et al. Loihi: A neuromorphic manycore processor with on-chip learning. IEEE Micro 2018, 38, 82–99. [Google Scholar] [CrossRef]
  13. Cheng, R.; Goteti, U.S.; Hamilton, M.C. Spiking neuron circuits using superconducting quantum phase-slip junctions. J. Appl. Phys. 2018, 124, 152126. [Google Scholar] [CrossRef]
  14. Jeong, H.; Shi, L. Memristor devices for neural networks. J. Phys. Appl. Phys. 2018, 52, 023003. [Google Scholar] [CrossRef]
  15. DeBole, M.V.; Taba, B.; Amir, A.; Akopyan, F.; Andreopoulos, A.; Risk, W.P.; Kusnitz, J.; Otero, C.O.; Nayak, T.K.; Appuswamy, R.; et al. TrueNorth: Accelerating from zero to 64 million neurons in 10 years. Computer 2019, 52, 20–29. [Google Scholar] [CrossRef]
  16. Arute, F.; Arya, K.; Babbush, R.; Bacon, D.; Bardin, J.C.; Barends, R.; Biswas, R.; Boixo, S.; Brandao, F.G.; Buell, D.A.; et al. Quantum supremacy using a programmable superconducting processor. Nature 2019, 574, 505–510. [Google Scholar] [CrossRef]
  17. Berggren, K.; Xia, Q.; Likharev, K.K.; Strukov, D.B.; Jiang, H.; Mikolajick, T.; Querlioz, D.; Salinga, M.; Erickson, J.R.; Pi, S.; et al. Roadmap on emerging hardware and technology for machine learning. Nanotechnology 2020, 32, 012002. [Google Scholar] [CrossRef]
  18. Wan, W.; Kubendran, R.; Schaefer, C.; Eryilmaz, S.B.; Zhang, W.; Wu, D.; Deiss, S.; Raina, P.; Qian, H.; Gao, B.; et al. A compute-in-memory chip based on resistive random-access memory. Nature 2022, 608, 504–512. [Google Scholar] [CrossRef]
  19. Feldmann, J.; Youngblood, N.; Wright, C.D.; Bhaskaran, H.; Pernice, W.H. All-optical spiking neurosynaptic networks with self-learning capabilities. Nature 2019, 569, 208–214. [Google Scholar] [CrossRef]
  20. Jha, A.; Huang, C.; Peng, H.T.; Shastri, B.; Prucnal, P.R. Photonic Spiking Neural Networks and Graphene-on-Silicon Spiking Neurons. J. Light. Technol. 2022, 40, 2901–2914. [Google Scholar] [CrossRef]
  21. Singh, R.; Zheludev, N. Superconductor photonics. Nat. Photonics 2014, 8, 679–680. [Google Scholar] [CrossRef]
  22. Fan, L.; Zou, C.L.; Cheng, R.; Guo, X.; Han, X.; Gong, Z.; Wang, S.; Tang, H.X. Superconducting cavity electro-optics: A platform for coherent photon conversion between superconducting and photonic circuits. Sci. Adv. 2018, 4, eaar4994. [Google Scholar] [CrossRef] [PubMed]
  23. Gu, X.; Kockum, A.F.; Miranowicz, A.; Liu, Y.x.; Nori, F. Microwave photonics with superconducting quantum circuits. Phys. Rep. 2017, 718, 1–102. [Google Scholar]
  24. Berman, O.L.; Lozovik, Y.E.; Eiderman, S.L.; Coalson, R.D. Superconducting photonic crystals: Numerical calculations of the band structure. Phys. Rev. B 2006, 74, 092505. [Google Scholar] [CrossRef]
  25. Shainline, J.M.; Buckley, S.M.; Mirin, R.P.; Nam, S.W. Superconducting optoelectronic circuits for neuromorphic computing. Phys. Rev. Appl. 2017, 7, 034013. [Google Scholar] [CrossRef]
  26. Shainline, J.M.; Buckley, S.M.; McCaughan, A.N.; Chiles, J.; Jafari-Salim, A.; Mirin, R.P.; Nam, S.W. Circuit designs for superconducting optoelectronic loop neurons. J. Appl. Phys. 2018, 124, 152130. [Google Scholar] [CrossRef]
  27. Shainline, J.M.; Buckley, S.M.; McCaughan, A.N.; Chiles, J.T.; Jafari Salim, A.; Castellanos-Beltran, M.; Donnelly, C.A.; Schneider, M.L.; Mirin, R.P.; Nam, S.W. Superconducting optoelectronic loop neurons. J. Appl. Phys. 2019, 126, 044902. [Google Scholar] [CrossRef]
  28. Schneider, M.; Toomey, E.; Rowlands, G.; Shainline, J.; Tschirhart, P.; Segall, K. SuperMind: A survey of the potential of superconducting electronics for neuromorphic computing. Supercond. Sci. Technol. 2022, 35, 053001. [Google Scholar] [CrossRef]
  29. Crotty, P.; Schult, D.; Segall, K. Josephson junction simulation of neurons. Phys. Rev. E 2010, 82, 011914. [Google Scholar] [CrossRef]
  30. Russek, S.E.; Donnelly, C.A.; Schneider, M.L.; Baek, B.; Pufall, M.R.; Rippard, W.H.; Hopkins, P.F.; Dresselhaus, P.D.; Benz, S.P. Stochastic single flux quantum neuromorphic computing using magnetically tunable Josephson junctions. In Proceedings of the 2016 IEEE International Conference on Rebooting Computing (ICRC), San Diego, CA, USA, 17–19 October 2016; pp. 1–5. [Google Scholar]
  31. Schneider, M.L.; Donnelly, C.A.; Russek, S.E.; Baek, B.; Pufall, M.R.; Hopkins, P.F.; Dresselhaus, P.D.; Benz, S.P.; Rippard, W.H. Ultralow power artificial synapses using nanotextured magnetic Josephson junctions. Sci. Adv. 2018, 4, e1701329. [Google Scholar] [CrossRef]
  32. Toomey, E.; Segall, K.; Castellani, M.; Colangelo, M.; Lynch, N.; Berggren, K.K. Superconducting nanowire spiking element for neural networks. Nano Lett. 2020, 20, 8059–8066. [Google Scholar] [CrossRef]
  33. Ishida, K.; Byun, I.; Nagaoka, I.; Fukumitsu, K.; Tanaka, M.; Kawakami, S.; Tanimoto, T.; Ono, T.; Kim, J.; Inoue, K. Superconductor Computing for Neural Networks. IEEE Micro 2021, 41, 19–26. [Google Scholar] [CrossRef]
  34. Zhang, H.; Gang, C.; Xu, C.; Gong, G.; Lu, H. Brain-inspired spiking neural network using superconducting devices. IEEE Trans. Emerg. Top. Comput. Intell. 2021, 7, 271–277. [Google Scholar] [CrossRef]
  35. Semenov, V.K.; Golden, E.B.; Tolpygo, S.K. A new family of bioSFQ logic/memory cells. IEEE Trans. Appl. Supercond. 2021, 32, 1–5. [Google Scholar] [CrossRef]
  36. Casaburi, A.; Hadfield, R.H. Superconducting circuits that mimic the brain. Nat. Electron. 2022, 5, 627–628. [Google Scholar] [CrossRef]
  37. Feldhoff, F.; Toepfer, H. Short- and Long-Term State Switching in the Superconducting Niobium Neuron Plasticity. IEEE Trans. Appl. Supercond. 2024, 34, 1–5. [Google Scholar] [CrossRef]
  38. Siddiqi, I. Engineering high-coherence superconducting qubits. Nat. Rev. Mater. 2021, 6, 875–891. [Google Scholar] [CrossRef]
  39. Vozhakov, V.A.; Bastrakova, M.V.; Klenov, N.V.; Soloviev, I.I.; Pogosov, W.V.; Babukhin, D.V.; Zhukov, A.A.; Satanin, A.M. State control in superconducting quantum processors. Phys.-Uspekhi 2022, 65, 457–476. [Google Scholar] [CrossRef]
  40. Calzona, A.; Carrega, M. Multi-mode architectures for noise-resilient superconducting qubits. Supercond. Sci. Technol. 2022, 36, 023001. [Google Scholar] [CrossRef]
  41. Segall, K.; LeGro, M.; Kaplan, S.; Svitelskiy, O.; Khadka, S.; Crotty, P.; Schult, D. Synchronization dynamics on the picosecond time scale in coupled Josephson junction neurons. Phys. Rev. E 2017, 95, 032220. [Google Scholar] [CrossRef]
  42. Feldhoff, F.; Toepfer, H. Niobium Neuron: RSFQ Based Bio-Inspired Circuit. IEEE Trans. Appl. Supercond. 2021, 31, 1–5. [Google Scholar] [CrossRef]
  43. Goteti, U.S.; Dynes, R.C. Superconducting neural networks with disordered Josephson junction array synaptic networks and leaky integrate-and-fire loop neurons. J. Appl. Phys. 2021, 129, 073901. [Google Scholar] [CrossRef]
  44. Chalkiadakis, D.; Hizanidis, J. Dynamical properties of neuromorphic Josephson junctions. Phys. Rev. E 2022, 106, 044206. [Google Scholar] [CrossRef] [PubMed]
  45. Schegolev, A.E.; Klenov, N.V.; Gubochkin, G.I.; Kupriyanov, M.Y.; Soloviev, I.I. Bio-Inspired Design of Superconducting Spiking Neuron and Synapse. Nanomaterials 2023, 13, 2101. [Google Scholar] [CrossRef] [PubMed]
  46. Crotty, P.; Segall, K.; Schult, D. Biologically realistic behaviors from a superconducting neuron model. IEEE Trans. Appl. Supercond. 2023, 33, 1–6. [Google Scholar] [CrossRef]
  47. Schegolev, A.; Klenov, N.; Soloviev, I.; Tereshonok, M. Learning cell for superconducting neural networks. Supercond. Sci. Technol. 2020, 34, 015006. [Google Scholar] [CrossRef]
  48. Bastrakova, M.; Gorchavkina, A.; Schegolev, A.; Klenov, N.; Soloviev, I.; Satanin, A.; Tereshonok, M. Dynamic processes in a superconducting adiabatic neuron with non-shunted Josephson contacts. Symmetry 2021, 13, 1735. [Google Scholar] [CrossRef]
  49. Ionin, A.; Shuravin, N.; Karelina, L.; Rossolenko, A.; Sidel’nikov, M.; Egorov, S.; Chichkov, V.; Chichkov, M.; Zhdanova, M.; Shchegolev, A.; et al. Experimental Study of a Prototype of a Superconducting Sigma Neuron for Adiabatic Neural Networks. J. Exp. Theor. Phys. 2023, 137, 888–898. [Google Scholar] [CrossRef]
  50. Takeuchi, N.; Arai, K.; Yoshikawa, N. Directly coupled adiabatic superconductor logic. Supercond. Sci. Technol. 2020, 33, 065002. [Google Scholar] [CrossRef]
  51. Khazali, M.; Mølmer, K. Fast multiqubit gates by adiabatic evolution in interacting excited-state manifolds of Rydberg atoms and superconducting circuits. Phys. Rev. X 2020, 10, 021054. [Google Scholar] [CrossRef]
  52. Ayala, C.L.; Tanaka, T.; Saito, R.; Nozoe, M.; Takeuchi, N.; Yoshikawa, N. Mana: A monolithic adiabatic integration architecture microprocessor using 1.4-zj/op unshunted superconductor josephson junction devices. IEEE J.-Solid-State Circuits 2020, 56, 1152–1165. [Google Scholar] [CrossRef]
  53. Yamazaki, Y.; Takeuchi, N.; Yoshikawa, N. A compact interface between adiabatic quantum-flux-parametron and rapid single-flux-quantum circuits. IEEE Trans. Appl. Supercond. 2021, 31, 1–5. [Google Scholar] [CrossRef]
  54. Setiawan, F.; Groszkowski, P.; Ribeiro, H.; Clerk, A.A. Analytic design of accelerated adiabatic gates in realistic qubits: General theory and applications to superconducting circuits. PRX Quantum 2021, 2, 030306. [Google Scholar] [CrossRef]
  55. Bastrakova, M.V.; Pashin, D.S.; Rybin, D.A.; Schegolev, A.E.; Klenov, N.V.; Soloviev, I.I.; Gorchavkina, A.A.; Satanin, A.M. A superconducting adiabatic neuron in a quantum regime. Beilstein J. Nanotechnol. 2022, 13, 653–665. [Google Scholar] [CrossRef] [PubMed]
  56. Pashin, D.S.; Pikunov, P.V.; Bastrakova, M.V.; Schegolev, A.E.; Klenov, N.V.; Soloviev, I.I. A bifunctional superconducting cell as flux qubit and neuron. Beilstein J. Nanotechnol. 2023, 14, 1116–1126. [Google Scholar] [CrossRef] [PubMed]
  57. Mizushima, N.; Takeuchi, N.; Yamanashi, Y.; Yoshikawa, N. Adiabatic quantum-flux-parametron boosters for long interconnection and large fanouts. Supercond. Sci. Technol. 2023, 36, 115021. [Google Scholar] [CrossRef]
  58. Bakurskiy, S.; Kupriyanov, M.; Klenov, N.V.; Soloviev, I.; Schegolev, A.; Morari, R.; Khaydukov, Y.; Sidorenko, A. Controlling the proximity in a Co/Nb multilayer: The properties of electronic transport. Beilstein J. Nanotechnol. 2020, 11, 1336–1345. [Google Scholar] [CrossRef] [PubMed]
  59. Njitacke, Z.T.; Ramakrishnan, B.; Rajagopal, K.; Fozin, T.F.; Awrejcewicz, J. Extremely rich dynamics of coupled heterogeneous neurons through a Josephson junction synapse. Chaos Solitons Fractals 2022, 164, 112717. [Google Scholar] [CrossRef]
  60. Schegolev, A.E.; Klenov, N.V.; Bakurskiy, S.V.; Soloviev, I.I.; Kupriyanov, M.Y.; Tereshonok, M.V.; Sidorenko, A.S. Tunable superconducting neurons for networks based on radial basis functions. Beilstein J. Nanotechnol. 2022, 13, 444–454. [Google Scholar] [CrossRef] [PubMed]
  61. Annunziata, A.J.; Santavicca, D.F.; Frunzio, L.; Catelani, G.; Rooks, M.J.; Frydman, A.; Prober, D.E. Tunable superconducting nanoinductors. Nanotechnology 2010, 21, 445202. [Google Scholar] [CrossRef]
  62. Splitthoff, L.J.; Bargerbos, A.; Grünhaupt, L.; Pita-Vidal, M.; Wesdorp, J.J.; Liu, Y.; Kou, A.; Andersen, C.K.; Van Heck, B. Gate-tunable kinetic inductance in proximitized nanowires. Phys. Rev. Appl. 2022, 18, 024074. [Google Scholar] [CrossRef]
  63. Klenov, N.; Khaydukov, Y.; Bakurskiy, S.; Morari, R.; Soloviev, I.; Boian, V.; Keller, T.; Kupriyanov, M.; Sidorenko, A.; Keimer, B. Periodic Co/Nb pseudo spin valve for cryogenic memory. Beilstein J. Nanotechnol. 2019, 10, 833–839. [Google Scholar] [CrossRef] [PubMed]
  64. Stewart, W.C. Current–voltage characteristics of Josephson junctions. Appl. Phys. Lett. 1968, 12, 277. [Google Scholar] [CrossRef]
  65. Ionin, A.; Karelina, L.; Shuravin, N.; Sidel’nikov, M.; Razorenov, F.; Egorov, S.; Bol’ginov, V. Experimental Study of the Transfer Function of a Superconducting Gauss Neuron Prototype. JETP Lett. 2023, 118, 766–772. [Google Scholar] [CrossRef]
  66. He, Y.; Ayala, C.L.; Takeuchi, N.; Yamae, T.; Hironaka, Y.; Sahu, A.; Gupta, V.; Talalaevskii, A.; Gupta, D.; Yoshikawa, N. A compact AQFP logic cell design using an 8-metal layer superconductor process. Supercond. Sci. Technol. 2020, 33, 035010. [Google Scholar] [CrossRef]
  67. Takeuchi, N.; Yamae, T.; Luo, W.; Hirayama, F.; Yamamoto, T.; Yoshikawa, N. Scalable flux controllers using adiabatic superconductor logic for quantum processors. Phys. Rev. Res. 2023, 5, 013145. [Google Scholar] [CrossRef]
Figure 1. OpenAI's DALL·E 3 prompt-generated image of a superconducting neural network simulating an XOR operation.
Figure 2. Schematic representation of two coupled $S_c$-neurons (input in the cyan box and output in the navy blue box), connected through the inductive synapse (green box) and the coupler, which forms the integrating part of the output neuron (light-yellow box). The processing part of the output neuron is highlighted by the red box. Black or red arrows and blue curled arrows indicate currents and the corresponding magnetic fluxes, respectively.
Figure 3. Activation functions of the first (input) and second (output) neurons. The red curve corresponds to $i_{out1}$; the blue curve corresponds to $i_{out2}$. Parameters of the system: $l_{1,2} = 0.2$, $l_{out1} = l_{out2} = 1$, $l_{a1,a2} = l_{1,2} + 1$, $l_{in} = 1$, $l_{t1} = l_{t2} = 0.1$, $l_{t3} = l_{t4} = 1$, $l_{s1} + l_{s2} = 1$, $l_{s1} - l_{s2} = 0.9$.
Figure 4. Synaptic weights without system parameter optimisation: (a) dependence of the output current from the synapse ($\Delta i_s$) as a function of the modulus of the input current ($|i_{in}|$) and (b) calculations of the dependence of the slope angle $\alpha$ on $\Delta l_s = l_{s1} - l_{s2}$. Parameters of the system: $l_{1,2} = 0.2$, $l_{out1} = l_{out2} = 1$, $l_{a1,a2} = l_{1,2} + 1$, $l_{in} = 1$, $l_{t1} = l_{t2} = 0.1$, $l_{t3} = l_{t4} = 1$, $l_{s1} + l_{s2} = 1$.
Figure 5. Gradient descent trajectories for $\max\alpha$ maximisation for different initial parameters (shown by different colours), projected onto the plane $\{l_{in}; l_{t1,t2}\}$.
Figure 6. Demonstration of synapse capabilities after system parameter optimisation: (a) comparison of numerical and analytical calculations of the dependence of the slope angle $\alpha$ on $\Delta l_s$ and (b) dependence of the output current from the synapse $\Delta i_s$ on the input current $|i_{in}|$. The blue line shows the result of the approximate calculation using Equations (6)-(8). The red circles show the result of the exact numerical calculation of the dynamics from Equation (5). Parameters of the system: $l_{1,2} = 0.1$, $l_{in} = 0.3$, $l_{t1} = l_{t2} = 2$, $l_{t3} = l_{t4} = 0.1$, $l_{out1,2} = 0.1$, $l_{s1} + l_{s2} = 3$.
Figure 7. Schematic representation of the modified coupling between two $S_c$-neurons: the transformer consisting of the inductances $l_{out1}$ and $l_{in}$, which couples the input neuron to the synapse (see Figure 2), is replaced by a direct coupling via the inductance $l_{out1}$ only.
Figure 8. Projection of gradient descent trajectories for different initial parameters (each curve corresponds to its own initial values): (a) on the axes $l_{t4}$ and $l_{t1}$, and (b) on the axes $l_{t3}$ and $l_{t1} = l_{t2}$. The inset in (b) shows the transfer characteristics of the input and output neurons ($i_{out1,2}(\varphi_{in})$) for the following system parameters: $l_{1,2} = 0.1$, $l_{t1} = l_{t2} = 0.7$, $l_{t3} = 1.5$, $l_{t4} = 0.1$, $l_{out1,2} = 0.1$, $\Sigma l_s = 3$, $\Delta l_s = 1.53$.
Figure 9. (a) Dependence of the current ratio $i_{out2}/i_{out1}$, at the moment when the input flux reaches its plateau ($t = (t_1 + t_2)/2$), on the normalised difference of the inductances of the synaptic arms for $l_{t4} = 0.1$ and different values of the inductance sum. (b) Maximum ratio between the output currents $i_{out2}$ and $i_{out1}$ for different values of $l_{t4}$ as a function of the inductance sum $\Sigma l_s$. Other system parameters: $l_{1,2} = 0.1$, $l_{t1} = l_{t2} = 0.7$, $l_{t3} = 1.5$, $l_{out1,2} = 0.1$.
Figure 10. (a) Schematic representation of the 3-neuron XOR/OR network and (b) its superconducting implementation.
Figure 11. Demonstration of neural network operation as an XOR/OR logic gate. Synaptic weights are asymmetric/symmetric, respectively. The scheme of the neural network is shown in Figure 10.
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
