INVITED PAPER

Spike-Based Synaptic Plasticity in Silicon: Design, Implementation, Application, and Challenges

This paper reviews challenges and progress in implementing timing-based neuronal learning mechanisms in silicon.

By Mostafa Rahimi Azghadi, Student Member IEEE, Nicolangelo Iannella, Member IEEE, Said F. Al-Sarawi, Member IEEE, Giacomo Indiveri, Senior Member IEEE, and Derek Abbott, Fellow IEEE

ABSTRACT | The ability to carry out signal processing, classification, recognition, and computation in artificial spiking neural networks (SNNs) is mediated by their synapses. In particular, through activity-dependent alteration of their efficacies, synapses play a fundamental role in learning. The mathematical prescriptions under which synapses modify their weights are termed synaptic plasticity rules. These learning rules can be based on abstract computational neuroscience models or on detailed biophysical ones. As these rules are being proposed and developed by experimental and computational neuroscientists, engineers strive to design and implement them in silicon en masse, in order to employ them in complex real-world applications. In this paper, we describe analog very large-scale integration (VLSI) circuit implementations of multiple synaptic plasticity rules, ranging from phenomenological ones (e.g., based on spike timing, mean firing rates, or both) to biophysically realistic ones (e.g., calcium-dependent models). We discuss the application domains, weaknesses, and strengths of various representative approaches proposed in the literature, and provide insight into the challenges that engineers face when designing and implementing synaptic plasticity rules in VLSI technology for utilizing them in real-world applications.

KEYWORDS | Analog/digital synapse; Bienenstock–Cooper–Munro (BCM); calcium-based plasticity; learning; local correlation plasticity (LCP); neuromorphic engineering; rate-based plasticity; spike-timing-dependent plasticity (STDP); spike-based plasticity; spiking neural networks; synaptic plasticity; triplet STDP; very large-scale integration (VLSI); voltage-based STDP

Manuscript received October 3, 2013; revised January 7, 2014 and February 24, 2014; accepted March 26, 2014. Date of publication April 23, 2014; date of current version April 28, 2014. This work was supported by the European Community's Seventh Framework Programme ERC under Grant 257219 neuroP, and by the Australian Research Council under Grant FT120100351.

M. Rahimi Azghadi is with the School of Electrical and Electronic Engineering, The University of Adelaide, Adelaide, S.A. 5005, Australia, and with the Institute of Neuroinformatics, University of Zurich and ETH Zurich, CH-8057 Zurich, Switzerland (e-mail: mostafa@eleceng.adelaide.edu.au; mostafa@ini.phys.ethz.ch).

N. Iannella, S. F. Al-Sarawi, and D. Abbott are with the School of Electrical and Electronic Engineering, The University of Adelaide, Adelaide, S.A. 5005, Australia (e-mail: iannella@eleceng.adelaide.edu.au; alsarawi@eleceng.adelaide.edu.au; dabbott@eleceng.adelaide.edu.au).

G. Indiveri is with the Institute of Neuroinformatics, University of Zurich and ETH Zurich, CH-8057 Zurich, Switzerland (e-mail: giacomo@ini.phys.ethz.ch).

Digital Object Identifier: 10.1109/JPROC.2014.2314454

I. INTRODUCTION
For more than a century, there has been considerable effort in attempting to answer the question: ``how does learning and memory take place in the brain?'' Although there is still no general agreement, neuroscientists concur on a common set of general rules and hypotheses [1]–[3]. It is agreed that learning and memory in the brain are governed mainly by complex molecular processes, which give rise to a phenomenon called synaptic plasticity. The actions of synaptic plasticity manifest themselves through alterations in the efficacy of synapses that allow networks of cells to alter their communication.

In order to decipher the mystery of learning through synaptic plasticity, neuroscientists typically postulate hypotheses on how the brain learns and propose specific models of plasticity rules that can explain their theoretical and experimental observations. These hypotheses can then be implemented in software or hardware and tested with real-world stimuli to verify their value in addressing real-world challenges. While software implementations are ideal for exploring different hypotheses and testing different models, dedicated hardware implementations are commonly used to implement efficient neural processing systems that can be exposed to real-world stimuli from the environment and process them in real time, using massively parallel elements that operate with time constants similar to those measured in biological neural systems. This approach is followed both for attempting to gain a deeper understanding of how learning occurs in physical systems (including the brain), and for realizing efficient hardware systems that can be used to carry out complex practical tasks, ranging from sensory processing to surveillance, robotics, or brain–machine interfaces.

The synaptic plasticity models developed by neuroscientists are typically translated into electronic circuits and implemented using conventional very large-scale integration (VLSI) technologies. Currently, many of these models form the foundations for developing VLSI ``neuromorphic systems'' [4], [5]. In this paper, we present an overview of a representative set of synaptic plasticity circuits presented in the literature, and compare them to the learning circuits that we have developed over the course of the last ten years.

Table 1. Synaptic Plasticity Models Already Implemented in VLSI.

II. SYNAPTIC PLASTICITY RULES

Experimental investigations of synaptic plasticity can lead to extremely diverse results, depending on the animal preparation studied, the area of the brain analyzed, the protocol used, and many other factors [1], [6], [7]. These variations have produced inconsistent or even controversial results, and have led to the development of a large number of synaptic plasticity models, ranging from very abstract ones to very elaborate and detailed ones [8]. Abstract models of synaptic plasticity are typically based on the timing of the spikes produced by presynaptic and postsynaptic neurons, and are aimed at reproducing the basic phenomenology of learning.
These rules are the ones most often used to implement learning mechanisms for various applications [9]–[12]. Examples of such rules include the basic spike-timing-dependent plasticity (STDP) [6], [7], [13] and triplet-based STDP (TSTDP) [14], [15] models. On the other hand, more detailed models that attempt to explain the data obtained in neuroscience experiments take into account neuron and synapse state variables, in addition to the timing of the spikes. Examples of such models include the spike-driven plasticity model [16], the voltage-based STDP model [17], and membrane-potential-based models, such as those reviewed in [8]. Even more detailed plasticity models have been proposed to explain the cellular and molecular mechanisms observed in real synapses [18]. When selecting a synaptic plasticity model to implement, it is important to consider the target application domain, and to choose the required level of complexity accordingly. Table 1 lists a number of representative synaptic plasticity rules that have been designed and successfully implemented in VLSI, ranging from very abstract ones (e.g., based solely on the timing of the presynaptic and postsynaptic spikes) to biophysically realistic ones. These rules are compared in terms of the synaptic variables employed for altering the synaptic efficacy, i.e., spike time, membrane potential, and calcium ion concentration.

A. Pair-Based STDP

The pair-based STDP (PSTDP) rule is the classical description based on the timing of the spikes produced by the presynaptic and postsynaptic neurons. This rule has been used in many computational studies [13], [31], [32]. The original rule is expressed by

$$\Delta w = \begin{cases} \Delta w^{+} = A^{+}\, e^{-\Delta t/\tau_{+}}, & \text{if } \Delta t > 0\\ \Delta w^{-} = -A^{-}\, e^{\Delta t/\tau_{-}}, & \text{if } \Delta t \leq 0 \end{cases} \tag{1}$$

where $\Delta t = t_{post} - t_{pre}$ is the timing difference between a single pair of presynaptic and postsynaptic spikes. The rule presented in (1) shows that, if a postsynaptic spike arrives within a predetermined time window, specified by $\tau_{+}$, after a presynaptic spike, this order of spikes leads to an increase in the synaptic weight, i.e., a potentiation occurs. On the other hand, if a postsynaptic spike precedes a presynaptic one within a specified time window of $\tau_{-}$, the synaptic weight is decreased, i.e., a depression occurs. The amount of this depression or potentiation depends on the time difference between the two spikes ($\Delta t$), as well as on the potentiation and depression amplitude parameters ($A^{+}$ and $A^{-}$).

B. Triplet-Based STDP

In the TSTDP model, changes to synaptic weights occur in relation to the timing differences among triplet combinations of spikes [15]

$$\Delta w = \begin{cases} \Delta w^{+} = e^{-\Delta t_{1}/\tau_{+}}\left(A_{2}^{+} + A_{3}^{+}\, e^{-\Delta t_{2}/\tau_{y}}\right)\\ \Delta w^{-} = -e^{\Delta t_{1}/\tau_{-}}\left(A_{2}^{-} + A_{3}^{-}\, e^{-\Delta t_{3}/\tau_{x}}\right) \end{cases} \tag{2}$$

where the synaptic weight is potentiated at the times when a postsynaptic spike occurs, and depressed at the times when a presynaptic spike occurs. The depression and potentiation amplitude parameters are $A_{2}^{+}$, $A_{3}^{+}$, $A_{2}^{-}$, and $A_{3}^{-}$, while $\Delta t_{1} = t_{post(n)} - t_{pre(n)}$, $\Delta t_{2} = t_{post(n)} - t_{post(n-1)} - \epsilon$, and $\Delta t_{3} = t_{pre(n)} - t_{pre(n-1)} - \epsilon$ are the time differences between pre($n$) and post($n$), post($n$) and post($n-1$), and pre($n$) and pre($n-1$), respectively. Here, $\tau_{-}$, $\tau_{x}$ and $\tau_{+}$, $\tau_{y}$ are time constants that determine the windows in which depression and potentiation take place [15]. In addition, $\epsilon$ is a small positive constant that is needed to distinguish between the current spike and the previous spike of the same type [15].
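As a behavioral reference for the circuits discussed in Section IV, the following Python sketch implements (1) and (2) directly on spike times. All numerical parameters are illustrative placeholders (in the VLSI circuits they correspond to tunable bias currents and voltages), not fitted values from [14] or [15].

```python
import numpy as np

# Illustrative amplitudes and time constants (seconds); placeholder values.
A2P, A3P, A2M, A3M = 5e-3, 6e-3, 7e-3, 2e-4
TAU_P, TAU_M, TAU_X, TAU_Y = 17e-3, 34e-3, 101e-3, 114e-3
EPS = 1e-9  # small offset separating the current spike from the previous one

def pstdp_dw(dt):
    """Pair-based STDP of (1); dt = t_post - t_pre, in seconds."""
    return A2P * np.exp(-dt / TAU_P) if dt > 0 else -A2M * np.exp(dt / TAU_M)

def tstdp_total_dw(pre, post):
    """Total TSTDP weight change of (2): potentiation is applied at each
    postsynaptic spike and depression at each presynaptic spike, using the
    nearest preceding spikes to form dt1, dt2, and dt3."""
    w, last = 0.0, {"pre": None, "post": None}
    for t, kind in sorted([(t, "pre") for t in pre] + [(t, "post") for t in post]):
        if kind == "post" and last["pre"] is not None:
            dt1 = t - last["pre"]
            dt2 = t - last["post"] - EPS if last["post"] is not None else np.inf
            w += np.exp(-dt1 / TAU_P) * (A2P + A3P * np.exp(-dt2 / TAU_Y))
        elif kind == "pre" and last["post"] is not None:
            dt1 = last["post"] - t                    # t_post(n) - t_pre(n) <= 0
            dt3 = t - last["pre"] - EPS if last["pre"] is not None else np.inf
            w -= np.exp(dt1 / TAU_M) * (A2M + A3M * np.exp(-dt3 / TAU_X))
        last[kind] = t
    return w

# A pre spike bracketed by two post spikes: the A3P triplet term adds extra
# potentiation that a pure pair-based rule cannot produce.
print(tstdp_total_dw(pre=[0.010], post=[0.000, 0.020]))
```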
C. Spike-Driven Synaptic Plasticity (SDSP)

In the spike-driven model of synaptic plasticity, changes in synaptic efficacy occur whenever a presynaptic spike arrives [16]. At this time, potentiation occurs if the postsynaptic membrane potential $V_{mem}$ is higher than a threshold voltage $V_{mth}$, and depression occurs if it is lower than the threshold, provided that the calcium concentration $C(t)$ at the postsynaptic site is within a predefined boundary at the arrival of the presynaptic spike. In short, the synaptic efficacy $W$ changes according to

$$\begin{aligned} W &= W + a, && \text{if } V_{mem}(t) > V_{mth} \text{ and } l_{up} < C(t) < h_{up}\\ W &= W - b, && \text{if } V_{mem}(t) \leq V_{mth} \text{ and } l_{dn} < C(t) < h_{dn} \end{aligned} \tag{3}$$

where $a$ and $b$ are the amounts of potentiation and depression, respectively. In addition, $[l_{up}, h_{up}]$ and $[l_{dn}, h_{dn}]$ are the boundaries of the calcium concentration $C(t)$ for the potentiation and depression states, respectively. If the required conditions are not satisfied, there is no potentiation or depression. When no spike arrives, and therefore there is no spike-driven weight change, the synaptic weight $W$ drifts toward either a high or a low synaptic weight asymptote. The direction of the drift depends on the value of the weight at that specific time, which can be above or below a threshold $\theta_{W}$ [16], [25]

$$\frac{dW(t)}{dt} = \alpha, \quad \text{if } W(t) > \theta_{W}; \qquad \frac{dW(t)}{dt} = -\beta, \quad \text{if } W(t) \leq \theta_{W}. \tag{4}$$
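A minimal behavioral sketch of the SDSP update (3) and bistable drift (4). Here $V_{mem}$ and $C(t)$ are supplied as plain numbers; in the actual circuits they are analog quantities generated by the neuron, and all parameter values below are hypothetical.

```python
def sdsp_step(W, V_mem, Ca, dt, pre_spike,
              a=0.1, b=0.1, V_mth=0.8,
              l_up=0.2, h_up=0.9, l_dn=0.1, h_dn=0.5,
              alpha=0.02, beta=0.02, theta_W=0.5):
    """One SDSP time step: jump at a presynaptic spike per (3), otherwise
    drift toward the high or low asymptote per (4). Values hypothetical."""
    if pre_spike:
        if V_mem > V_mth and l_up < Ca < h_up:
            W += a                                   # potentiation jump
        elif V_mem <= V_mth and l_dn < Ca < h_dn:
            W -= b                                   # depression jump
        # otherwise the calcium bounds block any update ("stop-learning")
    else:
        W += (alpha if W > theta_W else -beta) * dt  # bistable drift of (4)
    return min(max(W, 0.0), 1.0)                     # keep W within [0, 1]
```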
D. BCM-Like Local Correlation Plasticity (LCP)

A phenomenological rule that has been successfully implemented in VLSI is the LCP rule [8], [33]. This is a BCM-like rule based on

$$\frac{dm(t)}{dt} = \left(u(t) - \theta_{u}\right) g(t) \tag{5}$$

where $m(t)$ is the synaptic weight, $u(t)$ is the neuron's membrane potential, $\theta_{u}$ is a threshold that separates LTP and LTD induction, and $g(t)$ is a conductance variable that is related to the postsynaptic current $I_{psc}$ and therefore has its maximum value at the time of a presynaptic spike arrival and decays thereafter. Detailed expressions for $u(t)$ and $g(t)$ can be found in [8] and [27].

E. Modified Ion-Channel-Based Plasticity

This rule not only considers calcium and its level for inducing synaptic weight changes, but also introduces the effects of other ion channels and receptors as the pathways through which calcium changes in the postsynaptic neuron, thereby causing either potentiation or depression. The synaptic weight change mechanism is as follows: presynaptic action potentials release glutamate neurotransmitter that binds to N-methyl-D-aspartate (NMDA) receptors, and, when postsynaptic activity providing large membrane depolarization is simultaneously present, this leads to an increase in the level of calcium [28], [29]. This rule is capable of reproducing both BCM (rate-based) and PSTDP (timing-based) mechanisms using a unified model. However, the model is complex and requires a large number of state variables [29].

F. Iono-Neuromorphic Intracellular Calcium-Mediated Plasticity Model

This synaptic plasticity rule focuses on the intracellular calcium dynamics of the synapse. It is a biophysically realistic plasticity rule that acts entirely on the dynamics of the ions and channels within the synapse. The rule was originally proposed by Shouval et al. [18] and was modified for implementation in VLSI. The weight changes for the VLSI circuit are given by

$$\frac{dw}{dt} = \eta([Ca])\left(\Omega([Ca]) - w\right) \tag{6}$$

where $w$ is the current synaptic weight, $\eta([Ca])$ is a nonlinear function that plays the role of a learning rate, and $\Omega([Ca])$ is the calcium-dependent update rule. Similar to the first-mentioned biophysical rule, this rule has also been shown to reproduce BCM and PSTDP biological experiments [30].
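To make (6) concrete, the sketch below integrates it with hypothetical choices for $\eta([Ca])$ and $\Omega([Ca])$: a monotonically increasing learning rate, and an $\Omega$ with a depression dip at moderate calcium levels and potentiation at high levels, in the spirit of [18]. The exact functional forms used in [18] and [30] differ.

```python
import numpy as np

def eta(ca):
    """Hypothetical calcium-dependent learning rate (monotonic in [Ca])."""
    return 2.0 * ca / (1.0 + ca)

def omega(ca, th_d=0.35, th_p=0.55, slope=40.0):
    """Hypothetical update rule: ~0.5 at rest, a dip below 0.5 at moderate
    [Ca] (depression), rising toward 1 at high [Ca] (potentiation)."""
    sig = lambda x: 1.0 / (1.0 + np.exp(-x))
    return 0.5 - 0.45 * sig(slope * (ca - th_d)) + 0.95 * sig(slope * (ca - th_p))

def integrate_weight(ca_trace, w0=0.5, dt=1e-3):
    """Forward-Euler integration of dw/dt = eta([Ca]) * (Omega([Ca]) - w)."""
    w = w0
    for ca in ca_trace:
        w += dt * eta(ca) * (omega(ca) - w)
    return w

# Sustained moderate calcium depresses; sustained high calcium potentiates.
print(integrate_weight([0.45] * 5000), integrate_weight([0.90] * 5000))
```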
III. BUILDING BLOCKS FOR IMPLEMENTING SYNAPTIC PLASTICITY RULES IN VLSI

This section reviews the most common and useful electronic building blocks required for implementing various types of synaptic plasticity rules in VLSI.

A. Single Capacitors and Transistors

The storage of synaptic weight values, and of other state variables that need to be memorized in models of synaptic plasticity, requires memory elements in VLSI. Typically, analog values are stored on capacitors. In VLSI technology, capacitors can be implemented using metal–oxide–semiconductor capacitors (MOSCAPs), multiple layers of polysilicon separated by an insulator (typically silicon dioxide), or special metal-on-metal (MOM) structures built from orthogonal interleaved metal fingers. These solutions usually offer the most compact and convenient way of storing variables, but they have the limitation of being leaky: the charge stored in these devices tends to slowly leak away, due to the imperfect insulators used in building them. Alternative ways of storing analog variables involve the use of floating-gate devices [34], or of dedicated analog-to-digital converters (ADCs) and digital memory circuits, such as static random access memory (SRAM) elements [35], [36]. Depending on the required time constants and the targeted network and application, these approaches are more or less bulky and/or convenient compared to storage on VLSI capacitors, which is not suitable for long-term storage. Another issue that should be taken into account when selecting the storage technique for synaptic weights and dynamics is the precision required by the application. This issue is discussed in Section V-H.

While capacitors are passive devices, metal–oxide–semiconductor field-effect transistors (MOSFETs) are active devices and represent the main basic building block in VLSI technology. Depending on the voltage difference between the transistor gate and source terminals, $V_{gs}$, their current–voltage characteristic can change dramatically. In particular, if $V_{gs} > V_{th}$, the transistor operates in its above-threshold (i.e., strong inversion) regime. On the other hand, if $V_{gs} < V_{th}$, the transistor operates in its subthreshold (i.e., weak inversion) regime [37]. Neuromorphic engineers are interested in the subthreshold domain for two essential reasons. The first reason is the exponential relationship between the drain current of a transistor and its gate voltage $V_{g}$, as shown in

$$I_{ds} = I_{0}\, e^{\kappa_{n} V_{g}/U_{T}}\left(e^{-V_{s}/U_{T}} - e^{-V_{d}/U_{T}}\right) \tag{7}$$

where $I_{0}$ is a current-scaling parameter, $\kappa_{n}$ denotes the n-type MOSFET subthreshold slope factor, $U_{T}$ represents the thermal voltage, and $V_{d}$, $V_{g}$, and $V_{s}$ are the drain, gate, and source voltages of the transistor, relative to the bulk potential, as shown in Fig. 1(a) [37].

Fig. 1. (a) Symbol of an NMOS transistor. (b) The drain-source current $I_{ds}$ of an NMOS device in its subthreshold region of operation is the sum of two currents with opposite directions. (c) Current–voltage characteristic of the NMOS transistor, which shows significantly different behavior above and below threshold [37].

Fig. 1(b) shows that the drain-source current in (7) is the sum of two currents flowing in opposite directions: one, called the forward current $I_{f}$, is a function of the gate-source voltage and flows from the drain to the source; the other, called the reverse current $I_{r}$, flows from the source to the drain

$$I_{ds} = I_{0}\, e^{(\kappa_{n} V_{g} - V_{s})/U_{T}} - I_{0}\, e^{(\kappa_{n} V_{g} - V_{d})/U_{T}} = I_{f} - I_{r}. \tag{8}$$

If $V_{ds} > 4U_{T} \approx 100$ mV, as the energy band diagram in Fig. 1(b) shows, because of the larger barrier height (in contrast to the $V_{ds} < 4U_{T}$ state, where the barrier heights are almost equal), the concentration of electrons at the drain end of the channel is much lower than that at the source end. Therefore, the reverse current $I_{r}$, from source to drain, becomes negligible, and the transistor operates in the subthreshold saturation regime, with a purely exponential relationship between $V_{gs}$ and $I_{ds}$

$$I_{ds} = I_{0}\, e^{(\kappa_{n} V_{g} - V_{s})/U_{T}}. \tag{9}$$

This exponential behavior is analogous to the exponential relationship between the ionic conductance of a neuron and its membrane potential. Therefore, a transistor is able to directly emulate the required behavior of an ionic conductance [38]. Fig. 1(c), which is a log-linear plot, shows the drain-source current of an NMOS device as a function of its gate-source voltage. The figure shows the exponential dependence of the current on the gate-source voltage below the device threshold. It also shows the quadratic dependence of the current on the gate-source voltage when the device operates in its above-threshold region.

The second reason is the low power consumption of transistors in their subthreshold regime, due to very low subthreshold currents, of the order of nano- to picoamperes [see Fig. 1(c)]. Minimizing power consumption is a main feature of neuromorphic circuits and is crucial for fulfilling the ultimate goal of realizing an artificial brain-scale intelligent system with billions of electronic neurons and synapses. For these reasons, many synaptic plasticity circuits, e.g., [20], [24], [30], [39], and [40], exploit transistors in their subthreshold region of operation, in order to implement the desired neural dynamics while consuming as little power as possible.

B. Differential Pair (DP) and Operational Transconductance Amplifier (OTA)

Differential pairs (DPs) are electronic components widely utilized in analog neural circuit design [37], [41]. A DP in its basic form consists of three transistors, two of which receive the input voltages at their gates, while the third biases the pair with a constant current source [see Fig. 2(a)]. As shown in Fig. 2(c), a DP sets a sigmoidal relationship between the differential input voltage and the currents flowing through each of the two differential transistors. The sigmoidal function is crucial to artificial neural networks and has been useful in describing the activities of populations of neurons [42]. This makes the differential pair an interesting and useful building block for neuromorphic engineers.
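A numerical sketch of this split, assuming both input transistors sit in subthreshold saturation so that each branch obeys (9); the $\kappa$ and $U_T$ values are typical rather than process-specific.

```python
import numpy as np

def dp_branch_currents(V1, V2, I_b=1e-9, kappa=0.7, U_T=0.025):
    """Subthreshold DP: the bias current I_b splits between the two branches
    as a logistic (sigmoidal) function of the differential input V1 - V2."""
    I1 = I_b / (1.0 + np.exp(-kappa * (V1 - V2) / U_T))
    return I1, I_b - I1            # the two branch currents always sum to I_b

# A differential input of ~100 mV already steers ~94% of the bias current.
print(dp_branch_currents(0.55, 0.45))
```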
Differential pairs can be used for various applications, including spike integration in a synapse circuit [43] and rough voltage-difference calculation [27]. They are also the heart of operational transconductance amplifiers (OTAs), which are another essential component in electronic and neuromorphic engineering.

Fig. 2. (a) A basic DP circuit consists of three transistors. (b) The OTA circuit converts the difference between its two input voltages to a corresponding current at its output. This circuit has been extensively used in the implementation of various neuromorphic devices [27], [30], [37], [47]. (c) The DP sets a sigmoidal relationship between the differential input voltage and the currents flowing through each of the two differential transistors. This is a useful behavior for implementing the similar sigmoidal behavior observed in neural systems [37].

The OTA is an essential building block not only in neuromorphic engineering, but also in general analog integrated circuit design [37], [41], [44]. It is usually used to perform voltage-mode computation and produces its output as a current. This analog component is commonly employed as a voltage-controlled linear conductor. However, in its simplest form, the OTA is not truly linear and usually sets a sigmoidal function between the differential voltage input and the output current [see Fig. 2(c)]. In various VLSI implementations of neuromorphic synapses and synaptic plasticity rules, the OTA has been used in different roles [45]. In some cases, it has been used as an active resistor when forming a leaky integrator [27], [46], and sometimes as a low-cost comparator [27]. In addition, a number of neuromorphic designers have made changes to the basic structure of the OTA [27], [45], [47] to increase its symmetry, dynamic range, and linearity, and at the same time decrease its offset. As a result, the OTA gains greater stability against noise and process variation, and a better ability to mimic the desired neural function [27], [30], [37], [47].

C. Synaptic Potential and Leaky Integrator (Decay) Circuits

When implementing any synaptic plasticity rule of interest, there is always a need to implement dynamics that represent the potential for potentiation and the potential for depression. These potentials start with the arrival of a spike, and can lead to potentiation/depression of the synaptic weight if another spike arrives at the synapse before the potential vanishes. Fig. 3 shows a circuit that has been utilized to implement the required potentials in a number of PSTDP and TSTDP circuits, including [20], [24], and [40]. It can also be utilized in any synaptic plasticity circuit where there is a need for controllable decay dynamics. This circuit, which acts as a leaky integrator, controls both the amplitude of the generated potential signal and its time constant. There is another instance of the leaky integrator in which only the time constant is controllable, and the required potentiation/depression amplitude must be realized by another circuit/transistor in the plasticity circuit. Two different arrangements of this leaky integrator are shown in Fig. 4. In these circuits, the onset of the decaying signal is determined by the arrival of a spike, and the time constant is controlled by the voltage ($V_{tau}$) applied to the gate of a PMOS/NMOS transistor.

Fig. 3. (a) Synaptic potential (decay) circuit. (b) Synaptic potential module. The output of this module is a decay function, whose time constant and amplitude are controlled by $I_{tau}$ and $I_{amp}$, respectively. The decay starts once a pre/post spike arrives.

Fig. 4. Leaky integrator circuit for producing the required decay dynamics with adjustable time constants. (a) Leaky integrator for driving a PMOS transistor. (b) Leaky integrator for driving an NMOS transistor. (c) Leaky integrator module symbol.
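Behaviorally, the synaptic potential module of Fig. 3 reduces to a spike-triggered exponential decay; in this sketch, amp and tau stand in for the amplitude and time constant set by the $I_{amp}$ and $I_{tau}$ biases.

```python
import numpy as np

def potential_trace(spike_times, t, amp=1.0, tau=0.02):
    """Potential for potentiation/depression: reset to `amp` at each spike,
    then exponential decay. A complementary spike arriving before the decay
    vanishes samples this value to set the weight change."""
    v = np.zeros_like(t)
    for k in range(1, len(t)):
        v[k] = v[k - 1] * np.exp(-(t[k] - t[k - 1]) / tau)   # leaky decay
        if any(t[k - 1] < s <= t[k] for s in spike_times):
            v[k] = amp                                        # spike re-triggers
    return v

t = np.arange(0.0, 0.2, 1e-4)
v_pot = potential_trace([0.020, 0.050], t)   # one decaying tail per spike
```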
IV. NEUROMORPHIC IMPLEMENTATION OF SYNAPTIC PLASTICITY RULES

The area of neuromorphic implementation of synaptic plasticity rules has been active for over a decade, and many researchers and neuromorphic engineers have been involved in the hardware realization of various synaptic plasticity rules. Below is a review of a variety of approaches for implementing the different synaptic plasticity rules discussed in Section II.

A. Pair-Based STDP Learning Circuits

Many pair-based STDP circuits have been implemented by different groups, using various VLSI design strategies, in recent years [20]–[24], [39], [40], [46], [48], [49]. A brief review of these VLSI circuits for PSTDP is presented below.

One of the first designs for PSTDP, the conventional form of timing-dependent plasticity, was proposed by Bofill-i-Petit and Murray [20]. Fig. 5(a) shows a version of this circuit. In this design, two transistors (Mp and Md) that operate in their subthreshold (weak inversion) region are utilized to control the amount of current flowing into/out of the synaptic weight capacitor $C_{W}$. The voltages that control these transistors are $V_{pot}$ (potential for potentiation) and $V_{dep}$ (potential for depression). These potentials are produced by two instances of the synaptic potential circuit presented in Fig. 3. This design uses currents for controlling circuit bias parameters that correspond to the PSTDP learning rule parameters presented in (1). Simulation results for generating the STDP learning window using this circuit are presented in Fig. 5(b). This figure demonstrates the exponential decay behavior of the learning window, in accordance with the exponential formula of the PSTDP rule presented in (1). This exponential behavior is achieved by biasing Mp and Md in their subthreshold regions of operation. Since this circuit is designed with transistors biased in the subthreshold region, it is susceptible to process variation. In order to study how sensitive this design is to such variation, we performed 1000 Monte Carlo (MC) simulations [see Fig. 5(c)], in which the thresholds of all transistors in the design independently underwent 3-sigma variations, i.e., the threshold of each transistor was able to change by up to 30 mV.
Fig. 5(c) shows that device mismatch results in deviations of the STDP learning window from that of the target design, but that the LTP and LTD regions of the window are well preserved.

Fig. 5. (a) This PSTDP circuit, presented in [40], is a modified version of the design proposed in [20]. (b) The exponential learning window generated by Matlab and by the PSTDP circuit under various process corners. Protocols and time constants similar to [19] were employed. (c) The learning window variation for 1000 MC runs with 3-sigma transistor parameter variations from typical model parameters, generated using a circuit similar to the one shown in (a).

Besides the mentioned VLSI designs for PSTDP, we have developed other STDP designs, such as the PSTDP circuit given in [21]. This circuit is symmetric and has two branches of transistors, as shown in Fig. 6(a). The upper branch is responsible for charging the weight capacitor if a presynaptic spike precedes a postsynaptic one within a determined time, and the bottom branch discharges the capacitor if the reverse spike order occurs. The potentiation and depression timings in this design are set by two leaky integrators, whose decays are set by two bias voltages, $V_{tp}$ and $V_{td}$, for the potentiation and depression time constants, respectively. In addition, the amplitudes of potentiation and depression are set by $V_{A+}$ and $V_{A-}$, respectively. Fig. 6(b) shows chip measurement results for the STDP learning window, measured with biologically plausible time constants, for three different synaptic time constants controlled by $V_{tp}$ and $V_{td}$ [21]. Note that this design utilizes two instances of the leaky integrators shown in Fig. 4 for controlling the time constants of the potentials for potentiation and depression.

Fig. 6. (a) PSTDP circuit presented in [21]. (b) The STDP learning window generated by the PSTDP circuit shown in (a), measured from the multineuron chip presented in [21], in biologically plausible time and under the PSTDP experimental protocol utilized in [59]. The figure shows the window for various potentiation and depression time constants.

Another PSTDP circuit, which has been utilized in a VLSI spiking neural network chip as part of the FACETS project, was proposed by Schemmel et al. [50]. In this design, the STDP circuit that is local to each synapse has a symmetric structure. The voltage potentials for potentiation or depression correspond to the quantity of charge stored on synaptic capacitors, which are discharged at a fixed rate determined by a set of three diode-connected transistors working in their subthreshold region of operation, causing an exponential decay that mimics the operation of a leaky integrator. These capacitors later determine the amount of change in the synaptic weight corresponding to the time difference between the presynaptic and postsynaptic spikes.
Another PSTDP circuit was proposed by Arthur and Boahen [51]. This symmetric analog design utilizes a static random access memory (RAM) cell to store a binary state of the synaptic weight, which is either high (potentiated) or low (depressed). The circuit uses leaky integrators, similar to those shown in Fig. 4, to implement the required plasticity dynamics. Upon the arrival of a spike, the plasticity potentials are generated and start decaying linearly thereafter; if a complementary spike arrives within the decay interval, its time difference from the first spike determines the required level of potentiation or depression.

B. TSTDP Learning Circuits

As (2) shows, in order to implement the TSTDP rule, circuit dynamics similar to those employed in PSTDP circuits are needed. It is shown in [24], [52], and [53] that TSTDP circuits are able to reproduce a learning window similar to the one generated by the PSTDP circuit, shown in Fig. 5(b). Furthermore, it has been shown that these circuits are able to account for various biological experiments, including the triplet [14], [54], quadruplet [54], and frequency-dependent pairing experiments [55], while PSTDP circuits clearly lack these abilities [39], [40]. For further details, the reader is directed to [24] and [53]. Although different TSTDP circuits have shown promising results in reproducing experimental data beyond the capacity of PSTDP circuits [39], [40], a present limitation is that they have not yet been proven in fabrication [24], [53] and are thus not discussed further here.

C. Spike-Driven Learning Circuits

The SDSP learning rule has been implemented in a number of different analog VLSI circuits [25], [26]. In the SDSP rule, as described in Section II-C, the dynamics of the voltages produced in the neuron depend on the membrane potential $V_{mem}$ of the neuron. The SDSP rule thus changes the synaptic weight according to the timing of the presynaptic spikes and the membrane potential of the postsynaptic neuron. This membrane potential itself depends on the frequency of the postsynaptic spikes generated by the neuron. Fig. 7 shows an overview of the neuron and synapse structure implemented in VLSI to realize the SDSP rule. Fig. 7(b) shows that, for implementing the SDSP synapse, a differential pair integrator (DPI) [43] along with a bistability circuit are the main components, while the rest of the required components are needed only once per neuron. In addition, Fig. 7(c) shows the neuron soma circuit and the implemented synaptic plasticity dynamics required for the SDSP rule.

Fig. 7. (a) Schematic diagram of a VLSI learning neuron with an array of SDSP synapses: multiple instances of synaptic circuits source their output currents in parallel into the I&F neuron's membrane capacitance [56]. The I&F neuron integrates the weighted sum of the currents and produces sequences of spikes at the output. (b) Synapse with presynaptic weight update module. An AER asynchronous logic block receives input spikes and generates the pre and !pre pulses. An amplifier in a positive-feedback configuration forms a bistability circuit that slowly drives the weight voltage $V_{Wi}$ toward one of the two stable states, $V_{wlow}$ or $V_{whi}$. The transistors driven by the pre and !pre pulses, together with those controlled by the $V_{UP}$ and $V_{DN}$ signals, implement the weight update. The diff-pair integrator block is a current-mode low-pass filter circuit that generates an output synaptic current $I_{syn}$ with biologically plausible temporal dynamics. This current is then sourced into the $V_{mem}$ node of the I&F circuit. (c) Neuron with postsynaptic weight control module. An I&F neuron circuit integrates the input synaptic currents and produces a spike train at the output. A diff-pair integrator filter generates the $V_{Ca}$ signal, encoding the neuron's mean firing rate. A voltage comparator and a current comparator determine whether to update the synaptic weights of the afferent synapses, and whether to increase or decrease their values.
D. LCP Learning Circuit

In addition to the spike-based and spike-timing-based synaptic plasticity circuits mentioned so far, there is another circuit, proposed by Mayr et al. [27], that uses a hybrid learning rule combining both the timing and the rate of spikes to alter the synaptic weight. This phenomenological rule was introduced in Section II-D. For generating the required exponential decay dynamics for both $u(t)$ and $g(t)$, an OTA-based design approach has been utilized. Similar to the PSTDP design reported in [46], this design exploits a balanced OTA with negative feedback, which acts as a large resistor in the required leaky integrators. However, this design uses an active source degeneration topology to further improve the dynamic range and linearity of the integrator. These integrators are needed both for the membrane potential $u(t)$, which decays linearly back to the resting potential after a postsynaptic pulse duration is finished (hyperpolarization dynamics), and for the conductance variable $g(t)$, which decays exponentially toward zero after a presynaptic spike arrival. The time constants of these integrators can be tuned by changing the resistance of the OTA-based resistor, which in turn can be altered by calibrating the leak bias current in the design. Besides these exponential decay dynamics, the rule needs subtraction and multiplication operations for changing the synaptic weight. These functions are approximated using a differential pair and its tail current [27].
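As a behavioral summary of what this circuit computes, the sketch below integrates (5) with $g(t)$ peaking at presynaptic arrivals and decaying exponentially, while $u(t)$ is supplied as a membrane-potential trace; the threshold and time constant are illustrative.

```python
import numpy as np

def lcp_weight_change(pre_spikes, u_trace, t, theta_u=0.7, tau_g=5e-3):
    """LCP rule (5): dm/dt = (u(t) - theta_u) * g(t). The conductance g jumps
    at each presynaptic arrival and decays, as produced by the OTA-based
    leaky integrators; u above/below theta_u selects LTP/LTD."""
    m, g = 0.0, 0.0
    for k in range(1, len(t)):
        dt = t[k] - t[k - 1]
        g *= np.exp(-dt / tau_g)
        if any(t[k - 1] < s <= t[k] for s in pre_spikes):
            g = 1.0                                # presynaptic spike: g maximal
        m += (u_trace[k] - theta_u) * g * dt       # accumulate the weight change
    return m

t = np.arange(0.0, 0.1, 1e-4)
u = 0.7 + 0.2 * np.sin(2 * np.pi * 20.0 * t)       # toy membrane-potential trace
print(lcp_weight_change([0.010, 0.030], u, t))
```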
E. Biophysical Learning Circuits

The two major biophysical designs available in the literature that are able to demonstrate both PSTDP and BCM-like behavior are the designs proposed in [29] and [30]. The first design implements an elaborate biophysical synaptic plasticity model, which is based on the general biophysical processes taking place in the synapse (see Section II-E). The presented synaptic plasticity circuit utilizes a current-mode design technique in order to implement the targeted biophysical rule that describes the detailed dynamics of the synaptic ion channels [29], [47]. Recently, the same group published another iono-neuromorphic VLSI design, which explores a similar approach for implementing both spike-rate-dependent plasticity (SRDP) and STDP using a unified biophysical synaptic plasticity model, as briefly explained in Section II-F. Similar to their first design, they used a current-mode design technique to implement the required ion channel dynamics [30]. These biophysical designs, like the designs presented in [27] and [46], utilized OTAs to implement part of the required dynamics, as well as to convert voltages to currents. The other circuit building blocks used in these designs include DPs for implementing the required sigmoidal functions, current mirrors for copying currents where needed, and translinear current dividers and multipliers. The complexity of the detailed biophysical rules implemented in these circuits results in a larger number of transistors compared to the implementations of phenomenological rules discussed in Sections IV-A–IV-D.

V. CHALLENGES IN NEUROMORPHIC ENGINEERING

The main advantage of a hardware neuromorphic system lies in its high degree of parallelism, which allows the individual neuromorphic circuits to work on biological time scales in order to minimize power consumption [56]. However, this approach has its own challenges, such as process variation, interconnection, input–output bandwidth, and large silicon area usage when considering large-scale systems. Interestingly, the challenges and constraints faced by neuromorphic engineers when implementing synaptic learning as a main part of neuromorphic systems are similar to the ones encountered in biological learning, such as the lack of long-term weight storage [57], [58] and limited wiring. The main challenges and obstacles one faces when implementing a large-scale neuromorphic system are summarized below.

A. Power Consumption

We know that the brain consists of billions of neurons and trillions of synapses, each of which consumes much less power than its silicon counterpart [4]. Recently, new integrated neural circuitry with a low-power structure has been proposed that consumes even less power per spike than a biological neuron [45]. Although this is a big step toward a low-power spiking neural system, it would be naive to think we are close to a neural system with a power consumption close to that of the brain, since this work does not consider the interconnection and communication among spikes and the power it requires. It also does not take into account the complexity required in the neural and synaptic structures, which is sometimes necessary for specific applications. In addition, the power consumption of a synapse or neuron depends heavily on its model parameters and their values, which can change the weight modification pattern and at the same time lead to high or low power consumption. Another factor that should be considered is the spike pulse width utilized in the neuromorphic design, which can have significant effects on both the functionality and the power consumption of the system. Therefore, an effective approach for decreasing the power consumption of a neuromorphic system is to optimize the neuron and synapse circuit bias parameters, as well as the structure of the spike pulses, such that the system provides the required functionality while consuming the minimum possible power. Besides these, another effective way of implementing power-efficient circuits is to minimize the number of active circuits (e.g., amplifiers) and to use circuits that employ transistors operating deep in the subthreshold domain. This approach allows the design of circuits that operate with extremely low currents (below a picoampere) and low supply voltages. However, operating in the subthreshold region and using lower supply voltages result in greater susceptibility to process variation.
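As a rough, hedged illustration of the power dependences discussed above, the static energy drawn by one subthreshold branch during a spike pulse scales directly with bias current, supply voltage, and pulse width; the values below are hypothetical.

```python
def energy_per_spike(i_bias=1e-9, v_dd=1.2, pulse_width=1e-3):
    """Back-of-envelope energy for one spike event: a single branch
    conducting i_bias from supply v_dd for the duration of the pulse."""
    return i_bias * v_dd * pulse_width    # joules; ~1.2 pJ for these values

# Halving either the pulse width or the bias current halves this estimate,
# which is why both are natural knobs for power optimization.
print(energy_per_spike())
```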
B. Process Variation and Device Mismatch

Due to inevitable variations in device parameters during fabrication in an integrated circuit technology, the resulting devices and circuits will most likely deviate in function and output from their ideal response. Process variations have a particularly strong effect when designing circuits biased in the subthreshold regime, because of the variation in the transistor's threshold voltage. Transistor mismatch is a challenge especially in designs including current mirrors, DPs, and circuits that require exact matching between several components. Many neuron, synapse, and learning circuits are implemented in the subthreshold regime, and many of these designs employ current mirrors and DPs. Therefore, these neural systems are highly susceptible to process variations and device mismatch [24], [27], [40], [60].

Various approaches have been proposed to tackle process variation and mismatch problems. These include post-fabrication calibration [24], [61], mismatch minimization techniques employed at the circuit design stage [30], [47], and off-chip event-based compensation strategies [62], [63]. Among these approaches, mismatch minimization utilizing wide-dynamic-range devices [30], [47], as well as off-chip event-based compensation strategies [63], such as the use of AER mappers and routers (e.g., probabilistic ones) to redistribute events in a way that compensates for mismatch effects, are viable methods for reducing the effect of mismatch in large-scale neuromorphic systems.

In contrast to large-scale neuromorphic systems, several works, such as [23], [25], [26], and [64], have demonstrated that small-scale neuromorphic systems can successfully perform learning tasks even in the presence of unavoidable device mismatch. Giulioni et al. [26] demonstrated how the synaptic plasticity rule described in Section II-C and the learning circuits described in Section IV-C can perform robust classification of uncorrelated and even correlated patterns when embedded in a recurrent network of spiking neurons. In a recent paper [64], the authors explicitly address the problem of device mismatch of these analog plasticity circuits at the network level: they trained an attractor network to express bistable dynamics, and demonstrated how the learning is robust in spite of the small network size and the considerable inhomogeneity of the neuromorphic components. Similar demonstrations exist for other circuit implementations. For example, in [25], Mitra et al. present a network of spiking neurons and plastic synapses and use it to perform robust classification of binary patterns; in [23], Bamford et al. demonstrate the proper operation of STDP circuits, measuring the effect of mismatch in a full network distributed across multiple chips. While these examples demonstrate that both circuit- and network-level solutions exist for obtaining robust learning performance, in principle the variability of synaptic plasticity circuits could create imbalances in potentiation and depression rates, driving most, if not all, synaptic weights in a network to fully depressed or fully potentiated states. It is therefore important to carefully assess the effect of mismatch when designing networks of spiking neurons with learning circuits, and to carefully choose the design strategy of the synaptic plasticity circuit, which will depend on the adopted network architecture.

Although device mismatch is the most significant source of variation in neuromorphic synaptic plasticity circuits, other types of variation, i.e., supply voltage and temperature variations, should also be considered, especially when large-scale neuromorphic systems are targeted.
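The exponential I–V relation of (9) is precisely what makes subthreshold circuits mismatch-sensitive: a gate-referred threshold shift $\delta V$ scales the current by $e^{\kappa\,\delta V/U_T}$. A Monte Carlo sketch of the resulting spread (the 10-mV sigma is a hypothetical figure):

```python
import numpy as np

def mc_current_spread(n_runs=1000, sigma_v=0.010, kappa=0.7, U_T=0.025):
    """Gate-referred offsets of ~10 mV (1-sigma) translate into tens of
    percent of current mismatch between nominally identical transistors."""
    rng = np.random.default_rng(0)
    dv = rng.normal(0.0, sigma_v, n_runs)
    ratio = np.exp(kappa * dv / U_T)   # current relative to the nominal device
    return ratio.mean(), ratio.std()

print(mc_current_spread())             # std of roughly 0.3 for these values
```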
C. Voltage and Temperature (VT) Variation

Voltage variations can arise from supply voltage fluctuations during device operation, while temperature variations can arise both from environmental factors and from the operation of the device itself, which can create heat gradients on the surface of the chip. The effects of these variations can be simulated by CAD tools, using different parameters in the transistor models (the so-called ``corners,'' within which the devices operate correctly). An instance of a simulation that takes these variations into account is shown in Fig. 5(b), where the STDP learning window is produced for various device corners, showing how in this case the circuit is robust to these variations. Since there are three simultaneous sources of variation in an analog VLSI system [i.e., process, voltage, and temperature (PVT)], these variations should be considered together in order to explore the various PVT corners in which the device has its best, typical, or worst characteristics. In addition to the transistors, variations also affect the characteristics of the interconnects, which have their own corners, so devices and interconnects can exhibit their worst performance at different corners. Considering these corners when designing for the targeted application is essential, as the design might be dominated by device corners, interconnect corners, or a mixture of both [65].

D. Silicon Real Estate

The silicon area occupation of neuromorphic systems is related to the area used by each neuron and synapse, and to the way they are connected together. Considerations on the area required by the interconnects are listed in Section V-E. Concerning the area required by silicon neuron and synapse designs, there are two main approaches to consider. One is the biophysically realistic approach, which attempts to model the biophysics of neurons and synapses in great detail and usually produces large circuits, such as the designs proposed in [29] and [30]. The other approach, which aims to implement the phenomenology of the action potential generation mechanisms while sacrificing biological fidelity, usually produces more compact circuits [56]. Perhaps the most critical component, however, is the synapse design, since in learning architectures most of the silicon real estate is consumed by these elements. If the synapses all have the same type of (linear) temporal dynamics, it is possible to exploit the superposition principle and use one single shared (linear) temporal filter circuit to model the temporal dynamics of many synapses [25]. The individual synapse elements are then ``only'' required to implement the weight update mechanism and transmit a weighted pulse to the shared integrator (see the sketch at the end of this subsection). Naturally, the smaller the weight-update circuit, the larger the number of synapses that can be integrated in the same area. There are very promising emerging technologies (e.g., based on 3-D VLSI integration, 3-D VLSI packaging [66], and resistive RAMs) that may offer ways of making extremely compact synapse elements and, consequently, extremely dense synaptic arrays [67], [68].
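A sketch of the superposition argument mentioned above: because the filter is linear, injecting each synapse's weighted pulse into one shared first-order integrator yields the same summed current as replicating the filter per synapse (time constant and weights illustrative).

```python
import numpy as np

def shared_filter_current(weighted_events, t, tau=0.01):
    """One shared linear (first-order) filter serving many synapses: each
    event is a (time, weight) pulse, and superposition makes the shared
    output equal the sum of the per-synapse filter outputs."""
    i_syn = np.zeros_like(t)
    for k in range(1, len(t)):
        i_syn[k] = i_syn[k - 1] * np.exp(-(t[k] - t[k - 1]) / tau)
        i_syn[k] += sum(w for s, w in weighted_events if t[k - 1] < s <= t[k])
    return i_syn

t = np.arange(0.0, 0.1, 1e-4)
events = [(0.010, 0.3), (0.012, 1.0), (0.040, 0.5)]   # (spike time, weight)
i_total = shared_filter_current(events, t)
```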
E. Interconnection and Routing

As in biology, wiring is a significant issue. If each neuron in a neuromorphic architecture were to use a dedicated wire to reach its destination synapses (as real neurons use axons), the area required by interconnects would dominate and, given the essentially 2-D nature of VLSI technology, it would be impossible to design large-scale systems. Fortunately, it is possible to exploit the very large difference in time scales between typical neuron transmission times and silicon communication circuits. In this way, it is possible to time-multiplex the action potentials generated by the silicon neurons and share wires, thereby creating ``virtual axons.'' The most common protocol used in the neuromorphic community to accomplish this is based on the address-event representation (AER) [69], [70]. In this representation, the action potentials generated by a particular neuron are transformed into a digital address that identifies the source neuron, and broadcast asynchronously on a common data bus. By using asynchronous arbiters and routing circuits [71], it is therefore possible to create large-scale neural networks with reconfigurable network topologies. These networks can be distributed within the same chip (e.g., among multiple neural cores [72]) or across multiple chips [73], [74].
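A minimal software analogy of AER (the addresses and routing table are invented for illustration): each spike becomes a (timestamp, source-address) event serialized onto one shared bus, and a router fans each event out to its destination synapses, implementing the ``virtual axons'' described above.

```python
def to_aer(spike_trains):
    """Encode spikes as time-sorted (timestamp, source-address) events;
    serializing them onto one shared bus is the arbiter's job on chip."""
    return sorted((t, a) for a, times in spike_trains.items() for t in times)

def route(events, table):
    """Fan each event out to its destination synapses (reconfigurable
    topology: changing the table rewires the network)."""
    return [(t, dst) for t, src in events for dst in table.get(src, [])]

bus = to_aer({7: [0.001, 0.004], 3: [0.002]})
deliveries = route(bus, {7: [12, 13], 3: [12]})  # neuron 7 targets synapses 12, 13
```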
F. Electronic Design Automation for Large-Scale Neuromorphic Systems

Although there are a number of neuromorphic systems that deal with a relatively large number of analog neurons, designing large-scale neuromorphic systems is still a very complex task. One of the major obstacles is the lack of an electronic design automation (EDA) tool that can facilitate the design procedure while taking into account the targeted design requirements. There are promising recent accomplishments that exploit existing EDA tool chains for automating the design of neuromorphic circuits (for example, for designing the asynchronous logic circuits that make up the arbiters and routers described above [75], [76]). However, there is a need for a new generation of EDA tools optimized for neuromorphic architectures with hybrid analog/digital circuits, asynchronous logic circuits, and networks characterized by very large fan-in and fan-out topologies.

G. Bias Generation for Neuromorphic Circuits

The complex behavior of neural circuits, including neurons and synapses, is controlled by many parameters, including synapse potentiation and depression time constants and amplitudes, neuron spiking thresholds, spiking frequency adaptation, and refractory period parameters. For controlling silicon neurons and synapses, these parameters must be provided as small-scale, high-accuracy voltages and currents. Generating these bias voltages and currents, which usually span a wide range, typically requires a dedicated VLSI circuit that generates them in a programmable and reconfigurable manner. Fortunately, a number of high-resolution, wide-dynamic-range, temperature-compensated analog programmable bias generator circuits are already available in the literature, and these can be used for synaptic plasticity circuits and systems [77], [78]. For large-scale neuromorphic systems with a large number of neurons and synapses, sharing biases among neurons and synapses in close proximity is a practicable approach, and it has been utilized in the Stanford University Neurogrid chips [79] as well as in the FACETS project [50].

The challenges mentioned in Sections V-A–V-G apply to typical neuromorphic systems in general and are not specific to synaptic plasticity circuits. However, a specific challenge in implementing the required synaptic plasticity rules and integrating them into networks of neurons is the synaptic weight storage method, which is discussed in more detail in Section V-H.

H. Synaptic Weight Storage and Stabilization

In neuromorphic architectures, synaptic weights are usually represented as the amount of charge stored on a weight capacitor [24]. However, this weight is not stable: the charge on the capacitor leaks away, slowly losing the synaptic weight value. This instability is due to the leakage of the transistors connected to the capacitor. As a result, the synaptic weight cannot be preserved for longer than a few hundred milliseconds to a few seconds, depending on the capacitance of the weight capacitor. For this reason, early neuromorphic designs used large capacitors [20], [23], [24], [45]. However, this takes up a large portion of the precious silicon real estate and is not compatible with the goal of integrating large-scale neural systems. Therefore, a number of other approaches have been proposed to address this challenge on the way to realizing long-term plasticity in silicon. These approaches are briefly reviewed as follows.

1) Accelerated-Time Synaptic Plasticity Circuits: A number of neuromorphic designers have used a time-scaling approach [24], [27], [50], [80], in which the circuits operate on accelerated time scales (e.g., $10^{3}$ to $10^{5}$ times faster) compared to the timing of real neurons. An instance of an accelerated neuromorphic system is the BrainScaleS wafer-scale system [81]. The main advantages of this approach are: 1) increased speed when emulating large-scale neuromorphic systems, which is useful for long experiments; and 2) a higher degree of density and integration, due to smaller capacitors. On the other hand, the main disadvantages of this approach are: 1) the inability of the accelerated-time system to be directly interfaced to sensors or to directly interact with the environment; and 2) the high bandwidth (and power) required for transmitting spikes across chips.

2) Utilizing Reverse-Biased Transistors to Decrease Leakage: Using reverse-biased transistors in the path of charging/discharging the weight capacitor reduces leakage currents and therefore increases the weight stability on that capacitor. This approach was first proposed by Linares-Barranco and Serrano-Gotarredona [82]. Recently, it was successfully exploited in [23] for storing synaptic weights for a longer period of time, of the order of hundreds of milliseconds. In order to reverse-bias the transistors in a circuit, as proposed in [23], the Gnd and Vdd rails are shifted a few hundred millivolts toward Vdd and Gnd, respectively. By slightly reducing the supply voltage or increasing the ground voltage level, the transistor back gate is in both cases at a higher voltage, resulting in an increased threshold voltage and hence a reduced leakage current. However, the reverse-biasing approach does not provide weight stability in the long term, due to the strong relation between the threshold voltage and the process corners, which are affected by process variations at fabrication time.

3) Digitizing the Synaptic Weight and Storing It in Memory: This approach has been followed in a few ways. In one of the pioneering works on neural networks, presented in 1989, the approach was to serially and periodically refresh the analog weights stored on capacitors with the weights stored in digital memory [84]. This approach, however, needs digital-to-analog converters (DACs) and analog-to-digital converters (ADCs).
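A toy sketch of such a refresh cycle (bit width, leakage time constant, and refresh values are all hypothetical): the analog weight droops between refreshes and is periodically overwritten with the DAC-converted stored code.

```python
import math

def leak(w, dt, tau_leak=0.5):
    """Analog weight decay on the capacitor between refreshes."""
    return w * math.exp(-dt / tau_leak)

def dac_refresh(code, n_bits=5, w_max=1.0):
    """Restore the analog weight from its stored digital code."""
    return w_max * code / (2 ** n_bits - 1)

w = dac_refresh(20)      # 5-b code 20 -> ~0.645
w = leak(w, dt=0.1)      # droops until the next refresh cycle
w = dac_refresh(20)      # periodically restored to the quantized value
```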
Moradi and Indiveri [35], [83] have used a single current-mode DAC, located beside each neuron integrated circuit, to convert 5-b synaptic weights, digitally stored in asynchronous SRAM, into a current that drives the shared synapse integrator, as shown in Fig. 8. The synapses here are therefore considered virtual synapses, and their weights come into effect whenever they receive a spike from the AER system [35], [83]. This approach utilizes a time-multiplexing technique and therefore uses only one DAC per several synapse memory cells. A similar approach of using virtual synapses with digitized synaptic weights has been employed by other neuromorphic engineers, in order to tackle synaptic weight storage and also reduce area usage [85]. In [86], Pfeil et al. discuss the issue of weight digitization for the PSTDP rule and show that, considering the other constraints of neuromorphic designs, increasing the weight storage resolution is not necessarily useful for PSTDP.

Fig. 8. Schematic diagram of the programmable synapse circuit. The top part of the diagram represents a DPI circuit, which implements the temporal dynamics. The bottom part of the diagram represents the DAC that converts the SRAM 5-b weight into a corresponding synaptic current. After [83].

4) Bistability Mechanism: Another approach to synaptic weight stabilization is a bistability mechanism, based on the idea that the long-term state of a synapse is either potentiated or depressed. For example, in the circuit of Fig. 7(b), an amplifier with positive feedback is utilized to slowly drive the synaptic weight, which is stored on the weight capacitor and updated by the desired synaptic plasticity rule in the short term, either upward or downward, depending on whether the current value of the synaptic weight is above or below a predetermined threshold [21], [25], [87]. Analogous approaches use digital circuits to store the binary state (e.g., on SRAM cells [51]), or map and store the weight on a multistage analog memory [88], [89]. In this approach, the synaptic weight is updated continuously while there are spikes, but as soon as there is no activity, the weight is driven toward a high or a low value, depending on whether the current synaptic weight is above or below a set bistability threshold. The bistable nature of synaptic weights has experimental support, as well as benefits over the use of large weight capacitors in large neuromorphic systems [90], [91]. In addition, from a theoretical perspective, it has been argued that the performance of associative networks is not necessarily degraded if the dynamic range of the synaptic efficacy is restricted, even to two stable states [92], [93]. Furthermore, bistable synapses can be implemented in a small area, compared to having large capacitors for preserving the synaptic weights for longer periods of time [21]. Due to these benefits, this technique is a suitable approach for use in all of our reviewed synaptic plasticity circuits, including the STDP and TSTDP circuits. However, this approach has the limitation of being volatile: it loses its state once the system is powered down.
Permanent storage of synaptic weights can be achieved using nonvolatile memory elements, or special devices and technologies, as discussed below.

5) Floating Gate: Floating-gate (FG) technology is another possible approach for nonvolatile storage of synaptic weights. It has been exploited in neuromorphic systems to implement Hebbian-based and STDP learning rules [34], [94]. This storage technique leads to a compact single-transistor implementation of STDP [34], which saves significant silicon area. However, it requires extra circuitry to drive the tunneling and hot-electron injection mechanisms that increase or decrease the stored synaptic weight values.

6) Memristors: The memristor, the fourth fundamental circuit element [95], [96], possesses invaluable characteristics, including nonvolatility, low power, and high density, which are exactly the features sought for implementing large-scale neuromorphic systems. Memristors therefore represent a promising solution to the problem of synaptic weight storage [68], [97]. Memristive arrays can also be integrated with CMOS in order to form nonvolatile synapse circuits [97], [98]. These hybrid CMOS/memristor synapse circuits can then be utilized to implement both computational and detailed biophysical synaptic plasticity rules that are quite useful for neural computation. Although memristors have significant strengths, there are still no established solutions for implementing reliable arrays of devices with well-controlled characteristics: significant issues remain with the accuracy of device programming, the device yield, and device-to-device variations. These issues are currently being addressed by many research groups worldwide, hence presenting unique opportunities for neuromorphic engineering and implementations.

VI. DISCUSSION

So far in this paper, we have taken a circuit design perspective, reviewing several synaptic plasticity rules and discussing individual circuit implementations of these rules. We have also discussed several important challenges of neuromorphic engineering and approaches to tackle them. In the following parts of this paper, we focus on the system-level aspects of neuromorphic design. First, in this section, the use of the aforementioned synaptic plasticity rules and circuits in real neuromorphic learning systems is discussed, and the systems are analyzed in terms of power consumption and silicon real estate. Then, in Section VII, the applications of these neuromorphic systems to real-world engineering tasks are reviewed and discussed. In addition, an example of an effective neuromorphic system is presented, and we describe how it learns to perform an engineering task.

Table 2. Synaptic Plasticity Circuits Comparison.

Table 2 summarizes the key properties of some neuromorphic systems for learning and synaptic plasticity applications. Note that the estimated area and power consumption data in this table only reflect the data reported in the related papers. These numbers depend on many parameters, including the synaptic plasticity rule implemented, the synaptic plasticity parameters, the weight storage techniques, and the network stimulation patterns and protocols. Since, in some papers, the exact power consumption and area requirement of the synapse are not available, the total power and area of the chip are divided by the number of synapses and neurons on the chip, to obtain a rough estimate of the size and power requirement of a synapse. Also, note that the estimated area for each synapse encompasses both the synapse circuit and its synaptic plasticity circuit, which may be reported or implemented separately in the related papers.
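As a concrete illustration of this normalization, the short calculation below derives per-synapse figures from chip-level totals. The numbers are hypothetical stand-ins for the values reported in Table 2, and the result should be read as a rough upper bound, since it folds the neuron and peripheral circuitry into the synapse estimate.

```python
# Hypothetical chip-level figures, used only to illustrate the estimate:
chip_area_um2 = 10e6      # a 10 mm^2 die, expressed in square micrometers
chip_power_w = 5e-3       # 5 mW total chip power
n_synapses = 64_000       # number of on-chip synapses

area_per_synapse = chip_area_um2 / n_synapses    # ~156 um^2 per synapse
power_per_synapse = chip_power_w / n_synapses    # ~78 nW per synapse
print(f"~{area_per_synapse:.0f} um^2 and ~{power_per_synapse * 1e9:.0f} nW per synapse")
```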
The feedforward network with STDP learning presented in [20] successfully implements the targeted synchrony detection, but it consumes significant power and occupies a large area. The high power consumption is due to the power-hungry bias-current distribution network designed to minimize mismatch between synapses. In addition, the area of the STDP circuit is significantly large, due to huge capacitors of the order of several picofarads.

The implementation of an array of neurons with bistable STDP synapses [21] is the next design, with better power and size performance than the first mentioned design [20]. Furthermore, this neuromorphic system utilizes the AER communication protocol and is therefore reconfigurable, in contrast to the hard-wired network structure presented in [20].

The next two neural networks with STDP synapses mentioned in Table 2 are also configurable. This feature allows the network topology to be customized as needed for various studies and applications; the designs in [21] and [50], for instance, have been used to demonstrate the STDP learning window and LTP and LTD characteristics. In terms of silicon real estate required for the STDP circuit, the design in [50] has a compact structure that occupies an area of 50 μm² for the STDP circuit and 100 μm² for the synapse, including STDP, DAC, and the memory cell for storing the synaptic weight. Power consumption information for this FACETS accelerated-time neuromorphic architecture is not listed.

The neuromorphic learning chip presented in [51] also uses STDP, with on-chip SRAM cells storing a binary state of the synaptic weight updated by the STDP circuit. Considering the number of neurons and synapses in this architecture and the overall area of the chip presented in [51], which is 10 mm², this design, which has been used for learning patterns, also has a compact synapse size, on par with the FACETS project chip [50].

The next design reviewed in Table 2 is an adaptive olfactory neural network with on-chip STDP learning [46]. No power consumption information is available in the paper, and the exact area occupied by neurons and synapses on the chip has not been reported. However, considering the die area of the fabricated olfactory chip, the OTA-based synapse circuit with STDP occupies an area larger than that required for the design in [50].

Tanaka et al. [22] developed an accelerated-time neuromorphic chip with STDP learning in a Hopfield network for associative memory. Although they used a VLSI technology similar to that of the design presented in [50], their implemented synapse takes up a significantly larger silicon area. The power consumption of the synapse presented in this work is also 250 μW, which is high for a synapse circuit compared to the other designs presented in Table 2.

In another attempt at implementing STDP, Cruz-Albrecht et al. designed a low-energy test STDP circuit and verified their design in terms of producing the STDP learning window and in terms of its power consumption [45]. The STDP synapse presented in this work consumes only 37 pW of power at 100 Hz. On the other hand, this design, which utilizes several OTAs for realizing the STDP learning window, occupies a large silicon area of 64 823 μm², considering its 90-nm design technology.
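The learning window against which these circuits are verified is the standard pair-based STDP window, following the formulation of Song et al. [13]. A minimal sketch is given below; the amplitudes and time constants are illustrative placeholders, not the parameters of any cited circuit.

```python
import numpy as np

A_PLUS, A_MINUS = 0.01, 0.012        # potentiation/depression amplitudes
TAU_PLUS, TAU_MINUS = 20e-3, 20e-3   # window time constants (seconds)

def stdp_window(dt):
    """Weight change for one spike pair, with dt = t_post - t_pre."""
    if dt >= 0:
        return A_PLUS * np.exp(-dt / TAU_PLUS)   # pre leads post: potentiation
    return -A_MINUS * np.exp(dt / TAU_MINUS)     # post leads pre: depression

# Sampling the window over +/-100 ms reproduces the familiar
# asymmetric double-exponential learning curve:
dts = np.linspace(-0.1, 0.1, 201)
window = np.array([stdp_window(dt) for dt in dts])
```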
Compared to all previously mentioned STDP-based learning circuits and systems, the neuromorphic learning network presented in [99], with 256 neurons and 64 000 synapses, which consumes only 8 nW of power and occupies roughly 13 μm² per synapse in the UVT chip, is the most efficient neuromorphic design. It is shown in [99] that this design can be configured for various cognitive tasks, such as pattern recognition and classification, as well as associative memory.

Further to these designs, Bamford et al. developed a weight-dependent STDP (W-STDP) circuit [23], which differs from the designs mentioned so far, which implement the conventional form of STDP. They showed that the W-STDP design can be implemented by exploiting the physical constraints of CMOS transistors, and that their design therefore has an acceptable area and a low power consumption compared to previous STDP designs. Another W-STDP design is the single-transistor synapse device proposed in [34]. This device utilizes an FG transistor to implement W-STDP, with the synaptic weight changes stored in a nonvolatile manner on the FG. It is shown that this device is able to demonstrate LTP, LTD, and STDP behaviors, and that it is highly scalable.

All neuromorphic systems mentioned so far have used STDP as the learning mechanism in their networks. However, as already mentioned, other synaptic plasticity rules have also been implemented and tested in neuromorphic systems, both for applications and for replicating synaptic plasticity experiments. One of the first designs to use a rule other than STDP for a classification task was presented in [25], which employs the SDSP learning algorithm for synaptic plasticity. The area of this design is comparable to the area required by the STDP learning rule implemented in the previous designs. The authors have also demonstrated the significant performance of the implemented neural network with SDSP learning in classifying complex rate-based patterns [25].

Another neuromorphic system implementing a synaptic plasticity rule other than STDP is the design presented in [27] and [33]. This design implements a BCM-like voltage-dependent rule called LCP (see Section II-D) to replicate synaptic plasticity experiments beyond STDP, such as TSTDP [14] and frequency-dependent STDP [55]. Given its greater ability to replicate synaptic plasticity experiments compared to STDP, this circuit has a higher complexity. Nonetheless, the design presented in [33] is on par with most of the STDP designs presented so far in both power and area requirements.
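For reference, the TSTDP rule mentioned above extends the pair-based update with spike triplets; a compact trace-based sketch of the rule of Pfister and Gerstner [15] follows. The time constants and amplitudes are roughly of the order of the published fits, but should be read as placeholders rather than the parameters of any circuit reviewed here.

```python
import numpy as np

TAU_P, TAU_M = 16.8e-3, 33.7e-3   # pair trace time constants (s)
TAU_X, TAU_Y = 101e-3, 125e-3     # triplet trace time constants (s)
A2P, A2M = 5e-10, 7e-3            # pair potentiation/depression amplitudes
A3P, A3M = 6.2e-3, 2.3e-4         # triplet potentiation/depression amplitudes

def run_tstdp(pre_spikes, post_spikes, w=0.5, dt=1e-4, t_end=1.0):
    """Triplet STDP driven by four exponentially decaying spike traces."""
    r1 = r2 = o1 = o2 = 0.0
    for t in np.arange(0.0, t_end, dt):
        r1 -= dt / TAU_P * r1; r2 -= dt / TAU_X * r2   # presynaptic traces
        o1 -= dt / TAU_M * o1; o2 -= dt / TAU_Y * o2   # postsynaptic traces
        if any(abs(t - s) < dt / 2 for s in pre_spikes):
            w -= o1 * (A2M + A3M * r2)   # depression, boosted by pre-pre pairs
            r1 += 1.0; r2 += 1.0
        if any(abs(t - s) < dt / 2 for s in post_spikes):
            w += r1 * (A2P + A3P * o2)   # potentiation, boosted by post-post pairs
            o1 += 1.0; o2 += 1.0
    return w

# Two pre-post pairings 10 ms apart, repeated after 200 ms:
print(run_tstdp(pre_spikes=[0.1, 0.3], post_spikes=[0.11, 0.31]))
```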
There are also a few biophysical VLSI neuromorphic designs in the literature that take into account the details of synaptic plasticity phenomena and implement their underlying mechanisms in silicon with a high degree of similarity to biological synapses [29], [30]. This similarity gives these synapses the specific ability to account for both SRDP and STDP experiments and to replicate the intracellular dynamics of the synapse, where simpler STDP synapses fail. It also leads to a large silicon area requirement for these circuits, although their reported power consumption is reasonable compared to most of the other VLSI synaptic plasticity designs presented in Table 2.

In addition to custom-made hardware systems, which implement a specific type of learning (synaptic plasticity) rule and use it in a specifically designed and structured spiking neural network for an application or for neuromorphic research, general neural architectures, such as SpiNNaker [100], can be instructed, using software, to implement any desired spiking neural network (whether simple or complex) with any learning rule of choice. In SpiNNaker, the target neural network is numerically simulated on the core processors, and the synaptic weights are stored in shared dynamic random access memory (DRAM). This architecture utilizes an asynchronous design strategy for global routing, so that the power consumption of the design can potentially be improved; it also uses low-power ARM processors and DRAMs to reduce the power consumption of the system. However, implementing a specific synaptic plasticity rule on this general neural architecture consumes more power than a typical custom VLSI design of that rule, due to its software-based approach.

VII. APPLICATIONS OF NEUROMORPHIC CIRCUITS WITH SYNAPTIC PLASTICITY

In order to implement a system with capabilities close to those of the brain, many neuromorphic engineers have followed a bottom-up design strategy: they start by building the basic blocks of the brain in silicon. One of the main building blocks is the synapse, which itself includes the synaptic weight plasticity mechanism; this is the main block that brings about the learning, memory, and computational properties of a neural system [101]. In this section, we briefly discuss and review how VLSI implementations of various synaptic plasticity rules can be useful in learning and real-world applications. We also give an example of an efficient learning network and show how the neurons and synapses in this network are able to learn and, as a result, perform a real-world task.

Since working with live animals and measuring experimental data from biological preparations is time consuming and challenging, perhaps one of the first applications of a neuromorphic system that contains both neurons and synapses with a desired synaptic plasticity rule is as a tool for experimental neuroscientists. They can use a neuromorphic system that acts according to a desired combination of neuron models and synaptic plasticity rules, and experiment with various features and characteristics of that system. For example, the biophysically inspired iono-neuromorphic circuits proposed in [29] and [30] provide useful insight into how the calcium level changes in a real synapse.

Furthermore, since it is widely believed that synaptic plasticity underlies learning and computational power in the brain [101], [102], various mechanisms that have a direct or hypothetical relation to synaptic plasticity experiments are being used as the learning component of spiking neural networks, to perform various cognitive and machine learning tasks [11], [25], [103]. It is known that the spiking behavior and activity of the presynaptic and postsynaptic neurons cause the synapses in the network to adapt themselves to these activities, i.e., to learn. These activities, coded in the form of spikes, represent the input to the network.
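A common and simple way to produce such spike-coded inputs is rate coding with Poisson spike trains, sketched below; the stimulus values and rates are hypothetical, and real systems may use richer temporal or event-driven codes.

```python
import numpy as np

rng = np.random.default_rng(0)

def poisson_encode(rates_hz, t_end=0.5, dt=1e-3):
    """Encode an input vector as Poisson spike trains, one per channel.

    Returns a binary (channels x timesteps) array; in each time bin the
    spike probability is approximately rate * dt.
    """
    rates = np.asarray(rates_hz, dtype=float)[:, None]
    n_bins = int(t_end / dt)
    return (rng.random((rates.size, n_bins)) < rates * dt).astype(np.uint8)

# A hypothetical four-channel stimulus mapped to firing rates and then spikes;
# stronger inputs produce proportionally more spikes:
spikes = poisson_encode([5.0, 20.0, 80.0, 200.0])
print(spikes.shape, spikes.sum(axis=1))
```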
It is therefore essential first to have a spike coding structure that represents the data effectively to the neural network, and then to adapt the synapses in a way that is efficient for learning the given type of input. This means that the learning mechanism, i.e., the synaptic plasticity rule, can depend heavily on the structure of the input to the network, which in turn depends on the application.

Sometimes neuroscientists modify a rule, or even combine a number of rules, for their intended applications: after a careful study of the nature of the input and of the processing required to reach the desired output, they decide on the structure of the learning method. The study presented in [104] is an example of a learning method that couples STDP and spike-frequency adaptation (SFA) for updating synaptic weights, to enable learning in a perceptron-like structure. This work proposes an effective platform for sensory-guided processing, in which auditory and visual sensory inputs both give rise to changes in the spiking activity of the perceptron neuron. It is shown that the visual inputs can act as a teacher in the perceptron learning mechanism, while the auditory inputs are used for updating the synaptic weights and learning the input auditory patterns [104]. Another example is the neuromorphic architecture developed for object recognition and motion anticipation using a modified version of STDP [105].

In another study, TSTDP, which is a modified version of pair-based STDP, is used to generate receptive field development, a well-known feature of the rate-based BCM rule [12]. Gjorgjieva et al. showed that TSTDP can learn up to third-order spatio-temporal correlations, which are of importance in neural coding applications [106]; the PSTDP rule lacks this capability, even though it is also able to account for the rate-based BCM rule under specific assumptions [49], [107]. This capability is useful for developing direction and speed selectivity in the visual cortex [12], so the TSTDP rule also appears promising for pattern classification applications.

The previous three examples show that, depending on the needs of an application, and guided by mathematical and computational analysis, modifications to synaptic plasticity rules can enable tasks that cannot be carried out with the simple forms of plasticity rules such as STDP, SFA, and BCM. Therefore, the nature and needs of an application and its inputs have a direct impact on the synaptic plasticity mechanism, and hence on its VLSI implementation.
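To make the rate-based reference point explicit, the sketch below implements a minimal BCM update with the sliding modification threshold that TSTDP is shown to generalize [12]; the learning rates, time constants, and the linear neuron are illustrative assumptions only.

```python
def bcm_step(w, x, y, theta, dt=1e-3, eta=1e-3, tau_theta=1.0):
    """One Euler step of a rate-based BCM rule with a sliding threshold:
    dw/dt = eta * x * y * (y - theta); d(theta)/dt = (y**2 - theta) / tau_theta.
    Postsynaptic rates y below theta give depression, rates above give potentiation."""
    w_new = w + dt * eta * x * y * (y - theta)
    theta_new = theta + dt * (y**2 - theta) / tau_theta  # tracks the mean of y^2
    return w_new, theta_new

# A single linear neuron driven by a constant presynaptic rate (hypothetical):
w, theta = 0.5, 10.0
for _ in range(5000):
    x = 5.0          # presynaptic firing rate
    y = w * x        # postsynaptic firing rate
    w, theta = bcm_step(w, x, y, theta)
print(w, theta)
```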
Perhaps the conventional form of STDP, which follows the formulation in [13] and agrees with the PSTDP experiments in [19], is the most examined type of synaptic plasticity rule exploited for learning, in applications ranging from topographic map formation [108] to pattern recognition [10] and data set classification [103]. Pair-based STDP has also been utilized for many other learning tasks, including receptive field development through cortical reorganization [109], object recognition and motion anticipation [105], unsupervised learning of visual features [9], learning cross-modal spatial transformations [110], object recognition [11], odor data classification [111], associative memory learning [22], temporal synchrony detection [20], and associative memory together with variability and noise compensation tasks [51]. Although some of these learning applications, such as the last five mentioned works, have been successfully implemented as part of a neuromorphic system, many of the other synaptic plasticity rules that have been modeled on biological experiments performed in vivo and in vitro are yet to be explored by neuromorphic engineers for further applications. Examples of plasticity rules that have not yet been explored for any application are the hybrid rules proposed in [8], [17], and [112], as well as the biophysically based rules proposed in [18], [29], and [30]. In addition to the spike-timing-based rules, other spike-based rules, such as the SDSP rule [16], have been shown to be useful in applications such as supervised learning for real-time pattern classification [25].

In general, when considering the implementation of learning (synaptic plasticity) circuits for specific applications, such as robotics, neuroprostheses, brain-machine interfaces, neural computation, and control, a number of design aspects should be taken into account, including: 1) the nature of the inputs to the system that should be learned; 2) the level of complexity the implemented system and application can afford; 3) the choice of the most appropriate synaptic plasticity rule, in terms of VLSI implementation complexity and performance in processing input neuronal data, that can deliver the required level of performance for the targeted application; and 4) the possible need to modify the structure of available synaptic plasticity rules for better performance, lower implementation complexity, or better input data processing.

As an example, here we review a neuromorphic learning network and answer the above-mentioned questions about it. As already discussed, one of the most effective implementations of a VLSI SNN capable of learning to perform a real-world task is the design presented in [99]. This neuromorphic system is composed of 256 neurons and 256 × 256 synapses, in a crossbar array structure intended for various applications, including an associative memory task. The above-mentioned questions are answered for this system as follows.

1) The input to this system can be set as 256 spike trains, each corresponding to a neuron in the network. These 256 spike trains encode the information embedded in the input pattern and present it to the network of neurons. In the training phase, when patterns are presented for learning, the network changes its weights according to a PSTDP algorithm. In the test phase, the neurons are presented with a partial version of the original pattern, and the network, through its learned weights, recalls the complete pattern as output spikes.

2) The complexity of the targeted task and the number of patterns that can be learned using this neuromorphic system are directly related to the complexity of the network, i.e., its reconfigurability and its neuron and synapse counts.
Since the present network uses only 256 neurons with binary synapses, the results in [99] show that the network can learn only 0.047 patterns per neuron (about 12 patterns in total) in an associative memory task. It is also shown that, if synapses with 4-b precision are used instead of binary synapses, the learning capacity of the hardware network increases to 0.109 patterns per neuron (about 28 patterns).

3) The spiking network implemented in this work has a highly reconfigurable structure with on-chip probabilistic PSTDP learning, thanks to its crossbar architecture and transposable synapse SRAM cells, which make PSTDP possible. Therefore, it can realize various network topologies and perform different cognitive tasks. Obviously, implementing more complex tasks and learning a larger number of patterns requires a large-scale network with high-precision synapses. Since this design is a basic building block for a scalable neuromorphic system, this extension can be carried out easily. The performance of the associative memory task presented for this network (see [99]) shows that, for this application, simple binary PSTDP synapses integrated with digital integrate-and-fire neurons are sufficient.

4) In addition to the main chip, which contains 64 000 probabilistic binary PSTDP synapses and 256 neurons, three different variants of this chip were investigated, targeting different goals such as area, power consumption, and learning capability. It is shown that the variant with the highest learning capability consumes the most power and occupies the largest silicon real estate among all the designs.

In addition to this digital event-driven synchronous neuromorphic learning network, which can be scaled up for various real-time learning tasks, in another work, IBM scientists have proposed a similar digital event-driven neuromorphic synaptic core [72], but this time utilizing asynchronous operation to decrease the active power of the system, and with learning implemented off-chip. This system has been successfully used in various applications, including pattern recognition and autoassociative memory. It also has a one-to-one correspondence with a neural programming model, which makes it possible to realize any type of learning task that can be modeled in software [72]. The questions mentioned above can be answered for this other neuromorphic learning circuit along the same lines as for the first discussed design [99].

It is worth mentioning that the IBM neuromorphic learning network presented in [99] utilizes digital silicon neurons and binary silicon synapses; this neuromorphic learning system is therefore not technically subject to device mismatch. However, as already mentioned in Section V-B, when designing a network of analog learning circuits, device mismatch can lead to inhomogeneity in the synaptic plasticity circuits across the network. This may result in an imbalance between potentiation and depression rates, which can affect the learning performance of the system in any targeted application. Hence, a careful assessment of the effect of mismatch while designing neuromorphic learning systems is essential [60].
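A simple Monte Carlo sketch of such an assessment is shown below: per-synapse potentiation and depression gains are drawn with a hypothetical 20% relative spread, and the fraction of synapses whose potentiation/depression balance is strongly biased is counted. Both the spread and the 30% imbalance criterion are illustrative assumptions, not measured mismatch figures.

```python
import numpy as np

rng = np.random.default_rng(1)

N = 1000                             # number of plastic synapses in the network
sigma = 0.2                          # assumed 20% relative mismatch spread
a_plus = rng.normal(1.0, sigma, N)   # per-synapse potentiation gain
a_minus = rng.normal(1.0, sigma, N)  # per-synapse depression gain

# Synapses whose potentiation/depression ratio deviates strongly from 1
# tend to saturate at a weight bound, degrading learning performance:
balance = a_plus / a_minus
frac_biased = np.mean(np.abs(balance - 1.0) > 0.3)
print(f"{100 * frac_biased:.1f}% of synapses exceed a 30% rate imbalance")
```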
The reviewed VLSI implementations of synaptic plasticity rules for various applications show that current neuromorphic VLSI designs are still far behind conventional machine-learning systems, which can be implemented in hardware or software and easily outperform neuromorphic systems in many respects, from performance to design complexity. However, since learning in neuromorphic systems builds upon the extraordinary learning capabilities of the brain, research in this area is useful for deciphering the mystery of brain-like learning, which can open new avenues for science and engineering and revolutionize human life.

VIII. CONCLUSION

Synaptic plasticity is believed to be responsible for acquiring computational capabilities, learning, and memory in the brain. One should understand the underlying mechanisms of the plasticity rules and their computational roles before utilizing them for learning and processing in real-world applications. Recent advances in VLSI technology, combined with progress in experimental neuroscience and neuromorphic circuit design techniques, have led to useful implementations of these rules in hardware. However, most of these implementations can only serve as proofs of principle. Successfully applying neuromorphic circuits in real-world applications, potentially replacing or enhancing some of the conventional technologies and approaches used today, requires the development of large-scale neuromorphic systems that go beyond single-chip or single-core solutions [113]. One of the most challenging tasks that must be addressed to achieve this is therefore interchip, or intermodule, communication. Currently, both single-wafer and multicore or multichip solutions based on asynchronous logic are being investigated [73], [75], [114]. In addition, promising emerging technologies such as 3-D VLSI could provide efficient solutions to this problem.

Another open challenge hindering progress in the design of large-scale neuromorphic systems is the lack of appropriate EDA tools to assist neuromorphic designers in the design, verification, and testing phases. As already mentioned, there are currently several promising design automation tools for generating the asynchronous logic circuits that are helpful for designing interconnect circuits in large-scale neuromorphic systems, but further development of mixed analog/digital design tools is needed.

The area requirement for synaptic weight storage is another challenge for large-scale neuromorphic systems. It can be addressed with the use of newly developed resistive memory elements, which are integrable with CMOS technology, occupy a small area, and consume little power. However, these resistive elements are susceptible to variations and suffer from low yields, issues that should be effectively addressed before they are utilized in large-scale systems.

All these and the other mentioned challenges are currently being addressed by an active and enthusiastic research community. The small group of neuromorphic engineers that was once limited to a dozen research laboratories around the world in the mid-1990s is now flourishing, with many more groups spread around the globe, and with increasing support from both research funding organizations and strong industrial microelectronics groups.
In general, with the many efforts and initiatives now starting in the field of neuromorphic engineering, the future of this field is very promising, and the ongoing research on implementations of learning mechanisms in neuromorphic systems is likely to lead to systems that can be used in real-world applications in the near future.

Acknowledgment

The authors would like to thank the associate editor, Prof. K. Boahen, as well as the anonymous reviewers for their fruitful and constructive comments that improved the quality of this paper.

REFERENCES

[1] L. Abbott and S. Nelson, "Synaptic plasticity: Taming the beast," Nature Neurosci., vol. 3, pp. 1178–1183, Nov. 2000.
[2] D. Hebb, The Organization of Behavior: A Neuropsychological Theory. London, U.K.: Lawrence Erlbaum, 2002.
[3] L. Cooper, N. Intrator, B. Blais, and H. Shouval, Theory of Cortical Plasticity. Singapore: World Scientific, 2004.
[4] C. Mead, "Neuromorphic electronic systems," Proc. IEEE, vol. 78, no. 10, pp. 1629–1636, Oct. 1990.
[5] G. Indiveri and T. Horiuchi, "Frontiers in neuromorphic engineering," Front. Neurosci., vol. 5, no. 118, 2011, DOI: 10.3389/fnins.2011.00118.
[6] H. Markram, W. Gerstner, and P. J. Sjöström, "Spike-timing-dependent plasticity: A comprehensive overview," Front. Synaptic Neurosci., vol. 4, no. 2, 2012, DOI: 10.3389/fnsyn.2012.00002.
[7] J. Lisman and N. Spruston, "Questions about STDP as a general model of synaptic plasticity," Front. Synaptic Neurosci., vol. 2, no. 140, 2010, DOI: 10.3389/fnsyn.2010.00140.
[8] C. Mayr and J. Partzsch, "Rate and pulse based plasticity governed by local synaptic state variables," Front. Synaptic Neurosci., vol. 2, no. 33, 2010, DOI: 10.3389/fnsyn.2010.00033.
[9] T. Masquelier and S. J. Thorpe, "Unsupervised learning of visual features through spike timing dependent plasticity," PLoS Comput. Biol., vol. 3, no. 2, 2007, DOI: 10.1371/journal.pcbi.0030031.
[10] T. Masquelier, R. Guyonneau, and S. J. Thorpe, "Spike timing dependent plasticity finds the start of repeating patterns in continuous spike trains," PLoS One, vol. 3, no. 1, 2008, DOI: 10.1371/journal.pone.0001377.
[11] T. Masquelier and S. J. Thorpe, "Learning to recognize objects using waves of spikes and spike timing-dependent plasticity," in Proc. Int. Joint Conf. Neural Netw., 2010, DOI: 10.1109/IJCNN.2010.5596934.
[12] J. Gjorgjieva, C. Clopath, J. Audet, and J. Pfister, "A triplet spike-timing-dependent plasticity model generalizes the Bienenstock–Cooper–Munro rule to higher-order spatiotemporal correlations," Proc. Nat. Acad. Sci., vol. 108, no. 48, pp. 19383–19388, 2011.
[13] S. Song, K. Miller, and L. Abbott, "Competitive Hebbian learning through spike-timing-dependent synaptic plasticity," Nature Neurosci., vol. 3, pp. 919–926, 2000.
[14] R. Froemke and Y. Dan, "Spike-timing-dependent synaptic modification induced by natural spike trains," Nature, vol. 416, no. 6879, pp. 433–438, 2002.
[15] J. Pfister and W. Gerstner, "Triplets of spikes in a model of spike timing-dependent plasticity," J. Neurosci., vol. 26, no. 38, pp. 9673–9682, 2006.
[16] J. Brader, W. Senn, and S. Fusi, "Learning real-world stimuli in a neural network with spike-driven synaptic dynamics," Neural Comput., vol. 19, no. 11, pp. 2881–2912, 2007.
[17] C. Clopath and W. Gerstner, "Voltage and spike timing interact in STDP: A unified model," Front. Synaptic Neurosci., vol. 2, no. 25, 2010, DOI: 10.3389/fnsyn.2010.00025.
[18] H. Z. Shouval, M. F. Bear, and L. N. Cooper, "A unified model of NMDA receptor-dependent bidirectional synaptic plasticity," Proc. Nat. Acad. Sci. USA, vol. 99, no. 16, pp. 10831–10836, 2002.
[19] G. Bi and M. Poo, "Synaptic modifications in cultured hippocampal neurons: Dependence on spike timing, synaptic strength, and postsynaptic cell type," J. Neurosci., vol. 18, no. 24, pp. 10464–10472, 1998.
[20] A. Bofill-I-Petit and A. Murray, "Synchrony detection and amplification by silicon neurons with STDP synapses," IEEE Trans. Neural Netw., vol. 15, no. 5, pp. 1296–1304, Sep. 2004.
[21] G. Indiveri, E. Chicca, and R. Douglas, "A VLSI array of low-power spiking neurons and bistable synapses with spike-timing dependent plasticity," IEEE Trans. Neural Netw., vol. 17, no. 1, pp. 211–221, Jan. 2006.
[22] H. Tanaka, T. Morie, and K. Aihara, "A CMOS spiking neural network circuit with symmetric/asymmetric STDP function," IEICE Trans. Fund. Electron. Commun. Comput. Sci., vol. E92-A, no. 7, pp. 1690–1698, 2009.
[23] S. Bamford, A. Murray, and D. Willshaw, "Spike-timing dependent plasticity with weight dependence evoked from physical constraints," IEEE Trans. Biomed. Circuits Syst., vol. 6, no. 4, pp. 385–398, Aug. 2012.
[24] M. R. Azghadi, S. Al-Sarawi, D. Abbott, and N. Iannella, "A neuromorphic VLSI design for spike timing and rate based synaptic plasticity," Neural Netw., vol. 45, pp. 70–82, 2013.
[25] S. Mitra, S. Fusi, and G. Indiveri, "Real-time classification of complex patterns using spike-based learning in neuromorphic VLSI," IEEE Trans. Biomed. Circuits Syst., vol. 3, no. 1, pp. 32–42, Feb. 2009.
[26] M. Giulioni, M. Pannunzi, D. Badoni, V. Dante, and P. Del Giudice, "Classification of correlated patterns with a configurable analog VLSI neural network of spiking neurons and self-regulating plastic synapses," Neural Comput., vol. 21, no. 11, pp. 3106–3129, 2009.
[27] C. Mayr, M. Noack, J. Partzsch, and R. Schuffny, "Replicating experimental spike and rate based neural learning in CMOS," in Proc. IEEE Int. Symp. Circuits Syst., 2010, pp. 105–108.
[28] C.-C. Lee, "Kinetic modeling of amyloid fibrillation and synaptic plasticity as memory loss and formation mechanisms," Ph.D. dissertation, Dept. Chem. Eng., Massachusetts Inst. Technol., Cambridge, MA, USA, 2008.
[29] Y. Meng, K. Zhou, J. Monzon, and C. Poon, "Iono-neuromorphic implementation of spike-timing-dependent synaptic plasticity," in Proc. Annu. Int. Conf. IEEE Eng. Med. Biol. Soc., 2011, pp. 7274–7277.
[30] G. Rachmuth, H. Shouval, M. Bear, and C. Poon, "A biophysically-based neuromorphic model of spike rate- and timing-dependent plasticity," Proc. Nat. Acad. Sci. USA, vol. 108, no. 49, pp. E1266–E1274, 2011.
[31] N. Iannella and S. Tanaka, "Synaptic efficacy cluster formation across the dendrite via STDP," Neurosci. Lett., vol. 403, no. 1–2, pp. 24–29, 2006.
[32] N. Iannella, T. Launey, and S. Tanaka, "Spike timing-dependent plasticity as the origin of the formation of clustered synaptic efficacy engrams," Front. Comput. Neurosci., vol. 4, no. 20, 2010, DOI: 10.3389/fncom.2010.00021.
[33] C. Mayr, J. Partzsch, M. Noack, and R. Schuffny, "Live demonstration: Multiple-timescale plasticity in a neuromorphic system," in Proc. IEEE Int. Symp. Circuits Syst., May 2013, pp. 666–670.
[34] S. Ramakrishnan, P. Hasler, and C. Gordon, "Floating gate synapses with spike-time-dependent plasticity," IEEE Trans. Biomed. Circuits Syst., vol. 5, no. 3, pp. 244–252, Jun. 2011.
[35] S. Moradi and G. Indiveri, "A VLSI network of spiking neurons with an asynchronous static random access memory," in Proc. IEEE Biomed. Circuits Syst. Conf., 2011, pp. 277–280.
[36] M. R. Azghadi, S. Moradi, and G. Indiveri, "Programmable neuromorphic circuits for spike-based neural dynamics," in Proc. 11th IEEE Int. New Circuit Syst. Conf., 2013, DOI: 10.1109/NEWCAS.2013.6573600.
[37] S.-C. Liu, T. Delbruck, J. Kramer, G. Indiveri, and R. Douglas, Analog VLSI: Circuits and Principles. Cambridge, MA, USA: MIT Press, 2002.
[38] A. G. Andreou, K. A. Boahen, P. O. Pouliquen, A. Pavasovic, R. E. Jenkins, and K. Strohbehn, "Current-mode subthreshold MOS circuits for analog VLSI neural systems," IEEE Trans. Neural Netw., vol. 2, no. 2, pp. 205–213, Mar. 1991.
[39] M. R. Azghadi, O. Kavehei, S. Al-Sarawi, N. Iannella, and D. Abbott, "Novel VLSI implementation for triplet-based spike-timing dependent plasticity," in Proc. 7th Int. Conf. Intell. Sensors Sensor Netw. Inf. Process., 2011, pp. 158–162.
[40] M. R. Azghadi, S. Al-Sarawi, N. Iannella, and D. Abbott, "Efficient design of triplet based spike-timing dependent plasticity," in Proc. IEEE Int. Joint Conf. Neural Netw., 2012, DOI: 10.1109/IJCNN.2012.6252820.
[41] R. Douglas, M. Mahowald, and C. Mead, "Neuromorphic analogue VLSI," Annu. Rev. Neurosci., vol. 18, pp. 255–281, 1995.
[42] H. R. Wilson and J. D. Cowan, "Excitatory and inhibitory interactions in localized populations of model neurons," Biophys. J., vol. 12, no. 1, pp. 1–24, 1972.
[43] C. Bartolozzi and G. Indiveri, "Synaptic dynamics in analog VLSI," Neural Comput., vol. 19, no. 10, pp. 2581–2603, 2007.
[44] B. Razavi, Design of Analog CMOS Integrated Circuits. New York, NY, USA: McGraw-Hill, 2002.
[45] J. M. Cruz-Albrecht, M. W. Yung, and N. Srinivasa, "Energy-efficient neuron, synapse and STDP integrated circuits," IEEE Trans. Biomed. Circuits Syst., vol. 6, no. 3, pp. 246–256, Jun. 2012.
[46] T. Koickal, A. Hamilton, S. Tan, J. Covington, J. Gardner, and T. Pearce, "Analog VLSI circuit implementation of an adaptive neuromorphic olfaction chip," IEEE Trans. Circuits Syst. I, Reg. Papers, vol. 54, no. 1, pp. 60–73, Jan. 2007.
[47] G. Rachmuth and C.-S. Poon, "Transistor analogs of emergent iono-neuronal dynamics," HFSP J., vol. 2, no. 3, pp. 156–166, 2008.
[48] K. Cameron, V. Boonsobhak, A. Murray, and D. Renshaw, "Spike timing dependent plasticity (STDP) can ameliorate process variations in neuromorphic VLSI," IEEE Trans. Neural Netw., vol. 16, no. 6, pp. 1626–1637, Nov. 2005.
[49] M. R. Azghadi, S. Al-Sarawi, N. Iannella, and D. Abbott, "Design and implementation of BCM rule based on spike-timing dependent plasticity," in Proc. IEEE Int. Joint Conf. Neural Netw., 2012, DOI: 10.1109/IJCNN.2012.6252778.
[50] J. Schemmel, A. Grubl, K. Meier, and E. Mueller, "Implementing synaptic plasticity in a VLSI spiking neural network model," in Proc. Int. Joint Conf. Neural Netw., 2006, DOI: 10.1109/IJCNN.2006.246651.
[51] J. V. Arthur and K. Boahen, "Learning in silicon: Timing is everything," in Advances in Neural Information Processing Systems 17. Cambridge, MA, USA: MIT Press, 2006, pp. 75–82.
[52] M. Azghadi, S. Al-Sarawi, N. Iannella, and D. Abbott, "A new compact analog VLSI model for spike timing dependent plasticity," in Proc. IFIP/IEEE 21st Int. Conf. Very Large Scale Integr., Oct. 2013, pp. 7–12.
[53] M. Azghadi, S. Al-Sarawi, N. Iannella, and D. Abbott, "Tunable low energy, compact and high performance neuromorphic circuit for spike-based synaptic plasticity," PLoS ONE, vol. 9, no. 2, 2014, DOI: 10.1371/journal.pone.0088326.
[54] H. Wang, R. Gerkin, D. Nauen, and G. Bi, "Coactivation and timing-dependent integration of synaptic potentiation and depression," Nature Neurosci., vol. 8, no. 2, pp. 187–193, 2005.
[55] P. Sjöström, G. Turrigiano, and S. Nelson, "Rate, timing, and cooperativity jointly determine cortical synaptic plasticity," Neuron, vol. 32, no. 6, pp. 1149–1164, 2001.
[56] G. Indiveri, B. Linares-Barranco, T. Hamilton, A. Van Schaik, R. Etienne-Cummings, T. Delbruck, S. Liu, P. Dudek, P. Häfliger, S. Renaud, J. Schemmel, G. Cauwenberghs, J. Arthur, K. Hynna, F. Folowosele, S. Saighi, T. Serrano-Gotarredona, J. Wijekoon, Y. Wang, and K. Boahen, "Neuromorphic silicon neuron circuits," Front. Neurosci., vol. 5, no. 73, 2011, DOI: 10.3389/fnins.2011.00073.
[57] J. E. Lisman, "A mechanism for memory storage insensitive to molecular turnover: A bistable autophosphorylating kinase," Proc. Nat. Acad. Sci. USA, vol. 82, no. 9, pp. 3055–3057, 1985.
[58] D. H. O'Connor, G. M. Wittenberg, and S. S.-H. Wang, "Graded bidirectional synaptic plasticity is composed of switch-like unitary events," Proc. Nat. Acad. Sci. USA, vol. 102, no. 27, pp. 9679–9684, 2005.
[59] H. Markram, J. Lübke, M. Frotscher, and B. Sakmann, "Regulation of synaptic efficacy by coincidence of postsynaptic APs and EPSPs," Science, vol. 275, no. 5297, pp. 213–215, 1997.
[60] C. Poon and K. Zhou, "Neuromorphic silicon neurons and large-scale neural networks: Challenges and opportunities," Front. Neurosci., vol. 5, no. 108, 2011, DOI: 10.3389/fnins.2011.00108.
[61] D. Brüderle, M. Petrovici, B. Vogginger, T. Pfeil, S. Millner, A. Grübl, K. Wendt, E. Müller, M.-O. Schwartz, S. Jeltsch, J. Fieres, P. Müller, O. Breitwieser, L. Muller, A. Davison, J. Kremkow, M. Lundqvist, J. Partzsch, S. Scholze, L. Zühl, C. Mayr, A. Destexhe, M. Diesmann, T. Potjans, A. Lansner, R. Schüffny, J. Schemmel, and K. Meier, "A comprehensive workflow for general-purpose neural modeling with highly configurable neuromorphic hardware systems," Biol. Cybern., vol. 104, no. 4, pp. 263–296, 2011.
[62] E. Neftci and G. Indiveri, "A device mismatch compensation method for VLSI neural networks," in Proc. IEEE Biomed. Circuits Syst. Conf., 2010, pp. 262–265.
[63] S. Choudhary, S. Sloan, S. Fok, A. Neckar, E. Trautmann, P. Gao, T. Stewart, C. Eliasmith, and K. Boahen, "Silicon neurons that compute," in Artificial Neural Networks and Machine Learning: ICANN 2012, vol. 7552. Berlin, Germany: Springer-Verlag, 2012, pp. 121–128.
[64] M. Giulioni, P. Camilleri, M. Mattia, V. Dante, J. Braun, and P. Del Giudice, "Robust working memory in an asynchronously spiking neural network realized in neuromorphic VLSI," Front. Neurosci., vol. 5, no. 149, 2012, DOI: 10.3389/fnins.2011.00149.
[65] N. Weste and D. Harris, CMOS VLSI Design: A Circuits and Systems Perspective. Reading, MA, USA: Addison-Wesley, 2005.
[66] S. Al-Sarawi, D. Abbott, and P. Franzon, "A review of 3-D packaging technology," IEEE Trans. Compon. Packag. Manuf. Technol. B, Adv. Packag., vol. 21, no. 1, pp. 2–14, Feb. 1998.
[67] K. Likharev and D. Strukov, "CMOL: Devices, circuits, and architectures," in Introducing Molecular Electronics, vol. 680. Berlin, Germany: Springer-Verlag, 2005, pp. 447–477.
[68] T. Serrano-Gotarredona, T. Masquelier, T. Prodromakis, G. Indiveri, and B. Linares-Barranco, "STDP and STDP variations with memristors for spiking neuromorphic learning systems," Front. Neurosci., vol. 7, no. 2, 2013. [Online]. Available: http://www.frontiersin.org/neuroscience/10.3389/fnins.2013.00002/full
[69] S. Deiss, T. Delbruck, R. Douglas, M. Fischer, M. Mahowald, T. Matthews, and A. Whatley, "Address-event asynchronous local broadcast protocol," 1994. [Online]. Available: http://www.ini.uzh.ch/~amw/scx/aeprotocol.html
[70] K. A. Boahen, "Point-to-point connectivity between neuromorphic chips using address events," IEEE Trans. Circuits Syst. II, Analog Digit. Signal Process., vol. 47, no. 5, pp. 416–434, May 2000.
[71] S. Moradi, N. Imam, R. Manohar, and G. Indiveri, "A memory-efficient routing method for large-scale spiking neural networks," in Proc. Eur. Conf. Circuit Theory Design, 2013, DOI: 10.1109/ECCTD.2013.6662203.
[72] J. Arthur, P. Merolla, F. Akopyan, R. Alvarez, A. Cassidy, S. Chandra, S. Esser, N. Imam, W. Risk, D. Rubin, R. Manohar, and D. Modha, "Building block of a programmable neuromorphic substrate: A digital neurosynaptic core," in Proc. Int. Joint Conf. Neural Netw., 2012, DOI: 10.1109/IJCNN.2012.6252637.
[73] P. Merolla, J. Arthur, R. Alvarez, J.-M. Bussat, and K. Boahen, "A multicast tree router for multichip neuromorphic systems," IEEE Trans. Circuits Syst. I, Reg. Papers, vol. 61, no. 3, pp. 820–833, Mar. 2014, DOI: 10.1109/TCSI.2013.2284184.
[74] E. Chicca, A. Whatley, P. Lichtsteiner, V. Dante, T. Delbruck, P. Del Giudice, R. Douglas, and G. Indiveri, "A multi-chip pulse-based neuromorphic infrastructure and its application to a model of orientation selectivity," IEEE Trans. Circuits Syst. I, Reg. Papers, vol. 54, no. 5, pp. 981–993, May 2007.
[75] N. Imam, F. Akopyan, J. Arthur, P. Merolla, R. Manohar, and D. Modha, "A digital neurosynaptic core using event-driven QDI circuits," in Proc. 18th IEEE Int. Symp. Asynchron. Circuits Syst., May 2012, pp. 25–32.
[76] H. Mostafa, F. Corradi, M. Osswald, and G. Indiveri, "Automated synthesis of asynchronous event-based interfaces for neuromorphic systems," in Proc. Eur. Conf. Circuit Theory Design, 2013, DOI: 10.1109/ECCTD.2013.6662213.
[77] T. Delbrück and A. Van Schaik, "Bias current generators with wide dynamic range," Analog Integr. Circuits Signal Process., vol. 43, no. 3, pp. 247–268, 2005.
[78] T. Delbruck, R. Berner, P. Lichtsteiner, and C. Dualibe, "32-bit configurable bias current generator with sub-off-current capability," in Proc. IEEE Int. Symp. Circuits Syst., 2010, pp. 1647–1650.
[79] P. Gao, B. Benjamin, and K. Boahen, "Dynamical system guided mapping of quantitative neuronal models onto neuromorphic hardware," IEEE Trans. Circuits Syst. I, Reg. Papers, vol. 59, no. 10, pp. 2383–2394, Oct. 2012.
[80] J. Wijekoon and P. Dudek, "Compact silicon neuron circuit with spiking and bursting behaviour," Neural Netw., vol. 21, no. 2–3, pp. 524–534, Mar.–Apr. 2008.
[81] J. Schemmel, D. Bruderle, A. Grubl, M. Hock, K. Meier, and S. Millner, "A wafer-scale neuromorphic hardware system for large-scale neural modeling," in Proc. IEEE Int. Symp. Circuits Syst., 2010, pp. 1947–1950.
[82] B. Linares-Barranco and T. Serrano-Gotarredona, "On the design and characterization of femtoampere current-mode circuits," IEEE J. Solid-State Circuits, vol. 38, no. 8, pp. 1353–1363, Aug. 2003.
[83] S. Moradi and G. Indiveri, "An event-based neural network architecture with an asynchronous programmable synaptic memory," IEEE Trans. Biomed. Circuits Syst., vol. 8, no. 1, pp. 98–107, Feb. 2014, DOI: 10.1109/TBCAS.2013.2255873.
[84] S. Eberhardt, T. Duong, and A. Thakoor, "Design of parallel hardware neural network systems from custom analog VLSI 'building block' chips," in Proc. Int. Joint Conf. Neural Netw., 1989, pp. 183–190.
[85] R. J. Vogelstein, U. Mallik, J. T. Vogelstein, and G. Cauwenberghs, "Dynamically reconfigurable silicon array of spiking neurons with conductance-based synapses," IEEE Trans. Neural Netw., vol. 18, no. 1, pp. 253–265, Jan. 2007.
[86] T. Pfeil, T. C. Potjans, S. Schrader, W. Potjans, J. Schemmel, M. Diesmann, and K. Meier, "Is a 4-bit synaptic weight resolution enough? Constraints on enabling spike-timing dependent plasticity in neuromorphic hardware," Front. Neurosci., vol. 6, no. 90, 2012, DOI: 10.3389/fnins.2012.00090.
[87] E. Chicca, D. Badoni, V. Dante, M. D'Andreagiovanni, G. Salina, L. Carota, S. Fusi, and P. Del Giudice, "A VLSI recurrent network of integrate-and-fire neurons connected by plastic synapses with long-term memory," IEEE Trans. Neural Netw., vol. 14, no. 5, pp. 1297–1307, Sep. 2003.
[88] P. Hafliger and H. Kolle Riis, "A multi-level static memory cell," in Proc. Int. Symp. Circuits Syst., 2003, vol. 1, pp. I-25–I-28.
[89] P. Hafliger, "Adaptive WTA with an analog VLSI neuromorphic learning chip," IEEE Trans. Neural Netw., vol. 18, no. 2, pp. 551–572, Feb. 2007.
[90] T. V. Bliss and G. L. Collingridge, "A synaptic model of memory: Long-term potentiation in the hippocampus," Nature, vol. 361, no. 6407, pp. 31–39, 1993.
[91] C. C. Petersen, R. C. Malenka, R. A. Nicoll, and J. J. Hopfield, "All-or-none potentiation at CA3-CA1 synapses," Proc. Nat. Acad. Sci. USA, vol. 95, no. 8, pp. 4732–4737, 1998.
[92] D. J. Amit and S. Fusi, "Learning in neural networks with material synapses," Neural Comput., vol. 6, no. 5, pp. 957–982, 1994.
[93] S. Fusi, M. Annunziato, D. Badoni, A. Salamon, and D. J. Amit, "Spike-driven synaptic plasticity: Theory, simulation, VLSI implementation," Neural Comput., vol. 12, no. 10, pp. 2227–2258, 2000.
[94] C. Gordon and P. Hasler, "Biological learning modeled in an adaptive floating-gate system," in Proc. IEEE Int. Symp. Circuits Syst., 2002, vol. 5, pp. 609–612.
[95] D. Strukov, G. Snider, D. Stewart, and R. Williams, "The missing memristor found," Nature, vol. 453, no. 7191, pp. 80–83, 2008.
[96] K. Eshraghian, O. Kavehei, J. Chappell, A. Iqbal, S. Al-Sarawi, and D. Abbott, "Memristive device fundamentals and modeling: Applications to circuits and systems simulation," Proc. IEEE, vol. 100, no. 6, pp. 1991–2007, Jun. 2012.
[97] S. H. Jo, T. Chang, I. Ebong, B. B. Bhadviya, P. Mazumder, and W. Lu, "Nanoscale memristor device as synapse in neuromorphic systems," Nano Lett., vol. 10, no. 4, pp. 1297–1301, 2010.
[98] G. Indiveri, B. Linares-Barranco, R. Legenstein, G. Deligeorgis, and T. Prodromakis, "Integration of nanoscale memristor synapses in neuromorphic computing architectures," Nanotechnology, vol. 24, no. 38, 2013, DOI: 10.1088/0957-4484/24/38/384010.
[99] J.-S. Seo, B. Brezzo, Y. Liu, B. D. Parker, S. K. Esser, R. K. Montoye, B. Rajendran, J. A. Tierno, L. Chang, and D. S. Modha, "A 45 nm CMOS neuromorphic chip with a scalable architecture for learning in networks of spiking neurons," in Proc. IEEE Custom Integr. Circuits Conf., 2011, DOI: 10.1109/CICC.2011.6055293.
[100] S. B. Furber, D. R. Lester, L. A. Plana, J. D. Garside, E. Painkras, S. Temple, and A. D. Brown, "Overview of the SpiNNaker system architecture," IEEE Trans. Comput., vol. 62, no. 12, pp. 2454–2467, Dec. 2013.
[101] W. Gerstner and W. Kistler, Spiking Neuron Models: Single Neurons, Populations, Plasticity. Cambridge, U.K.: Cambridge Univ. Press, 2002.
[102] H. Z. Shouval, "What is the appropriate description level for synaptic plasticity?" Proc. Nat. Acad. Sci. USA, vol. 108, no. 48, pp. 19103–19104, 2011.
[103] A. Oliveri, R. Rizzo, and A. Chella, "An application of spike-timing-dependent plasticity to readout circuit for liquid state machine," in Proc. Int. Joint Conf. Neural Netw., 2007, pp. 1441–1445.
[104] P. DSouza, S.-C. Liu, and R. H. Hahnloser, "Perceptron learning rule derived from spike-frequency adaptation and spike-time-dependent plasticity," Proc. Nat. Acad. Sci. USA, vol. 107, no. 10, pp. 4722–4727, 2010.
[105] A. Nere, U. Olcese, D. Balduzzi, and G. Tononi, "A neuromorphic architecture for object recognition and motion anticipation using burst-STDP," PLoS One, vol. 7, no. 5, 2012, DOI: 10.1371/journal.pone.0036958.
[106] J. W. Pillow, J. Shlens, L. Paninski, A. Sher, A. M. Litke, E. Chichilnisky, and E. P. Simoncelli, "Spatio-temporal correlations and visual signalling in a complete neuronal population," Nature, vol. 454, no. 7207, pp. 995–999, 2008.
[107] E. M. Izhikevich and N. S. Desai, "Relating STDP to BCM," Neural Comput., vol. 15, no. 7, pp. 1511–1523, 2003.
[108] S. A. Bamford, A. F. Murray, and D. J. Willshaw, "Synaptic rewiring for topographic mapping and receptive field development," Neural Netw., vol. 23, no. 4, pp. 517–527, 2010.
[109] J. M. Young, W. J. Waleszczyk, C. Wang, M. B. Calford, B. Dreher, and K. Obermayer, "Cortical reorganization consistent with spike timing, but not correlation-dependent plasticity," Nature Neurosci., vol. 10, no. 7, pp. 887–895, 2007.
[110] A. P. Davison and Y. Frégnac, "Learning cross-modal spatial transformations through spike timing-dependent plasticity," J. Neurosci., vol. 26, no. 21, pp. 5604–5615, 2006.
[111] H.-Y. Hsieh and K.-T. Tang, "A spiking neural network chip for odor data classification," in Proc. IEEE Asia Pacific Conf. Circuits Syst., 2012, pp. 88–91.
[112] M. Graupner and N. Brunel, "Calcium-based plasticity model explains sensitivity of synaptic changes to spike pattern, rate, and dendritic location," Proc. Nat. Acad. Sci. USA, vol. 109, no. 10, pp. 3991–3996, 2012.
[113] J. Hasler and H. B. Marr, "Finding a roadmap to achieve large neuromorphic hardware systems," Front. Neurosci., vol. 7, no. 118, 2013, DOI: 10.3389/fnins.2013.00118.
[114] S. Scholze, S. Schiefer, J. Partzsch, S. Hartmann, C. G. Mayr, S. Höppner, H. Eisenreich, S. Henker, B. Vogginger, and R. Schüffny, "VLSI implementation of a 2.8 Gevent/s packet-based AER interface with routing and event sorting functionality," Front. Neurosci., vol. 5, no. 117, 2011, DOI: 10.3389/fnins.2011.00117.

ABOUT THE AUTHORS

Mostafa Rahimi Azghadi (Student Member, IEEE) received the M.Sc. degree in computer architecture engineering from the National University of Iran (Shahid Beheshti University), Tehran, Iran, in 2009. He is currently working toward the Ph.D. degree in electrical and electronic engineering at The University of Adelaide, Adelaide, S.A., Australia.
His current research interests include neuromorphic engineering and very large-scale integration (VLSI) implementation of learning algorithms in spiking neural networks.

Mr. Rahimi was a recipient of several international awards and scholarships, including: the International Postgraduate Research Scholarship (IPRS) from the Australian Government (2010), the Adelaide University Scholarship (AUS) (2010), the Japanese Neural Network Society travel award (2011), the Brain Corporation fellowship for Spiking Neural Networks (2012), the IEEE Computational Intelligence Society travel award (2012), the Research Abroad scholarship (2012), the D. R. Stranks fellowship (2012), the Doreen McCarthy Bursary (2013), the IEEE SA Section Student Travel Award (2013), and the Simon Rockliff Scholarship (2013). He has been a program committee member and reviewer for several recognized international journals and conferences, such as the IEEE Biomedical Circuits and Systems Conference (BioCAS), the IEEE International Symposium on Circuits and Systems, and the IEEE TRANSACTIONS ON BIOMEDICAL CIRCUITS AND SYSTEMS.

Nicolangelo Iannella (Member, IEEE) received the B.Sc. degree in theoretical and mathematical physics from The University of Adelaide, Adelaide, S.A., Australia, in 1990, the B.Sc. (honors) and M.Sc. (Res) degrees in theoretical physics from The Flinders University of South Australia, Bedford Park, S.A., Australia, in 1991 and 1995, respectively, and the Ph.D. degree in engineering, majoring in computational neuroscience, from The University of Electro-Communications, Tokyo, Japan, in 2009.

In 1997, he joined the RIKEN Brain Science Institute (BSI), Saitama, Japan, as a technician, and was later a Postdoctoral Researcher at RIKEN BSI. He won the prestigious Australian Research Council (ARC) Australian Postdoctoral Award (APD) fellowship in 2010, and is currently based at the School of Electrical and Electronic Engineering, The University of Adelaide, Adelaide, S.A., Australia. His research interests include synaptic plasticity, mathematical and computational modeling of neurons and spiking neural networks, ranging from simple abstract to biophysically detailed models, systems biology, and neuromorphic engineering.

Dr. Iannella is a member of the Society for Neuroscience (SfN) and has been a program committee member and reviewer for several international journals and conferences.

Said F. Al-Sarawi (Member, IEEE) received the general certificate in marine radio communication and the B.Eng. degree (first class honors) in marine electronics and communication from the Arab Academy for Science and Technology (AAST), Alexandria, Egypt, in 1987 and 1990, respectively, and the Ph.D. degree in mixed analog and digital circuit design techniques for smart wireless systems with special commendation in electrical and electronic engineering from The University of Adelaide, Adelaide, S.A., Australia, in 2003. He also received the Graduate Certificate in Education (Higher Education) from the same university in 2006.

Currently, he is the Director of the Centre for Biomedical Engineering and a founding member of the Education Research Group of Adelaide (ERGA) at The University of Adelaide.
His research interests include design techniques for mixed-signal systems in complementary metal-oxide-semiconductor (CMOS) and optoelectronic technologies for high-performance radio transceivers, low-power and low-voltage radio-frequency identification (RFID) systems, data converters, mixed-signal design, and microelectromechanical systems (MEMS) for biomedical applications. His current educational research is focused on innovative teaching techniques for engineering education, research skill development, and factors affecting students' evaluations of courses in different disciplines.

Dr. Al-Sarawi was awarded The University of Adelaide Alumni Postgraduate Medal (formerly the Culross Prize) for outstanding academic merit at the postgraduate level. While pursuing his Ph.D., he won the Commonwealth Postgraduate Research Award (Industry).

Giacomo Indiveri (Senior Member, IEEE) received the M.Sc. degree in electrical engineering from the University of Genoa, Genova, Italy, in 1992. Subsequently, he was awarded a doctoral postgraduate fellowship within the National Research and Training Program on "Technologies for Bioelectronics," from which he graduated summa cum laude in 1995. He received the Ph.D. degree in computer science and electrical engineering from the University of Genoa in 2004, and the "Habilitation" certificate in neuromorphic engineering from ETH Zurich, Zurich, Switzerland, in 2006.

He is an Associate Professor at the Faculty of Science, University of Zurich, Zurich, Switzerland. He carried out research on neuromorphic vision sensors as a Postdoctoral Research Fellow in the Division of Biology, California Institute of Technology, Pasadena, CA, USA, and on neuromorphic selective attention systems as a Postdoctoral Researcher at the Institute of Neuroinformatics, University of Zurich and ETH Zurich. His current research interests lie in the study of real and artificial neural processing systems and in the hardware implementation of neuromorphic cognitive systems, using full-custom analog and digital VLSI technology.

Dr. Indiveri is a member of several Technical Committees (TCs) of the IEEE Circuits and Systems Society and a Fellow of the European Research Council.

Derek Abbott (Fellow, IEEE) was born in South Kensington, London, U.K., in 1960. He received the B.Sc. (honors) degree in physics from Loughborough University, Loughborough, Leicestershire, U.K., in 1982 and the Ph.D. degree in electrical and electronic engineering from The University of Adelaide, Adelaide, S.A., Australia, in 1995, under K. Eshraghian and B. R. Davis.

From 1978 to 1986, he was a Research Engineer at the GEC Hirst Research Centre, London, U.K. From 1986 to 1987, he was a VLSI Design Engineer at Austek Microsystems, Australia. Since 1987, he has been with The University of Adelaide, where he is presently a full Professor with the School of Electrical and Electronic Engineering. He coedited Quantum Aspects of Life (London, U.K.: Imperial College Press, 2008), coauthored Stochastic Resonance (Cambridge, U.K.: Cambridge University Press, 2012), and coauthored Terahertz Imaging for Biomedical Applications (New York, NY, USA: Springer-Verlag, 2012). He holds over 800 publications/patents and has been an invited speaker at over 100 institutions. His interests are in the area of multidisciplinary physics and electronic engineering applied to complex systems. His research programs span a number of areas of stochastics, game theory, photonics, biomedical engineering, and computational neuroscience.
Prof. Abbott is a Fellow of the Institute of Physics (IOP). He has won a number of awards, including the South Australian Tall Poppy Award for Science (2004), the Premier's SA Great Award in Science and Technology for outstanding contributions to South Australia (2004), and an Australian Research Council (ARC) Future Fellowship (2012). He has served as an Editor and/or Guest Editor for a number of journals, including the IEEE JOURNAL OF SOLID-STATE CIRCUITS, Journal of Optics B, the Microelectronics Journal, Chaos, Smart Structures and Materials, and Fluctuation and Noise Letters, and is currently on the editorial boards of the PROCEEDINGS OF THE IEEE, the IEEE PHOTONICS JOURNAL, and PLoS ONE.