Exploring the Evolution of NoC-Based Spiking
Neural Networks on FPGAs
F Morgan 1, S Cawley 1, B Mc Ginley 1, S Pande 1, LJ Mc Daid2, B Glackin2, J Maher1, J Harkin2
1 Bio-Inspired Electronics and Reconfigurable Computing Research Group (BIRC), National University of Ireland, Galway
2 Intelligent Systems Research Centre, University of Ulster, Magee Campus, Derry, Northern Ireland
fearghal.morgan@nuigalway.ie, jg.harkin@ulster.ac.uk
Abstract—Bio-inspired paradigms such as Spiking Neural
Networks (SNNs) offer the potential to emulate the repairing and
adaptive ability of the brain. This paper presents EMBRACE-FPGA, a scalable, configurable Network on Chip (NoC)-based
SNN architecture, implemented on Xilinx Virtex II-Pro FPGA
hardware. In association with a Genetic Algorithm-based
hardware evolution platform, EMBRACE-FPGA provides a
computing platform for intrinsic hardware evolution, which can
be used to explore the evolution and adaptive capabilities of
hardware SNNs. Results demonstrate the application of the
hardware SNN evolution platform to solve the XOR benchmark
problem.
Keywords: Evolvable Hardware, Spiking Neural Networks,
Network on Chip, FPGA.
I. INTRODUCTION
Bio-inspired concepts such as neural networks, evolution
and learning have attracted much attention recently because of
a growing interest in the automatic design of complex and
intelligent systems, capable of adaptation and fault tolerance.
Engineers and computer scientists have studied these
biological concepts in an effort to replicate their desired
qualities [1, 2] in computing systems. The basic processing
units in organic central nervous systems are neurons which are
interconnected in a complex pattern using synapses. The
current understanding of biological neurons is that they
communicate through pulses and employ the relative timing of
pulses to transmit information and perform computations.
Spiking Neural Networks (SNNs) [2-4] emulate real
biological neurons, conveying information through the
transmission of short transient spikes between neurons, via
their synaptic connections. A neuron fires when the sum of its
weighted input spikes exceeds a certain threshold. Figure 1
illustrates a two-layer, fully-connected, feed-forward SNN
topology and a typical spike train between two neurons, the
potential resulting in the destination neuron, and the firing of
the neuron output spike. By representing spikes as either
‘high’ or ‘low’ values, i.e. as single bits, hardware and
memory requirements are reduced dramatically compared to
those of traditional multi-layer perceptron neural networks
[35].
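The threshold-fire behaviour described above can be sketched with a simple leaky integrate-and-fire model. This is an illustrative sketch only: the EMBRACE neuron cell is an analogue circuit, and its exact dynamics are not reproduced here.

```python
# Illustrative leaky integrate-and-fire sketch of the behaviour
# described in the text: a neuron fires when the sum of its weighted
# single-bit input spikes exceeds a threshold. Not the EMBRACE circuit.

def simulate_neuron(spike_trains, weights, threshold, leak=0.98):
    """spike_trains: list of per-input binary spike lists (one bit per step)."""
    potential = 0.0
    output = []
    for step_inputs in zip(*spike_trains):
        # accumulate weighted single-bit input spikes, with decay
        potential = potential * leak + sum(w * s for w, s in zip(weights, step_inputs))
        if potential >= threshold:
            output.append(1)   # fire an output spike ...
            potential = 0.0    # ... and reset the membrane potential
        else:
            output.append(0)
    return output
```

Representing each spike as a single bit, as in the sketch, is what gives the hardware and memory savings noted above.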
Fig 1. SNN connection topology, spike train, neuron potential and output spike firing event

The authors have investigated and proposed a mixed-signal hardware architecture (EMBRACE) [7], still to be realised as a low power, scalable, NoC-based, embedded computing element which supports the implementation of large scale SNNs. The EMBRACE architecture incorporates a compact, low power, CMOS-compatible analogue SNN neuron cell architecture which offers synaptic densities significantly in excess of those currently achievable in other hardware SNNs [8-11, 33]. In addition, a packet-based NoC topology [5, 15, 17-20] is used to provide flexible, time-multiplexed inter-neuron communication channels, scalable interconnect and reconfigurability.
This paper presents EMBRACE-FPGA, a scalable, configurable NoC-based SNN architecture, implemented on FPGA hardware. EMBRACE-FPGA is integrated within a Genetic Algorithm (GA)-based intrinsic hardware Evolution Platform (Evo Platform) to provide an adaptive computing platform for exploration of the evolution capabilities of hardware SNNs for a range of applications. Results are presented which demonstrate the evolution and reconfigurability capabilities of the hardware SNN architecture running on a Xilinx Virtex II-Pro FPGA and solving the XOR [22] problem. The paper describes the EMBRACE-FPGA architecture and the role of NoCs in supporting hardware evolution by providing flexible communication for spike transmission and runtime reconfiguration of SNN parameters. This work provides an FPGA hardware SNN prototype implementation incorporating the proposed EMBRACE NoC architecture. The paper demonstrates how EMBRACE-FPGA supports the evolution and configuration of SNN connection topologies, synaptic weights and neuron firing thresholds. EMBRACE-FPGA can support re-evolution and re-configuration to adapt to situations where faults occur or the environment changes.
The structure of the paper is as follows: section II
highlights related work in the hardware implementation of
SNNs. Section III describes the EMBRACE-FPGA SNN
NoC-based tile architecture and section IV describes the GA-based hardware SNN evolution system. Section V describes
the XOR hardware SNN benchmark implementation and
results. Section VI concludes the paper and proposes future
work.
II. RELATED WORK
Inspired by biology, researchers aim to implement
reconfigurable and highly interconnected arrays of neural
network elements in hardware to produce powerful signal
processing units [5-11, 32, 33]. Execution architectures for
SNN neural computing platforms can be broadly categorised
as software-based (multi-processor) [5, 34], FPGA [9, 10, 24,
33] or analogue/mixed-signal [6, 7, 11, 22, 31].
The SpiNNaker project [5] aims to develop a massively
parallel computer capable of simulating SNNs of various sizes
and topology, and with programmable neuron models.
Software implementation of the neuron model provides
flexibility for the SNN computation model. SpiNNaker is
aimed at exploring the potential of the spiking neuron as a
component from which useful systems may be engineered.
The SpiNNaker SNN architecture does not target embedded
systems applications due to its size, cost and power
requirements.
FPGA-based architectures offer highly flexible system design. Pearson et al. [9] describe an FPGA-based array
processor architecture, which can simulate large networks
(>1k neurons). The processor architecture is a single
instruction path, multiple data path (SIMD) array processor
and uses a bus-based communication protocol which limits its
scalability. Ros et al. [10] present an FPGA-based hybrid
computing platform. The neuron model is implemented in
hardware and the network model and learning are
implemented in software. Glackin et al. [24] use a time multiplexing technique, implemented in software, to realise large SNN models (with >1.9M synapses and 4.2K neurons); speed acceleration is the key motivation of this approach, and the parallel capability of SNNs is not exploited.
Analogue design approaches benefit from compact area
implementations due to the inherent similarity with the way
electrical charge flows in the brain [6, 11]. Efficient, low area
and low power implementations of neuron interconnect and
synaptic junctions are key to scalable hardware SNN
implementations [6]. These architectures rely on digital
components for a flexible communication infrastructure.
Ehrlich et al. [23] and Schemmel [31] present FACETS, a
configurable wafer-scale mixed-signal neural ASIC system,
targeting neuron densities of 10^5 neurons. The work proposes
a hierarchical neural network and the use of analogue floating
gate memory for synaptic weights. The synaptic ion channel
circuit comprises op-amp and operational transconductance
amplifiers along with passive RC components. It remains to
be seen how this approach scales in relation to the size of
SNNs that can be realised in hardware. Similarly, Vogelstein
at al. [11] present a mixed-signal SNN architecture of 2,400
analogue neurons, implemented using switched capacitor
technology and communicating via an asynchronous eventdriven bus.
For large scale SNNs, hardware interconnect introduces
problems due to high levels of inter-neuron connectivity.
Often the number of neurons that can be realised in hardware is limited by high fan-in/out requirements [16]. Direct neuron-to-neuron interconnection exhibits switching requirements that grow non-linearly with the network mesh size.
III. EMBRACE-FPGA SNN ARCHITECTURE
This section describes the EMBRACE-FPGA SNN
architecture and operation. Figure 2 illustrates the
EMBRACE-FPGA Evo Platform, which includes a 2-dimensional NxM array of interconnected SNN neural tiles. Each neural tile is connected in the North, East, South and West directions.
Fig 2. EMBRACE-FPGA Evolvable Hardware SNN block diagram
Figure 3 illustrates the EMBRACE-FPGA NoC-based SNN
hardware neural tile block diagram which includes NoC router
and neural cell (neuron/synapse elements). Each neural tile
can be programmed to realise neuron-level functions.
Communication between neural tiles is achieved by routing
data packets through round robin-based *InRq and *OutRq
neural tile ports, where * represents North (N), S, E and W. A
rq/ack packet handshake protocol is used.
Fig 3. SNN neural tile block diagram, including neural cell and main elements
of the NoC router. n is the maximum number of synapses per tile.
The network of routers within EMBRACE-FPGA allows
spike signals to be propagated from source tile to destination
tile using time-multiplexing of the connecting lines between
NoC routers. Generation of spikes is dependent on temporal spike behaviour and active synapse weights. Neuron fire requests are buffered, allowing spike packets to be queued for transmission to the connected synapses.
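The spike-routing behaviour described above can be sketched as follows. The round-robin arbitration over the buffered input ports follows the tile description; the XY (dimension-ordered) routing decision and the packet/port names are assumptions for illustration, as the paper does not specify the routing algorithm.

```python
# Sketch of a neural tile router: round-robin arbitration over the
# four buffered input ports, forwarding each spike packet one hop
# toward its destination tile. XY routing and the coordinate
# convention are illustrative assumptions.
from collections import deque

PORTS = ["N", "E", "S", "W"]

class TileRouter:
    def __init__(self, x, y):
        self.x, self.y = x, y
        self.in_q = {p: deque() for p in PORTS}   # buffered fire requests
        self.rr = 0                               # round-robin pointer

    def route(self, packet):
        """Return the output port for a spike packet, or 'LOCAL'."""
        dx = packet["dest"][0] - self.x
        dy = packet["dest"][1] - self.y
        if dx > 0: return "E"
        if dx < 0: return "W"
        if dy > 0: return "N"
        if dy < 0: return "S"
        return "LOCAL"   # deliver to this tile's neural cell

    def step(self):
        """Serve one queued packet using round-robin arbitration."""
        for i in range(len(PORTS)):
            port = PORTS[(self.rr + i) % len(PORTS)]
            if self.in_q[port]:
                self.rr = (self.rr + i + 1) % len(PORTS)
                return self.route(self.in_q[port].popleft())
        return None   # no pending requests this cycle
```

In the real tile, each hop would additionally use the rq/ack handshake described above before a packet is accepted.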
Figure 4 illustrates the variation in membrane potential of one neuron, obtained using read back of the
neuron potential from the EMBRACE-FPGA hardware
platform. The neuron fires at the point where the potential
decreases sharply. The irregular step pattern of Figure 4
illustrates the asynchronous nature of incoming spikes,
received via the NoC.
Fig 4. Neuron membrane potential obtained using read back of the neuron potential from the EMBRACE-FPGA hardware platform

The SNN router finite state machine (Figure 3) manages router activity using a round-robin policy. The state machine also handles incoming NoC packet requests received by the router. Neural tile configuration data is stored in the neural tile configuration memory (cfgMem). NoC configuration packets contain the destination neural tile address along with the configuration parameter address and data.

Training algorithms such as GAs [29] (used in this work) can be used to strengthen or weaken neuron synapse weight values to reflect a mapping between the SNN input and desired output data. The EMBRACE-FPGA NoC routers allow new synaptic weight values to be passed to the appropriate synapse within configuration data packets. The NoC enables the adaptability of SNNs in hardware to be exploited, as synapses can be easily accessed for reconfiguration or even relocated to different tiles, without the complex issue of online physical re-routing between newly relocated synapses and neurons. EMBRACE-FPGA provides a platform that can take full advantage of the inherent parallelism of SNNs and also explore the potential of evolving SNNs to meet the adaptability required for fault-tolerant embedded computing.

IV. GA-BASED HARDWARE SNN EVOLUTION SYSTEM

This section presents the GA-based intrinsic evolution platform, used to train the NoC-based EMBRACE-FPGA hardware SNN and to demonstrate a solution to the XOR benchmark application. Intrinsic evolution involves implementing and evaluating each evolved individual (from a GA's population) in hardware. This differs from extrinsic evolution, which evaluates each individual using simulation prior to selecting the best individual for implementation in hardware. The GA evolves the SNN population parameters for each tile (synaptic weights and neuron firing threshold). SNN input signal spike trains representing the current SNN input values are generated on-chip (controlled by the host, Figure 2). SNN output spikes are routed to the spike monitor block (Figure 2), which measures the output spike frequency. The spike monitor provides SNN output values to the host for performance (fitness) evaluation. This evolutionary process continues until high fitness convergence is achieved. The host controls configuration and spike packets generated into the EMBRACE-FPGA SNN.

V. XOR HARDWARE SNN

This section presents the application of the EMBRACE-FPGA hardware SNN to a 3-neuron XOR benchmark problem. The EMBRACE-FPGA Virtex II-Pro SNN implementation contains a 32-neuron, 32-synapses-per-neuron SNN. The system supports evolution of SNN connection topology, synaptic weights and neuron threshold potentials. Figure 5 illustrates the Evo platform used for evolution of the SNN-based XOR benchmark function.

Fig 5. Evo platform for evolution of an SNN-based XOR benchmark function
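The intrinsic evolution flow of Section IV can be sketched from the host's point of view as follows, using the GA operators of Table I. The `evaluate_on_hardware` callback stands in for configuring the FPGA SNN, applying input spike trains and reading the spike monitor; all names and the one-point crossover are illustrative assumptions.

```python
# Host-side sketch of intrinsic evolution: every individual is scored
# by running it on the hardware SNN (via the evaluate_on_hardware
# callback). Roulette-wheel selection with a single elite individual,
# per Table I; operator details are illustrative.
import random

def evolve(pop_size, n_params, evaluate_on_hardware, generations=50,
           p_mut=0.01, p_cross=0.9):
    pop = [[random.random() for _ in range(n_params)] for _ in range(pop_size)]
    best = None
    for _ in range(generations):
        # fitness of each individual comes from the hardware run
        scored = sorted(((evaluate_on_hardware(ind), ind) for ind in pop),
                        reverse=True)
        best = scored[0]
        total = sum(f for f, _ in scored) or 1.0

        def pick():  # roulette-wheel selection
            r, acc = random.random() * total, 0.0
            for f, ind in scored:
                acc += f
                if acc >= r:
                    return ind
            return scored[-1][1]

        nxt = [best[1][:]]                    # single elite individual
        while len(nxt) < pop_size:
            a, b = pick(), pick()
            if random.random() < p_cross:     # one-point crossover
                cut = random.randrange(1, n_params)
                a = a[:cut] + b[cut:]
            nxt.append([g if random.random() > p_mut else random.random()
                        for g in a])          # per-gene mutation
        pop = nxt
    return best   # (fitness, parameter list)
```

For the XOR SNN below, `n_params` would be 9 (3 neurons, each with 2 weights and 1 threshold) and the fitness would follow the Table II assignment.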
The XOR problem has been used as a standard benchmark to
verify the operation or evaluate the performance of ANN [14,
22] and SNN [27] frameworks. XOR is a sub-problem of more complicated problems [25] and is an example of a linearly non-separable task, for which classical perceptrons fail to realise a solution. The inability to solve XOR (and other non-linearly separable tasks) is overcome by employing multiple layers of neurons. Three interconnected and configured neural tiles
provide the two-neuron hidden SNN layer and the single SNN
output neuron for the XOR implementation. GA parameters
are listed in Table I.
Mutation Probability:        0.01
Crossover Probability:       0.9
Selection:                   Roulette-wheel selection with a single elite individual
Population Size:             50
Maximum no. of generations:  50
Table I. GA parameters for EMBRACE-FPGA evolution
Table II illustrates the fitness score assignment used for the
two input XOR benchmark, for a sequence of four 2-bit binary
inputs.
Number of correct outputs:       0  1  2  3  4
Fitness score assignment value:  0  1  4  9  16
Table II. XOR fitness score assignment
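The Table II assignment is the square of the number of correct outputs, which rewards fully correct individuals super-linearly. A minimal sketch (the `snn_output` callable stands in for stimulating the hardware SNN and thresholding the monitored output spike frequency):

```python
# Fitness assignment from Table II: score = (number of correct XOR
# outputs over the four 2-bit input patterns) squared.

XOR_PATTERNS = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 0)]

def xor_fitness(snn_output):
    """snn_output: callable mapping a 2-bit input pair to 0/1."""
    correct = sum(snn_output(a, b) == expected
                  for (a, b), expected in XOR_PATTERNS)
    return correct ** 2   # 0,1,2,3,4 correct -> 0,1,4,9,16
```

Squaring steepens the fitness landscape so that the GA strongly prefers individuals that get all four patterns right over those that get most of them right.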
The Evo platform GA uses a population size of 25 individuals. Each individual consists of 3 genes, one for each neuron. For the XOR SNN, each gene comprises three values: 2 synapse weights and 1 neuron firing threshold. Figure 6 plots the average and best fitness of the evolved EMBRACE-FPGA hardware SNN XOR function using the GA parameters of Table I (averaged over 20 generations).
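Unpacking one individual into per-tile configuration values might look as follows. The genome layout (3 genes, each holding 2 synapse weights and 1 firing threshold) follows the text above; the packet field names are illustrative and are not the actual EMBRACE-FPGA configuration packet format.

```python
# Sketch: map a flat 9-value XOR genome onto per-tile configuration
# packets (destination tile address, parameter name, value). The
# (addr, param, value) triple mirrors the configuration packet fields
# described in Section III; names and layout here are illustrative.

def genome_to_config(genome, tile_addrs):
    """genome: flat list of 9 values; tile_addrs: 3 neural tile addresses."""
    assert len(genome) == 9 and len(tile_addrs) == 3
    packets = []
    for i, addr in enumerate(tile_addrs):
        w0, w1, threshold = genome[3 * i:3 * i + 3]
        # one configuration packet per evolved parameter
        packets.append((addr, "weight0", w0))
        packets.append((addr, "weight1", w1))
        packets.append((addr, "threshold", threshold))
    return packets
```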
Fig 6. Average and best fitness of the evolved EMBRACE-FPGA hardware
SNN XOR function
VI. CONCLUSIONS AND FUTURE WORK
This paper presents the EMBRACE-FPGA hardware NoC-based SNN architecture and FPGA prototype implementation. The scalable EMBRACE-FPGA SNN architecture and NoC operation have been described. The GA-based intrinsic evolution platform incorporating EMBRACE-FPGA on Virtex II-Pro has been presented and used to evolve
a hardware SNN solution to the XOR problem. The paper
proposes the EMBRACE-FPGA system as an adaptive
hardware SNN computing platform which can be used to
explore the evolution capabilities of NoC-based hardware
SNNs.
Future work will include evolution of larger hardware
SNNs and various SNN connection topologies, weights and
firing thresholds for more complex applications. The
EMBRACE-FPGA architecture will be applied in fault-tolerant
applications and adaptive, changing environment applications.
The EMBRACE-FPGA research contributes to the
EMBRACE project which is researching the use of a compact,
CMOS-compatible analogue SNN neural cell within a
hierarchical SNN NoC-based topology.
ACKNOWLEDGMENTS
This research is supported by Science Foundation Ireland (Grant No. 07/SRC/I1169), the Irish Research Council for Science, Engineering and Technology (IRCSET), the International Centre for Graduate Education in Micro and Nano Engineering (ICGEE) and the Xilinx University Programme.
REFERENCES
[1] W. Maass, Computation with Spiking Neurons: The Handbook of Brain
Theory and NNs, 2nd ed., MIT Press, 2001.
[2] S. Grossberg et al., “Spiking Neurons in Neuroscience and
Technology”, Neural Networks, vol. 14 (6-7), p. 587, Jul. 2001.
[3] W. Maass, “Networks of spiking neurons: The third generation of
neural network models,” Neural Networks, vol. 10(9), p. 1659, 1997.
[4] W. Gerstner and W. Kistler, Spiking neuron models, Cambridge
University Press, New York, 2002.
[5] S. Furber et al., “SpiNNaker: Mapping NNs onto a Massively-Parallel
Chip Multiprocessor”, in Proc. WCCI’08, 2008, pp. 2849-2856.
[6] Y. Chen et al., “A Programmable Facilitating Synapse Device”, in Proc
WCCI’08, 2008, pp. 1615-1620.
[7] J. Harkin et al., "A Reconfigurable and Biologically Inspired Paradigm for Computation using Networks-on-chip and Spiking Neural Networks", International Journal of Reconfigurable Computing, vol. 2009, article ID 908740, 2009.
[8] A. Upegui, C. Peña-Reyes and E. Sanchez, "An FPGA platform for on-line topology exploration of spiking neural networks", Microprocessors and Microsystems, vol. 29, no. 5, pp. 211–223, 2005.
[9] M. Pearson et al., "Implementing SNNs for Real-Time Signal Processing and Control Applications: A Model-Validated FPGA Approach", IEEE Transactions on Neural Networks, vol. 18, no. 5, 2007.
[10] E. Ros et al., "Real-Time Computing Platform for Spiking Neurons (RT-Spike)", IEEE Transactions on Neural Networks, vol. 17(4), 2006.
[11] R. Vogelstein et al., "Dynamically reconfigurable silicon array of spiking neurons with conductance-based synapses", IEEE Transactions on Neural Networks, vol. 18, no. 1, pp. 253–265, 2007.
[12] B. McGinley et al., "Reconfigurable Analogue Hardware Evolution of Adaptive Spiking Neural Network Controllers", in Proc. GECCO'08, 2008, pp. 289–290.
[13] P. Rocke et al., "Investigating the suitability of FPAAs for Evolved Hardware Spiking Neural Networks", in Proc. ICES'08, 2008, p. 118.
[14] J. Maher et al., "Intrinsic Hardware Evolution of Neural Networks in Reconfigurable Analogue and Digital Devices", in Proc. FCCM'06, 2006, pp. 321–322.
[15] J. Harkin et al., "Reconfigurable Platforms and the Challenges for Large-scale Implementations of Spiking Neural Networks", in Proc. FPL'08, 2008, pp. 483–486.
[16] L. P. Maguire et al., "Challenges for large-scale implementations of spiking neural networks on FPGAs", Neurocomputing, vol. 71, no. 1-3, pp. 13–29, 2007.
[17] A. DeHon and R. Rubin, "Design of FPGA Interconnect for Multilevel Metallization", IEEE Transactions on VLSI Systems, vol. 12(10), pp. 1038–1050, 2004.
[18] L. Benini and G. De Micheli, "Networks on Chips: A New SoC Paradigm", Computer, vol. 35(1), pp. 70–78, Jan. 2002.
[19] J. Harkin et al., "Novel Interconnect Strategy for Large Scale Implementations of NNs", in Proc. IEEE SMCIA'07, 2007.
[20] S. Jovanovic et al., "CuNoC: A Scalable Dynamic NoC for Dynamically Reconfigurable FPGAs", in Proc. FPL'07, 2007, p. 753.
[21] K. Goossens et al., "Hardwired NoCs in FPGAs to unify Functional and Configuration Interconnects", in Proc. of Int. Symposium on Networks-on-Chip, 2008.
[22] H. El-Bakry, "Modular neural networks for solving high complexity problems", in Proc. of Int. Joint Conference on Neural Networks, 2003.
[23] M. Ehrlich et al., "Wafer-Scale VLSI implementations of pulse coupled neural networks", in Proc. IEEE SSD'07, 2007.
[24] B. Glackin et al., "A Novel Approach for the Implementation of Large-Scale SNNs on FPGAs", in Proc. Artificial NNs Conf., pp. 552–563, 2005.
[25] S. Fahlman, "An Empirical Study of Learning Speed in Back-propagation Networks", Tech. report, 1988.
[26] O. Booij and H. Nguyen, "A gradient descent rule for spiking neurons emitting multiple spikes", Information Processing Letters, vol. 95, no. 6, pp. 552–558, 2005.
[27] P. Rocke et al., "Reconfigurable Hardware Evolution Platform for a Spiking Neural Network Robotics Controller", LNCS, vol. 4419, pp. 373–378, 2007.
[28] J. Holland, Adaptation in Natural and Artificial Systems, University of Michigan Press, 1975.
[29] J. Schemmel, J. Fieres and K. Meier, "Wafer-Scale Integration of Analogue Neural Networks", in Proc. of IEEE International Joint Conference on Neural Networks, 2008, pp. 431–438.
[30] L. Prechelt, "Proben1: A Set of Benchmarks and Benchmarking Rules for Neural Network Training Algorithms", Universität Karlsruhe, Fakultät für Informatik, Technical Report 21/94, 1994.
[31] J. Schemmel, J. Fieres and K. Meier, "Wafer-Scale Integration of Analogue Neural Networks", in Proc. of IEEE International Joint Conference on Neural Networks, 2008, pp. 431–438.
[32] L. Prechelt, "Proben1: A Set of Benchmarks and Benchmarking Rules for Neural Network Training Algorithms", Universität Karlsruhe, Fakultät für Informatik, Technical Report 21/94, 1994.
[33] D. B. Thomas and W. Luk, "FPGA Accelerated Simulation of Biologically Plausible Spiking Neural Networks", in Proc. FCCM'09, 2009.
[34] H. Markram, "The Blue Brain Project", Nature Reviews Neuroscience, vol. 7, pp. 153–160, Feb. 2006.
[35] S. Haykin, Neural Networks: A Comprehensive Foundation, Macmillan College Publishing Company, 1994.