Abstract
Animals rely on different decision strategies when faced with ambiguous or uncertain cues. Depending on the context, decisions may be biased towards events that were most frequently experienced in the past, or be more explorative. A particular type of decision making central to cognition is sequential memory recall in response to ambiguous cues. A previously developed spiking neuronal network implementation of sequence prediction and recall learns complex, high-order sequences in an unsupervised manner by local, biologically inspired plasticity rules. In response to an ambiguous cue, the model deterministically recalls the sequence shown most frequently during training. Here, we present an extension of the model enabling a range of different decision strategies. In this model, explorative behavior is generated by supplying neurons with noise. As the model relies on population encoding, uncorrelated noise averages out, and the recall dynamics remain effectively deterministic. In the presence of locally correlated noise, the averaging effect is avoided without impairing the model performance, and without the need for large noise amplitudes. We investigate two forms of correlated noise occurring in nature: shared synaptic background inputs, and random locking of the stimulus to spatiotemporal oscillations in the network activity. Depending on the noise characteristics, the network adopts various recall strategies. This study thereby provides potential mechanisms explaining how the statistics of learned sequences affect decision making, and how decision strategies can be adjusted after learning.
Author summary
Humans and other animals often benefit from exploring multiple alternative solutions to a given problem, rather than adhering to a single, global optimum. Such explorative behavior is frequently attributed to noise in the neuronal dynamics. Supplying each neuron or synapse in a neuronal circuit with noise, however, does not necessarily lead to explorative dynamics. If decisions are triggered by the compound activity of ensembles of neurons or synapses, noise averages out, unless it is correlated within these ensembles. As an analogy, consider a particle in a still fluid: despite the constant bombardment by surrounding molecules, a large particle will hardly undergo any Brownian motion, because the momenta of the impinging molecules largely cancel each other. Only if the molecules move in a coherent manner, such as in a turbulent fluid, can they have a substantial influence on the particle’s motion. This modeling study exploits this effect to equip a neuronal sequence-processing circuit with explorative behavior by introducing configurable, locally coherent noise. It contributes to an understanding of the neuronal mechanisms underlying different decision strategies in the face of ambiguity, and highlights the role of coherent network activity such as traveling waves during sequential memory recall.
Citation: Bouhadjar Y, Wouters DJ, Diesmann M, Tetzlaff T (2023) Coherent noise enables probabilistic sequence replay in spiking neuronal networks. PLoS Comput Biol 19(5): e1010989. https://doi.org/10.1371/journal.pcbi.1010989
Editor: Boris S. Gutkin, École Normale Supérieure, College de France, CNRS, FRANCE
Received: July 14, 2022; Accepted: March 2, 2023; Published: May 2, 2023
Copyright: © 2023 Bouhadjar et al. This is an open access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.
Data Availability: The documented workflow and source code necessary to reproduce our findings are provided online at: https://doi.org/10.5281/zenodo.6378376.
Funding: This project was funded by the Helmholtz Association Initiative and Networking Fund (project number SO-092, Advanced Computing Architectures) [YB, DJW, MD, TT], and the European Union’s Horizon 2020 Framework Programme for Research and Innovation under the Specific Grant Agreement No. 785907 (Human Brain Project SGA2) [YB, MD, TT] and No. 945539 (Human Brain Project SGA3) [YB, MD, TT]. Open access publication funded by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation, 491111487) [YB, MD, TT]. The funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.
Competing interests: The authors have declared that no competing interests exist.
Introduction
Our brains are constantly processing sequences of events, such as when listening to a song or perceiving the texture of an object. Through repeated exposure to these sequences, we effortlessly learn to predict upcoming events. In many circumstances, we have to make a decision about which elements to recall next in response to a cue. A number of previous modeling studies have proposed spiking neuronal network implementations of sequence learning and replay [1–5]. The spiking temporal-memory (TM) model described in [5] constitutes a biologically more detailed reformulation of the abstract TM algorithm proposed in [6], and provides an energy efficient sequence processing mechanism with high storage capacity by virtue of its sparse activity. It learns complex sequences in an unsupervised, continual manner using biological, local learning rules. After learning, the model successfully predicts upcoming sequence elements in a context dependent manner, and signals the occurrence of non-anticipated stimuli. In contrast to the original TM model in [6], the spiking TM model employs a continuous-time dynamics and predicts that sequences can be successfully learned and processed for a range of sequence speeds with lower and upper bounds determined by electrophysiological parameters such as synaptic and neuronal time constants.
The spiking TM model can be configured into a replay mode where it autonomously recalls learned sequences in response to a cue stimulus. In nature, such cues are often incomplete or ambiguous, and it is not always clear what sequence to recall given the current context. Despite this ambiguity, we usually come to a clear decision on what sequence to recall. A key factor in decision making is reward [7, 8]. In this regard, the optimal decision strategy is the one that maximizes the reward, and is hence referred to as the maximization or exploitation strategy. A number of studies demonstrate that decisions are often made in an apparently suboptimal manner, such as probability matching [9, 10]. In binary choice tasks, for example, where the probability of payoff is higher for one of the two possible choices, it appears most reasonable to always decide for this high-payoff option. Instead, however, humans and other animals often decide for each of the two choices with a probability that approximately matches the payoff probability. While this behavior appears unreasonable at first glance, it may in fact be optimal when taking into account previous (pre-experiment) experiences, such as prior knowledge of changing reward contingencies. In cases where the reward probability or amplitudes change over time, a more explorative behavior is beneficial [7, 11]. Previous studies suggest that decisions are not only determined by rewards, but also by the frequency of previously experienced input patterns [12, 13]. Accordingly, suboptimal decision strategies may at least partly arise as a consequence of this additional influence of occurrence frequencies.
A number of previous studies propose neuronal network models of decision making in the face of ambiguous or incomplete stimuli. The majority of these models employ some form of intrinsic stochastic dynamics or uncorrelated noise to generate explorative behavior [14–18]. Noise has been introduced in the form of random or non-task-related synaptic background inputs [18], or in the form of synaptic stochasticity [17]. An alternative solution is proposed in [16, 19], where the “noise” is generated by the complex but deterministic dynamics of the functional network itself, without any additional sources of stochasticity. In most models, the noise targeting different neurons or synapses is effectively uncorrelated. Supplying each element in a neuronal circuit with uncorrelated noise, however, does not necessarily lead to explorative dynamics: state variables arising from superpositions of many noisy, uncorrelated components become effectively deterministic as a result of noise averaging [19]. The total input current of a neuron generated from superpositions of many synaptic inputs, for example, is hardly affected by the variability in the individual synaptic responses. Similarly, in models where individual states are encoded by the activity of neuronal subpopulations [15], the state representations become quasi-deterministic if the single-neuron noise components are uncorrelated. Compensating this noise averaging effect by increasing the noise amplitude appears to be an obvious strategy, but may be hard for the biological system to realize.
An alternative, natural solution to the noise-averaging problem is to employ locally correlated noise. In biological neuronal networks, coherent noise may arise by different mechanisms: neighboring neurons typically receive inputs from partly overlapping presynaptic neuron populations. The synaptic input currents to these neurons are therefore correlated. In the literature, this type of correlation, which results from the anatomy of neurons and neuronal circuits, is referred to as shared-input correlation [20, 21]. A second type of correlation in synaptic input currents arises from correlations in the presynaptic spiking activity [22–24]. These dynamical correlations occur during stationary network states, or can be generated by different types of nonstationary activities, such as global oscillations in the population activity [25, 26] or traveling waves of activity propagating across the neuronal tissue [27–30].
This study addresses the problem of sequential decision making in the face of ambiguity and the role of coherent noise in shaping decision strategies. We investigate how the spiking TM model in [5] recalls sequences in response to ambiguous cues in the presence of locally coherent noise, to what extent noise averaging can be overcome by increasing the noise amplitude, and how different recall strategies can be achieved by adjusting the noise characteristics. We further explore whether shared synaptic input and random stimulus locking to spatiotemporal oscillations can serve as appropriate, natural sources of coherent noise. In Materials and methods, we provide a detailed description of the task and the network model.
Results
A spiking neural network recalls sequences in response to ambiguous cues
In this section, we provide a brief overview of the model and the task, illustrate how the network learns overlapping sequences occurring with different frequencies during the training, and show how these occurrence frequencies are encoded in the network. We then study the network responses to ambiguous cues and the influence of the occurrence frequencies on the recall behavior in the absence or presence of noise.
Similar to [5], the model consists of a randomly and sparsely connected network of NE excitatory neurons and a single inhibitory neuron (Fig 1A). Each excitatory neuron receives KEE excitatory inputs from other randomly chosen excitatory neurons. Excitatory neurons are subdivided into M subpopulations, each containing neurons with identical stimulus preference: in the absence of any additional connections, all neurons in a given subpopulation fire a spike upon the presentation of a specific sequence element. The inhibitory neuron is recurrently connected to the excitatory neurons. In contrast to [5], where each excitatory subpopulation is equipped with its own inhibitory neuron, we here use a single inhibitory neuron to implement a winner-take-all (WTA) competition between the subpopulations of excitatory neurons. At the same time, the inhibitory neuron mediates the competition between neurons within subpopulations and thereby leads to sparse activity and context sensitivity, as described in [5] and below. The network is driven by external inputs, each representing a specific sequence element (“A”, “B”, …) and feeding all neurons in the subpopulation with the corresponding stimulus preference. Neurons are modeled as point neurons with the membrane potential evolving according to the leaky integrate-and-fire dynamics [31]. The total synaptic input current of excitatory neurons is composed of currents in distal dendritic branches, inhibitory currents, and currents from external sources, see Eq (5). The inhibitory neuron receives inputs only from excitatory neurons. The dynamics of dendritic currents include a nonlinearity describing the generation of dendritic action potentials (dAPs), see Eq (10). Synapses between excitatory neurons are plastic and subject to spike-timing-dependent plasticity and homeostatic control. Details on the network model are given in Materials and methods.
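As a structural illustration, the connectivity described above can be sketched in a few lines of Python (a simplified sketch, not the reference implementation; the values of M, the subpopulation size, and KEE below are placeholders rather than the parameters of Table 9):

```python
import numpy as np

rng = np.random.default_rng(1)

M = 12           # number of stimulus-specific subpopulations ("A", "B", ...); placeholder
n_per_pop = 150  # neurons per subpopulation; placeholder
N_E = M * n_per_pop
K_EE = 30        # excitatory inputs per excitatory neuron; placeholder

# subpopulation membership: neuron i responds to external input subpop[i]
subpop = np.repeat(np.arange(M), n_per_pop)

# each excitatory neuron draws K_EE random presynaptic partners (no self-connections)
presynaptic = np.array([
    rng.choice(np.delete(np.arange(N_E), i), size=K_EE, replace=False)
    for i in range(N_E)
])

# a single inhibitory neuron receives from, and projects back to, all excitatory
# neurons, implementing the winner-take-all competition between subpopulations
```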
A) The architecture constitutes a recurrent network of subpopulations of excitatory neurons (filled gray circles) and a single inhibitory neuron (Inh). Each excitatory subpopulation contains neurons with identical stimulus preferences. Excitatory neurons are stimulated by external sources providing sequence-element specific inputs “A”,“F”, “B”, etc. Connections between and within the excitatory subpopulations are random and sparse. The inhibitory neuron is recurrently connected to all excitatory neurons. In the depicted example, the network is repetitively presented with two sequences {A,F,B,D} (brown) and {A,F,C,E} (blue) during learning. The sequence {A,F,C,E} occurs twice as often as {A,F,B,D}. B) During learning, the network forms sequence specific subnetworks (blue and brown arrows representing {A, F, B, D} and {A,F,C,E}, respectively) as a result of the synaptic plasticity dynamics. The connections between subpopulations representing the sequence shown more often are stronger (thick arrows). C) The network can be configured into a replay mode by increasing the neuronal excitability. During the replay mode, the network is presented with a cue stimulus representing the first sequence element “A”. In addition, the excitatory subpopulations receive input from distinct sources of background noise (gray traces) which is not present during learning. In the replay mode, the synaptic plasticity is switched off.
During learning, the network is exposed to repeated presentations of S sequences s1, …, sS, such that each sequence si occurs with a specific frequency pi (for details on the learning protocol, see Materials and methods). For illustration, we focus here on a simple set of two sequences {A,F,B,D} and {A,F,C,E}, where the first sequence is shown with a relative frequency p1 = p and the second with p2 = 1 − p (e.g., p1 = 1/3 in Fig 2A). In the following, we refer to {A,F,B,D} as sequence 1 and to {A,F,C,E} as sequence 2. Before learning, presenting a sequence element causes all neurons in the respective subpopulation to fire. During the learning process, the repetitive sequential presentation of sequence elements increases the strength of connections between the corresponding subpopulations to a point where the activation of a certain subpopulation by an external input generates dAPs in a specific subset of neurons in the subpopulation representing the subsequent element. The generation of the dAPs results in a long-lasting depolarization (∼ 50 − 500 ms) of the soma. We refer to neurons that generate a dAP as predictive neurons. When receiving an external input, predictive neurons fire earlier as compared to non-predictive neurons. If a group of at least ρ neurons are predictive within a certain subpopulation, their advanced spikes initiate a fast and strong inhibitory feedback to all excitatory neurons, ultimately suppressing the firing of non-predictive neurons. After learning, the model develops specific subnetworks representing the learned sequences (Fig 1B), such that the presentation of a sequence element leads to a context dependent prediction of the subsequent element [5]. As a result of Hebbian learning, the synaptic weights in the subnetwork corresponding to the most frequent sequence during learning are on average stronger than those for the less frequent sequence (Figs 1B, 3A and 4A). In the prediction mode, this asymmetry in synaptic weights plays no role. For ambiguous stimuli, all potential outcomes are predicted, i.e., the network predicts both “C” and “B” simultaneously in response to stimuli “A” and “F”, irrespective of the training frequencies.
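The effect of the dAP-based predictions on a stimulated subpopulation can be summarized by the following hypothetical helper function (an illustration of the mechanism described above, not code from the model; ρ = 20 follows from the replay criterion 0.5ρ = 10 used below):

```python
import numpy as np

def responding_neurons(dap_active, rho=20):
    """Indices of neurons of a stimulated subpopulation that fire.

    dap_active: boolean array, True for neurons with an active dAP (predictive).
    If at least rho neurons are predictive, their earlier spikes recruit the
    inhibitory neuron and suppress the non-predictive neurons (sparse response);
    otherwise the stimulus is non-anticipated and the whole subpopulation fires.
    """
    dap_active = np.asarray(dap_active, dtype=bool)
    predictive = np.flatnonzero(dap_active)
    if predictive.size >= rho:
        return predictive
    return np.arange(dap_active.size)
```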
A) During learning, the model is exposed to two (or more) competing sequences with different frequencies. Here, sequence 2 ({A,F,C,E}; blue) is shown twice as often as sequence 1 ({A,F,B,D}; brown). The respective normalized training frequencies p1 = 1/3 and p2 = 2/3 are depicted by the histogram. B) During replay, the network autonomously recalls the sequences in response to an ambiguous cue (first sequence element “A”; open black squares) according to different strategies. Maximum probability (max-prob): only the sequence with the highest training frequency is replayed. Probability matching (prob. matching): the replay frequency of a sequence matches its training frequency. Full exploration: all sequences are randomly replayed with the same frequency, irrespective of the training frequency. Histograms represent the corresponding replay frequencies of the two sequences.
A) Sketch of subpopulations of excitatory neurons (boxes) representing the elements of the two sequences {A,F,C,E} (seq. 2) and {A,F,B,D} (seq. 1). The subpopulations “C” and “B” are unfolded showing their respective neurons. The arrows depict the connections after learning the task shown in Fig 2A. The line thickness represents the population averaged synaptic weight. The presentation of the character “A” constitutes an ambiguous cue during replay. The inhibitory neuron (Inh) mediates competition between subpopulations through the winner-take-all (WTA) mechanism. B,C,D) Spiking activity in the subpopulations depicted in panel A in response to three repetitions of the ambiguous cue “A” (black triangles at the top and vertical dotted lines) for three different noise configurations σ = 0 pA, c = 0 (B), σ = 26 pA, c = 0 (C), and σ = 26 pA, c = 1 (D). Brown, blue, and silver dots mark somatic spikes of excitatory neurons corresponding to sequence 1, sequence 2, and both, respectively. For clarity, only the sparse subsets of active neurons in each population are shown. Red dots mark spikes of the inhibitory neuron. Panels C and D depict representative recall behavior. See Fig 4 for detailed statistics across trials and network realizations. See Table 9 for model parameters.
Dependence of A) the compound weights (PSC amplitudes) wBF (brown) and wCF (blue; see Fig 3A), B–D) the population averaged response latencies tB and tC (subpopulation averaged time of first spike after the cue “A”; see Eq (1)) for subpopulations “B” (brown) and “C” (blue), and E–G) the relative replay frequencies of sequences 1 (brown) and 2 (blue), the failure rate f∅ (gray), and the joint probability of replaying both sequences (silver) on the training frequency p1 = p of sequence 1. Note that the inhibition is disabled when measuring the latencies to ensure that both competing populations “B” and “C” elicit spikes. Panels B–G depict results for three different noise configurations σ = 0 pA, c = 0 (B,E), σ = 26 pA, c = 0 (C,F), and σ = 26 pA, c = 1 (D,G). In panel A, circles and error bars depict the mean and the standard deviation across different network realizations. In panels B–D, circles and error bars represent the mean and the standard deviation across Nt = 151 trials (cue repetitions), averaged across 5 different network realizations. In panels E–G, circles represent the mean across Nt = 151 trials, averaged across 5 different network realizations. See Table 9 for remaining parameters. Same task as described in Fig 2.
The model can be configured into a replay mode, where the network autonomously replays learned sequences in response to a cue stimulus. This is achieved by changing the excitability of the neurons such that the activation of a dAP alone can cause the neurons to fire [5]. In addition, the synaptic plasticity is disabled during replay to preserve the encoding of the training frequencies in the synaptic weights (Fig 4A; see also Discussion). In the replay mode, we present ambiguous cues and study whether the network can replay sequences following different strategies (Fig 2B). We refer to the “maximum probability” strategy (Fig 2B, left) as the case where the network exclusively replays the sequence with the highest occurrence frequency during training. When adopting the “probability matching” strategy, the network replays sequences with a frequency that matches the training frequency (Fig 2B, middle). The “full exploration” strategy refers to the case where all sequences are randomly replayed with the same frequency, irrespective of the training frequency (Fig 2B, right). In Fig 3, we illustrate the network’s decision behavior by presenting the ambiguous cue stimulus “A” three times. In the absence of noise, the network adopts the maximum probability strategy (Fig 3B): as a result of the higher weights between the neurons representing the more frequent sequence, the dAPs are activated earlier in these neurons, which advances their somatic firing times with respect to the neurons representing the less frequent sequence. This advanced response time quickly activates the inhibitory neuron, which suppresses the activity of the other neurons.
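Abstractly, the three strategies correspond to different mappings from training frequencies to recalled sequences, as in the following sketch (purely illustrative; in the network these strategies emerge from the noise configuration rather than from explicit sampling):

```python
import numpy as np

rng = np.random.default_rng(0)

def recall(training_frequencies, strategy):
    """Return the index of the recalled sequence for one cue presentation."""
    p = np.asarray(training_frequencies, dtype=float)
    if strategy == "max-prob":          # always the most frequent sequence
        return int(np.argmax(p))
    if strategy == "prob-matching":     # replay frequency matches training frequency
        return int(rng.choice(p.size, p=p))
    if strategy == "full-exploration":  # all sequences equally often
        return int(rng.choice(p.size))
    raise ValueError(f"unknown strategy: {strategy}")

# example: two sequences trained with relative frequencies 1/3 and 2/3
print([recall([1 / 3, 2 / 3], "prob-matching") for _ in range(10)])
```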
To assess the replay performance, we present the ambiguous cue “A” for Nt trials and examine the replay frequencies of the two sequences s1 = {A,F,B,D} and s2 = {A,F,C,E} as a function of their relative occurrence frequencies pi during training. We define the sequences {A,F,B,D} or {A,F,C,E} to be successfully replayed if more than 0.5ρ = 10 neurons in the last subpopulation “D” or “E”, respectively, have fired (for details on the assessment of the replay statistics, see Materials and methods). In the absence of noise, the network replays only the sequence with the highest training frequency p (Fig 4E). To understand this behavior, we inspect the response latencies tB/C of the subpopulations “B” and “C” as a function of the training frequencies (Fig 4B). Here, the response latency

tx = (1/ρ) Σi ti    (1)

of the subpopulation representing sequence element x ∈ {B,C} corresponds to the population average of the single-neuron response latencies ti (time of first spike after the cue) of the active neurons in this subpopulation. Averaged across trials, the response latency is smaller for the subpopulation participating in the sequence with the higher frequency. The response latencies tB and tC decrease as the respective training frequencies increase. In the absence of noise, the distribution of the response latencies tB/C across trials is very narrow (Fig 4B). Consequently, neurons representing the most frequent sequence fire earlier in all trials. For training frequencies between 0.4 and 0.6, the difference between tB and tC in some network realizations is small compared to the response latency of the WTA circuit. Hence, both sequences are occasionally replayed simultaneously (Fig 4E).
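For a single trial, the latency of Eq (1) and the replay criterion can be evaluated as in this sketch (function and variable names are ours, not from the reference implementation; spike times are assumed to be measured relative to cue onset, with np.nan for neurons that did not fire):

```python
import numpy as np

def response_latency(first_spike_times):
    """Population-averaged time of first spike after the cue, Eq (1)."""
    return np.nanmean(np.asarray(first_spike_times, dtype=float))

def sequence_replayed(first_spike_times_last_subpop, rho=20):
    """True if more than 0.5*rho neurons of the last subpopulation ("D" or "E") fired."""
    t = np.asarray(first_spike_times_last_subpop, dtype=float)
    return np.count_nonzero(~np.isnan(t)) > 0.5 * rho
```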
To foster exploratory behavior, i.e., to enable occasional replay of the low-frequency sequence, we equip the excitatory neurons with background noise. For simplicity, this background noise is added only during replay, but not during learning (see Discussion). In this work, we investigate two different forms of background noise. Here, we first consider noise provided in the form of stationary synaptic background input (see below for an alternative form of noise). To this end, each subpopulation of excitatory neurons receives input from its private pool of independent excitatory and inhibitory Poissonian spike sources (Fig 1C). The background noise is parameterized by the noise amplitude σ (standard deviation of the synaptic input current arising from these background inputs) and the noise correlation c (see Fig 1C and Materials and methods). Inputs to neurons of the same subpopulation are correlated to an extent parameterized by c. Neurons in different subpopulations receive uncorrelated inputs. The noise amplitude σ is chosen such that the subthreshold membrane potentials of the excitatory neurons are fluctuating without eliciting additional spikes. As a consequence, the distributions of response latencies tB/C across trials may be broadened and partly overlap (Fig 4C and 4D). As we will show in the following, the network can adopt different replay strategies (Fig 2B) depending on the amount of this overlap. During training, the weak noise employed here would hardly affect the network behavior, as the external inputs (stimulus) are strong and lead to reliable, immediate responses.
With uncorrelated noise (c = 0), the replay behavior remains effectively non-explorative, i.e., only the high-frequency sequence is replayed in response to the cue (Fig 3C). This is explained by the fact that each sequence element is represented by a subset of ρ neurons, or in other words, that the response latency tx in Eq (1) is a population averaged quantity. Its across-trial variance

v = Var(tx) = (vs/ρ) (1 + (ρ − 1) cs)    (2)

is determined by the population size ρ, the population averaged spike-time variance vs = ⟨Var(ti)⟩, and the population averaged spike-time correlation coefficient cs = ⟨Cov(ti, tj)⟩/vs, with Cov(ti, tj) denoting the spike-time covariance for two neurons i and j. Here, we use the subscript “s” to indicate that vs and cs refer to the (co-)variability in the (first) “spike” times. The spike-time statistics vs and cs depend on the input noise statistics σ and c in a unique and monotonic manner [32, 33]. In the absence of correlations (c = cs = 0), the across-trial variance v of tx vanishes for large population sizes ρ. For finite population sizes, v is non-zero but small (Fig 4C). The effect of the synaptic background noise on the variability of response latencies largely averages out. Hence, the average advance in the response of the population representing the high-frequency sequence cannot be overcome by noise; the network typically replays only the sequence with the higher occurrence frequency during training (Fig 4F). For small differences in the training frequencies (p ≈ 0.5), the network occasionally fails to replay any sequence or replays both sequences. The mechanism underlying this behavior is explained below.
Noise averaging is efficiently avoided by introducing noise correlations. For perfectly correlated noise and, hence, perfectly synchronous spike responses (c = cs = 1), the across-trial variance v of the response latency tx is identical to the across-trial variance vs of the individual spike responses, i.e., v = vs, irrespective of the population size ρ; see Eq (2). For smaller but non-zero spike correlations (0 < cs < 1), the latency variance v is reduced but does not vanish as ρ becomes large. Hence, in the presence of correlated noise, the across-trial response latency distributions for two competing populations have a finite width and may overlap (Fig 4D), thereby permitting an occasional replay of the sequence observed less often during training (Figs 3D and 4G and S6 Fig). Replay, therefore, becomes more exploratory, such that the occurrence frequencies during training are gradually mapped to the frequencies of sequence replay. With an appropriate choice of the noise amplitude and correlation, even an almost perfect match between training and replay frequencies can be achieved (probability matching; Fig 4G). For a training frequency p = 0.2, the replay frequency matches p already after about 20 training episodes (S5 Fig).
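The scaling expressed by Eq (2) can be checked numerically with a simple Gaussian surrogate for the single-neuron latencies (an illustration of the averaging argument only, not a simulation of the spiking network; all values are illustrative):

```python
import numpy as np

rng = np.random.default_rng(2)
rho, v_s, n_trials = 20, 4.0, 200_000   # population size, single-neuron variance, trials

for c_s in (0.0, 0.5, 1.0):
    # latencies with variance v_s and pairwise correlation c_s (shared + private parts)
    shared = rng.normal(size=(n_trials, 1))
    private = rng.normal(size=(n_trials, rho))
    t = np.sqrt(v_s) * (np.sqrt(c_s) * shared + np.sqrt(1.0 - c_s) * private)
    v_sim = t.mean(axis=1).var()                    # across-trial variance of t_x
    v_eq2 = v_s / rho * (1.0 + (rho - 1.0) * c_s)   # prediction of Eq (2)
    print(f"c_s = {c_s:.1f}: simulated v = {v_sim:.3f}, Eq (2): v = {v_eq2:.3f}")
```

For c_s = 0 the variance of the population-averaged latency is ρ times smaller than the single-neuron variance, whereas for c_s = 1 it equals vs, in line with the argument above.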
The results presented so far can be extended towards more than two competing sequences. As a demonstration, we train the network using five sequences {A,F,B,D}, {A,F,C,E}, {A,F,G,H}, {A,F,I,J}, and {A,F,K,L} presented with different relative frequencies. By adjusting the noise amplitude σ and correlation c, the replay frequencies can approximate the training frequencies (Fig 5).
During learning, five competing, partly overlapping sequences s1 = {A, F, B, D}, s2 = {A,F,C,E}, s3 = {A,F,G,H}, s4 = {A,F,I,J}, and s5 = {A,F,K,L} are repetitively presented with relative training frequencies p1 = 0.1, p2 = 0.14, p3 = 0.2, p4 = 0.23, p5 = 0.33, respectively (dotted red lines). After learning, the network autonomously replays the learned sequences in response to the ambiguous cue “A” with frequencies depicted by the blue bars. Parameters: σ = 12 pA, c = 1, τh = 4620 ms, z* = 21, Ne = 101, M = 12. See Table 9 for remaining parameters.
Noise averaging cannot be overcome by increasing noise amplitude
For subpopulations of finite size ρ, the variance v of the response latency tx remains finite, and can be increased by scaling up the variance of the noise, even without correlation; see Eq (2). However, this solution cannot be applied to network models where a decision is mediated by a fast WTA circuit. In the presence of uncorrelated noise with high amplitude, the spikes from all neurons, in all competing subpopulations, are similarly dispersed. A large dispersion in spike times prohibits a fast and reliable activation of inhibition by one of the competing subpopulations. The WTA mechanism, therefore, fails at selecting a unique sequence. Consequently, both sequences are replayed in most of the trials (Fig 6A). An additional problem of the uncorrelated noise is that it impairs the propagation of activity across the subpopulations of neurons. As our model relies on the propagation of synchronously firing neurons, the uncorrelated noise bears the risk that the generated spikes are too dispersed in time to trigger dAPs in the next subpopulation (Fig 6). As a result of these two problems, more explorative behavior cannot be achieved by increasing the amplitude of uncorrelated noise. Instead, the probability of simultaneous replay (no decision) and the failure rate increase (Fig 6B).
A) Brown, blue, and silver dots mark somatic spikes of excitatory neurons belonging to sequence {A,F,B,D} (seq. 1), sequence {A,F,C,E} (seq. 2), or both, respectively. Red dots mark spikes from the inhibitory neuron. Each trial is initiated by stimulating the first element in the sequence (“A”, see dark arrows and vertical dashed lines). During training, the sequences 1 and 2 are shown with relative frequencies p1 = 0.3 and p2 = 0.7, respectively. B) Dependence of the relative replay frequencies of sequence 1 (brown) and sequence 2 (blue), the failure rate f∅ (gray), and the joint probability of replaying both sequences (silver) on the relative training frequency p1 = p of sequence 1. Circles represent the mean across Nt = 151 trials averaged across 5 network realizations. Parameters: σ = 52 pA and c = 0. See Table 9 for the remaining parameters. Same task as described in Fig 2.
Noise correlations lead to more synchronous responses, thereby reducing the overlap between the within-trial latency distributions of the two competing populations “B” and “C” (Fig 3D). In each trial, the WTA dynamics is therefore triggered by just one of the two populations, rather than by both. Further, synchronous firing leads to a more robust activation of the subsequent subpopulation, and hence, a more robust replay. Hence, noise correlations help not only in generating more explorative behavior, but also in reducing replay failures and the chance of simultaneous activation of competing sequences (Fig 4G).
Noise amplitude and level of correlation control replay strategy
Psychophysics studies show that humans and other animals can flexibly change their decision strategies in the face of uncertainty or ambiguity [7, 11]. In the context of the model proposed here, this behavior is reproduced by adjusting the characteristics of the noise: by varying the noise amplitude, the model can be tuned to adopt a maximum-probability (Fig 7A), a probability-matching (Fig 7B), or an even more exploratory replay strategy (Fig 7C), provided the noise correlations are sufficiently strong. Similarly, it may be possible to change the replay behaviors by varying the noise correlation level (S1 Fig), if some of the model parameters are adjusted during replay, especially to ensure a robust activity propagation of the less frequent sequence (e.g., by decreasing JEI). In nature, a modulation of the noise amplitude is achieved by changing the firing rate of the presynaptic neurons providing the background noise, or the excitability of the target neurons via neuromodulatory [34] or attention signals [35].
Dependence of the relative replay frequencies of sequence 1 (brown) and sequence 2 (blue), the failure rate f∅ (gray), and the joint probability of replaying both sequences (silver) on the relative training frequency p1 = p of sequence 1 for different noise amplitudes σ = 0 pA (A), σ = 26 pA (B), and σ = 104 pA (C) with correlation coefficient c = 1. Circles represent the mean across Nt = 151 trials, averaged across 5 different network realizations. See Table 9 for remaining parameters. Same task as described in Fig 2.
So far, we discussed shared stationary presynaptic input as a potential source of correlated noise occurring in nature. Shared input correlations resulting from the anatomy of cortical circuits are low [36–39]. To generate explorative replay behavior in the context of our model, however, the level of noise correlation needs to be substantial (c ∼ 1). In the following section, we therefore propose an alternative form of noise, where high correlations arise from the network dynamics.
Random stimulus locking to spatiotemporal oscillations as natural form of noise
In vivo cortical activity is rarely stationary. Usually, it is characterized by substantial temporal and spatial fluctuations, often occurring in the form of transient spatiotemporal oscillations, i.e., cortical waves [27, 40–42]. In the presence of traveling cortical waves, nearby neurons share the same oscillation phase, whereas distant neurons experience different phases (Fig 8). At the time of stimulus arrival, neurons in the up phase are more excitable and tend to fire earlier than neurons in a down phase. Cortical waves can be locked to external stimuli or events such as saccades [43], but they also occur spontaneously without locking to external cues [44]. Here, we exploit this finding and assume that the cue onset times are random with respect to the oscillation phase, thereby introducing a locally coherent form of trial-to-trial variability during replay.
A) Snapshot of a wave of activity traveling across a cortical region at time t1 of the 1st stimulus onset. Grayscale depicts wave amplitudes in different regions. Brown and blue rectangles mark populations of neurons with stimulus preferences “B” and “C”, respectively. B) Background inputs to neurons in populations “B” and “C” at different times. Background inputs to neurons within each population are in phase due to their spatial proximity. Background inputs to different populations are phase shifted. Arrows on the top depict stimulus onset times. The times t1, t2, … indicate input arrival to populations “B” and “C” (dashed vertical lines); they are random, i.e., not locked to the background activity.
To investigate the effect of this type of variability on the replay performance, we first train the network in the absence of any background input using the same two-sequence task and training setup discussed in earlier sections. During replay, we inject an oscillating background current with amplitude a and frequency f into all excitatory neurons (see Materials and methods). Neurons within a given subpopulation share the same oscillation phase. Phases for different subpopulations are randomly drawn from a uniform distribution between 0 and 2π. The replay performance of the network is assessed by monitoring the network responses to repetitive presentations of an external cue “A” with random, uniformly distributed inter-cue intervals (see Materials and methods). The analysis is repeated for a range of training frequencies p, oscillation amplitudes a, and frequencies f.
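The oscillatory background and the random cue locking can be sketched as follows (the amplitude, frequency, and inter-cue interval bounds are placeholders; see Table 9 for the parameters actually used):

```python
import numpy as np

rng = np.random.default_rng(3)

M = 12                                      # number of subpopulations (placeholder)
f, a = 30.0, 20.0                           # oscillation frequency (Hz) and amplitude (placeholder)
phases = rng.uniform(0.0, 2.0 * np.pi, M)   # one fixed random phase per subpopulation

def background_current(t_ms, k):
    """Oscillatory background current to all neurons of subpopulation k at time t (ms)."""
    return a * np.sin(2.0 * np.pi * f * t_ms / 1000.0 + phases[k])

# cue onsets with random, uniformly distributed inter-cue intervals (ms); the cue is
# therefore not locked to the oscillation phase of any subpopulation
u_min, u_max, n_trials = 200.0, 400.0, 181
cue_times = np.cumsum(rng.uniform(u_min, u_max, size=n_trials))
```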
Depending on the choice of the oscillation amplitude a and frequency f, the network replicates different replay strategies (Fig 9). For low-amplitude oscillations, the model replays only the sequence with the higher training frequency (max-prob). With increasing oscillation amplitude, it becomes more explorative and occasionally replays the less frequent sequence. By adjusting the oscillation amplitude, the replay frequency can be closely matched to the training frequency. This behavior of the model is observed for a range of physiological frequency bands such as alpha (∼ 10 Hz), beta (∼ 30 Hz), and gamma (∼ 70 Hz) [45, 46]. Higher oscillation frequencies are less effective due to the low-pass characteristics of neuronal membranes and synapses. Consequently, increasing the oscillation frequency leads to a more reliable replay of the most frequent sequence. For slow oscillations with periods that are long compared to the average inter-cue interval, the network responses in subsequent trials are more correlated. For sufficiently many trials, however, the network can still explore different solutions.
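The reduced efficacy of fast oscillations follows from the low-pass property of the membrane: a leaky integrator with time constant τm attenuates a sinusoidal input current of frequency f by a factor 1/√(1 + (2πfτm)²). A quick numerical illustration (τm is an assumed value, not taken from Table 9):

```python
import numpy as np

tau_m = 0.010  # membrane time constant in seconds (assumed for illustration)
for f in (10.0, 30.0, 70.0, 200.0):   # oscillation frequency in Hz
    gain = 1.0 / np.sqrt(1.0 + (2.0 * np.pi * f * tau_m) ** 2)
    print(f"f = {f:5.0f} Hz: relative voltage modulation = {gain:.2f}")
```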
Dependence of the relative replay frequencies of sequences 1 (brown) and 2 (blue), the failure rate f∅ (gray), and the joint probability of replaying both sequences (silver) on the relative training frequency p1 = p of sequence 1 for different amplitudes a ∈ {0, 10, 20} and frequencies of the background oscillations: f = 10 Hz (B,C), f = 30 Hz (A,D,E), and f = 70 Hz (F,G). Circles represent the mean across Nt = 181 trials, averaged across 5 network realizations. See Table 9 for remaining parameters. Same task as described in Fig 2.
To conclude: cortical waves in a range of physiological frequencies represent a form of highly fluctuating and locally correlated background activity. The absence of a systematic stimulus locking to this activity constitutes a natural source of randomness that does not average out and is hence well suited to generate robust exploratory behavior. The degree of exploration, i.e., the decision strategy, can be adjusted in a biologically plausible manner by controlling the wave amplitude or frequency.
Discussion
This work proposes a spiking neuronal network model performing probabilistic sequential memory recall in response to ambiguous cues. Explorative recall is achieved by providing the network with locally coherent noise. We explore two forms of this noise, implemented either in the form of shared synaptic input or a random stimulus locking to global spatiotemporal oscillations in the neuronal activity. The model explains the emergence of different recall strategies by adjusting the noise characteristics, such as the noise or oscillation amplitude, as well as the noise correlation or oscillation frequency.
The sequence processing model proposed here relies on a form of population encoding. In the absence of correlations, noise injected to single neurons therefore largely averages out and leads to a quasi-deterministic and non-exploratory behavior. Locally correlated noise, in contrast, permits an explorative recall behavior where the sequence frequency during learning can be gradually mapped to the recall frequency. Furthermore, noise correlations foster synchronization between neurons within subpopulations, and thereby lead to a more robust context-specific activation of sequences during recall. The problem of noise averaging and the proposed solution are not unique to the model presented here, but are generic for all systems where relevant state variables arise from superpositions of many noisy, uncorrelated components. Fluctuations in the total input current of a single neuron resulting from superpositions of thousands of synaptic inputs, for example, can be efficiently controlled by the level of correlation in the presynaptic activity [47]. Similarly, explorative behavior in other models of population based probabilistic computing [15] can be enhanced by equipping neurons within each population with correlated noise.
Correlation in neuronal firing can originate from anatomical constraints as well as from network dynamics [23, 24]. In this study, we investigate both types. The first type of noise is implemented in the form of irregular synaptic background input [48–51], where the correlation between neurons of the same subpopulation results from shared presynaptic sources [20, 52]. From an anatomical perspective, this is reasonable as neighboring neurons indeed receive a considerable amount of inputs from identical presynaptic neurons. However, we show that the level of shared-input correlation required for an effective avoidance of noise averaging and maintenance of near synchronous activity is rather high, which contradicts anatomical studies reporting small connection probabilities in local cortical circuits, and hence, low levels of shared input correlation [36–39]. We therefore propose a second, biologically more plausible type of coherent noise resulting from a random stimulus locking to an intrinsic spatiotemporal coherent activity pattern on a large spatial scale, such as waves of cortical activity. Coherent spatiotemporal activity patterns in the cortex are observed in many different forms and under various conditions, including different sleep states, but also in awake behaving animals [27, 42, 45, 46]. Cortical waves can occur spontaneously without being locked to external cues [44]. It is therefore reasonable to assume that the onset time of an external cue is random with respect to the internal state. As shown in this study, this randomness constitutes a natural, locally coherent form of across-trial variability suitable to equip neuronal networks with exploratory behavior. As shown in [44], the timing and position of spontaneous cortical waves before stimulus onset are predictive of the stimulus evoked response and the target detection performance. This is consistent with the model proposed here: the phase of the background oscillation during cue presentation determines the decision outcome. During active vision, cortical waves in the visual cortex have been observed to be tightly locked to the saccade onset [43] and to continue into successive fixation periods [53]. The visual cue, i.e., the fixation onset, is therefore locked to this saccade-triggered oscillating background activity. The eye-movement related modulation of neuronal excitability may hence constitute a mechanism to suppress across-trial variability and lead to more stereotyped and reliable responses [44, 54].
In this study, we employ ongoing activity waves as a specific form of coherent spatiotemporal activity, and show that explorative behavior is generated for a range of plausible oscillation frequencies. We propose that a similar behavior can be achieved for other non-oscillatory forms of coherent activity, such as transient propagating wave fronts or bumps [55–57], as well as by other factors modulating the excitability of neighboring neurons in a coherent manner, such as transient neuromodulatory signals. The use of ongoing oscillatory background activity with constant frequency and phase differences is a simplification made in this study. A more realistic scenario would be one where each oscillation episode lasts for only a few tens or hundreds of milliseconds, and is followed by a new pattern with different phase characteristics. This, however, would not lead to a qualitatively new type of replay behavior as long as two characteristics are preserved: first, at the time of the stimulus arrival, neurons in the same subpopulation experience the same oscillation phase, while neurons in different subpopulations are exposed to different phases, and second, the cue is presented at a different oscillation phase in each trial.
By changing the noise characteristics (such as the amplitude or frequency of the background activity, or the level of correlation), the model proposed in this study can replay competing sequences according to different strategies. For low levels of noise, the network systematically replays the sequence that occurred most often during learning (max-prob). For higher noise levels, it can match the replay frequency to the occurrence frequency during training (probability matching), or become even more explorative. This offers a potential mechanistic explanation of how animals can adjust their decision strategy based on environmental conditions [7]. In the living brain, the noise properties could be controlled by neuromodulatory signals or by inputs from other brain areas (e.g., during attention; [58]). Our and many other studies predict that, in cases where the decision strategy is shifted towards exploration, more energy needs to be provided for noise generation. In line with this prediction, the work in [59] shows that explorative behavior is accompanied by an increase in the BOLD signal amplitude in cortical areas associated with decision making.
In this study, we equip the network with noise only during sequence replay, but not during training. From a biological point of view, the assumption of vanishing noise during training is not necessarily implausible: as shown in this study, a random locking of the stimulus to an intrinsic coherent spatiotemporal activity pattern may constitute the main cause of exploratory behavior during sequential memory recall. Activity patterns such as traveling waves, however, are not constantly present in the cortex. They may be suppressed during learning, and only added during memory recall to a task-specific extent. Apart from this, the assumption of noise-free training is not critical: in [5], we have shown that the spiking TM model can successfully learn complex sequences in the presence of low and moderate levels of uncorrelated background noise (see supplementary figures S6 and S7 in [5]). Only for large noise amplitudes is the learning performance impaired, as the WTA dynamics are disrupted. If the noise is locally correlated, this effect is less severe because correlated noise increases the response variability across trials, but keeps the variability across neurons in each subpopulation small. Hence, the WTA dynamics remain functional in each trial.
A number of previous studies suggest that synaptic stochasticity, i.e., the variability in postsynaptic responses including synaptic failure [60], may constitute an efficient source of noise for probabilistic computations in neuronal circuits [17, 61]. The total input to a neuron resulting from large ensembles of synapses, however, is likely to be subject to noise averaging. This is in line with an in-vitro study showing that synaptic stochasticity has only a marginal effect on the variability of postsynaptic responses [62]. Averaging of synaptic noise could only be avoided if the variability of synaptic responses was correlated across synapses. To date, it remains unclear how such correlations could potentially arise. Localized neuromodulatory signals or shared presynaptic spike histories may play a role in this.
The spiking TM model employed in this study can adopt a probability-matching strategy because the plasticity dynamics during learning leads to an approximately linear mapping of the relative sequence frequencies during training to the synaptic weights between neurons representing consecutive sequence elements (Fig 4A). The information about the training frequencies is hence stored in the synaptic weights. In this study, we freeze the synaptic weights and preserve this mapping by deactivating the synaptic plasticity dynamics after learning. The spiking TM model can learn the order of items in sequences for a range of different inter-stimulus intervals, but not the timing or the duration of sequence elements. In the replay mode, sequences are replayed with a constant high speed which is mainly determined by the synaptic and neuronal time constants, irrespective of the sequence speed during training [5]. This behavior is reminiscent of the fast, compressed sequence replay observed in hippocampus and neocortex during sleep [63–67]. For our choice of parameters, the inter-element interval during autonomous replay is about 30 ms, which is smaller than the inter-stimulus interval ΔT = 40 ms during training. With an intact plasticity dynamics during replay, the potentiation of synapses between neurons representing consecutive sequence elements would therefore be substantially stronger than during training, because the spike-timing dependent weight increment increases with decreasing pre-post spike intervals in an exponential manner. As the synaptic weights are limited by a hard upper bound Jmax (clipping), they would more easily be driven into saturation, such that the information about the training frequency is lost. As a consequence, competing sequences would be replayed in the presence of correlated noise with similar frequencies, irrespective of the training frequencies (“full exploration”; see S3 Fig). In the absence of noise or for uncorrelated noise, the network still adopts the max-prob strategy. A modification of the STDP dynamics or a thorough tuning of the plasticity parameters may preserve the probability matching performance, even without disabling the plasticity after learning. Alternatively, the spiking TM model may be extended and equipped with additional mechanisms that enable slow sequence replay or even a learning of the sequence speed [3].
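The saturation argument can be illustrated with a toy exponential STDP kernel and hard weight clipping (λ+, τ+, and Jmax below are placeholders, not the plasticity parameters of Table 9):

```python
import numpy as np

def stdp_increment(dt_ms, lambda_plus=1.0, tau_plus=20.0):
    """Weight increment for a causal pre-before-post spike interval dt_ms > 0."""
    return lambda_plus * np.exp(-dt_ms / tau_plus)

def update_weight(J, dt_ms, J_max=30.0):
    """Potentiation with a hard upper bound (clipping) at J_max."""
    return min(J + stdp_increment(dt_ms), J_max)

# the ~30 ms inter-element interval during replay yields a larger increment than the
# 40 ms inter-stimulus interval during training, driving weights towards J_max
print(stdp_increment(30.0), stdp_increment(40.0))
```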
For illustration, we have restricted this study to relatively simple sets of S = 2 (Figs 3, 4, 6, 7 and 9) or S = 5 sequences (Fig 5) with C = 4 elements per sequence and 2 overlapping characters. In [5], we have demonstrated that the spiking TM model can successfully learn larger ensembles (up to 6) of longer sequences (up to 12 elements) with larger overlap (up to 10 elements). A systematic investigation of the spiking TM capacity accounting for the maximum number S and length C of sequences as well as the maximum amount of overlap (history dependence) will be the subject of future studies (see also [68]). For a larger number S of competing sequences, probability matching becomes harder because the differences pi − pj between the relative training frequencies pi (i = 1, …, S) in general become smaller, a consequence of 0 ≤ pi ≤ 1 and Σi pi = 1. Similar training frequencies lead to similar synaptic weights during the learning process, and in turn, to similar cue response latencies. It is therefore more likely that the winner-take-all dynamics does not come to a unique decision and leads to the joint replay of multiple competing sequences. For the specific choice of noise parameters σ and c used here, the replay frequencies approximately match the training frequencies.
The spiking TM model introduced in [5] can learn sequences with repeating elements, provided these elements are not immediately following each other. Learning a sequence {A,B,C,B}, for example, is possible, whereas learning of {A,B,B,C} is not. The plasticity dynamics employed in [5] and in this study prohibits a strengthening of connections between synchronously active neurons, i.e., neurons with the same stimulus preference (belonging to the same subpopulation). If the time difference between a presynaptic and a postsynaptic spike is smaller than Δtmin = 4 ms, a synapse between these neurons is neither potentiated by STDP nor affected by the homeostatic component (see Eqs (13) and (14) in Table 6: Plasticity). Without this restriction, connections between neurons within a subpopulation would quickly grow, in particular at an early learning stage where all neurons within a subpopulation fire in a non-sparse, synchronous manner. As a consequence, the activation of a subset of neurons within some subpopulation would immediately activate other neurons in the same population, and hence trigger a self-prediction. For a sequence {A,B,B,C}, such a self-prediction is indeed wanted, but only in response to the 2nd element. The 1st and the 3rd element must not lead to a self-prediction. Sequences with immediately repeating characters hence require a modification of the plasticity dynamics to permit the strengthening of connections between neurons corresponding to the same character, and at the same time, suppress an excessive growth of synapses between synchronously active neurons.
In the spiking TM model, postsynaptic currents are described by a current-based (CUBA) model where each presynaptic spike triggers a stereotype current response, irrespective of the postsynaptic membrane potential. Real synaptic (and other ionic) currents are mediated by conductances and are determined by the distance of the membrane potential from the respective reversal potential. In combination with point neuron models, the use of conductance-based (COBA) synapses is however problematic as each synapse would feel the same membrane potential, irrespective of its type. In real neurons, synapses on different parts of the neurons, e.g., different dendritic branches, experience different membrane potentials. In this study, we therefore decide in favor of the CUBA synapse model. The neglect of the voltage dependence of the synaptic current is particularly relevant for inhibitory currents. The activation of current-based inhibitory synapses can arbitrarily hyperpolarize the cell membrane (see S6 Fig). With a conductance-based (COBA) synapse model, in contrast, the membrane potential is bounded from below by the Cl− reversal potential which is close to the resting potential. Future studies need to investigate to what extent the inhibition-mediated competition mechanisms employed in this study and in [5] are altered if inhibitory currents are described by a COBA model. Further, in the spiking TM model, inhibition is for simplicity mediated by a single inhibitory neuron with very strong and very fast outgoing connections. Future versions of the model could replace this inhibitory neuron by a recurrently connected network of inhibitory neurons with realistic inhibitory weights and time constants. The inhibitory response would still be very fast due to the fast-tracking property of such networks [69].
Overall, our work ties together concepts from sequence processing and decision making in the face of ambiguity. It demonstrates that locally coherent noise is a potential mechanism underlying exploratory behavior, and shows that a random stimulus locking to coherent background activity such as cortical waves constitutes a natural and efficient form of such noise.
Materials and methods
In the following, we provide an overview of the task and the training protocol, the network model, and the analysis of the sequence replay statistics. A detailed description of the model and a list of parameter values are provided in Tables 1–8 and Table 9, respectively.
In Table 9, parameters derived from other parameters are marked in gray, curly brackets depict a set of values corresponding to different experiments, and bold numbers depict default values.
Learning protocol and task
During learning, the network is continuously exposed to repeated presentations of an ensemble of S sequences of ordered discrete items ζij. The order of the sequence elements within a given sequence represents the temporal order of the item occurrence. To investigate the sequence recall performance in the presence of ambiguity, we design the sequences such that they overlap in the first two elements ζ1 = ζi1 and ζ2 = ζi2 (i ∈ [1, …, S]).
The training period is subdivided into Ne episodes. Each training episode is composed of L sequences picked from the set {s1, s2, s3, …, sS} of S training sequences with relative frequencies p1, p2, p3, …, pS, respectively, such that Σi pi = 1. During training, this set of L sequences is presented repetitively (Ne times) with fixed order. Randomizing the sequence order during training does not affect the results provided the relative frequencies are preserved (S4 Fig). The total number piLNe of presentations of a specific sequence si during training is proportional to the training frequency pi.
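A training schedule with the required relative frequencies can be assembled as in this sketch (assuming piL is an integer for each sequence; the values of L and Ne below are illustrative):

```python
sequences = [["A", "F", "B", "D"], ["A", "F", "C", "E"]]   # s_1, s_2
p = [1 / 3, 2 / 3]   # relative training frequencies, summing to 1
L = 3                # sequence presentations per episode (illustrative)
N_e = 10             # number of training episodes (illustrative)

episode = []
for s_i, p_i in zip(sequences, p):
    episode += [s_i] * round(p_i * L)   # sequence s_i appears p_i * L times per episode

training_schedule = episode * N_e       # fixed presentation order, repeated N_e times
```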
After successful learning, the presentation of a sequence element leads to a context-dependent prediction of the subsequent stimulus. In case the prediction is wrong, the network generates a mismatch signal [5]. As the learned sequences overlap in the first two elements, choosing the cue to be the first sequence element (ζ1) results in an ambiguity. Here, we investigate the replay frequency of a given sequence si as a function of its training frequency pi and study whether the network can choose between different replay strategies (see Fig 2 and main text).
Network model
Network structure.
The network consists of a population of NE excitatory (“E”) neurons and a single inhibitory (“I”) neuron. The excitatory neurons are randomly and recurrently connected, such that each of them receives KEE excitatory inputs from other excitatory neurons. Excitatory neurons are recurrently connected to the single inhibitory neuron. The excitatory population is subdivided into M non-overlapping subpopulations M1, …, MM, each of them containing neurons with identical stimulus preference (“receptive field”). Each subpopulation thereby represents a specific element within a sequence.
External inputs during learning.
The network is driven by an ensemble of M external inputs. Each of these external inputs xk represents a specific sequence element (“A”, “B”, …), and feeds all neurons of the subpopulation Mk with the corresponding stimulus preference. The occurrence of a specific sequence element ζi,j at time ti,j is modeled by a single spike xk(t) = δ(t − ti,j) generated by the corresponding external source xk.
During training, subsequent sequence elements ζi,j and ζi,j+1 within a sequence si are presented with an inter-stimulus interval ΔT = ti,j+1 − ti,j. Subsequent sequences si and si+1 are separated in time by an inter-sequence time interval.
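As an illustration of this stimulus timing, the sketch below converts a training schedule into per-element spike times for the external sources. The helper name and the symbol dT_seq for the inter-sequence interval are hypothetical placeholders; the published implementation may organize this differently.

```python
def stimulus_spike_times(schedule, sequences, dT, dT_seq, t_start=0.0):
    """Map a training schedule to per-element external spike times.

    schedule  : list of sequence indices (see previous sketch)
    sequences : list of sequences, each a list of element labels, e.g. ["A","F","B","D"]
    dT        : inter-stimulus interval between subsequent elements (ms)
    dT_seq    : inter-sequence interval (hypothetical name, ms)
    Returns a dict mapping element label -> list of spike times.
    """
    spike_times = {}
    t = t_start
    for idx in schedule:
        for element in sequences[idx]:
            spike_times.setdefault(element, []).append(t)
            t += dT
        t += dT_seq - dT   # next sequence starts dT_seq after the last element
    return spike_times

# example: sequence set I, {A,F,B,D} and {A,F,C,E}
times = stimulus_spike_times([0, 1, 0], [["A", "F", "B", "D"], ["A", "F", "C", "E"]],
                             dT=40.0, dT_seq=100.0)
print(times["A"])   # onset times of element "A"
```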
External inputs during replay.
After learning the set of S sequences, we present cue signals encoding the first sequence elements ζ⋅,1 by repetitively activating the corresponding external spike source xk (see above) at Nt time points t1, t2, …, tNt. Subsequent cues are separated by an inter-trial interval ΔTcue,j = tj+1 − tj. In section “A spiking neural network recalls sequences in response to ambiguous cues”, ΔTcue,j is constant, and in section “Random stimulus locking to spatiotemporal oscillations as natural form of noise”, ΔTcue,j is randomly and uniformly distributed between umin and umax.
During replay, excitatory neurons are additionally driven by a background input implemented either in the form of asynchronous irregular synaptic bombardment (see “A spiking neural network recalls sequences in response to ambiguous cues”) or oscillatory inputs (see “Random stimulus locking to spatiotemporal oscillations as natural form of noise”). The first is realized using ensembles Qk and Vk of excitatory and inhibitory spike sources (k ∈ [1, …, M]), each composed of n elements. Each source is an independent realization of a Poisson point process with a rate ν. Excitatory neurons in the same subpopulation Mk receive KEQ inputs with weight JEQ from the ensemble Qk and KEV inputs with weight JEV = −JEQ from the ensemble Vk. Spikes from Qk and Vk give rise to a jump in the synaptic current of the postsynaptic cell followed by an exponential decay with a time constant τEQ and τEV = τEQ, respectively. The time averaged input current of a neuron i is μi = Kν(JEQτEQ + JEVτEV) = 0 (20) and the variance across time is σi² = Kν(JEQ²τEQ + JEV²τEV)/2 = KνJ²τB, (21) where J = JEQ = −JEV, τB = τEQ = τEV, and K = KEQ = KEV. As the pools of background sources are of finite size, two neurons in the same subpopulation pick a certain number of identical sources with nonzero probability; this gives rise to the so-called shared-input correlation. The correlation coefficient of the input currents is governed by c = K/n. (22) With this relationship, we can vary the correlation coefficient by fixing K and varying n. For the special case c = 0, we assume that each neuron has its own set of independent Poissonian sources. The second type of background input is implemented using an ensemble of M sinusoidal current generators gk, each with a frequency f, amplitude a, and a phase ϕk (k ∈ [1, …, M]). Excitatory neurons in the same subpopulation Mk receive oscillatory inputs from the same source gk.
Note that the additional background noise described above is not present during the training.
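The shared-source construction can be illustrated with the following sketch, which draws K excitatory and K inhibitory sources per neuron from finite pools of size n and filters their spikes with an exponential kernel; under these assumptions the empirical input correlation should be close to K/n. All parameter values here are illustrative, not those of Table 9.

```python
import numpy as np

def shared_source_currents(n_neurons, n, K, rate, J, tau, T, dt, rng):
    """Background currents built from finite pools of n Poisson sources per type.

    Each neuron draws K excitatory (+J) and K inhibitory (-J) sources from the
    pools; overlapping draws produce the shared-input correlation c ~ K/n.
    """
    steps = int(T / dt)
    lam = rate * dt * 1e-3                     # expected spikes per bin (rate in 1/s, dt in ms)
    exc = rng.poisson(lam, size=(n, steps))    # spike counts of excitatory sources
    inh = rng.poisson(lam, size=(n, steps))    # spike counts of inhibitory sources
    decay = np.exp(-dt / tau)
    drive = np.empty((n_neurons, steps))
    for i in range(n_neurons):
        e = rng.choice(n, size=K, replace=False)
        v = rng.choice(n, size=K, replace=False)
        drive[i] = J * exc[e].sum(axis=0) - J * inh[v].sum(axis=0)
    I = np.zeros((n_neurons, steps))
    for t in range(1, steps):                  # current: jump per spike, exponential decay
        I[:, t] = I[:, t - 1] * decay + drive[:, t]
    return I

rng = np.random.default_rng(1)
I = shared_source_currents(n_neurons=50, n=25, K=20, rate=1000.0, J=1.0,
                           tau=2.0, T=2000.0, dt=0.1, rng=rng)
c = np.corrcoef(I)[np.triu_indices(50, k=1)].mean()
print(f"empirical input correlation {c:.2f} vs. prediction K/n = {20 / 25:.2f}")
```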
Neuron and synapse model.
For all types of neurons, the temporal evolution of the membrane potential is given by the leaky integrate-and-fire model (Eq 4). The total synaptic input current of excitatory neurons is composed of currents in distal dendritic branches, inhibitory currents, and currents from external sources. The inhibitory neuron receives only inputs from excitatory neurons. Individual spikes arriving at dendritic branches evoke alpha-shaped postsynaptic currents (Eq 6). The dendritic current includes an additional nonlinearity describing the generation of dendritic action potentials (dAPs; NMDA spikes): if the dendritic current IED exceeds a threshold θdAP, it is instantly set to the dAP plateau current IdAP, and clamped to this value for a period of duration τdAP (Eq 10). This plateau current leads to a long-lasting depolarization of the soma. The dendritic input current IED constitutes a simplified, phenomenological description of the effect of NMDA spikes on the somatic membrane potential [70–72]. Similar models have been introduced in previous theoretical studies [73, 74]. For simplicity, we equip each excitatory neuron with only a single dendritic branch, i.e., a single dendritic input current IED. We employ alpha-shaped postsynaptic dendritic currents with finite rise times to ensure that the response latencies during cue-triggered sequence replay depend on the synaptic weights of connections between excitatory neurons, and hence, on the occurrence frequencies of the learned sequences during training (see section “A spiking neural network recalls sequences in response to ambiguous cues”). Inhibitory inputs to excitatory neurons as well as excitatory inputs to the inhibitory neuron trigger exponential postsynaptic currents (Eqs 7 and 8). The weights JIE of excitatory synapses on the inhibitory neuron are chosen such that the collective firing of a subset of ρ excitatory neurons in the corresponding subpopulation causes the inhibitory neuron to fire. The weights JEI of inhibitory synapses on excitatory neurons are strong, such that each inhibitory spike prevents all excitatory neurons in the network from firing within a time interval of a few milliseconds. External inputs are composed of currents resulting from the presentation of the sequence elements or currents from background inputs (see Inputs in Table 7). All synaptic time constants, delays, and weights are connection-type specific.
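The dAP nonlinearity can be sketched as follows: dendritic inputs are summed linearly as alpha-shaped currents and, once the threshold θdAP is crossed, the current is clamped to the plateau value IdAP for a duration τdAP. This is a minimal stand-alone illustration of Eq 10 with made-up parameter values, not the NESTML implementation.

```python
import numpy as np

def dendritic_current(spike_train, J, tau_s, theta_dAP, I_dAP, tau_dAP, dt, T):
    """Phenomenological dendritic current with a dAP (plateau) nonlinearity.

    Incoming spikes evoke alpha-shaped currents; whenever the summed dendritic
    current exceeds theta_dAP, it is clamped to the plateau value I_dAP for a
    duration tau_dAP, mimicking an NMDA spike.
    """
    steps = int(T / dt)
    t = np.arange(steps) * dt
    # alpha-shaped postsynaptic current kernel with peak amplitude J at t = tau_s
    kernel = J * np.e * (t / tau_s) * np.exp(-t / tau_s)
    drive = np.zeros(steps)
    for ts in spike_train:
        drive[int(ts / dt)] += 1.0
    I_lin = np.convolve(drive, kernel)[:steps]     # linear summation of inputs
    I = I_lin.copy()
    k = 0
    while k < steps:
        if I[k] >= theta_dAP:                      # dAP onset: clamp to plateau
            stop = min(k + int(tau_dAP / dt), steps)
            I[k:stop] = I_dAP
            k = stop                               # skip the plateau period
        else:
            k += 1
    return t, I

# example: three input spikes close in time trigger a single dAP plateau
t, I = dendritic_current([10.0, 12.0, 14.0], J=30.0, tau_s=2.0,
                         theta_dAP=59.0, I_dAP=200.0, tau_dAP=60.0, dt=0.1, T=150.0)
print("peak dendritic current:", I.max())
```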
Plasticity.
Only excitatory-to-excitatory (EE) synapses are plastic; all other connections are static. The dynamics of the EE synaptic weights Jij evolve according to a combination of an additive spike-timing-dependent plasticity (STDP) rule [75] and a homeostatic component [76, 77]. During replay, the plasticity is disabled and the EE weights are kept constant (see Table 6 for details on the plasticity).
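As an orientation, the following sketch shows a generic additive STDP update combined with a rate-based homeostatic term and hard weight bounds. It is not the exact rule of Table 6; the parameter names (lambda_plus, lambda_h, rate_target, …) are placeholders chosen for illustration.

```python
import numpy as np

def stdp_homeostatic_update(w, dt_spike, rate_post, params):
    """Additive STDP with a homeostatic term (generic sketch, not Table 6 verbatim).

    dt_spike  : t_post - t_pre for a pre/post spike pairing (ms)
    rate_post : recent postsynaptic firing-rate estimate
    """
    p = params
    if dt_spike > 0:                                   # causal pairing: potentiation
        dw = p["lambda_plus"] * np.exp(-dt_spike / p["tau_plus"])
    else:                                              # acausal pairing: depression
        dw = -p["lambda_minus"] * np.exp(dt_spike / p["tau_minus"])
    # homeostatic component pulls the postsynaptic rate towards a target rate
    dw += p["lambda_h"] * (p["rate_target"] - rate_post)
    return np.clip(w + dw, p["w_min"], p["w_max"])     # additive rule with hard bounds

params = dict(lambda_plus=1.0, lambda_minus=1.0, tau_plus=20.0, tau_minus=20.0,
              lambda_h=0.1, rate_target=1.0, w_min=0.0, w_max=100.0)
w = 10.0
for dt_spike in [5.0, 8.0, -3.0]:                      # a few example pairings
    w = stdp_homeostatic_update(w, dt_spike, rate_post=0.5, params=params)
print("weight after pairings:", w)
```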
Network realizations and initial conditions.
For every network realization, the connectivity and the initial weights are drawn randomly and independently. All other parameters are identical for different network realizations. The initial values of all state variables are given in Tables 8 and 9.
Simulation details.
The network simulations are performed with the neural simulator NEST [78], version 3.0 [79]. The differential equations and state transitions defining the excitatory neuron dynamics are expressed in the domain-specific language NESTML [80, 81], which generates the C++ code required for dynamic loading into NEST. Network states are synchronously updated using exact integration of the system dynamics on a discrete-time grid with step size Δt [82]. The full source code of the implementation, together with a list of other software requirements, is available at Zenodo: https://doi.org/10.5281/zenodo.6378376.
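A minimal NEST workflow of the kind used here might look as follows. Note that the published model relies on a custom NESTML neuron with a dendritic dAP current; the built-in iaf_psc_exp neuron and all weights below are stand-ins for illustration only.

```python
import nest

nest.ResetKernel()
nest.SetKernelStatus({"resolution": 0.1})              # discrete-time grid step (ms)

# stand-in for one stimulus-specific subpopulation (the actual model uses a
# custom NESTML neuron with a dendritic dAP current, not iaf_psc_exp)
pop = nest.Create("iaf_psc_exp", 10)
stim = nest.Create("spike_generator", params={"spike_times": [10.0, 50.0]})
noise = nest.Create("poisson_generator", params={"rate": 1000.0})
rec = nest.Create("spike_recorder")

# illustrative weights (pA), chosen here only so that the stimulus evokes spikes
nest.Connect(stim, pop, syn_spec={"weight": 4000.0, "delay": 0.1})
nest.Connect(noise, pop, syn_spec={"weight": 20.0, "delay": 0.1})
nest.Connect(pop, rec)

nest.Simulate(100.0)
print(rec.get("events")["times"])                      # recorded spike times
```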
Sequence replay statistics
We define a sequence si to be replayed in response to a cue if more than 0.5ρ neurons in the subpopulation representing the last element of si fire. The parameter ρ corresponds to the minimal number of neurons required to trigger the winner-take-all (WTA) circuit. It therefore represents the minimal number of active neurons in a subpopulation after successful learning. In the absence of noise, the actual number of active neurons in a subpopulation after successful learning is indeed close to ρ (see [5]). In the present study, we find a similar behavior in the presence of correlated noise (see S2 Fig).
Consider the set of S sequences learned by the network, and let P denote its power set, i.e., the set of all subsets of learned sequences, including the empty set ∅ and the full set itself. We define the relative replay frequency fU of each subset U ∈ P as the normalized number of exclusive replays of this subset, such that ∑U fU = 1. (23)
For two sequences s1 and s2, for example, we monitor the four different replay frequencies f∅ (no sequence is replayed), f{s1} (only s1 is replayed), f{s2} (only s2 is replayed), and f{s1,s2} (both s1 and s2 are replayed). In this work, we refer to f∅ as the “failure rate”. Simultaneous replay of both sequences (f{s1,s2}) refers to cases where the network fails to come to a unique decision.
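The bookkeeping of Eq 23 can be sketched as follows: given, for each cue trial, the set of sequences detected as replayed, the relative frequency of every subset of learned sequences is the fraction of trials in which exactly that subset was replayed. Function and variable names are illustrative.

```python
from collections import Counter
from itertools import combinations

def replay_frequencies(trial_outcomes, sequences):
    """Relative replay frequency f_U for every subset U of learned sequences.

    trial_outcomes : per cue trial, the set of sequences detected as replayed
                     (a sequence counts as replayed if more than 0.5*rho neurons
                     of its last subpopulation fire; detection not shown here)
    """
    # enumerate the power set of the learned sequences
    power_set = [frozenset(c) for r in range(len(sequences) + 1)
                 for c in combinations(sequences, r)]
    counts = Counter(frozenset(outcome) for outcome in trial_outcomes)
    n_trials = len(trial_outcomes)
    return {U: counts.get(U, 0) / n_trials for U in power_set}   # sums to 1 (Eq 23)

# example with two sequences and four cue trials
outcomes = [{"s1"}, {"s2"}, set(), {"s1"}]
for U, f in replay_frequencies(outcomes, ["s1", "s2"]).items():
    print(sorted(U), f)
```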
Supporting information
S1 Fig. Adjusting level of correlation permits different replay strategies.
Dependence of the relative replay frequencies of sequences 1 (A, B) and 2 (C, D) on the training frequency p1 = p of sequence 1 for three different correlation levels c = 0, c = 0.8, and c = 1 (A, C), and for a range of correlations (B, D). Parameters: noise amplitude σ = 15 pA and inhibitory weight during replay JEI = −430.51 pA, adjusted only for connections from the inhibitory neuron to the subpopulation F. The replay frequencies are computed as the mean across Nt = 151 trials, averaged across 5 different network realizations. See Table 9 for remaining parameters.
https://doi.org/10.1371/journal.pcbi.1010989.s001
(EPS)
S2 Fig. Response sparsity during replay in the presence of correlated noise.
Dependence of the number of active neurons in the subpopulation corresponding to the last element in {A, F, B, D} (brown) and {A, F, C, E} (blue) on the relative training frequency of sequence 1. The dotted gray horizontal line depicts the target number of active neurons per subpopulation after learning. Noise parameters: σ = 26 pA, c = 1. See Table 9 for remaining parameters.
https://doi.org/10.1371/journal.pcbi.1010989.s002
(EPS)
S3 Fig. Sequence replay in the presence of ongoing synaptic plasticity.
Dependence of A) the compound weights (PSC amplitudes) wBF (brown) and wCF (blue), B) the population averaged response latencies tB and tC of subpopulations “B” (brown) and “C” (blue), and C) the relative replay frequencies of sequences 1 (brown) and 2 (blue), the failure rate f∅ (gray), and the joint probability of replaying both sequences (silver) on the training frequency of sequence 1. In panel A, circles and error bars depict the mean and the standard deviation across different network realizations. In panel B, circles and error bars represent the mean and the standard deviation across Nt = 101 trials (cue repetitions), averaged across 5 different network realizations. Note that we run the replay for 200 trials but plot the statistics of only the last 101 trials. In panel C, circles represent the mean across Nt = 101 trials, averaged across 5 different network realizations. Noise parameters: σ = 26 pA, c = 1. See Table 9 for remaining parameters. The data depicted here are results from simulations with synaptic plasticity enabled during replay. For the results shown in Fig 4, in contrast, the plasticity is disabled during replay to preserve the synaptic weight configuration after training.
https://doi.org/10.1371/journal.pcbi.1010989.s003
(EPS)
S4 Fig. Sequence replay for randomized sequence order during training.
Dependence of the relative replay frequencies of sequences 1 (brown) and 2 (blue), the failure rate (gray), and the joint probability of replaying both sequences (silver) on the training frequency of sequence 1 for three different noise configurations: σ = 0 pA, c = 0 (left); σ = 26 pA, c = 0 (middle); and σ = 26 pA, c = 1 (right). Circles represent the mean across Nt = 151 trials, averaged across 5 different network realizations. The data depicted here are generated using the same settings as in Fig 4F and 4G, but with a randomized order of sequences during training.
https://doi.org/10.1371/journal.pcbi.1010989.s004
(EPS)
S5 Fig. Effect of the learning duration on the probability matching performance.
Dependence of the replay frequencies of sequences 1 (brown) and 2 (blue) of sequence set I, the failure rate (gray) and the joint probability of replaying both sequences (silver) on the number of training episodes. Each episode refers to a set of ten sequences, where each sequence is picked from the set {s1, s2} with relative frequencies p1 = 0.2 (brown dotted horizontal line) and p2 = 1 − p1 = 0.8 (blue dotted horizontal line), respectively. Noise parameters: σ = 20 pA, c = 1.
https://doi.org/10.1371/journal.pcbi.1010989.s005
(EPS)
S6 Fig. Spiking activity (top) and membrane potentials (bottom) at the end of the training and during replay.
A,B) During training (left), the network is exposed to repeated presentations of sequence 1 {A, F, B, D} and sequence 2 {A, F, C, E} (sequence set I) with training frequencies p1 = 0.4 and p2 = 0.6, respectively. Here, only the responses to a single presentation of sequence 1 (black triangles in panel A) are shown at the end of the training period (after 20 episodes). C,D) Autonomous replay of sequence 1 in response to activation of sequence element “A” (black triangle in panel C). For clarity, panels A and B show only a small fraction of neurons in each population. Traces in panels C and D depict membrane potentials of two neurons in populations “B” (brown) and “C” (blue), participating in sequences 1 and 2, respectively. During replay, neurons are subject to correlated background noise (σ = 26 pA, c = 1). The resulting membrane potential fluctuations are, however, small and barely visible in panel D, due to the large hyperpolarizations caused by the global inhibitory feedback. Small bars in panels C and D depict somatic spikes (threshold crossings). In panel D, neurons in both populations “B” (brown) and “C” (blue) generate dAPs (predictions) at about 75 ms in response to the ambiguous history “A” and “F”. The voltage of the neuron in population “B” (brown) reaches the spike threshold θE (dotted line) first, generates a somatic spike (brown bar), and contributes to the inhibitory feedback leading to the fast and strong hyperpolarization of the neuron in population “C” (blue) and all other excitatory neurons in the network (not shown here).
https://doi.org/10.1371/journal.pcbi.1010989.s006
(EPS)
Acknowledgments
The authors thank Abigail Morrison, Alexander René and Robin Gutzen for valuable discussions on the project.
References
- 1. Klampfl S, Maass W. Emergence of dynamic memory traces in cortical microcircuit models through STDP. J Neurosci. 2013;33(28):11515–11529. Available from: https://doi.org/10.1523/jneurosci.5044-12.2013 pmid:23843522
- 2. Klos C, Miner D, Triesch J. Bridging structure and function: A model of sequence learning and prediction in primary visual cortex. PLOS Comput Biol. 2018;14(6):e1006187. Available from: https://doi.org/10.1371/journal.pcbi.1006187 pmid:29870532
- 3. Maes A, Barahona M, Clopath C. Learning spatiotemporal signals using a recurrent spiking network that discretizes time. PLOS Comput Biol. 2020;16(1):e1007606. Available from: https://doi.org/10.1371/journal.pcbi.1007606 pmid:31961853
- 4. Cone I, Shouval HZ. Learning precise spatiotemporal sequences via biophysically realistic learning rules in a modular, spiking network. eLife. 2021;10:e63751. Available from: https://doi.org/10.7554/elife.63751 pmid:33734085
- 5. Bouhadjar Y, Wouters DJ, Diesmann M, Tetzlaff T. Sequence learning, prediction, and replay in networks of spiking neurons. PLOS Comput Biol. 2022;18(6):e1010233. Available from: https://doi.org/10.1371/journal.pcbi.1010233 pmid:35727857
- 6. Hawkins J, Ahmad S. Why neurons have thousands of synapses, a theory of sequence memory in neocortex. Front Neural Circuits. 2016;10:23. Available from: https://doi.org/10.3389/fncir.2016.00023 pmid:27065813
- 7. Cohen JD, McClure SM, Yu AJ. Should I stay or should I go? How the human brain manages the trade-off between exploitation and exploration. Philos Trans R Soc B. 2007 March;362(1481):933–942. Available from: https://doi.org/10.1098/rstb.2007.2098
- 8. O’Doherty JP, Cockburn J, Pauli WM. Learning, Reward, and Decision Making. Annu Rev Psychol. 2017 January;68(1):73–100. Available from: https://doi.org/10.1146/annurev-psych-010416-044216 pmid:27687119
- 9. Vulkan N. An Economist’s Perspective on Probability Matching. J Econ Surv. 2000 February;14(1):101–118. Available from: https://doi.org/10.1111/1467-6419.00106
- 10. Myers JL. Probability learning and sequence learning. Handbook of Learning and Cognitive Processes, ed WK Estes. 2014;p. 171–205.
- 11. Shanks DR, Tunney RJ, McCarthy JD. A re-examination of probability matching and rational choice. J Behav Decis Mak. 2002;15(3):233–250. Available from: https://doi.org/10.1002/bdm.413
- 12. Bod R, Hay J, Jannedy S. Probabilistic linguistics. MIT press; 2003.
- 13. Hansen KA, Hillenbrand SF, Ungerleider LG. Effects of Prior Knowledge on Decisions Made Under Perceptual vs. Categorical Uncertainty. Front Neurosci. 2012;6:163. Available from: https://doi.org/10.3389/fnins.2012.00163 pmid:23162424
- 14. Buesing L, Bill J, Nessler B, Maass W. Neural Dynamics as Sampling: A Model for Stochastic Computation in Recurrent Networks of Spiking Neurons. PLOS Comput Biol. 2011;7:e1002211. pmid:22096452
- 15. Legenstein R, Maass W. Ensembles of Spiking Neurons with Noise Support Optimal Probabilistic Inference in a Dynamically Changing Environment. PLOS Comput Biol. 2014 October;10(10):e1003859. Available from: https://doi.org/10.1371/journal.pcbi.1003859 pmid:25340749
- 16. Hartmann C, Lazar A, Nessler B, Triesch J. Where’s the noise? Key features of spontaneous activity and neural variability arise through learning in a deterministic network. PLOS Comput Biol. 2015;11(12):e1004640. pmid:26714277
- 17. Neftci EO, Pedroni BU, Joshi S, Al-Shedivat M, Cauwenberghs G. Stochastic Synapses Enable Efficient Brain-Inspired Learning Machines. Front Neurosci. 2016 June;10:241. Available from: https://doi.org/10.3389/fnins.2016.00241 pmid:27445650
- 18. Jordan J, Petrovici MA, Breitwieser O, Schemmel J, Meier K, Diesmann M, et al. Deterministic networks for probabilistic computing. Sci Rep. 2019;9:18303. Available from: https://www.nature.com/articles/s41598-019-54137-7 pmid:31797943
- 19. Dold D, Bytschok I, Kungl AF, Baumbach A, Breitwieser O, Senn W, et al. Stochasticity from function—why the bayesian brain may need no noise. Neural Netw. 2019;119:200–213. pmid:31450073
- 20. Kriener B, Tetzlaff T, Aertsen A, Diesmann M, Rotter S. Correlations and population dynamics in cortical networks. Neural Comput. 2008;20:2185–2226. pmid:18439141
- 21. Tetzlaff T, Rotter S, Stark E, Abeles M, Aertsen A, Diesmann M. Dependence of neuronal correlations on filter characteristics and marginal spike-train statistics. Neural Comput. 2008 September;20(9):2133–2184. Available from: https://doi.org/10.1162/neco.2008.05-07-525 pmid:18439140
- 22. Renart A, De La Rocha J, Bartho P, Hollender L, Parga N, Reyes A, et al. The asynchronous State in Cortical Circuits. Science. 2010 January;327:587–590. Available from: https://doi.org/10.1126/science.1179850 pmid:20110507
- 23. Tetzlaff T, Helias M, Einevoll GT, Diesmann M. Decorrelation of Neural-Network Activity by Inhibitory Feedback. PLOS Comput Biol. 2012;8(8):e1002596. Available from: https://doi.org/10.1371/journal.pcbi.1002596 pmid:23133368
- 24. Helias M, Tetzlaff T, Diesmann M. The correlation structure of local cortical networks intrinsically results from recurrent dynamics. PLOS Comput Biol. 2014;10(1):e1003428. Available from: https://doi.org/10.1371/journal.pcbi.1003428 pmid:24453955
- 25. Brunel N, Hakim V. Fast Global Oscillations in Networks of Integrate-and-Fire Neurons with Low Firing Rates. Neural Comput. 1999 October;11(7):1621–1671. Available from: https://doi.org/10.1162/089976699300016179 pmid:10490941
- 26. Brunel N. Dynamics of sparsely connected networks of excitatory and inhibitory spiking neurons. J Comput Neurosci. 2000;8(3):183–208. Available from: https://doi.org/10.1023/a:1008925309027 pmid:10809012
- 27. Sato TK, Nauhaus I, Carandini M. Traveling Waves in Visual Cortex. Neuron. 2012 July;75(2):218–229. Available from: https://doi.org/10.1016/j.neuron.2012.06.029 pmid:22841308
- 28. Takahashi K, Kim S, Coleman TP, Brown KA, Suminski AJ, Best MD, et al. Large-scale spatiotemporal spike patterning consistent with wave propagation in motor cortex. Nat Commun. 2015 May;6(7169):1–11. Available from: https://doi.org/10.1038/ncomms8169 pmid:25994554
- 29. Roxin A, Brunel N, Hansel D. The role of delays in shaping spatio-temporal dynamics of neuronal activity in large networks. Phys Rev Lett. 2005 June;94(23):238103. Available from: https://doi.org/10.1103/physrevlett.94.238103 pmid:16090506
- 30. Senk J, Korvasová K, Schuecker J, Hagen E, Tetzlaff T, Diesmann M, et al. Conditions for wave trains in spiking neural networks. Phys Rev Res. 2020 May;2(2). Available from: https://doi.org/10.1103/physrevresearch.2.023174
- 31. Lapicque L. Recherches quantitatives sur l’excitation électrique des nerfs traitée comme une polarization. J Physiol Pathol Gen. 1907;9:620–635.
- 32. Goedeke S, Diesmann M. The mechanism of synchronization in feed-forward neuronal networks. New J Phys. 2008;10:015007.
- 33. De la Rocha J, Doiron B, Shea-Brown E, Kresimir J, Reyes A. Correlation between neural spike trains increases with firing rate. Nature. 2007 August;448(16):802–807. pmid:17700699
- 34. Atherton LA, Dupret D, Mellor JR. Memory trace replay: the shaping of memory consolidation by neuromodulation. Trends Neurosci. 2015 September;38(9):560–570. Available from: https://doi.org/10.1016/j.tins.2015.07.004 pmid:26275935
- 35. Baluch F, Itti L. Mechanisms of top-down attention. Trends Neurosci. 2011 April;34(4):210–224. Available from: https://doi.org/10.1016/j.tins.2011.02.003 pmid:21439656
- 36. Abeles M. Corticonics: Neural Circuits of the Cerebral Cortex. 1st ed. Cambridge: Cambridge University Press; 1991.
- 37. Braitenberg V, Schüz A. Cortex: Statistics and Geometry of Neuronal Connectivity. 2nd ed. Berlin: Springer-Verlag; 1998.
- 38. Shadlen MN, Newsome WT. The Variable Discharge of Cortical Neurons: Implications for Connectivity, Computation, and Information Coding. J Neurosci. 1998;18(10):3870–3896. Available from: https://doi.org/10.1523/jneurosci.18-10-03870.1998 pmid:9570816
- 39. Song S, Sjöström P, Reigl M, Nelson S, Chklovskii D. Highly nonrandom features of synaptic connectivity in local cortical circuits. PLOS Biol. 2005;3(3):e68. pmid:15737062
- 40. Nauhaus I, Busse L, Carandini M, Ringach DL. Stimulus contrast modulates functional connectivity in visual cortex. Nat Neurosci. 2009;12:70–76. Available from: https://doi.org/10.1038/nn.2232 pmid:19029885
- 41. Muller L, Destexhe A. Propagating waves in thalamus, cortex and the thalamocortical system: Experiments and models. J Physiol. 2012 September;106(5-6):222–238. Available from: https://doi.org/10.1016/j.jphysparis.2012.06.005. pmid:22863604
- 42. Denker M, Zehl L, Kilavik BE, Diesmann M, Brochier T, Riehle A, et al. LFP beta amplitude is linked to mesoscopic spatio-temporal phase patterns. Sci Rep. 2018 March;8(1):1–21. Available from: https://doi.org/10.1038/s41598-018-22990-7 pmid:29581430
- 43. Zanos TP, Mineault PJ, Nasiotis KT, Guitton D, Pack CC. A Sensorimotor Role for Traveling Waves in Primate Visual Cortex. Neuron. 2015 February;85(3):615–627. Available from: https://doi.org/10.1016/j.neuron.2014.12.043 pmid:25600124
- 44. Davis ZW, Muller L, Martinez-Trujillo J, Sejnowski T, Reynolds JH. Spontaneous travelling cortical waves gate perception in behaving primates. Nature. 2020;587(7834):432–436. pmid:33029013
- 45. Buzsáki G. Rhythms of the Brain. Oxford University Press; 2006. Available from: https://doi.org/10.1093/acprof:oso/9780195301069.001.0001.
- 46. Buzsáki G, Draguhn A. Neuronal Oscillations in Cortical Networks. Science. 2004;304:1926–1929. pmid:15218136
- 47. Salinas E, Sejnowski TJ. Correlated neuronal activity and the flow of neural information. Nat Rev Neurosci. 2001;2(8):539–550. pmid:11483997
- 48. Faisal AA, Selen LP, Wolpert DM. Noise in the nervous system. Nat Rev Neurosci. 2008;9(4):292–303. pmid:18319728
- 49. Fellous J, Tiesinga P, Thomas P, Sejnowski T. Discovering spike patterns in neuronal responses. J Neurosci. 2004 March;12(24):2989–3001. pmid:15044538
- 50. Destexhe A, Rudolph M, Fellous JM, Sejnowski TJ. Fluctuating synaptic conductances recreate in vivo-like activity in neocortical neurons. Neuroscience. 2001 November;107(1):13–24. Available from: https://doi.org/10.1016/s0306-4522(01)00344-x pmid:11744242
- 51. Holt GR, Softky WR, Koch C, Douglas RJ. Comparison of discharge variability in vitro and in vivo in cat visual cortex neurons. J Neurophysiol. 1996 May;75(5):1806–1814. Available from: https://doi.org/10.1152/jn.1996.75.5.1806 pmid:8734581
- 52. Stroeve S, Gielen S. Correlation Between Uncoupled Conductance-Based Integrate-and-Fire Neurons Due to Common and Synchronous Presynaptic Firing. Neural Comput. 2001;13(9):2005–2029. pmid:11516355
- 53. Ito S, Hansen M, Heiland R, Lumsdaine A, Litke A, Beggs J. Extending Transfer Entropy Improves Identification of Effective Connectivity in a Spiking Cortical Network Model. PLOS ONE. 2011;6.
- 54. Barczak A, Haegens S, Ross DA, McGinnis T, Lakatos P, Schroeder CE. Dynamic modulation of cortical excitability during visual active sensing. Cell reports. 2019;27(12):3447–3459. pmid:31216467
- 55. Ermentrout B. Neural networks as spatio-temporal pattern-forming systems. Rep Prog Phys. 1998;61(4):353–430. Available from: https://doi.org/10.1088/0034-4885/61/4/002
- 56. Coombes S. Waves, bumps, and patterns in neural field theories. Biol Cybern. 2005 July;93:91–108. Available from: https://doi.org/10.1007/s00422-005-0574-y pmid:16059785
- 57. Muller L, Chavane F, Reynolds J, Sejnowski TJ. Cortical travelling waves: mechanisms and computational principles. Nat Rev Neurosci. 2018 March;19(5):255–268. Available from: https://doi.org/10.1038/nrn.2018.20 pmid:29563572
- 58. Cohen MR, Maunsell JHR. Attention improves performance primarily by reducing interneuronal correlations. Nat Neurosci. 2009;12:1594–1600. pmid:19915566
- 59. Daw ND, O’Doherty JP, Dayan P, Seymour B, Dolan RJ. Cortical substrates for exploratory decisions in humans. Nature. 2006 June;441(7095):876–879. Available from: https://doi.org/10.1038/nature04766 pmid:16778890
- 60. Branco T, Staras K. The probability of neurotransmitter release: variability and feedback control at single synapses. Nat Rev Neurosci. 2009 May;10(5):373–383. Available from: https://doi.org/10.1038/nrn2634 pmid:19377502
- 61. Maass W. Noise as a resource for computation and learning in networks of spiking neurons. Proc IEEE. 2014;102(5):860–880.
- 62. Nawrot MP, Schnepel P, Aertsen A, Boucsein C. Precisely timed signal transmission in neocortical networks with reliable intermediate-range projections. Front Neural Circuits. 2009;3(1). Available from: http://www.frontiersin.org/neural_circuits/10.3389/neuro.04.001.2009/abstract pmid:19225575
- 63. Nádasdy Z, Hirase H, Czurkó A, Csicsvari J, Buzsáki G. Replay and Time Compression of Recurring Spike Sequences in the Hippocampus. J Neurosci. 1999 November;19(21):9497–9507. Available from: https://doi.org/10.1523/jneurosci.19-21-09497.1999 pmid:10531452
- 64. Lee AK, Wilson MA. Memory of Sequential Experience in the Hippocampus during Slow Wave Sleep. Neuron. 2002 December;36(6):1183–1194. Available from: https://doi.org/10.1016/s0896-6273(02)01096-6 pmid:12495631
- 65. Euston DR, Tatsuno M, McNaughton BL. Fast-Forward Playback of Recent Memory Sequences in Prefrontal Cortex During Sleep. Science. 2007 November;318(5853):1147–1150. Available from: https://doi.org/10.1126/science.1148979 pmid:18006749
- 66. Davidson TJ, Kloosterman F, Wilson MA. Hippocampal replay of extended experience. Neuron. 2009;63(4):497–507. pmid:19709631
- 67. Xu S, Jiang W, Poo MM, Dan Y. Activity recall in a visual cortical ensemble. Nat Neurosci. 2012 Mar;15(3):449–455. pmid:22267160
- 68. Ahmad S, Hawkins J. How do neurons operate on sparse distributed representations? A mathematical theory of sparsity, neurons and active dendrites. ArXiv. 2016;p. 1601.00720. Available from: https://arxiv.org/abs/1601.00720.
- 69. van Vreeswijk C, Sompolinsky H. Chaotic Balanced State in a Model of Cortical Circuits. Neural Comput. 1998;10(6):1321–1371. Available from: https://doi.org/10.1162/089976698300017214 pmid:9698348
- 70. Antic SD, Zhou WL, Moore AR, Short SM, Ikonomu KD. The decade of the dendritic NMDA spike. J Neurosci Res. 2010;88(14):2991–3001. pmid:20544831
- 71. Schiller J, Major G, Koester HJ, Schiller Y. NMDA spikes in basal dendrites of cortical pyramidal neurons. Nature. 2000 Mar;404(6775):285–289. Available from: http://doi.org/10.1038/35005094 pmid:10749211
- 72. Larkum ME, Nevian T, Sandler M, Polsky A, Schiller J. Synaptic Integration in Tuft Dendrites of Layer 5 Pyramidal Neurons: A New Unifying Principle. Science. 2009 August;325(5941):756–760. Available from: https://doi.org/10.1126/science.1171958 pmid:19661433
- 73. Jahnke S, Timme M, Memmesheimer RM. Guiding Synchrony through Random Networks. Phys Rev X. 2012;2(4):041016. Available from: https://doi.org/10.1103/physrevx.2.041016.
- 74. Breuer D, Timme M, Memmesheimer RM. Statistical physics of neural systems with nonadditive dendritic coupling. Phys Rev X. 2014;4(1):011053. Available from: https://doi.org/10.1103/physrevx.4.011053.
- 75. Morrison A, Diesmann M, Gerstner W. Phenomenological models of synaptic plasticity based on spike-timing. Biol Cybern. 2008;98(6):459–478. Available from: https://doi.org/10.1007/s00422-008-0233-1 pmid:18491160
- 76. Abbott LF, Nelson SB. Synaptic plasticity: taming the beast. Nat Neurosci. 2000 November;3:1178–1183. Available from: https://doi.org/10.1038/81453 pmid:11127835
- 77. Tetzlaff C, Kolodziejski C, Timme M, Wörgötter F. Synaptic scaling in combination with many generic plasticity mechanisms stabilizes circuit connectivity. Front Comput Neurosci. 2011;5:47. pmid:22203799
- 78. Gewaltig MO, Diesmann M. NEST (NEural Simulation Tool). Scholarpedia J. 2007;2(4):1430. Available from: https://doi.org/10.4249/scholarpedia.1430
- 79. Hahne J, Diaz S, Patronis A, Schenck W, Peyser A, Graber S, et al. NEST 3.0. Zenodo; 2021. Available from: https://doi.org/10.5281/zenodo.4739103.
- 80. Plotnikov D, Blundell I, Ippen T, Eppler JM, Rumpe B, Morrison A. NESTML: a modeling language for spiking neurons. In: Oberweis A, Reussner R, editors. Modellierung 2016. vol. P-254 of Lecture Notes in Informatics (LNI). Modellierung 2016, Karlsruhe (Germany), 17 Mar 2016—19 Mar 2016. Gesellschaft für Informatik e.V. (GI); 2016. p. 93–108. Available from: http://juser.fz-juelich.de/record/826510.
- 81. Nagendra Babu P, Linssen C, Eppler JM, Schulte to Brinke T, Ziaeemehr A, Fardet T, et al. NESTML 4.0. Zenodo; 2021. Available from: https://doi.org/10.5281/zenodo.4740083.
- 82. Rotter S, Diesmann M. Exact digital simulation of time-invariant linear systems with applications to neuronal modeling. Biol Cybern. 1999;81(5-6):381–402. Available from: https://doi.org/10.1007/s004220050570 pmid:10592015