
Clustered Desynchronization from High-Frequency Deep Brain Stimulation

  • Dan Wilson ,

    dan.d.wilson8@gmail.com

    Affiliation Department of Mechanical Engineering, University of California, Santa Barbara, Santa Barbara, California, United States of America

  • Jeff Moehlis

    Affiliation Department of Mechanical Engineering, University of California, Santa Barbara, Santa Barbara, California, United States of America


Abstract

While high-frequency deep brain stimulation is a well-established treatment for Parkinson’s disease, its underlying mechanisms remain elusive. Here, we show that two competing hypotheses, desynchronization and entrainment in a population of model neurons, may not be mutually exclusive. We find that in a noisy group of phase oscillators, high-frequency perturbations can separate the population into multiple clusters, each containing a nearly identical proportion of the overall population. This phenomenon can be understood by studying maps of the underlying deterministic system and is guaranteed to be observed for small noise strengths. When we apply this framework to populations of Type I and Type II neurons, we observe clustered desynchronization at many pulsing frequencies.

Author Summary

While high-frequency deep brain stimulation (DBS) is a decades-old treatment for alleviating the motor symptoms of Parkinson’s disease, the way in which it alleviates these symptoms is not well understood. Making matters more complicated, some experimental results suggest that DBS works by making neurons fire more regularly, while other seemingly contradictory results suggest that DBS works by making neural firing patterns less synchronized. Here we present theoretical and numerical results with the potential to merge these competing hypotheses. For predictable DBS pulsing rates, in the presence of a small amount of noise, a population of neurons will split into distinct clusters, each containing a nearly identical proportion of the overall population. When we observe this clustering phenomenon, on a short time scale, neurons are entrained to high-frequency DBS pulsing, but on a long time scale, they desynchronize predictably.

Introduction

High-frequency deep brain stimulation (DBS), a medical treatment in which high-frequency, pulsatile current is injected into an appropriate brain region, is a well-established technique for alleviating tremors, rigidity, and bradykinesia in patients with Parkinson’s disease [1, 2]. While the underlying mechanisms of deep brain stimulation remain unknown, it is well documented that local field potentials recorded in the subthalamic nucleus (STN) of patients with Parkinson’s disease display increased power in the beta range (approximately 13–35 Hz) [3–5]. These findings have led to the hypothesis that pathological synchronization among neurons in the basal ganglia-cortical loop contributes to the motor symptoms of Parkinson’s disease [6–8]. This hypothesis has been supported by findings that when DBS is applied to the STN, abatement of motor symptoms is correlated with a decrease in the power in the beta band of the local field potential recorded from the STN [9–11]. This line of thinking has led researchers to develop new strategies for desynchronizing populations of pathologically synchronized oscillators [12–14], some of which have shown promise as new treatment options for Parkinson’s disease in animal and human studies [15, 16].

While many factors, including the location of the DBS probe, stimulus duration, and stimulus magnitude, influence the efficacy of DBS, one factor that is difficult to explain is the strong dependence on stimulus frequency. Low-frequency stimulation (≤ 50 Hz) is generally ineffective at reducing symptoms of Parkinson’s disease, while high-frequency stimulation from 70 to 1000 Hz and beyond has been shown to be therapeutically effective [17–19]. However, not all high-frequency stimulation is equally effective, and clinicians have generally settled on a therapeutic range of about 130–180 Hz [20, 21].

In an effort to provide insight into the frequency-dependent effects of DBS, the authors of [22] postulated that specifically tuned pulse parameters might yield chaotic desynchronization in a network of neurons. If desynchronization is the goal of DBS, then achieving it chaotically is a worthwhile objective. However, chaotic desynchronization can generally only be seen in a small window of pulse parameters and frequencies, which may make it difficult to observe in real neurons. Furthermore, in both brain slices and in vivo recordings, individual neuronal spikes have been found to be time-locked to the external high-frequency stimulation [23–28], which would be unlikely if the spike times were chaotic.

Here we present a different viewpoint, showing that with high-frequency pulsatile stimulation, in the presence of a small amount of noise, a population of neurons can split into distinct clusters, each containing a nearly identical proportion of the overall population. We find that the number of clusters, and hence the degree of desynchronization, is highly dependent on the pulsing frequency and strength. We provide theoretical insight into this phenomenon and show that it can be observed over a wide range of pulsing frequencies and pulsing strengths. This viewpoint merges two seemingly contradictory hypotheses, showing that the therapeutic effect of the periodic pulsing could be to replace the pathological behavior with a less synchronous pattern of activity, even if individual neuronal spikes are phase locked to the DBS pulses.

Results

Clustered Desynchronization in a Computational Neural Network

Consider a noisy, periodically oscillating population of thalamic neurons from [29]: (1) Here Vi, hi, and ri represent the transmembrane voltage and gating variables of neuron i, respectively, with all functions and parameters taken to be identical to those found in [29]. DBS pulses are represented by an external current u(t), taken to be identical for each neuron; ηi(t) is a Gaussian white noise process; C = 1 μF/cm2 is the constant neural membrane capacitance; Ib = 1.93 μA/μF is a baseline current chosen so that in the absence of external perturbations and noise the firing rate is 60 Hz; and N is the total number of neurons.
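For reference, a plausible sketch of the structure of Eq (1), assuming the reduced thalamic model of [29] with leak, sodium, potassium, and low-threshold calcium currents (the functional forms and parameter values are those of [29]; the exact arrangement of terms here is an assumption), is:

C dVi/dt = −IL(Vi) − INa(Vi, hi) − IK(Vi, hi) − IT(Vi, ri) + Ib + u(t) + ηi(t),
dhi/dt = (h∞(Vi) − hi)/τh(Vi),
dri/dt = (r∞(Vi) − ri)/τr(Vi),    i = 1, …, N.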

Using phase reduction [30, 31], we can study Eq (1) in a more convenient form: (2) where θ ∈ [0, 1) is the phase of the neuron, with θ = 0 defined to be the time at which the neuron fires an action potential, ω is the natural frequency of oscillation, f(θ) is a continuous function which describes the effect of a DBS pulse, τ is a positive constant that determines the period of the DBS input, and Z(θ) is the neuron’s phase response curve to an infinitesimal perturbation. Here we assume that ϵ is small enough so that higher order noise terms are negligible (cf. [32, 33]). Fig 1 shows an example charge-balanced pulsatile stimulus. We take the positive portion to be five times larger than the negative portion, with the positive current applied for 100 μs. The bottom panel shows the function f(θ) for a given stimulus intensity, calculated using the direct method [34]: a pulsatile perturbation is applied to a neuron at a known phase θp, so that f(θp) can be inferred by measuring the timing of the next spike. We note that even though the DBS pulse itself is not a δ-function, it is of short enough duration that Eq (2) is an accurate approximation to Eq (1).
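One plausible reading of Eq (2), consistent with the quantities defined above (the pulse train entering through f at times nτ, and the current noise entering through the phase response curve; the exact noise prefactor is an assumption), is:

dθi/dt = ω + f(θi) Σn δ(t − nτ) + ϵ Z(θi) ηi(t),    i = 1, …, N,

where the sum runs over the pulse times nτ.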

Fig 1. Charge-balanced DBS pulses.

The top panel shows a simple, charge-balanced biphasic stimulus chosen to represent a DBS pulse with frequency 1/τ and strength S. The bottom panel shows the relationship between the phase at the onset of a pulse and the induced change in phase at a particular strength. The effect of the stimulus is generally phase advancing.

https://doi.org/10.1371/journal.pcbi.1004673.g001

We simulate Eq (1) with 1000 neurons, taking a pulse strength S = 110 μA/μF and noise strength , for various pulsing frequencies, with results shown in Fig 2. After some initial transients, we find the network tends to settle to a state with different numbers of clusters for different pulsing frequencies. The probability distributions of neural phases ρ(θ) in the bottom panels show, somewhat surprisingly, that once the network settles to a clustered state, each cluster contains a nearly identical portion of the overall population. Also, upon reaching the steady distribution, neurons can still transition between clusters, but on average, the number that leave a given cluster must be identical to the number that enter. Fig 3 shows individual voltage traces for 50 sample neurons from this population after the network settles to a clustered state. Highlighted traces represent neurons from each cluster. In general, increasing the number of clusters will decrease synchrony in the population. Furthermore, neurons are more likely to transition between clusters as the overall number of clusters becomes larger.
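A minimal simulation sketch of this experiment at the level of the phase model (not the authors' code): the pulse-effect curve f(θ) below is a hypothetical phase-advancing stand-in for the measured curve in Fig 1, and the noise is taken additive for simplicity, so the specific cluster counts of Fig 2 should not be expected to be reproduced exactly.

```python
import numpy as np

# Sketch: N uncoupled noisy phase oscillators receiving an identical pulse
# every tau seconds; pulses are applied as instantaneous phase jumps f(theta).
rng = np.random.default_rng(0)

N = 1000                  # number of oscillators
omega = 60.0              # natural firing rate, Hz
pulse_freq = 83.0         # DBS pulsing frequency, Hz
tau = 1.0 / pulse_freq    # pulsing period, s
eps = 0.1                 # noise strength (illustrative value)
dt = 1e-4                 # integration time step, s
T = 2.0                   # total simulation time, s

def f(theta):
    """Hypothetical phase advance from one DBS pulse (stand-in for Fig 1)."""
    return 0.15 * (1.0 - np.cos(2.0 * np.pi * theta))

theta = rng.random(N)     # random initial phases in [0, 1)
pulse_times = np.arange(tau, T, tau)
next_pulse = 0

for k in range(int(T / dt)):
    t = k * dt
    # Euler-Maruyama step between pulses (noise taken additive for simplicity)
    theta += omega * dt + eps * np.sqrt(dt) * rng.standard_normal(N)
    if next_pulse < len(pulse_times) and t >= pulse_times[next_pulse]:
        theta += f(theta % 1.0)      # apply the pulsatile input as a phase jump
        next_pulse += 1
    theta %= 1.0

# phase histogram after the final pulse: clusters appear as distinct peaks
counts, _ = np.histogram(theta, bins=50, range=(0.0, 1.0))
print(counts)
```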

Fig 2. Clustered desynchronization in thalamic neurons.

In simulations of Eq (1), the top panel shows snapshots of the probability distribution ρ(θ) taken immediately preceding every pulse for the 63 Hz stimulation and every fourth, third, and second pulse for the 83 Hz, 94 Hz, and 120 Hz stimulation, respectively. From left to right, the bottom panels show average probability distributions from the final fifty snapshots while stimulating at 83, 94, and 120 Hz, respectively. Horizontal dashed lines denote troughs of the probability distributions. The probability contained between successive troughs is 0.27, 0.25, 0.25, and 0.23 in the left panel, 0.34, 0.33, and 0.33 in the middle panel, and 0.52 and 0.48 in the right panel.

https://doi.org/10.1371/journal.pcbi.1004673.g002

Fig 3. Voltage traces from thalamic neuron population simulations.

Top, middle, and bottom panels show clustered states achieved with pulsing at 83, 94, and 120 Hz, respectively. Highlighted traces represent individual neurons from separate clusters. Dashed lines represent the time at which the pulses are applied.

https://doi.org/10.1371/journal.pcbi.1004673.g003

A Theoretical Basis for Clustered Desynchronization

For simplicity of notation, we will take ω = 1 for Eq (2) in the theoretical analysis, but note that any other value could be considered to obtain qualitatively similar results. In the absence of noise, one may integrate Eq (2) for a single neuron θ to yield (3) In this work, we are interested in the state of the system immediately after each pulsatile input. By integrating Eq (2), the system dynamics can be understood in terms of compositions of a map (4) where g(s) = s + f(s + τ) + τ, g(n) denotes the composition of g with itself n times, and θ0 is the initial state of a neuron. In Eqs (3) and (4), θ(t) and the arguments of f and g are always evaluated modulo 1. If g(n) has a stable fixed point, then any oscillator which starts within its basin of attraction will approach that fixed point as time approaches infinity [35].
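Concretely, the map relation of Eqs (3) and (4) can be sketched from the definitions above: with ω = 1 and θ0 the phase recorded immediately after a pulse, the phase drifts by τ between pulses and then jumps by f at the next pulse, so that immediately after the n-th subsequent pulse

θ(nτ+) = g(n)(θ0),    where g(s) = s + f(s + τ) + τ (all arguments evaluated modulo 1).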

With noise, the dynamics are more complicated. In this case, the phase of each neuron cannot be determined exactly from Eq (2), but rather follows a probability distribution. For a neuron with known initial phase θ0, after a time mτ has elapsed, the corresponding δ-function distribution δ(θ − θ0) will be mapped to the Gaussian distribution (5) with mean μ = g(m)(θ0) given by Eq (23) and variance ν given by Eq (24). In order to study the infinite time behavior of Eq (2), it can be useful to calculate steady state probability distributions for the population of neurons. To simplify the analysis, we will study Eq (2) as a series of stochastic maps applied to an initial phase density (cf. [22, 33]), (6) where P is the linear Frobenius–Perron operator corresponding to evolution for the time mτ, and ρ(θ, t) is the probability distribution of phases at time t. We can approximate P by a matrix, using Eq (5) to determine each column for a set of discretized phases with spacing Δθ = 1/M. In Fig 4, for instance, the map g(2) yields the stochastic matrix shown in the panel on the right. The delta function distribution (arrow) is mapped to a Gaussian distribution (solid line). The matrix has all positive entries, and since probability is conserved, the matrix is column stochastic (i.e., its columns sum to 1). For this class of matrices, the Perron–Frobenius theorem allows us to write [36, 37], (7) where v and w are the right and left eigenvectors associated with the unique eigenvalue of 1, normalized so that w^T v = 1. Recalling that the matrix is column stochastic, its left eigenvector associated with λ = 1 is 1^T. Therefore, as the map is applied repeatedly, any initial distribution will approach a steady state distribution determined by v. We find that the existence of m stable fixed points of the underlying map g(m)(θ) provides the basis for the clustered desynchronization seen in Fig 2, with a more formal main theoretical result given below.
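A minimal sketch of this construction: the function names (stochastic_matrix, steady_state) and the two-cluster map g2 are illustrative choices, not the authors' code; the wrapped-Gaussian kernel plays the role of Eq (5), and the steady-state density is read off from the eigenvector of the column-stochastic matrix associated with the eigenvalue 1.

```python
import numpy as np

def stochastic_matrix(g_m, nu, M=200):
    """Column j holds the probability of mapping phase theta_j into each bin
    after one application of g^(m), using a wrapped Gaussian of variance nu."""
    theta = (np.arange(M) + 0.5) / M                   # bin centers on [0, 1)
    P = np.zeros((M, M))
    for j, th in enumerate(theta):
        d = theta[:, None] - (g_m(th) % 1.0) + np.arange(-2, 3)[None, :]
        P[:, j] = np.exp(-d**2 / (2.0 * nu)).sum(axis=1)
        P[:, j] /= P[:, j].sum()                       # enforce column sums of 1
    return P, theta

def steady_state(P):
    """Right eigenvector associated with the eigenvalue closest to 1,
    normalized so that it sums to 1 (a discrete probability density)."""
    vals, vecs = np.linalg.eig(P)
    v = np.abs(np.real(vecs[:, np.argmin(np.abs(vals - 1.0))]))
    return v / v.sum()

# hypothetical g^(2) with two stable (theta = 0, 0.5) and two unstable
# (theta = 0.25, 0.75) fixed points; not the measured thalamic map
def g2(theta):
    return (theta - 0.05 * np.sin(4.0 * np.pi * theta)) % 1.0

P, theta = stochastic_matrix(g2, nu=1e-3)
rho = steady_state(P)
mass_near_0 = rho[(theta < 0.25) | (theta > 0.75)].sum()
mass_near_05 = rho[(theta >= 0.25) & (theta <= 0.75)].sum()
print(mass_near_0, mass_near_05)    # each cluster holds roughly half the probability
```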

Fig 4. Deterministic and stochastic maps.

The left panels show example maps g(1) and g(2), the latter obtained from g(1) for a given value of τ. The right panel shows the stochastic matrix corresponding to g(2), with darker regions representing larger numbers. After 2τ has elapsed, the delta function distribution (arrow) is mapped to a Gaussian distribution (solid line), while the uniform distribution is mapped to a bimodal distribution (dashed line).

https://doi.org/10.1371/journal.pcbi.1004673.g004

Main Theoretical Result

Consider the map g(m) with the following properties:

  1. g(m) has exactly m stable fixed points corresponding to a period m orbit of g(1),
  2. g(m) has m unstable fixed points and no center fixed points,
  3. g(m) is monotonic.

Then for a given choice of ϵ1 ≪ 1, we may choose ϵ small enough in Eq (2) so that the distribution of phases will asymptotically approach a state with m distinct clusters, each containing, to leading order in ϵ1, 1/m of the total probability density. A proof of this statement is given in the Methods Section. In this detailed proof, we find that desynchronization can still be guaranteed even when g(m) is not monotonic, as long as a more general set of conditions is satisfied. Note that because Eq (2) does not contain any coupling terms, noise will drive the system to a uniform, desynchronized distribution in the absence of DBS input. In the sections to follow, we give numerical and theoretical evidence that clustered desynchronization can emerge in a population of pathologically synchronized neurons when the DBS pulses overwhelm the terms responsible for the synchronization.

Arnold Tongues and Lyapunov Exponents

Using the main theoretical result, we can calculate regions of parameter space where we expect clustered desynchronization. The top-left panel of Fig 5 gives regions of parameter space where clustering is expected, giving the appearance of Arnold tongues [35]. White regions in the graph represent parameter sets where either clustering is not guaranteed or where we expect more than five clusters; tongues with more than five clusters are not drawn because they only exist for very narrow regions of parameter space. At around 60 Hz, the natural unforced firing frequency of the neural population, exactly one cluster is guaranteed, corresponding to 1:1 locking (one DBS pulse per neural spike). This locking corresponds to a highly synchronous state, which we found when forcing the population at 63 Hz in Fig 2. For pulsing frequencies between 80 and 120 Hz, we see prominent tongues corresponding to states with three, four, and five clusters, which correspond to the states in Fig 2 where we force at 83 Hz and 94 Hz. A very wide tongue corresponding to 2:1 locking (two DBS pulses per neural spike) exists at frequencies ranging from 120 to 200 Hz, which is where DBS is often seen to be effective. Pulsing in this region manifests as the behavior seen with 120 Hz forcing in Fig 2.
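A sketch of how such regions can be located numerically, under stated assumptions: the pulse effect f(θ, S), the map g, and the helper n_clusters are hypothetical stand-ins (the real calculation would use the measured curve of Fig 1), the natural rate is taken as 60 Hz as in the text, and the simplified check below does not verify that the m stable fixed points form a single period-m orbit of g(1).

```python
import numpy as np

def f(theta, S):
    """Hypothetical phase advance produced by one pulse of strength S."""
    return 0.002 * S * (1.0 - np.cos(2.0 * np.pi * theta))

def g(theta, tau_hat, S):
    """One-pulse map with the natural frequency rescaled to 1 (tau_hat = omega * tau)."""
    return (theta + tau_hat + f((theta + tau_hat) % 1.0, S)) % 1.0

def n_clusters(freq, S, omega=60.0, m_max=5, n_grid=4000):
    """Smallest m <= m_max for which g^(m) has m stable and m unstable fixed
    points on a fine grid; returns 0 if no such m is found."""
    tau_hat = omega / freq
    dtheta = 1.0 / n_grid
    theta = np.arange(n_grid) * dtheta
    for m in range(1, m_max + 1):
        gm = theta.copy()
        for _ in range(m):
            gm = g(gm, tau_hat, S)
        # displacement of g^(m) from the identity, wrapped to [-0.5, 0.5)
        disp = (gm - theta + 0.5) % 1.0 - 0.5
        stable = unstable = 0
        for i in range(n_grid - 1):
            crossing = disp[i] * disp[i + 1] < 0 and abs(disp[i + 1] - disp[i]) < 0.5
            if crossing:
                slope = 1.0 + (disp[i + 1] - disp[i]) / dtheta   # ~ d g^(m) / d theta
                if abs(slope) < 1.0:
                    stable += 1
                else:
                    unstable += 1
        if stable == m and unstable == m:
            return m
    return 0

# predicted cluster count for one (frequency, strength) pair; scanning a grid of
# such pairs traces out tongue-like regions analogous to the top-left panel of Fig 5
print(n_clusters(freq=150.0, S=110.0))
```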

Fig 5. Clustered desynchronization and Lyapunov exponent calculations for thalamic neurons.

The top-left panel shows regions of parameter space where clustered desynchronization is guaranteed for small enough noise. The bottom-left panel shows the regions associated with a positive average Lyapunov exponent (LE). Panels A-D give steady state probability distributions for stimulus parameters corresponding to black dots on the left.

https://doi.org/10.1371/journal.pcbi.1004673.g005

To make comparisons with [22], we calculate the average Lyapunov exponent of the resulting steady state distributions using Eq (12). For Lyapunov exponents greater than zero (resp., less than zero), the pulsatile stimulus will, on average, desynchronize (resp., synchronize) neurons which are close in phase, and this has been proposed as an indicator of the overall desynchronization that might be observed in a population of neurons receiving periodic DBS pulses. The Lyapunov exponent is calculated for multiple parameter values for a system with a noise strength ϵ = 0.1. Results are given in the bottom-left panel of Fig 5. We note that results are not qualitatively different for different noise strengths. Compared with the Arnold tongues in the top-left panel, we find very narrow regions where the Lyapunov exponent is positive at relatively high stimulus strengths. The top-right panels show the steady state distribution for a population with pulses of strength S = 50 μA/μF at a rate of 119 Hz (resp., 180 Hz), corresponding to a two (resp., three) cluster steady state. The bottom-right panels show the steady state distribution for a pulsing strength of S = 208 μA/μF at 120 Hz and S = 206 μA/μF at 290 Hz, corresponding to regions with positive Lyapunov exponents. Even though the clustered states have very negative Lyapunov exponents, they show similar clustering behavior to the states with a positive Lyapunov exponent. However, the clustered desynchronization in the top-right panels can be accomplished using a significantly weaker stimulus and can be observed over a much wider range of pulsing parameters.

Heterogeneous Pulsatile Inputs

Results thus far have focused on populations of neurons receiving homogeneous pulsatile inputs. However, it is well established that voltage fields vary significantly with distance from an external voltage probe [38]. In computational models, such heterogeneity has been shown to create complicated patterns of phase locking that depend on the stimulation strength [39] and can improve methods designed to desynchronize large populations of neurons [14].

To understand the emergence of clustered desynchronization when external inputs differ among neurons, we can modify the stochastic differential Eq (2) as follows (8) Here, fi(θ) is calculated based on the pulsatile input to each neuron. For each neuron, we use Eq (7) to calculate its steady state probability distribution. The state of each neuron is independent of the others, so the average of the individual distributions gives an overall probability distribution for the population.
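Written out, the averaging step is (with ρi* denoting the steady state density obtained from neuron i’s stochastic matrix; the notation ρpop is introduced here for illustration):

ρpop(θ) = (1/N) Σi ρi*(θ),    i = 1, …, N.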

As an illustrative example, we model 1000 neurons of the form Eq (1) receiving a charge-balanced input of the same shape as in Fig 1 with τ = 1/(140 Hz) and S drawn from a normal distribution with mean 168 μA/μF and standard deviation 20 μA/μF, giving values of S between approximately 100 and 240 μA/μF. From the top-left panel of Fig 5, this range of stimulus parameters is mostly, but not completely, contained in a two-cluster region. The map g(2)(θ) is plotted in black in the top-left panel of Fig 6 for a randomly chosen subset of these neurons, with the identity line plotted in red for reference. The top-right panel shows each neuron’s steady state probability distribution (calculated from its associated stochastic matrix) in black for a noise strength of . While the main clustering results are guaranteed when the noise strength is small enough, we find that clustering can still occur at higher levels of noise. The steady state probability distribution in corresponding simulations of Eq (1) with heterogeneous pulsing strengths (blue dashed curve) agrees well with the theoretical probability density (red dashed curve) calculated from the average of the black curves in the top-right panel. The bottom panel shows corresponding cumulative distributions for the theoretical (red) and computationally observed (blue) probability densities, highlighting that similar numbers of neurons are contained in each cluster.

Fig 6. Heterogeneous pulsatile inputs.

The top-left panel shows g(2)(θ) (black lines) for DBS input at 140 Hz at multiple stimulation strengths with the identity line plotted in red for reference. The top-right panel shows the associated theoretical steady state probability distributions (black lines), average theoretical probability distribution (dashed red line), and computationally observed probability distributions (dashed blue line). The bottom panel shows the cumulative distributions for the average theoretical (red line) and computational (blue line) distributions.

https://doi.org/10.1371/journal.pcbi.1004673.g006

Similar clustering results can be obtained for different heterogeneous stimulus parameters. For instance, from Fig 5, three-cluster behavior will emerge for a pulsing frequency of 200 Hz and stimulus strengths between approximately 90 and 170 μA/μF.

Desynchronization of Neural Populations

Our main clustering results are for a single population of neurons and do not explicitly take interactions between multiple populations of neurons into account, as is the case for the brain circuit responsible for Parkinsonian tremor. Here, we provide evidence that clustered desynchronization can still emerge when additional forcing terms are much smaller in magnitude than the external DBS pulses.

Populations of coupled oscillators subject to common external forcing have been widely studied in the form of the forced Kuramoto model [40–42]. Synchronization can be observed when either external forcing or intrinsic coupling dominates the system dynamics. For intermediate coupling and external forcing strengths, a complicated bifurcation structure emerges and the macroscopic order parameter, describing the overall synchronization of the population, can oscillate. These behaviors have been observed in chemical oscillator systems [43] and have implications for externally forced biological rhythms such as circadian oscillations and neural oscillations [44]. We simulate Eq (1) with an additional external sinusoidal forcing term, which could represent an aggregate input from a separate, unperturbed neural population. We note that this is not meant to represent a physiologically accurate model of DBS, but instead is meant to illustrate clustered desynchronization in the presence of a common periodic perturbation. Here, we take u(t) = 0.1 sin(ωext t) + uDBS(t), where ωext is chosen so that the forcing frequency is the same as the natural firing frequency of the unperturbed neurons (60 Hz) and uDBS(t) represents the common pulsatile input. For these simulations, N = 1000. As we show in the Methods Section, we may write this system in a form identical to Eq (2), for which the main theoretical result still holds. For this particular example, clustering results are identical to those in the top-left panel of Fig 5. Results are shown with a pulsing strength S = 110 μA/μF and noise strength of in Fig 7. We find that the presence of a sinusoidal external stimulus is sufficient to synchronize the network in the absence of DBS forcing. When the DBS is turned on at 83 and 94 Hz, we see four and three cluster states, respectively, just as we observed in the simulation without external forcing. However, in this simulation, the mean phase of each cluster varies with the external sinusoidal stimulation. Note that 120 Hz stimulation in this network also leads to two-cluster desynchronization (not shown).

Fig 7. Simulations of (2) with additional sinusoidal forcing.

In the top panel, snapshots of the probability distribution are taken immediately preceding every fourth and third pulse for the 83 and 94 Hz DBS pulsing, respectively. When there is no DBS pulsing, snapshots are taken at the sinusoidal forcing frequency. As averaging the term associated with the sinusoidal forcing would suggest, we observe desynchronization into three and four clusters for pulsing at 94 and 83 Hz, respectively. When the DBS pulsing is turned off at t = 8 seconds, the system quickly settles to a synchronized state. The bottom panels show the probability density averaged over multiple snapshots after the initial transient has decayed. Horizontal dashed lines denote troughs of the probability distributions. The probability contained between successive troughs is 0.32, 0.35, and 0.33 in the left panel, and 0.26, 0.26, 0.25, and 0.23 in the right panel.

https://doi.org/10.1371/journal.pcbi.1004673.g007

When neurons are synchronized through forcing that is not periodic, clustered desynchronization may still emerge when the DBS pulsing overwhelms the stimulation responsible for synchronization. As a second example, we model a network of neurons Eq (1) with an additional synaptic current, with each neuron’s transmembrane voltage dynamics taking the form . Here, (9) where K determines the magnitude of the synaptic current, VG is the reversal potential of a given neurotransmitter, and sk is an additional synaptic variable associated with neuron k that evolves according to (cf. [30]) , where α2 = 2, VT = −37, σT = 2, and β2 = 0.1. We simulate the resulting network with VG = 60 mV, K = 0.015, and a noise strength of ; neurons form a single synchronized cluster in the absence of DBS input, as shown in panel B of Fig 8. Starting at t = 0.5 seconds, we apply 180 Hz stimulation with S = 200 μA/μF; the pulsing quickly overwhelms the synchronizing influence of the coupling, and the population splits into two separate clusters, as shown by the probability densities in panel A and the individual voltage traces in panel C. When DBS is applied, we see from the average probability distributions and cumulative distributions in panels F and E, respectively, that there are nearly equal proportions of neurons in each cluster. Other computational modeling [29] has suggested that pulsatile DBS may help regulate neural firing patterns and help alleviate strongly oscillatory synaptic inputs. Panel D shows a similar phenomenon: when DBS is on, high-amplitude oscillations in synaptic current are replaced by oscillations with a higher frequency but smaller amplitude. The desynchronization results here can be observed for many choices of parameters, provided the pulsatile stimulation is significantly stronger than the synchronizing stimulation and clustering behavior is expected in the absence of coupling.
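A plausible sketch of Eq (9) and the synaptic variable dynamics, consistent with the parameters stated above and assuming all-to-all coupling of the type used in [30] (the exact form here is an assumption):

Isyn,i = (K/N) Σk sk (VG − Vi),
dsk/dt = α2 (1 − sk) s∞(Vk) − β2 sk,    s∞(V) = 1/(1 + exp(−(V − VT)/σT)).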

Fig 8. Simulations of (2) with synaptic coupling.

Panels B and C show voltage traces for 30 representative neurons from a synaptically coupled network without and with 180 Hz DBS pulsing, respectively. In panel A, snapshots of the probability density are taken at 90 Hz. At t = 0.5 seconds, pulsatile stimulation is turned on. Panel D shows characteristic synaptic currents felt by a single neuron with and without DBS pulsing. Panel F shows the probability density averaged over multiple snapshots after initial transients have decayed when DBS pulsing is on, and panel E gives the corresponding cumulative distribution. Here, we observe a similar clustering pattern, with nearly equal numbers of neurons in each cluster.

https://doi.org/10.1371/journal.pcbi.1004673.g008

Clustered Desynchronization in Type II Hodgkin-Huxley Neurons

Consider a two-dimensional reduction of the classic Hodgkin–Huxley equations [45] which reproduces the essential dynamical behavior [46]: (10) Here Vi and ni represent the transmembrane voltage and gating variable, respectively. All functions and parameters are identical to those given in [47]. DBS pulses are represented by the external current u(t), which is given identically to each neuron, ηi(t) is a white noise process, C = 1 μF/cm2 is the constant neural membrane capacitance, Ib = 10 μA/μF is a baseline current yielding a firing rate of 84.7 Hz in the absence of external perturbation, and N is the total number of neurons.
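For reference, a plausible sketch of the structure of Eq (10), assuming the standard two-dimensional reduction in which the sodium activation is set to its steady state m∞(V) and the inactivation is slaved to the potassium gate (h ≈ 0.8 − n), with functional forms and parameters as in [47] (the exact arrangement here is an assumption):

C dVi/dt = −gNa m∞(Vi)^3 (0.8 − ni)(Vi − VNa) − gK ni^4 (Vi − VK) − gL (Vi − VL) + Ib + u(t) + ηi(t),
dni/dt = an(Vi)(1 − ni) − bn(Vi) ni,    i = 1, …, N.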

Unlike the model for thalamic neurons used in the main text, the Hodgkin–Huxley neuron displays Type II phase response properties, i.e., a monophasic pulsatile input can act to either significantly increase or decrease the phase of the neuron. The top panel of Fig 9 shows an example monophasic stimulus which will be applied to the network Eq (10) at different strengths S and periods τ. For this example, the pulse duration will be 100 μs, approximately one percent of the neural firing period.

Fig 9. Monophasic DBS pulses.

The top panel shows a pulsatile monophasic stimulus applied to the Hodgkin–Huxley neural network with frequency 1/τ and strength S. The bottom panel shows the relationship between the phase at pulse onset and the resulting change in phase.

https://doi.org/10.1371/journal.pcbi.1004673.g009

For this model, using our main theoretical results, we can also calculate regions of parameter space in which we expect to observe clustered desynchronization, with results shown in the middle panel of Fig 10. Arnold tongues corresponding to more than five clusters become quite narrow and are not included in this figure. We also calculate the average Lyapunov exponent for the steady state distribution using Eq (12) for a noise strength of ϵ = 0.15, with results shown in the bottom panel. We note that unlike for the thalamic neurons, the Lyapunov exponent for the Hodgkin–Huxley network is never positive. We find that regions with the lowest Lyapunov exponents tend to correlate with regions where clustered desynchronization is guaranteed. Even though the Lyapunov exponent might be quite negative, the steady state distribution can still be relatively desynchronized if there are a large number of clusters, as evidenced by the four-cluster state in the top panel.

Fig 10. Clustering and Lyapunov exponent calculations for the Hodgkin-Huxley network.

The middle panel gives regions of parameter space where clustered desynchronization is guaranteed to occur for small enough noise. The bottom panel shows the average Lyapunov exponent for the steady state distribution. The top panel shows the steady state probability distribution for stimulus parameters shown with black dots in the middle panel.

https://doi.org/10.1371/journal.pcbi.1004673.g010

Finally, we simulate Eq (10) with N = 1000 neurons, a pulse strength S = 10 μA/μF, and pulsing frequencies that are expected to yield clustered desynchronization as determined from Fig 10. Results are shown in Fig 11. We find clustered states begin to form almost immediately, and in the bottom panel, after the system has approached the steady state distribution, each cluster contains an approximately identical proportion of the population.

Fig 11. Clustered desynchronization in Type II reduced Hodgkin-Huxley neurons.

The top panel shows snapshots of the probability distribution of phases ρ(θ) from simulations of Eq (10). Snapshots are taken immediately preceding every pulse for the 86 Hz stimulation, and every third, second, and third pulse for the 127, 175, and 260 Hz stimulation, respectively. From left to right, the bottom panels show average probability distributions from the final fifty snapshots while stimulating at 127, 175, and 260 Hz, respectively. Horizontal dashed lines denote troughs of the probability distributions. The probability contained between successive troughs is 0.34, 0.33, and 0.33 in the left panel, 0.51 and 0.49 in the middle panel, and 0.33, 0.33, and 0.34 in the right panel.

https://doi.org/10.1371/journal.pcbi.1004673.g011

Discussion

While deep brain stimulation is an important treatment for patients with medically intractable Parkinson’s disease, its fundamental mechanisms remain unknown. Making matters more complicated, experimental studies have shown that the symptoms of Parkinson’s disease can be alleviated using strategies that seek to desynchronize a population of pathologically synchronized oscillators [15, 16], while other seemingly contradictory studies have shown that neurons have a tendency to time-lock to external high-frequency pulses [23–27], supporting the hypothesis that entrainment is necessary to replace the pathological neural activity in order to alleviate the symptoms of Parkinson’s disease. In this work we have shown that these two phenomena may be happening in concert: in the presence of a small amount of noise, high-frequency pulsing at a wide range of frequencies is expected to split a larger population of neurons into subpopulations, each with a nearly equal proportion of the overall population. The number of subpopulations, and hence the level of desynchronization, is determined by phase locking relationships which can be found by analyzing the phase reduced system in the absence of noise. We note that other theoretical [12] and experimental [15, 16] work has yielded control strategies that are specifically designed to split a pathologically synchronized neural population into distinct clusters. The theory presented in this paper suggests that clinical DBS may be accomplishing the same task with a single probe.

The conditions we have developed guarantee clustered desynchronization for small enough noise, but we do not give any a priori estimate of how small the noise needs to be so that distinct clusters can be observed. If the noise is too large, the clusters may start to merge into one another, particularly when there are a large number of clusters (see the bottom left panel of Fig 2). Even in this case, however, we still have discernible clusters, throughout which the overall population of neurons is spread relatively evenly. We also note that this theory does not give any estimates on the time it takes for the system to achieve its steady state population distribution, but this can be calculated for a specific population by examining the second largest eigenvalue in magnitude, λ2, of a given stochastic matrix (cf. [33]). As λ2 becomes closer to 1, more iterations of the map will be required for the transient dynamics to die out, and it will take longer for the system to approach the steady state distribution. In general, we find that for a given map, λ2 becomes closer to one as the noise strength decreases, which is consistent with the notion that the average escape time between clusters will increase as the strength of the external noise decreases [48].
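A minimal sketch of this convergence-rate estimate, reusing the wrapped-Gaussian matrix construction sketched in the Results section with the same hypothetical two-cluster map (the noise values nu are illustrative):

```python
import numpy as np

def transition_matrix(g_m, nu, M=200):
    """Column-stochastic approximation of the Frobenius-Perron operator for g^(m)."""
    theta = (np.arange(M) + 0.5) / M
    P = np.zeros((M, M))
    for j, th in enumerate(theta):
        d = theta[:, None] - (g_m(th) % 1.0) + np.arange(-2, 3)[None, :]
        P[:, j] = np.exp(-d**2 / (2.0 * nu)).sum(axis=1)
        P[:, j] /= P[:, j].sum()
    return P

def lambda2(P):
    """Second largest eigenvalue magnitude: closer to 1 means slower convergence."""
    return np.sort(np.abs(np.linalg.eigvals(P)))[-2]

def g2(theta):
    # hypothetical two-cluster map, as before
    return (theta - 0.05 * np.sin(4.0 * np.pi * theta)) % 1.0

for nu in (1e-2, 1e-3, 1e-4):                       # decreasing noise strength
    print(nu, lambda2(transition_matrix(g2, nu)))   # lambda_2 approaches 1
```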

For the networks that we have investigated, regions of parameter space which are associated with either clustered desynchronization or positive Lyapunov exponents can display similar levels of desynchronization. However, numerics show that the regions with positive Lyapunov exponents are quite small and may be difficult to find without explicit calculation. In contrast, the regions of parameter space associated with clustered desynchronization are fairly large and are likely to be observed without knowledge of the system properties. If desynchronization of the overall population is an important mechanism of high-frequency DBS, doing so chaotically may be an overly restrictive objective if clustered desynchronization is sufficient to alleviate the motor symptoms of Parkinson’s disease.

This study is certainly not without limitations. For instance, the computational neurons considered in this study are based on simple, low-dimensional models of neural spiking behavior. However, we have developed the theory to understand the clustered desynchronization phenomenon in such a way that it can easily be extended to more complicated neural models with more physiologically detailed dynamics, provided the neural phase response properties can be measured experimentally in vivo [49]. Furthermore, while we only considered homogeneous populations in this study, the phase response properties and natural frequencies of a living population of neurons will surely have a heterogeneous distribution. In this context, we could still show that clustered desynchronization is expected by applying the theory developed in this work to a family of neurons with different phase response properties and natural frequencies. The expected steady state population could then be obtained as a weighted average of the individual steady state distributions. Numerical results presented here apply to networks for which external DBS perturbations overwhelm the intrinsic coupling between neurons. In this work, we have not considered the complicated interplay between multiple populations of neurons which give rise to the symptoms of Parkinson’s disease; more detailed modeling studies would be required to determine the effect of clustered desynchronization on the overall network circuit. Others have studied synchrony and clustering behavior in coupled populations of neurons [50–52] and it is possible that our results could be extended to describe clustering for weaker pulsatile stimuli when coupling cannot be neglected.

Our results suggest that high-frequency external pulsing could have the effect of separating a neural population into equal subpopulations in the presence of noise. This viewpoint could help explain the frequency dependent nature of therapeutically effective DBS and could help merge competing hypotheses, as desynchronization and entrainment are not mutually exclusive when even small amounts of noise are present. If clustered desynchronization does provide a mechanism by which the motor symptoms associated with Parkinson’s disease can be mitigated, it could provide a useful control objective for designing better open-loop DBS stimuli in order to prolong battery life of the implantable device and to mitigate potential side effects of this therapy.

Methods

Average Lyapunov Exponents

To make comparisons with [22], we calculate the average Lyapunov exponent of the resulting steady state distributions, giving a sense of whether, on average, the trajectories of oscillators from Eq (2) are converging or diverging. For instance, let ϕ denote the phase difference between oscillators θ1 and θ2 which are close in phase, i.e., ϕ(t) ≡ |θ1(t) − θ2(t)|. Then from Eq (3), immediately after a DBS pulse occurring at time τ, (11) where ′ ≡ d/dθ and θ(τ−) (resp., θ(τ+)) denotes the limit of θ(t) as t approaches τ from below (resp., above). Note that in the second line, we have used a Taylor expansion of f about θ2 for small values of ϕ(τ−). Therefore, the oscillators converge or diverge locally depending upon the derivative of f. For a population of neurons, the stochastic matrix for a given pulsing rate can be used to determine the steady state distribution ρ*(θ) before each pulse, with an average Lyapunov exponent taken to be (cf. [22]): (12) For LE > 0 (resp., LE < 0), the pulsatile stimulus will, on average, desynchronize (resp., synchronize) neurons which are close in phase, and this has been proposed as an indicator of the overall desynchronization that might be observed in a population of neurons receiving periodic DBS pulses.
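A sketch of the content of Eqs (11) and (12), consistent with the surrounding derivation and following the definition in [22] (the exact prefactors are assumptions):

ϕ(τ+) = |θ1(τ−) + f(θ1(τ−)) − θ2(τ−) − f(θ2(τ−))| ≈ |1 + f′(θ2(τ−))| ϕ(τ−),    (cf. Eq (11))

LE ≈ (1/τ) ∫0^1 ρ*(θ) ln|1 + f′(θ)| dθ.    (cf. Eq (12))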

Expected Value and Variance of a Noisy Neuron with External Pulses

For a single neuron with a known initial phase θ evolving according to the stochastic differential Eq (2), we calculate the expected value and variance with a strategy similar to the one employed in [33]. We first asymptotically expand θ(t) in orders of ϵ, (13) with θ0(0) = θ(0) and θ1(0) = 0. Substituting Eq (13) into Eq (2) and taking ω = 1 for simplicity of notation allows us to write (14) (15) Integrating Eq (14) for a time τ yields (16) This relationship can be used to write θ0 in terms of compositions of a map: (17) where g(s) = s + f(s + τ) + τ and g(n) denotes the composition of g with itself n times. In Eqs (16) and (17), θ(t) and the arguments of f and g are always evaluated modulo 1.

We now focus our attention on θ1. Integrating Eq (15) yields (18) where H(⋅) is the Heaviside step function. Note here that θ1(τ−) denotes the limit of θ1(t) as t approaches τ from below. In the interval 0 ≤ t < τ, we note that , so that . With this in mind, we can rewrite Eq (18) as (19) Likewise, using Eq (19) to determine θ1(2τ) allows us to write (20) This process can be repeated indefinitely to find (21) where is defined recursively so that (22)

Because Eq (21) is the sum of stochastic integrals which are themselves normally distributed random variables, θ1(mτ+) will also be a normally distributed random variable [53]. Ultimately, we are interested in calculating the expected value and variance of a neuron starting at θ(0) after the application of m DBS inputs. Using the asymptotic expansion Eq (13), we can calculate the expected value of θ(mτ+) as . Using the relations Eqs (17) and (21) and noting that the noise η(s) has a mean of zero, E[θ1(mτ+)] = 0, and hence the expected value, μ, is (23) Again using Eqs (17) and (21), we can calculate the variance, ν, of θ(mτ+) to leading order ϵ2: (24) Note that in the last line, when squaring θ1(mτ+), the fact that any noise processes that do not overlap in time are uncorrelated allows us to eliminate terms containing nonidentical noise processes. Also note that for identical white noise processes, E[η(s)η(s′)] = δ(s − s′).

Proof of Clustered Desynchronization in the Limit of Small Noise

Suppose that conditions 1 and 2 from our main theoretical result are satisfied for g(m) with m stable fixed points. Consider the corresponding stochastic matrix . Denote the stable and unstable fixed points as θsi and θui, respectively. To begin, it will be convenient to define (25) so that . With this definition, corresponds to the fixed points of the map g(m)(θ). We subdivide θ ∈ [0, 1) into 4m disjoint subregions with the following procedure: Choose α > 0 and define the region si near each stable fixed point so that , where . Likewise, define the region ui near each unstable fixed point so that , where . Define the remaining regions and . See Figs 12 and 13 for an example of how the matrix is partitioned into submatrices according to this procedure.

Fig 12. Example partitioning near a stable fixed point.

In the top panel, horizontal bars represent a particular choice of α. Vertical lines separate the resulting regions. The bottom panel shows the corresponding regions for g(m)(θ) (solid line). For reference, the diagonal dotted identity line is also shown.

https://doi.org/10.1371/journal.pcbi.1004673.g012

Fig 13. Example partitioning of the matrix to test conditions 3a-c.

The left panels show an example of g(1) and g(2) and . Note that g(2) has two stable and two unstable fixed points. The partition of shown in the right panel with dashed lines can be determined from a given choice of α. We note that the matrix has been flipped in the vertical direction to emphasize the correspondence between and g(2)(θ), so that in this visual example, matrix multiplication would not be performed in the usual way. From left to right, the submatrices denoted with asterisks are defined to be , , and . From top to bottom, the submatrices denoted with dots are defined to be , , and . The steady state distribution calculated as the right eigenvector of associated with λ = 1 is shown to the right of , along with the associated partition. Note that because the conditions guaranteeing clustered desynchronization are satisfied, by taking ϵ small enough, the difference between the amount of the steady state population found in s1 and s2 can be made arbitrarily small.

https://doi.org/10.1371/journal.pcbi.1004673.g013

In the analysis to follow we will show that the steady state probability distribution exists and is invariant to ρ(θ, 0). We define subregions of to represent the subset of v contained in the region a. We have carefully defined the matrix partition in Fig 13 so that, for instance, corresponds to the amount of probability that is mapped from the region si into the region when is applied to v. This relation results because all entries of and v are positive. See Fig 14 for a visual representation of this probability mapping process, which is central to the proof to follow.

Fig 14. Mapping probability between regions.

In the limit as time approaches infinity, the probability that a neuron will be found in the region sa for is given by . After the time has elapsed, the probability that a randomly chosen neuron started in sa and was mapped to sb for is given by . Based on these relations, the above diagram gives an example characterizing how probability initially found in the region si is mapped to all other regions after the time has elapsed.

https://doi.org/10.1371/journal.pcbi.1004673.g014

Suppose that there exists α for the resulting partition such that,

  3a. for all (resp., ), (resp., ) ∪si
  3b. for all θ ∈ si,
  3c. for all θ ∈ ui, and for i ≠ j

Note here that for a given set p, denotes its interior, and denotes its closure. Furthermore, we will also assume, without loss of generality, that in the absence of noise, upon successive iterations of the map g(1) the period m orbit is θs1 → θs2 → ⋯ → θsm → θs1. Let γ = mod(i, m) + 1. Suppose then that

  3d. for all θ ∈ si, g(1)(θ) ∈ the interior of

Then for any choice of ϵ1 ≪ 1, we may choose ϵ (the noise strength) small enough in Eq (2) so that, as time approaches infinity, regardless of initial conditions, the population will be split into m distinct clusters, and to leading order in ϵ1, each cluster will contain an equal portion of the population. We note that if conditions 1 and 2 from our main theoretical result are satisfied and g(m) is monotonic, then we will be guaranteed to be able to choose and α so that the remaining conditions 3a-d are satisfied.

In order to prove the main theoretical result presented earlier, we will first show that for ϵ small enough, as time tends toward infinity, the probability of finding an oscillator far from any of the stable fixed points is . We will then show that as time approaches infinity, the chance of finding a randomly chosen oscillator near any of the stable fixed points is identical to leading order ϵ1.

Throughout this proof, we are interested in the unique steady state solution which solves . We note that for any positive integer k, so that , i.e. the unique steady state solution of is the same for any choice of k. Using Eqs (23) and (24), we will assume that ϵ is taken small enough so that errors in the approximations of all necessary stochastic matrices are negligible.

Bounding near unstable fixed points.

To begin, for the stochastic matrix , we intend to bound the row sums of the submatrix , the portion of the matrix which maps ui back to ui (see Fig 13 for an example of this notation). Let μθ and σθ correspond to the expected value and standard deviation of any oscillator with initial condition θ, respectively. Let θL and θR correspond to the left and right boundaries of ui. We assume that phase space is partitioned into equal bins of length , where . For row j of corresponding to θj, its row sum Rj can be calculated as (26) where is the equation for a Gaussian curve. As we showed in Eq (23), μθ = g(m)(θ). Let correspond to the closest to θj any oscillator in ui is expected to map to. Taylor expanding around θE yields (27) Similarly, we Taylor expand σθ around θE as (28) Substituting Eqs (27) and (28) into Eq (26) yields (29) For a given choice of ϵ1 ≪ 1, we may choose ϵ small enough yielding a standard deviation σθE small enough so that Eq (29) can be approximated to leading order ϵ1 with an arbitrarily small number of terms. For the remaining terms, notice that and that . Therefore, if we choose ϵ small enough, σθE will be small enough so that the remaining Taylor expanded terms contribute at most error. When we do this we may rewrite Eq (29) as (30) Recall that μ(θE) − θj is so that it can be lumped with the other terms upon Taylor expansion. Notice that Eq (30) can be written as a Riemann sum approximation to (31) The integral in Eq (31) is less than one because g(m)′(θk) > 1, so that it falls off more quickly than a Gaussian distribution. Therefore, if we choose Δθ and ϵ1 small enough, Rj < 1, which implies .

Because the column sums of the stochastic matrix are all equal to one, and each entry is positive, we may use Gershgorin disks to show all eigenvalues are less than or equal to one. Using the Perron-Frobenius theorem, we know that there is exactly one eigenvalue equal to one, so that the steady state solution, v, solves [36]. Recall that is defined as the subset of v contained in si. Then, because each element of the steady state vector and stochastic matrix is positive, (32) The individual terms of Eq (32), for instance, represent the steady state probability density that is mapped from to ui upon one iteration of the stochastic map. We note that from the conditions 3a, 3b, and 3c, only oscillators which start in ui can map to the interior of ui. This implies that we may choose ϵ small enough so that the probability of transitioning from anywhere outside of ui to ui will be where n is the maximum length over all i of ui. This implies for aui. Finally, using properties of matrix norms with Eq (32), we have (33) Note that the final line results from rearranging the second line using the fact that . It follows immediately from properties of matrix norms that (34) in other words, given ϵ1 ≪ 1, we may choose ϵ small enough so that once the distribution reaches its steady state, the probability that an oscillator can be found in a given region ui is of order ϵ1.

Bounding between stable and unstable fixed points.

For , let , i.e., if an oscillator is expected to be found in either or , it will move at least β closer to si. Let where d(a, b) is the distance between the sets a and b. Consider oscillator j with initial condition . Let c ≡ ⌈κ/β⌉. From Eq (23), E[θj(cmτ)] = g(cm)(θj(0)). The oscillator can be at most κ away from si and will either move at least β closer to si, or will move inside si, upon each iteration of g(m). Thus, after c iterations of g(m), the oscillator is expected to be in si, and by condition 3b, E[θj((c + 1)mτ)] will be in the interior of si. From Eq (24), the variance is proportional to ϵ2, and therefore we may choose ϵ small enough so that an oscillator starting in or will be mapped with probability to a location in the interior of si. Furthermore, since from condition 3b any oscillator starting in si is expected to be found in the interior of si, we can choose ϵ small enough so that any oscillator starting in si will be mapped with probability to a location in the interior of si.

We have shown that by choosing ϵ small enough, any oscillator which starts in will map to si with probability . Furthermore, from the previous section, we showed that for a small enough choice of ϵ, the steady state probability density contained in ui will be at most for all i. Using these results, and recalling that the steady state solution of the stochastic system also solves , and using the same partitioning of v as we did in the previous section, we will consider , the proportion of the steady state probability density contained in si: (35)

In other words, for the steady state distribution v only of the overall population will be found inside and .

Neurons are found in clusters near stable fixed points with nearly identical probabilities.

We have shown that if we take ϵ small enough, for all i, the probability that we will find a randomly chosen oscillator in regions ui, and will be at most . Consequently, the majority of the probability distribution will be contained in the regions near stable fixed points. Here we show that the probability that an oscillator will be found in si is nearly equal to the probability that the oscillator will be found in sj for any i and j.

Suppose, without loss of generality, that in the absence of noise, upon successive iterations of the map g(1) the period m orbit is θs1 → θs2 → ⋯ → θsm → θs1. Recalling that τ is the period of the external pulsing, we are interested in , the matrix approximation to the Frobenius–Perron operator, which characterizes the time evolution of the system’s probability density at intervals of τ. Let γ = mod(i, m) + 1. By condition 3d, for θ ∈ si, g(1)(θ) ∈ the interior of . Recalling from Eq (23) that for an oscillator with phase θ, its probability distribution will be centered around g(1)(θ) after a time τ has elapsed, we may choose ϵ small enough so that after τ has elapsed, the probability of transitioning from si to is . When the distribution reaches steady state, this implies (36) Note that in the second line, we have used the fact that , , and are terms for all i. In the last line, we have used the fact that the sum of the probabilities of an oscillator starting in si and mapping to either (represented by ), (represented by ), or sγ (represented by ) equals .

Next, from the previous section, we know that . Recalling that the steady state solution of the stochastic system, v, solves , we will consider the proportion of the probability density contained in . Noting that all of the entries of and v are positive, we may write (37) Using an identical procedure, we can formulate the bound . With these bounds we may rewrite Eq (36) as (38) Applying Eq (38) to each region si yields (39) Here, the sum of the terms represents the probability that a given oscillator will be found in a region without a stable fixed point of g(m). Finally, using Eq (39) and the fact that v is positive, one can show that for any and , (40) which completes the proof.

Clustered Desynchronization with Common Periodic Perturbations

We consider a modified version of Eq (2) where each neuron feels a small, common periodic perturbation (41) Here, p(ω1 t) is a periodic perturbation with period T1 = 1/ω1 common to each oscillator. This perturbation may represent the effect of coupling from an external population of neurons or coupling between neurons in the population under study. Here we give conditions for which Eq (41) exhibits clustered desynchronization.

Defining ϕi ≡ θi − ω1 t allows us to selectively average the term associated with p(ω1 t) from Eq (41) [54], cf. [55]: (42) where and φi = ϑi − ω1 t. Here ϑi is a close approximation to θi, so that φi ≈ ϕi. Noting that G is also T1 periodic, we selectively average Eq (42) to yield (43) Here, Θi ≈ ϑi and . Notice that Eq (43) is in a form identical to Eq (2), for which our main theoretical result still holds.

Author Contributions

Conceived and designed the experiments: DW JM. Performed the experiments: DW. Analyzed the data: DW. Contributed reagents/materials/analysis tools: DW. Wrote the paper: DW JM.

References

  1. Benabid AL, Chabardes S, Mitrofanis J, and Pollak P. Deep brain stimulation of the subthalamic nucleus for the treatment of Parkinson’s disease. The Lancet Neurology, 8(1):67–81, 2009. pmid:19081516
  2. Bronstein JM, Tagliati M, Alterman RL, Lozano AM, Volkmann J, Stefani A, et al. Deep brain stimulation for Parkinson disease: an expert consensus and review of key issues. Archives of Neurology, 68(2):165–165, 2011. pmid:20937936
  3. Brown P, Oliviero A, Mazzone P, Insola A, Tonali P, and Di Lazzaro V. Dopamine dependency of oscillations between subthalamic nucleus and pallidum in Parkinson’s disease. The Journal of Neuroscience, 21(3):1033–1038, 2001. pmid:11157088
  4. Cassidy M, Mazzone P, Oliviero A, Insola A, Tonali P, Di Lazzaro V, and Brown P. Movement-related changes in synchronization in the human basal ganglia. Brain, 125(6):1235–1246, 2002. pmid:12023312
  5. Priori A, Foffani G, Pesenti A, Tamma F, Bianchi AM, Pellegrini M, et al. Rhythm-specific pharmacological modulation of subthalamic activity in Parkinson’s disease. Experimental Neurology, 189(2):369–379, 2004. pmid:15380487
  6. Levy R, Hutchison W, Lozano A, and Dostrovsky J. High-frequency synchronization of neuronal activity in the subthalamic nucleus of Parkinsonian patients with limb tremor. The Journal of Neuroscience, 20(20):7766–7775, 2000. pmid:11027240
  7. Chen C, Litvak V, Gilbertson T, Kühn A, Lu CS, Lee ST, et al. Excessive synchronization of basal ganglia neurons at 20 Hz slows movement in Parkinson’s disease. Experimental Neurology, 205(1):214–221, 2007. pmid:17335810
  8. Wichmann T, DeLong MR, Guridi J, and Obeso JA. Milestones in research on the pathophysiology of Parkinson’s disease. Movement Disorders, 26(6):1032–1041, 2011. pmid:21626548
  9. Wingeier B, Tcheng T, Koop MM, Hill BC, Heit G, and Bronte-Stewart HM. Intra-operative STN DBS attenuates the prominent beta rhythm in the STN in Parkinson’s disease. Experimental Neurology, 197(1):244–251, 2006. pmid:16289053
  10. Kühn AA, Kempf F, Brücke C, Doyle LG, Martinez-Torres I, Pogosyan A, et al. High-frequency stimulation of the subthalamic nucleus suppresses oscillatory β activity in patients with Parkinson’s disease in parallel with improvement in motor performance. The Journal of Neuroscience, 28(24):6165–6173, 2008. pmid:18550758
  11. Bronte-Stewart H, Barberini C, Koop MM, Hill BC, Henderson JM, and Wingeier B. The STN beta-band profile in Parkinson’s disease is stationary and shows prolonged attenuation after deep brain stimulation. Experimental Neurology, 215(1):20–28, 2009. pmid:18929561
  12. Tass PA. A model of desynchronizing deep brain stimulation with a demand-controlled coordinated reset of neural subpopulations. Biological Cybernetics, 89(2):81–88, 2003. pmid:12905037
  13. Nabi A, Mirzadeh M, Gibou F, and Moehlis J. Minimum energy desynchronizing control for coupled neurons. Journal of Computational Neuroscience, 34:259–271, 2013. pmid:22903565
  14. Wilson D and Moehlis J. Optimal chaotic desynchronization for neural populations. SIAM Journal on Applied Dynamical Systems, 13(1):276–305, 2014.
  15. Tass PA, Qin L, Hauptmann C, Dovero S, Bezard E, Boraud T, et al. Coordinated reset has sustained aftereffects in Parkinsonian monkeys. Annals of Neurology, 72(5):816–820, 2012. pmid:23280797
  16. Adamchic I, Hauptmann C, Barnikol UB, Pawelczyk N, Popovych O, Barnikol TT, et al. Coordinated reset neuromodulation for Parkinson’s disease: proof-of-concept study. Movement Disorders, 29(13):1679–1684, 2014. pmid:24976001
  17. Rizzone M, Lanotte M, Bergamasco B, Tavella A, Torre E, Faccani G, et al. Deep brain stimulation of the subthalamic nucleus in Parkinson’s disease: effects of variation in stimulation parameters. Journal of Neurology, Neurosurgery and Psychiatry, 71(2):215–219, 2001. pmid:11459896
  18. Moro E, Esselink RJA, Xie J, Hommel M, Benabid AL, and Pollak P. The impact on Parkinson’s disease of electrical parameter settings in STN stimulation. Neurology, 59(5):706–713, 2002. pmid:12221161
  19. Benabid AL, Pollak P, Hoffmann D, Gervason C, Hommel M, Perret JE, et al. Long-term suppression of tremor by chronic stimulation of the ventral intermediate thalamic nucleus. The Lancet, 337(8738):403–406, 1991.
  20. Volkmann J, Herzog J, Kopper F, and Deuschl G. Introduction to the programming of deep brain stimulators. Movement Disorders, 17(S3):S181–S187, 2002. pmid:11948775
  21. Kuncel A and Grill W. Selection of stimulus parameters for deep brain stimulation. Clinical Neurophysiology, 115(11):2431–2441, 2004. pmid:15465430
  22. Wilson C, Beverlin B II, and Netoff T. Chaotic desynchronization as the therapeutic mechanism of deep brain stimulation. Frontiers in Systems Neuroscience, 5: Art. No. 50, 2011.
  23. Garcia L, Audin J, D’Alessandro G, Bioulac B, and Hammond C. Dual effect of high-frequency stimulation on subthalamic neuron activity. The Journal of Neuroscience, 23(25):8743–8751, 2003. pmid:14507974
  24. Bar-Gad I, Elias S, Vaadia E, and Bergman H. Complex locking rather than complete cessation of neuronal activity in the globus pallidus of a 1-methyl-4-phenyl-1,2,3,6-tetrahydropyridine-treated primate in response to pallidal microstimulation. The Journal of Neuroscience, 24(33):7410–7419, 2004. pmid:15317866
  25. 25. Hashimoto E, Elder CM, Okun MS, Patrick SK, and Vitek JL. Stimulation of the subthalamic nucleus changes the firing pattern of pallidal neurons. The Journal of Neuroscience, 23(5):1916–1923, 2003. pmid:12629196
  26. 26. Galati S, Mazzone P, Fedele E, Pisani A, Peppe A, Pierantozzi M, et. al. Biochemical and electrophysiological changes of substantia nigra pars reticulata driven by subthalamic stimulation in patients with Parkinson’s disease. European Journal of Neuroscience, 23(11):2923–2928, 2006. pmid:16819981
  27. 27. McConnell GC, So RQ, Hilliard JD, Lopomo P, and Grill WM. Effective deep brain stimulation suppresses low-frequency network oscillations in the basal ganglia by regularizing neural firing patterns. The Journal of Neuroscience, 32(45):15657–15668, 2012. pmid:23136407
  28. 28. Cleary DR, Raslan AM, Rubin JE, Bahgat D, Viswanathan A, Heinricher MM et. al. Deep brain stimulation entrains local neuronal firing in human globus pallidus internus. Journal of Neurophysiology, 109(4):978–987, 2013. pmid:23197451
  29. 29. Rubin J and Terman D. High frequency stimulation of the subthalamic nucleus eliminates pathological thalamic rhythmicity in a computational model. Journal of Computational Neuroscience, 16:211–235, 2004. pmid:15114047
  30. 30. Ermentrout GB and Terman DH. Mathematical Foundations of Neuroscience. Springer, New York, 2010.
  31. 31. Hoppensteadt FC and Izhikevich EM. Weakly Connected Neural Networks. Springer, New York, 1997.
  32. 32. Ly C and Ermentrout GB. Synchronization dynamics of two coupled neural oscillators receiving shared and unshared noisy stimuli. Journal of Computational Neuroscience, 26(3):425–443, 2009. pmid:19034640
  33. 33. Ermentrout BG, Beverlin B II, Troyer T, and Netoff TI. The variance of phase-resetting curves. Journal of Computational Neuroscience, 31(2):185–197, 2011. pmid:21207126
  34. 34. Izhikevich EM. Dynamical Systems in Neuroscience: The Geometry of Excitability and Bursting. MIT Press, London, 2007.
  35. 35. Wiggins S. Introduction to Applied Nonlinear Dynamical Systems and Chaos, volume 2. Springer, New York, 2003.
  36. 36. Berman A and Plemmons RJ. Nonnegative Matrices in the Mathematical Sciences. SIAM, Philadelphia, 1979.
  37. 37. Pillai SU, Suel T, and Cha S. The Perron-Frobenius theorem: some of its applications. Signal Processing Magazine, IEEE, 22(2):62–75, 2005.
  38. 38. McIntyre CC, Mori S, Sherman DL, Thakor NV, and Vitek JL. Electric field and stimulating influence generated by deep brain stimulation of the subthalamic nucleus. Clinical Neurophysiology, 115(3):589–595, 2004. pmid:15036055
  39. 39. Lysyansky B, Popovych OV, and Tass PA. Multi-frequency activation of neuronal networks by coordinated reset stimulation. Interface Focus, 1(1):75–85, 2011. pmid:22419975
  40. 40. Sakaguchi H. Cooperative phenomena in coupled oscillator systems under external fields. Progress of Theoretical Physics, 79(1):39–46, 1988.
  41. 41. Antonsen TM Jr, Faghih RT, Girvan M, Ott E, and Platig J. External periodic driving of large systems of globally coupled phase oscillators. Chaos, 18(3):037112, 2008.
  42. 42. Childs LM and Strogatz SH. Stability diagram for the forced Kuramoto model. Chaos, 18(4):043128, 2008. pmid:19123638
  43. 43. Kiss IZ, Zhai Y, and Hudson JL. Resonance clustering in globally coupled electrochemical oscillators with external forcing. Physical Review E, 77(4):046204, 2008.
  44. 44. Popovych OV and Tass PA. Macroscopic entrainment of periodically forced oscillatory ensembles. Progress in Biophysics and Molecular Biology, 105(1):98–108, 2011. pmid:20875831
  45. 45. Hodgkin AL and Huxley AF. A quantitative description of membrane current and its application to conduction and excitation in nerve. J. Physiol., 117:500–44, 1952. pmid:12991237
  46. 46. Rinzel J. Excitation dynamics: insights from simplified membrane models. In Federation Proceedings, volume 44, pages 2944–2946, 1985.
  47. 47. Moehlis J. Canards for a reduction of the Hodgkin-Huxley equations. J. Math. Biol., 52:141–53, 2006. pmid:16195925
  48. 48. Gardiner CW. Handbook of Stochastic Methods: for Physics, Chemistry and the Natural Sciences. Springer, Berlin, 2004.
  49. 49. Netoff T, Schwemmer MA, and Lewis TJ. Experimentally estimating phase response curves of neurons: theoretical and practical issues. In Phase Response Curves in Neuroscience, pages 95–129. New York, 2012.
  50. 50. Achuthan S and Canavier CC. Phase-resetting curves determine synchronization, phase locking, and clustering in networks of neural oscillators. The Journal of Neuroscience, 29(16):5218–5233, 2009. pmid:19386918
  51. 51. Goel P and Ermentrout B. Synchrony, stability, and firing patterns in pulse-coupled oscillators. Physica D: Nonlinear Phenomena, 163(3):191–216, 2002.
  52. 52. Brown E, Holmes P, and Moehlis J. Globally coupled oscillator networks. In Perspectives and Problems in Nolinear Science, pages 183–215. Springer, 2003.
  53. 53. Klebaner FC. Introduction to Stochastic Calculus with Applications. Singapore, 2005.
  54. 54. Schmidt GS, Wilson D, Allgöwer F, and Moehlis J. Selective averaging with application to phase reduction and neural control. Nonlinear Theory and Its Applications, IEICE, 5(4):424–435, 2014.
  55. 55. Sanders JA, Verhulst F, and Murdock J. Averaging Methods in Nonlinear Dynamical Systems. Springer-Verlag, New York, second edition, 2007.