LETTER Communicated by Bard Ermentrout
Interspike Interval Statistics in the Stochastic Hodgkin-Huxley
Model: Coexistence of Gamma Frequency Bursts and Highly
Irregular Firing
Peter Rowat
prowat@ucsd.edu
Institute for Neural Computation, University of California at San Diego, La Jolla,
CA 92093, U.S.A.
When the classical Hodgkin-Huxley equations are simulated with Na-
and K-channel noise and constant applied current, the distribution of
interspike intervals is bimodal: one part is an exponential tail, as often
assumed, while the other is a narrow gaussian peak centered at a short
interspike interval value. The gaussian arises from bursts of spikes in
the gamma-frequency range, the tail from the interburst intervals, giv-
ing overall an extraordinarily high coefficient of variation—up to 2.5 for
180,000 Na channels when I ≈ 7 µA/cm². Since neurons with a bimodal
ISI distribution are common, it may be a useful model for any neuron
with class 2 firing. The underlying mechanism is due to a subcritical
Hopf bifurcation, together with a switching region in phase-space where
a fixed point is very close to a system limit cycle. This mechanism may
be present in many different classes of neurons and may contribute to
widely observed highly irregular neural spiking.
1 Introduction
The variability of spike times in response to repeated identical inputs is
a prominent characteristic of the nervous system that many investigators
have observed (Werner & Mountcastle, 1963; Pfeiffer & Kiang, 1965; Naka-
hama, Suzuki, Yamamoto, & Aikawa, 1968; Noda & Adey, 1970; Lamarre,
Filion, & Cordeau, 1971; Bassant, 1976; Burns & Webb, 1976; Whitsel,
Schreiner, & Essick, 1977; Dean, 1981; Softky & Koch, 1993; Holt, Softky,
Koch, & Douglas, 1996; Shadlen & Newsome, 1998) and analyzed from
many different approaches (Stein, 1965; Wilbur & Rinzel, 1983; Softky
& Koch, 1993; Mainen & Sejnowski, 1995; Troyer & Miller, 1997; Gutkin
& Ermentrout, 1998; Stevens & Zador, 1998; Destexhe, Rudolph, Fellous,
& Sejnowski, 2001; Mann-Metzer & Yarom, 2002; Rudolph & Destexhe,
2002; Brette, 2003; Stiefel, Englitz, & Sejnowski, 2006). Although synaptic
noise may account for some of this variability, another significant source is
noise from stochastic ion channel activity, which can affect neuronal thresh-
old and spike time reliability (Sigworth, 1980; White, Rubinstein, & Kay,
2000). Indeed, in certain small neurons, a single channel opening can initi-
ate an action potential (Lynch & Barry, 1989). Channel noise has been shown
to be significant or essential in other neural systems (White, Klink, Alonso,
& Kay, 1998; Dorval & White, 2005; Kole, Hallerman, & Stuart, 2006), in-
cluding bursting neurons, where Rowat and Elson (2004) showed that the
addition of channel noise to a model cell qualitatively reproduced certain
burst statistics of an identified biological neuron.
In this letter, we show that an isolated model neuron, described by the
stochastic Hodgkin-Huxley (HH) equations, generates intrinsic spike-time
variability with a characteristic bimodal distribution of interspike intervals
(ISIs): a gaussian peak at the shortest intervals in addition to an exponential
tail. This bimodal distribution has been observed in the firing of a variety of
biological neurons (Gerstein & Kiang, 1960; Nakahama et al., 1968; Lamarre
et al., 1971; Ruggero, 1973; Armstrong-James, 1975; Bassant, 1976; Burns &
Webb, 1976; Siebler, Koller, Stichel, Muller, & Freund, 1993; de Ruyter van
Steveninck, Lewen, Strong, Koberle, & Bialek, 1997; Duchamp-Viret, Kostal,
Chaput, Lansky, & Rospars, 2005). Moreover, in the stochastic HH model
as in biology, the location of the initial peak lies in the gamma-frequency
range (40–100 Hz). Put another way, gamma-frequency firing is a favored
behavior of the stochastic HH model.
In their deterministic form, the HH equations (Hodgkin & Huxley,
1952), originally introduced to describe action potentials in the squid
giant axon, remain a foundation stone of modern neuroscience, and
the formalism introduced by this model—also known as conductance
based—is widely used today (Meunier & Segev, 2002). The basic outlines
of the HH equations’ dynamics in the physiological range are well
known (e.g., Hassard 1978; Troy, 1978; Best, 1979; Rinzel & Miller, 1980;
Labouriau, 1985; Guckenheimer & Oliva, 2002), and much of what remains
unknown in the deterministic case has been illuminated by Guckenheimer
and Labouriau (1993). Many neurons, including several cortical types
(Markram, Toledo-Rodriguez, Wang, Gupta, Silberberg, & Caizhi, 2004;
Tateno, Harsch, & Robinson, 2004), can be classified as Hodgkin’s class II
(Hodgkin, 1948)—a type of neuron that, as applied current is increased in
steps, begins spiking with positive (nonzero) frequency. Class II behavior
arises in most cases from the presence of a subcritical Hopf bifurcation.
The HH model is a four-dimensional system, and several reductions to a
two-dimensional system have been introduced. Of the latter, the Fitzhugh-
Nagumo model is perhaps the best known (Fitzhugh, 1961) and has been
heavily investigated as a prototype excitable system in biological, chemical,
and physical contexts (Lindner, Garcia-Ojalvo, Neiman, & Schimansky-
Geier, 2004). While not an HH model reduction, the Morris-Lecar neuron
model is a conductance-based model of voltage oscillations in the barnacle
giant muscle fiber (Morris & Lecar, 1981). It has been analyzed extensively
(Rinzel & Ermentrout, 1998) and is often used as a good, qualitatively quite
accurate two-dimensional model of neuronal spiking.
Table 1: Hodgkin-Huxley Parameters Used in Simulations.

  C       Membrane capacitance             1 µF/cm²
  V_L     Leak reversal potential          −54.4 mV
  g_L     Leak conductance                 0.3 mS/cm²
  V_K     Potassium reversal potential     −77 mV
  ḡ_K     Maximal potassium conductance    36 mS/cm²
  ρ_K     Potassium channel density        18 channels/µm²
  N_K     Number of potassium channels
  V_Na    Sodium reversal potential        50 mV
  ḡ_Na    Maximal sodium conductance       120 mS/cm²
  ρ_Na    Sodium channel density           60 channels/µm²
  N_Na    Number of sodium channels
Using a stochastic version of this reduced two-dimensional model, we
propose a simple dynamical explanation for the characteristic bimodal dis-
tribution of ISIs observed in the four-dimensional stochastic HH model.
These dynamics are likely to apply to the spontaneous spiking of most class
II neurons, even those with complex arrays of ionic currents. Our findings
indicate that a combination of irregular firing and gamma-frequency bursts
can be attributed to the effects of channel noise intrinsic to a neuron.
2 Methods
2.1 The Stochastic Hodgkin-Huxley Model and Simulation. The cur-
rent conservation equation for the HH model is
C dV/dt = I − [g_Na (V − V_Na) + g_K (V − V_K) + g_L (V − V_L)],   (2.1)

where C is the membrane capacitance, V is the membrane potential, I is the applied current, g_Na is the sodium conductance, g_K is the potassium conductance, g_L is the leak conductance, V_Na and V_K are the sodium and potassium reversal potentials, and V_L is the leak reversal potential. The parameter values used are given in Table 1.
The Na and K conductances are written as

g_Na = m³ h ḡ_Na,   g_K = n⁴ ḡ_K,   (2.2)

where the gating variables m, h, n satisfy the equations

dx/dt = α_x(V)(1 − x) − β_x(V) x,   for x = m, h, n.   (2.3)
The rate functions α_x, β_x for x = m, h, n are as follows:

α_m(V) = 0.1 (V + 40) / [1 − exp(−(V + 40)/10)],   β_m(V) = 4 exp(−(V + 65)/18),   (2.4)

α_h(V) = 0.07 exp(−(V + 65)/20),   β_h(V) = 1 / [1 + exp(−(V + 35)/10)],   (2.5)

α_n(V) = 0.01 (V + 55) / [1 − exp(−(V + 55)/10)],   β_n(V) = 0.125 exp(−(V + 65)/80).   (2.6)
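To make the rate functions concrete, here is a minimal Python transcription of equations 2.4 to 2.6 (not code from the paper; the function names are illustrative, and the removable singularities of α_m and α_n at V = −40 and V = −55 mV are not handled):

```python
# Minimal sketch of equations 2.4-2.6 (illustrative, not the paper's code).
import numpy as np

def alpha_m(V): return 0.1 * (V + 40.0) / (1.0 - np.exp(-(V + 40.0) / 10.0))
def beta_m(V):  return 4.0 * np.exp(-(V + 65.0) / 18.0)

def alpha_h(V): return 0.07 * np.exp(-(V + 65.0) / 20.0)
def beta_h(V):  return 1.0 / (1.0 + np.exp(-(V + 35.0) / 10.0))

def alpha_n(V): return 0.01 * (V + 55.0) / (1.0 - np.exp(-(V + 55.0) / 10.0))
def beta_n(V):  return 0.125 * np.exp(-(V + 65.0) / 80.0)

# Steady-state single-gate open probabilities, e.g. at V = -65 mV:
V = -65.0
m_inf = alpha_m(V) / (alpha_m(V) + beta_m(V))   # about 0.053
h_inf = alpha_h(V) / (alpha_h(V) + beta_h(V))   # about 0.60
n_inf = alpha_n(V) / (alpha_n(V) + beta_n(V))   # about 0.32
```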
In the stochastic HH model, the gating equations 2.3 are not used, and we proceed as follows. The voltage-dependent conductances for the sodium and potassium currents are given by

g_Na = (Q_Na / N_Na) ḡ_Na,   g_K = (Q_K / N_K) ḡ_K,   (2.7)

where N_Na is the total number of sodium channels, N_K the total number of potassium channels, Q_Na and Q_K are the numbers of sodium and potassium channels in the open state, and ḡ_Na and ḡ_K are the maximum sodium and potassium conductances. N_Na and N_K are given by

N_Na = ρ_Na × area,   N_K = ρ_K × area,   (2.8)

where ρ_Na and ρ_K are the sodium and potassium channel densities. In this letter, ρ_Na and ρ_K are fixed. Thus, when the membrane area is changed, the total numbers of channels, N_Na and N_K, are changed.
Each potassium channel has four identical gates. Each gate is either open
or closed, and the channel conducts potassium current only when all four
gates are open. Each potassium gate is a Markov process with voltage-
dependent opening and closing rates given by the classical Hodgkin-
Huxley functions α_n(V) and β_n(V). The kinetic scheme for the operation of a single potassium channel is

n0 ⇌ n1 ⇌ n2 ⇌ n3 ⇌ n4,   (2.9)

with forward (opening) rates 4α_n, 3α_n, 2α_n, α_n and backward (closing) rates β_n, 2β_n, 3β_n, 4β_n for the successive transitions. Here, n4 is the state in which all four gates are open. For each state transition, for example, n0 → n1, the function labeling it (here, 4α_n) is the transition rate.
A sodium channel has three activation (m-) gates and one inactivation (h-) gate, and conducts sodium current only when all four gates are open. With the opening and closing rates α_m(V), β_m(V), α_h(V), and β_h(V) for the m-gates and the h-gate, the associated Markov kinetic scheme is

m0h0 ⇌ m1h0 ⇌ m2h0 ⇌ m3h0
  ⇅       ⇅       ⇅       ⇅              (2.10)
m0h1 ⇌ m1h1 ⇌ m2h1 ⇌ m3h1

where the horizontal transitions m_i h_j → m_{i+1} h_j have rates (3 − i) α_m, the reverse transitions have rates (i + 1) β_m, and the vertical transitions m_i h_0 → m_i h_1 and back have rates α_h and β_h. A channel is in state m_i h_j if i m-gates and j h-gates are open, 0 ≤ i ≤ 3, j = 0, 1. A sodium channel is open only when it is in state m3h1.
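As an illustration of how scheme 2.10 is used later, the following sketch (not from the paper) enumerates its 20 transitions as (source state, target state, per-channel rate) triples; multiplied by the number of channels currently in the source state, these per-channel rates give the quantities summed by the simulation algorithm described below.

```python
# Minimal sketch (illustrative): the 20 transitions of the Na scheme 2.10.
def na_transitions(alpha_m, beta_m, alpha_h, beta_h, V):
    """Return (source, target, per-channel rate) for every Na-channel transition."""
    T = []
    for j in (0, 1):                 # each row of the scheme (h gate closed/open)
        for i in range(3):           # m-gate openings and closings
            T.append((f"m{i}h{j}", f"m{i+1}h{j}", (3 - i) * alpha_m(V)))
            T.append((f"m{i+1}h{j}", f"m{i}h{j}", (i + 1) * beta_m(V)))
    for i in range(4):               # h-gate openings and closings
        T.append((f"m{i}h0", f"m{i}h1", alpha_h(V)))
        T.append((f"m{i}h1", f"m{i}h0", beta_h(V)))
    return T                         # 12 + 8 = 20 transitions

# The analogous enumeration of scheme 2.9 yields 8 transitions, 28 in total.
```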
The stochastic Hodgkin-Huxley model consists of equations 2.1, 2.4, 2.5, 2.6, and 2.7; N_K copies of the scheme 2.9; and N_Na copies of the scheme 2.10.
The brute-force algorithm keeps track of the states of all the sodium and
potassium channels, computationally a hugely inefficient scheme. The first
time one of the (N_Na + N_K) channels changes state, the membrane equation,
2.1, is integrated up to that instant in time, and the process is repeated.
An exact, and much better, algorithm (Gillespie, 1977; Chow & White,
1996) does not track the state of every channel but instead tracks the number
of channels in each state, as follows.
In schemes 2.9 and 2.10, there are 13 different channel states x, for x = n0, n1, ..., n4, m0h0, ..., m3h1, and 28 transitions y with rates {R_y(V) : y = n0 → n1, ..., n3 → n4, m0h0 → m1h0, ..., m2h1 → m1h1, ..., m3h0 → m3h1}; for example, R_{n1→n2} = 3α_n.

For each x, N_x is the number of channels in state x. The algorithm tracks the global state (N_n0, N_n1, ..., N_m3h1), where

N_n0 + N_n1 + N_n2 + N_n3 + N_n4 = N_K,
N_m0h0 + ··· + N_m3h1 = N_Na,
Q_K = N_n4, and Q_Na = N_m3h1.
The algorithm has five steps:

Step 1: Select a transition time t_tr from the current global state (N_n0, N_n1, ..., N_m3h1) to its immediate successor.
  Step 1a: Update the transition rates {R_y(V)}.
  Step 1b: For each z = 1, 2, ..., 28, compute R_tot(z) = N_source(y) R_y(V), where z = ord(y), z = 1, 2, ..., 28, is a fixed ordering of the transitions y = n0 → n1, ..., m3h0 → m3h1.
  Step 1c: Let λ = Σ_{z=1..28} R_tot(z), the global transition rate out of the current global state. The transition time has the probability density

  λ exp(−λ(t − t_0)).   (2.11)

  Step 1d: Let t_tr = (1/λ) ln(1/r_1), with r_1 a pseudorandom number in [0, 1]. Then t_tr is a sample from the density 2.11.

Step 2: Integrate equation 2.1 from t_0 to t = t_0 + t_tr to get a new value of V.

Step 3: Select the transition taken. Using r_2, a pseudorandom number in [0, 1], find µ ∈ {1, ..., 28} such that Σ_{z=1..µ−1} R_tot(z) < r_2 λ ≤ Σ_{z=1..µ} R_tot(z).

Step 4: Update the state as determined by the transition µ. For example, if R_tot(µ) corresponds to the transition m2h1 → m1h1, then add 1 to N_m1h1 and subtract 1 from N_m2h1.

Step 5: Set t_0 = t and repeat steps 1 to 4.
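A minimal sketch of this event-driven bookkeeping, restricted for brevity to the potassium scheme 2.9 held at a fixed voltage (the voltage-clamp mode of section 2.2, so step 2 is omitted); the full model adds the 20 sodium transitions and the integration of equation 2.1. This is not the paper's code, and the names and defaults are illustrative.

```python
# Minimal sketch: exact (Gillespie-style) simulation of N_K potassium channels
# at fixed V, tracking only the occupation numbers N_n0..N_n4.
import numpy as np

def alpha_n(V): return 0.01 * (V + 55.0) / (1.0 - np.exp(-(V + 55.0) / 10.0))
def beta_n(V):  return 0.125 * np.exp(-(V + 65.0) / 80.0)

def simulate_k_channels(V=-60.78, N_K=7200, t_end=50.0, seed=0):
    rng = np.random.default_rng(seed)
    a, b = alpha_n(V), beta_n(V)          # step 1a (done once: V is clamped)
    N = np.zeros(5, dtype=int); N[0] = N_K
    t, open_fraction = 0.0, []
    while t < t_end:
        # step 1b: total rate of each of the 8 transitions
        rates = np.concatenate([(4 - np.arange(4)) * a * N[:4],   # n_i -> n_{i+1}
                                np.arange(1, 5) * b * N[1:]])     # n_i -> n_{i-1}
        lam = rates.sum()                  # step 1c: global transition rate
        t += rng.exponential(1.0 / lam)    # step 1d: exponential waiting time
        mu = rng.choice(8, p=rates / lam)  # step 3: pick the transition taken
        if mu < 4: N[mu] -= 1; N[mu + 1] += 1          # step 4: update counts
        else:      i = mu - 3; N[i] -= 1; N[i - 1] += 1
        open_fraction.append(N[4] / N_K)   # Q_K / N_K
    return np.array(open_fraction)

print(simulate_k_channels().mean())        # mean K open fraction at the clamp voltage
```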
2.2 The Stochastic Hodgkin-Huxley Model in Voltage-Clamp Mode. This is an experimental technique physiologists use; here we use it to collect statistics on the channel noise. Fix V = V_0 and use the algorithm just described, but omit step 2. Also, step 1a can be taken out of the loop and executed once on initialization. In all HH voltage-clamp runs, the first 20 ms were discarded to avoid transients.
2.3 The Morris-Lecar Model. The deterministic Morris-Lecar model is

C dV/dt = −ḡ_Ca m_∞(V)(V − V_Ca) − ḡ_K w (V − V_K) − g_L (V − V_L) + I,   (2.12)

dw/dt = φ [w_∞(V) − w] / τ_w(V),   (2.13)

where

m_∞(V) = 0.5 [1 + tanh((V − V_1)/V_2)],
w_∞(V) = 0.5 [1 + tanh((V − V_3)/V_4)],   (2.14)

and

τ_w(V) = 1 / cosh((V − V_3)/(2 V_4)).   (2.15)
Parameter values were V_1 = −1.2, V_2 = 18, V_3 = 2, V_4 = 30, ḡ_Ca = 4.4, ḡ_K = 8.0, g_L = 2, V_K = −84, V_L = −60, V_Ca = 120, C = 20 µF/cm², and φ = 0.04. The injected current I was varied.
The stochastic Morris-Lecar model is derived in the same manner as the stochastic HH model derives from the deterministic HH model. The K⁺-gating variable w is replaced by a population of N_K channels, each having a single gate with two states, open and closed. We introduce the membrane area (Area) with channel density ρ_K, for which

N_K = ρ_K × Area   (2.16)

holds. The opening and closing transition rates are given by functions α_w(V) and β_w(V). These are obtained by rewriting equation 2.13 in the form

dw/dt = α_w(V)(1 − w) − β_w(V) w,   (2.17)

where, after a little manipulation and using equations 2.14 and 2.15,

α_w(V) = 0.5 φ cosh((V − V_3)/(2 V_4)) [1 + tanh((V − V_3)/V_4)],
β_w(V) = 0.5 φ cosh((V − V_3)/(2 V_4)) [1 − tanh((V − V_3)/V_4)].   (2.18)
Now we retain only equation 2.12 and replace equation 2.13 (or its alternate form, 2.17) by the kinetic scheme

closed ⇌ open,   with opening rate α_w(V) and closing rate β_w(V),   (2.19)

where α_w(V) and β_w(V) are given by equations 2.18. We used ρ_K = 20 channels/µm², typically with area = 50 µm², so the number of potassium channels was N_K = 1000.
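As a concrete illustration (not the paper's implementation, which uses the exact event-driven algorithm of section 2.1), the following sketch advances the stochastic Morris-Lecar model with a fixed time step, letting each of the N_K two-state channels of scheme 2.19 flip with probability rate × dt. The parameter values are those of section 2.3; the time step and initial conditions are illustrative choices.

```python
# Minimal sketch: approximate simulation of the stochastic Morris-Lecar model
# with N_K two-state K channels (scheme 2.19) and an Euler step for eq. 2.12.
import numpy as np

C, gCa, gK, gL = 20.0, 4.4, 8.0, 2.0          # parameters from section 2.3
VCa, VK, VL = 120.0, -84.0, -60.0
V1, V2, V3, V4, phi = -1.2, 18.0, 2.0, 30.0, 0.04
rhoK, area = 20.0, 50.0
NK = int(rhoK * area)                          # 1000 potassium channels

def m_inf(V):   return 0.5 * (1.0 + np.tanh((V - V1) / V2))
def alpha_w(V): return 0.5 * phi * np.cosh((V - V3) / (2 * V4)) * (1 + np.tanh((V - V3) / V4))
def beta_w(V):  return 0.5 * phi * np.cosh((V - V3) / (2 * V4)) * (1 - np.tanh((V - V3) / V4))

def run(I=82.0, t_end=1000.0, dt=0.01, seed=1):
    rng = np.random.default_rng(seed)
    V, n_open = -60.0, 0
    trace = np.empty(int(t_end / dt))
    for k in range(trace.size):
        # binomial numbers of channel openings/closings in this time step
        opens  = rng.binomial(NK - n_open, min(1.0, alpha_w(V) * dt))
        closes = rng.binomial(n_open,      min(1.0, beta_w(V) * dt))
        n_open += opens - closes
        w = n_open / NK                        # fraction of open K channels
        dV = (-gCa * m_inf(V) * (V - VCa) - gK * w * (V - VK)
              - gL * (V - VL) + I) / C
        V += dV * dt
        trace[k] = V
    return trace

print(run().max())                             # peak V over a 1 s run
```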
The stochastic Morris-Lecar model in voltage-clamp mode is obtained as
for the Hodgkin-Huxley model. Six different random number generators
were tested, and the resulting histograms compared for I =0. Since no
significant differences were found, all subsequent simulations were done
with the Mersenne Twister (Matsumoto & Nishimura, 1998). Most runs
were done using double precision (64 bits), but with the larger areas of over
1000 µm2(hence, larger channel numbers and very small times between
Markov state changes), we changed to long double precision (128 bits).
2.4 Hodgkin-Huxley Phase Space and Q Space. In the deterministic HH model, phase space is the set of 4-tuples (V, m, h, n). In the stochastic HH model, the natural space to work with is the space of 3-tuples (V, Q_Na, Q_K), where Q_Na = N_m3h1 is the number of open Na channels and Q_K = N_n4 is the number of open K channels. Knowing N_Na and N_K, we can move from (V, m, h, n) space to (V, Q_Na, Q_K) space via

Q_Na = m³ h N_Na   and   Q_K = n⁴ N_K,   (2.20)

by equation 2.7. There is no single 1-1 map from (V, Q_Na, Q_K) back to (V, m, h, n) space. One choice is the following, assuming the channel state numbers N_m0h1, N_m1h1, N_m2h1, and N_m3h0 are known. For the single-gate open probability n, assuming symmetry among the gates, use

n = (Q_K / N_K)^{1/4}.

For sodium channels, by diagram 2.10, we use

h = (N_m0h1 + N_m1h1 + N_m2h1 + Q_Na) / N_Na

and

m = [(Q_Na + N_m3h0) / N_Na]^{1/3}.
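A small sketch of these conversions (illustrative only; the example uses N_Na = 24,000 and N_K = 7200 for area 400 µm² and approximate resting-state gating values):

```python
# Minimal sketch: maps between (V, m, h, n) space and (V, Q_Na, Q_K) space.
def to_Q(m, h, n, N_Na, N_K):
    """Open-channel numbers from gating variables, equation 2.20."""
    return (m ** 3) * h * N_Na, (n ** 4) * N_K

def from_Q(Q_Na, Q_K, N_Na, N_K, N_m0h1, N_m1h1, N_m2h1, N_m3h0):
    """One choice of inverse map, assuming symmetry among the gates."""
    n = (Q_K / N_K) ** 0.25
    h = (N_m0h1 + N_m1h1 + N_m2h1 + Q_Na) / N_Na
    m = ((Q_Na + N_m3h0) / N_Na) ** (1.0 / 3.0)
    return m, h, n

# Example: near rest (m ~ 0.053, h ~ 0.60, n ~ 0.32) with area = 400 um^2,
# only a handful of Na channels and a few dozen K channels are open.
print(to_Q(0.053, 0.60, 0.32, N_Na=24000, N_K=7200))   # about (2.1, 75.5)
```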
3 Results
When the ISI time series used for Figure 1A is converted to hertz and the frequency histogram plotted, two significant peaks appear (see Figure 1B); the inset shows the original case with I = 0 and A = 100 µm² (Chow & White, 1996), with a barely noticeable second peak. In both histograms, the second peak is in the gamma-frequency range (40–100 Hz). This suggested plotting the frequency histograms for a wide range of input currents (see Figure 2), which revealed that the right-hand peak frequency increased linearly with input current. Next, we plotted the right-hand peak frequency against input current for two areas, A = 100 and A = 400 µm² (see Figure 3, top two traces). On the same graph, the frequency of the deterministic Hodgkin-Huxley equations (Rinzel & Miller, 1980) is plotted, showing that for I ≥ 7 µA/cm², the right-hand peaks in the frequency histogram closely track, at a slightly higher frequency, the deterministic frequency.
[Figure 1 graphic: probability density versus ISI length (ms) in panel A and versus instantaneous frequency (Hz) in panel B, for A = 400 µm², I = 3 µA/cm², with an inset for A = 100 µm², I = 0.]
Figure 1: A prominent initial peak in ISI histograms generates a peak in the frequency histogram in the 40–100 Hz range. (A) Smoothed sliding histogram of ISIs generated by the stochastic HH model with I = 3 µA/cm² and area A = 400 µm² (corresponding channel numbers N_Na = 24,000, N_K = 7200). The histogram was generated by sliding a bin of width 2 ms along the data in steps of 0.2 ms and then smoothing it slightly. For most histograms, n = 30,000 data points were used, except when the mean ISI was higher than a few seconds, when n ranged as low as 3000. Inset: Area = 100 µm², I = 0, the case considered by Chow and White (1996). All histogram bins are normalized so each curve is an approximate probability density function. (B) Smoothed sliding histograms of the frequencies obtained from the ISI data sets used in A, where frequency = 1000/ISI.
[Figure 2 graphic: probability density versus frequency (Hz), one curve per input current I = 3 to 10 µA/cm², with the gamma-frequency band indicated.]
Figure 2: Frequency histograms for input current I = 3, 4, ..., 10 µA/cm², A = 400 µm². The numerals indicate the current corresponding to each histogram curve.
[Figure 3 graphic: frequency of the right-hand peak (Hz) versus I (µA/cm²) for areas 100 and 400 µm², together with the deterministic frequency curve and the currents I_ν and I_1.]
Figure 3: Comparison of right-hand peaks of the frequency histograms in Figure 2 with the frequency of the deterministic Hodgkin-Huxley model. Squares: right-hand peaks plotted against input current I for area = 400 µm². Circles: right-hand peaks for area A = 100 µm². The solid curve is the deterministic frequency. The left and right vertical dotted lines are where the deterministic frequency drops to 0 as I decreases (left line) or jumps up from 0 as I increases (right line). For the meaning of I_ν and I_1, see the text and Figure 6.
[Figure 4 graphic: voltage traces for input currents 0 to 12 µA/cm²; scale bars 500 ms and 120 mV.]
Figure 4: Sample traces for different input currents I. Area = 400 µm², except for the bottom trace, which has area = 2000 µm². Numerals at the left give the input current I (µA/cm²).
Figure 4 shows membrane potential traces underlying the ISIs used for
histograms of Figures 1 to 3, and Figure 5 shows basic statistics (mean,
coefficient of variation) of the ISIs. The grouping of spikes into gamma-
frequency bursts suggested the unlikely possibility that the stochastic HH
model was in fact bursty. We disproved this by taking a 30,000-long ISI time
series and examining the distribution of gamma-frequency bursts in 10,000
shuffles. The numbers of pairs, triples, 4-tuples, and 5-tuples of short ISIs
in the original time series were not significantly different from their overall
distributions in the 10,000 shuffles. This was done twice. We conclude that
the distribution of short and longer ISIs within any one ISI time series is
random. This has also been found in experimental data (Stiefel et al., 2006).
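A sketch of this kind of shuffle test (not the paper's analysis code; the short-ISI threshold, run lengths, and counting convention are illustrative choices):

```python
# Minimal sketch: compare counts of runs of consecutive short ISIs in the
# original series against their distribution over many shuffled series.
import numpy as np

def run_counts(isis, threshold, max_k=5):
    """Number of runs of exactly k consecutive ISIs below threshold, k = 2..max_k."""
    short = np.append(isis < threshold, False)   # sentinel closes the last run
    counts, k = dict.fromkeys(range(2, max_k + 1), 0), 0
    for s in short:
        if s:
            k += 1
        else:
            if 2 <= k <= max_k:
                counts[k] += 1
            k = 0
    return counts

def shuffle_test(isis, threshold=25.0, n_shuffles=1000, seed=0):
    # The paper used 10,000 shuffles; fewer are used here to keep the sketch fast.
    rng = np.random.default_rng(seed)
    observed = run_counts(isis, threshold)
    null = {k: np.empty(n_shuffles) for k in observed}
    for s in range(n_shuffles):
        shuffled = run_counts(rng.permutation(isis), threshold)
        for k in null:
            null[k][s] = shuffled[k]
    # fraction of shuffles with at least as many runs as observed, per run length
    return {k: float(np.mean(null[k] >= observed[k])) for k in observed}
```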
4 Mechanism
The frequency plots in Figure 3 strongly suggest that the right-hand peaks
in the frequency histograms in Figure 2—and hence the initial narrow
[Figure 5 graphic: panel A, mean ISI (ms, log scale) versus input I (µA/cm²); panel B, coefficient of variation versus input I; curves for areas 100 to 3000 µm², with I_ν and I_1 marked.]
Figure 5: ISI statistics as a function of input current. (A) Mean ISI for each I. (B) Coefficient of variation (CV). Note the log scale for means. Circles, A = 100 µm²; squares, A = 400 µm²; diamonds, A = 1000 µm²; triangles, A = 2000 µm²; inverted triangles, A = 3000 µm². The vertical dashed lines at I = I_ν and I_1 are important values in the subcritical Hopf bifurcation (see Figures 6 and 3). For the vertical arrow, see Figure 6.
gaussian peak in the ISI histogram—are linked to the spiking frequency of the deterministic HH equations. If this is indeed the case, what mechanism underlies the linear continuation of the frequency plot below I_ν for the stochastic HH model?

Here we describe mechanisms that underlie the tracking of the deterministic frequencies above I_ν and the linear extension of the peak noise-induced frequencies to lower input currents, I < I_ν, where the deterministic HH model is quiescent.
From Figure 1A, an obvious proposal is that there are two processes
present—one generating the gaussian initial peak and another the expo-
nential tail, together with a mechanism for switching between them. The
gaussian peak could be generated by consecutive noisy traversals of a limit
cycle (LC). Commonly, an exponential distribution is generated by a random walk with an absorbing barrier; thus, the second process requires a spike to be followed by a random walk to an absorbing barrier and then another spike. Such a process could arise if, after a spike, the state is suddenly transported to the vicinity of a fixed point, undergoes a random walk until it reaches an unstable limit cycle (ULC), and then spikes again, that is, completes another orbit of a limit cycle. In a dynamical system, both processes
would be present if the phase portrait has a stable limit cycle (SLC), gener-
ating the first process, around a stable fixed point (SFP), with an unstable
limit cycle between them—as required in two dimensions by the Bendixson
theorem (see Figure 7A)—generating the second process. We will often use
vicinity to mean “basin of attraction.”
What mechanism could act to switch between these processes? Noise
could perturb a trajectory from the vicinity of the SLC across the ULC and
into the vicinity of the SFP, an inward switch. When the random walk out-
ward from the SLC crosses the ULC into the vicinity of the SLC, an outward
switch has occurred. When the first spike is emitted on entering the vicinity
of the SLC, an ISI is generated that marks the end of a spiraling random walk
and hence contributes to the exponential distribution. Subsequent traver-
sals of the SLC without crossing the ULC result in short ISIs that contribute
to the initial gaussian peak.
This description is strictly true only in a two-dimensional system, since
the “inside” or “outside” of a limit cycle is not defined in higher dimensions.
However, it can be generalized to higher dimensions if we replace “unstable
limit cycle” by the “attractor basin boundary” (ABB), a lower-dimensional
manifold lying between the basins of attraction of the stable fixed point and
the stable limit cycle.
It is well known that the HH equations undergo a subcritical Hopf bi-
furcation as input current is increased (Hassard, 1978; Troy, 1978; Rinzel &
Miller, 1980). The bifurcation diagram in Figure 6 (Guckenheimer & Oliva,
2002) shows how the voltage coordinate of the fixed point, and the maxi-
mum and minimum voltages reached by the stable and unstable limit cycles,
change as input current I is increased. The subcritical Hopf bifurcation occurs at input current I_1, while I_ν, with I_ν < I_1, is the current where the stable and unstable limit cycles merge. No limit cycle is present when I < I_ν. For I between I_ν and I_1 (case A in Figure 6), there are a stable LC, an unstable LC, and a stable fixed point, except for a small interval J of length at most 0.12 µA/cm², lying strictly between I_ν and I_1, which has two extra unstable LCs. Thus, for values of I in the middle interval of the subcritical Hopf bifurcation (I_ν < I < I_1), outside of the small subinterval J, the phase portrait has exactly the structure shown in Figure 7A for an ISI histogram of the form in Figure 1A.
Could the transitions from fixed point to stable limit cycle, and vice
versa, ever occur as a result of channel noise? A cursory glance at Figure 6A
suggests that the distance between SFP and ULC might be too large com-
pared to the noise amplitude. However, on remembering that the ULC and
SLC are four-dimensional orbits around the SFP and that the voltage ranges
of both the unstable and stable limit cycles include the fixed-point voltage,
the question becomes: Are there portions of the SLC where the minimal
distance between SFP and SLC is small enough that channel-noise-induced
perturbations can plausibly cause the state to switch from one attractor
basin to the other? There are two requirements. First, the minimal distance
[Figure 6 graphic: bifurcation diagram, V (mV) versus input current I (µA/cm²), with regions A (I_ν < I < I_1), B (I > I_1), and C (I < I_ν) marked.]
Figure 6: Bifurcation diagram of the deterministic HH model, for parameter I. Solid line, stable fixed point (SFP). Long-dashed line, unstable fixed point (UFP). Dot-dashed lines, extreme values of V on the stable limit cycle (SLC). Short-dashed lines, extreme V values on the unstable limit cycle (ULC). The S-bend in the ULC curve above the vertical arrow implies that three ULCs exist over a small range of input current. The vertical dashed lines at I = I_1 and I = I_ν divide the range of I into three cases: A, B, and C. The subcritical Hopf bifurcation occurs at I = I_1, and the stable and unstable limit cycles merge at I = I_ν. Adapted, with permission, from Guckenheimer and Oliva (2002).
δ_1 from the SFP to the ABB must be small enough to permit a noise-induced "out" transition out of the SFP vicinity across the ABB into the SLC vicinity, with nonzero probability. Second, the minimal distance δ_2 from the SLC to the ABB must be small enough for a noise-induced "in" transition into the SFP basin. It is not necessary for the two minima to occur at the same point or region of the ABB. Sketches in Figures 7B and 7C illustrate this point in two dimensions.
A detailed description of this mechanism will not be given for the four-
dimensional HH model. Instead the mechanisms are illustrated in a simpler,
two-dimensional case: the class II Morris-Lecar model (Rinzel & Ermentrout, 1998), which also undergoes a subcritical Hopf bifurcation. Here the values for I_ν and I_1 are approximately 88.3 and 93.85 µA/cm². We constructed a stochastic version of the Morris-Lecar equations, 2.12 to 2.15, and obtained ISI data distributions similar to those we obtained for the
[Figure 7 graphic: phase-plane sketches A, B, and C showing the SFP, SLC, ULC, the switching region, and "in"/"out" transitions.]
Figure 7: Phase plane sketches in region A (I_ν < I < I_1; see Figure 6). Dot, stable fixed point (SFP); continuous circle, stable limit cycle (SLC); dashed circle, unstable limit cycle (ULC). (A) Minimal phase portrait implied by the ISI histogram in Figure 1A. The distances between SFP and ULC and between ULC and SLC are large relative to noise amplitude. (B) There is a single switching region where the distances between SFP, SLC, and ULC are of the same order as the noise. Thus, noise can easily perturb a trajectory from the vicinity of the fixed point over the ULC and "out" to the vicinity of the SLC and, vice versa, perturb a trajectory near the SLC "in" to the vicinity of the SFP. (C) The switching region for an "in" movement of a trajectory could be different from the switching region for an "out" movement.
stochastic HH model. Compare Figure 8A with Figure 1A, Figure 8B with
Figure 2, and Figure 8C with Figure 3. Qualitatively, the match is quite good.
It might be objected that the Morris-Lecar model does not have a J-interval with more than two limit cycles. We do not consider this an important objection, because we have seen no indication of unusual ISI statistics when the injected current I lies within the J-interval of the HH model. We conclude that the global dynamics of the Morris-Lecar model should serve as a good, or at least valuable, guide to the dynamics of the stochastic HH model that underlie the ISI distributions of Figure 1.
Case A: I_ν < I < I_1. In Figure 9A (1), the small dotted rectangle shows
that the stable fixed point and the stable and unstable limit cycles are very
close together for a short section of the ULC. Therefore, in this region of
phase space, noise with amplitude the same order as the distance between
the SFP and the SLC will frequently cause the trajectory to switch from the
SLC vicinity to the stable fixed-point vicinity and vice versa. A region with
this property will be referred to as a switching region. Figure 9A (2) shows
an example of noisy switching and Figure 9A (3) the corresponding voltage
trace.
Once the trajectory switches to the inside of the ULC, in the absence
of further noise, it spirals in toward the SFP. Consider a fixed radius from
the SFP through the closest point on the ULC and extending out beyond
[Figure 8 graphic: panel A, probability versus ISI (ms); panel B, probability versus frequency (Hz) for I = 82 to 98; panel C, frequency (Hz) of the right-hand peak for the stochastic MLE versus I (µA/cm²), with the deterministic frequency and I_ν, I_1 marked.]
Figure 8: ISI data from the stochastic Morris-Lecar model has similar qualitative properties to ISI data from the stochastic HH model. (A) ISI histogram for I = 82.0. Area = 50 µm² (1000 K⁺ channels). The long exponential tail, cut off at 500 ms, extends out to 10,000 ms. (B) Superimposed frequency histograms for I = 82, 84, ..., 98. Area = 50 µm². (C) Plot of the right-hand peaks in B (circles) compared with the frequency of the deterministic Morris-Lecar model. I_ν ≈ 88.29 and I_1 ≈ 93.86.
the ULC. Due to noise, successive intersections of the spiraling trajectory
with this radius generate a one-dimensional random walk that terminates
when the next intersection with the radius lies outside the ULC, in the SLC
attractor basin. Now, in the absence of another perturbation back inside the
ULC, the trajectory rejoins the SLC and emits another spike.
Case B: I_1 < I. Here there is no ULC, only an SLC and a UFP (see Figure 9B(1)). However, the fixed point is only weakly unstable, so if the trajectory is in its vicinity, it remains close for a significant duration, while the
trajectory describes a slowly expanding spiral (see Figure 9B(1), inset). See,
for example, the initial segment of the voltage trace in Figure 9B(3) before
the first spike. Thus, noise on w keeps the state close to this fixed point for
prolonged periods until eventually the outward-expanding spiral “wins”
(see Figure 9B(2))—the random walk is terminated—and the trajectory
returns to the SLC, whose next spike generates an ISI contributing to the
negative exponential tail in the ISI PDF. Subsequent noisy ISIs contribute
to the gamma-frequency peak in the frequency PDF.
Case C: I < I_ν. Now there is no limit cycle, just a single SFP. There are, however, remnants of the limit cycle, termed "ruts," due to the way that trajectories starting from a wide range of initial conditions converge onto a rut. Figure 9C(1) shows two ruts in the deterministic Morris-Lecar phase space and several trajectories converging onto each one. In a rut, the vector field is strongly contracting transverse to the trajectory. In the region of phase space to the right of the fixed point, there is no transverse contraction, so initially separate trajectories remain parallel and separate (see Figures 9C(1) and 9C(2)). In a similar case (I = 88.2 instead of 82.0), Tateno and Pakdaman (2004) show the distribution of the ML dynamics in phase space:
ruts appear as small ridges in approximately the same position, while in the
nonrut (noncontraction) areas, the ridges disappear due to the distribution
being spread out. Any trajectory starting more than about 0.03 vertically
below the fixed point emits a spike and enters the rut that constitutes the
downstroke of the MLE spike. The arrow in the inset graph shows the
location of the “threshold” between trajectories that do and do not generate
spikes.
4.1 Where Is the Switching Region in the Hodgkin-Huxley Model?
The mechanisms just described for the two-dimensional (2D) stochastic
Morris-Lecar model generalize, approximately, to the 4D stochastic HH
model. An exact description is beyond the scope of this letter, in part
because 4D objects are difficult to visualize, but also because in 4D, the
boundary between two attractor basins (e.g., between a SFP and a SLC) is
a 3D manifold, which is very likely to be fractal (McDonald, Grebogi, Ott,
& Yorke, 1985; Guckenheimer & Oliva, 2002) in some range of I. Another
difficulty in case A of the HH subcritical Hopf bifurcation (see Figure 6)
is this: the ULC is a 1D object embedded in the 3D boundary between the
basins of attraction of the SFP and the SLC, but a trajectory switching from
[Figure 9 graphic, case A, panels (1)-(3): (V, w) phase portraits showing the stable FP, stable LC, unstable LC, a noisy trajectory, and the switching region, with the corresponding voltage trace; full caption below.]
one basin to the other might not pass close to the ULC, despite appearances
in Figure 6.
Corresponding to case C in the Morris-Lecar model (see Figure 9C), Figure 10 shows the projection of a noise-free HH trajectory (I = 6 µA/cm²) onto the (m, n) plane—one of six possible 2D projections—that emits one
spike and then spirals into the SFP. The projection of the switching region
where small perturbations can generate a spike is enclosed in the ellipse
in the inset. There is no ABB since there is only one attractor. However,
the dashed line segment indicates something similar: the (2D projection of
the) boundary between trajectories that emit a full-blown spike and those
continuing with small-amplitude spirals into the SFP.
The dashed line segment in Figure 10 is like a threshold. However, the
concept of a threshold is problematic for a system with more than one
variable because the hidden variables may not have their expected values
(see Guckenheimer & Oliva, 2002, for a rigorous definition). If the system
is in the vicinity of the SFP with coordinates (V_0, m_0, h_0, n_0) but previous perturbations have caused (m, h, n) to momentarily have values different from (m_0, h_0, n_0), then the effective V threshold to generate a spike may
now be very different. This is seen experimentally: in cortical neurons, the
previous voltage behavior can affect spike threshold by as much as 10 mV
(Azouz & Gray, 2000). Also, in a very small I-interval within the S-turns in
the ULC amplitude plot (see the thick arrow in Figure 6), there is a difficulty
of a different kind: with I ≈ 7.9219, Guckenheimer and Oliva (2002) found
a chaotic invariant set and conjectured that there are regions of phase space
where every neighborhood, however small, contains two kinds of states:
Figure 9: Switching regions in Morris-Lecar phase portraits. The parameters are as in Rinzel and Ermentrout (1998), subcritical Hopf bifurcation. For one instance of each case A, B, C (see Figure 6), we show (1) the phase portrait of the deterministic (noise-free) system; the switching region is shown by a small dotted rectangle and expanded as an inset graph. (2) A noisy trajectory overlaid on the noise-free phase portrait. (3) A noisy voltage trace corresponding to the noisy trajectory in graph 2. In each case, graphs 1 and 2 have identical scales. Scale bars in graph 1: horizontal bars: main graph, 10 mV; inset graph, 1 mV. Vertical bars: main graph, 0.1; inset graph, 0.01. Case A: I = 90 µA/cm², which has an SFP, an SLC, and a ULC. Case B: I = 98 µA/cm². A UFP and an SLC. The solid line shows a trajectory spiraling out from near the UFP to join the SLC; the latter is shown by dots at equal time intervals. Case C: I = 82 µA/cm². Here there is only a stable fixed point and no LC, but "ruts" in phase space enable noise-induced spikes to occur. Segments of nine trajectories are shown, starting from points very close to the SFP. Six fall into the upper rut and emit a spike, while the remaining three immediately spiral into the SFP. The small arrow in the inset graph indicates the threshold between spiking and nonspiking initial points.
Figure 10: Hopf case C in the Hodgkin-Huxley model with I = 6 µA/cm², so there is no LC. (A) Projection of the deterministic trajectory onto the (m, n) phase plane. (B) Expansion of the region around the SFP. The dashed curve indicates the approximate location of the attractor basin boundary or "threshold"; the dotted ellipse indicates the switching region in this projection.
those leading to action potentials and those leading to the stable fixed point.
If true, then, mathematically, no threshold can be defined for this value of
the injected current.
In order to truly locate a switching region for case A of the Hopf bifurcation, one must compute the ABB between the SFP and the SLC and then measure two minimal distances: δ_1 from the SFP out to the ABB and δ_2 inward from the SLC to the ABB. The endings of δ_1 and δ_2 in the ABB may be close, but with current knowledge, they could be at different locations (see the sketch in Figure 7C). If the latter, then switching in and out of the vicinity of the SFP will occur at different phases of the spiking cycle. Cases B and C require different definitions of the switching region, which will not be attempted.

These difficulties are avoided by a simplistic approach. We computed the minimal distance between fixed point and stable limit cycle in (V, m, h, n)-space. It is also useful to look at this in the three-dimensional (V, Q_Na, Q_K)-, or Q-, space (see Figure 11). If these distances are small enough, then basin switching can be induced by small-amplitude noise. Since no LC exists in case C, only ruts, these plots do not go below I_ν. See the location of I_ν in Figures 3 and 6.
[Figure 11 graphic: minimum distance from SFP to SLC versus injected current I (µA/cm²), in four-dimensional phase space and in three-dimensional Q-space.]
Figure 11: These two curves track the closest approach of the HH stable limit cycle to the fixed point for 6.5 ≤ I ≤ 15, in two deterministic phase spaces. The lower trace uses the standard four-dimensional space (V, m, h, n), while the upper uses the three-dimensional space (V, Q_Na, Q_K), where Q_Na = m³h × N_Na and Q_K = n⁴ × N_K. Area = 400 µm².
As input current increases from I_ν to 15, the minimal distance in (V, m, h, n)-space from the FP to the SLC increases from 0.04 to nearly 0.07, while the minimal distance in Q-space increases from under 7 to over 12. In Q-space, "minimal distance X" means X channel openings or closings in the direction of the fixed point or SLC, depending on whether the trajectory is going "in" or "out." If one supposes that the ABB or ULC lies approximately halfway,¹ then only X/2 channel changes are needed. Thus, the expected probability of basin switching goes down as I increases, which is in agreement with the sequence of traces in Figure 4. More exactly, the probability of switching "in" from the SLC vicinity to the FP vicinity becomes lower, while the probability of an "out" switch decreases less, due to weakening of the FP stability.

With I = 7 µA/cm², the distance from the SFP to points on the SLC during one spiking cycle is shown in Figures 12B and 12C. The distance in (V, m, h, n)-space is dominated by the voltage V, since 0 ≤ m, h, n ≤ 1, whereas the range of V is over 100 mV. Thus, the minimal distance occurs at two downward spikes created when V crosses the SFP rest potential

¹ Here I assume that basic dynamical features such as limit cycles in (V, m, h, n)-space map into similar features in Q-space.
[Figure 12 graphic: panel A, V versus time (ms); panels B and C, distance from SFP to SLC (log scale) versus time, in four-dimensional phase space and in three-dimensional Q-space.]
Figure 12: (A) The voltage trace for the deterministic HH equations with I = 7 µA/cm² (thick line). The dotted line is a trace from the stochastic HH model for the same I and with area = 400 µm², but shifted up for clarity. (B) The Euclidean distance in (V, m, h, n) space from the fixed point (FP) to the stable limit cycle as two spikes occur. (C) The thick line is the distance to the stable limit cycle in the three-dimensional Q-space (V, Q_Na, Q_K), where Q_Na = m³h N_Na and Q_K = n⁴ N_K. The dotted line is the distance in Q-space from the FP to the Q-space trajectory whose V-component is shown dotted in A. This trace has been shifted up. The arrow-headed line indicates an interval of over 6 ms where the distance in Q-space is under 20. The origin for time coincides with the peak of the first spike. The minimum distance in Q-space from the FP to the dotted trajectory is 3.24.
V_0. In Q-space, the range of V is negligible relative to the Q_Na and Q_K ranges (see Figure 13), and quite different locations of the minimal distance are found. The minimum distance is 6.8 at 9 ms into the 18 ms spike cycle, with another minimum of 8.5 channel steps at 14 ms; moreover, the Q-distance remains below 20 between these minima. Overall this gives a 6 ms interval with Q-distance always under 20. Most switches between SLC and SFP occur in this interval; consequently, the approximate location of the switching region is known.
[Figure 13 graphic: panels A-C, Q_Na versus V, Q_K versus V, and Q_K versus Q_Na; panels D-F, expanded views near the fixed point.]
Figure 13: Comparison of deterministic and stochastic trajectories in Q-space. The thick gray line shows the deterministic LC in (V, Q_Na, Q_K) phase space when I = 7 µA/cm², and the thin black line is a path from the stochastic HH model. The area was 400 µm², so N_Na = 24,000 and N_K = 7200. The fixed point, shown by a circled cross, is visible in the expanded views D, E, and F. In the (V, Q_Na) plane, D, the fixed point appears to be on the limit cycle, not inside, but this is merely due to the orthogonal projection being used. The stochastic trajectory used here is an extension of the one shown in Figure 12.
Figures 12A and 12C show the voltage and the distance to the SFP,
respectively, for two spikes generated by the stochastic HH model. Most of
the jitter occurs in the arrow-headed interval, as expected, and the trajectory
comes within 3.24 of the SFP, half the minimum distance of the deterministic
trajectory. It is not trapped in the SFP vicinity but instead continues to
cycle.
4.2 ISI and Channel Noise Statistics. Figure 5 shows how the overall mean and CV of the ISIs vary with current and membrane area. Since there is no geometry associated with the model, an increase in area simply means an increase in the number of channels, which in turn means the channel noise has a lower standard deviation and higher frequency (see Figure 14). For I > I_1, the noise-free dynamics is continuous spiking, so smaller-amplitude noise means a smaller probability of leaving the SLC or of spending any significant time near the weakly unstable FP. Since the intervals between continuous spiking (ICSs) become fewer and shorter, the CV quickly goes to zero as area increases. In contrast, as I decreases below I_1, the ICSs can become arbitrarily long, as can be seen when the area is high (A = 3000), resulting in very high CV. For I < I_ν, the noise-free dynamics rests at the SFP, so the CV curves reflect the introduction of single spikes into a primarily nonspiking system. For case A, I_ν < I < I_1, the two opposing tendencies result in CV curves whose maxima increase with increased area (reduced noise amplitude, increased noise frequency). From Figure 5B, it is tempting to speculate that as the noise amplitude tends to zero, the CV curve tends toward a delta function centered in the region of the bifurcation diagram where chaos and an undefinable threshold occur. This is not the case. The probability density
ρ(t) = α δ(t) + (1 − α) λ exp(−λt)   (4.1)

with λ > 0 and 0 < α < 1, where δ(t) is the Dirac delta, has a coefficient of variation given by [(1 + α)/(1 − α)]^{1/2}, which is always larger than one and can become arbitrarily large when α approaches 1 (Wilbur & Rinzel, 1983). If we replace the initial gaussian peaks in the ISI histograms by tall Dirac delta functions, the probability density of equation 4.1 is a rough match to the histograms. So it is to be expected that for some combinations of area and current, the ISI histograms will have arbitrarily large CV.
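For completeness, a short check of this coefficient of variation from the first two moments of density 4.1 (a standard calculation, not reproduced from the paper):

```latex
\[
\langle T \rangle = \frac{1-\alpha}{\lambda}, \qquad
\langle T^{2} \rangle = \frac{2(1-\alpha)}{\lambda^{2}}, \qquad
\operatorname{Var}(T) = \langle T^{2}\rangle - \langle T\rangle^{2}
                      = \frac{(1-\alpha)(1+\alpha)}{\lambda^{2}},
\]
\[
\mathrm{CV} \;=\; \frac{\sqrt{\operatorname{Var}(T)}}{\langle T \rangle}
            \;=\; \left( \frac{1+\alpha}{1-\alpha} \right)^{1/2} \;\ge\; 1 .
\]
```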
4.3 Characteristics of Channel Noise. It is difficult to give a concise
description of channel noise occurring in a spiking neuron. First, the time
between changes in the number of open channels varies by nearly four
orders of magnitude when area = 400 µm², from a minimum of less than 10⁻⁶ ms at the action potential peak to a maximum of about 0.06 ms, 3
to 5 ms after the deepest hyperpolarization. Second, the size of the Na-
current passing when a single channel opens or closes varies according
to the distance from the Na+-reversal potential. Thus, the magnitude of
a single Na-channel current varies by a factor of six. It is smallest at the
[Figure 14 graphic: panels A and B, CV and frequency of the Na and K activation versus area (µm²); panels C-F, frequency, mean activation, standard deviation, and CV versus V (mV).]
Figure 14: Channel noise statistics when the stochastic HH model is held in voltage clamp. In A and B, the voltage was fixed at −60.78 mV (the steady-state value when I = 7 µA/cm²), and the CV and frequency of the Na- and K-activation are plotted as a function of area. The data in A are fitted with curves of the form a/√Area − b. In C, D, E, F, the area = 400 µm², while the potential varies from −75 to +31 mV. (C) Frequency. (D) Mean activation. (E) Standard deviation. (F) CV. In D the mean Na-activation is magnified ×100. In E, the standard deviation of Na-activation is magnified ×10.
peak and largest at the lowest potentials. Similar considerations apply to
K-current.
We sidestepped the issue of characterizing channel noise in a spike cycle by adopting an experimental technique: the voltage clamp. For each membrane area, we clamped the neuron at −60.78 mV, the steady-state potential for I = 7 µA/cm², and then recorded the Na- and K-activations and computed their coefficient of variation (CV) and frequency. Figures 14A and 14B show the CV and frequency as a function of area—hence, of N_Na or of N_K. As expected, the CV decreases as the inverse square root of the area, and the frequency increases linearly. In Figures 14C to 14F, the area is fixed, and the potential ranges from −75 to +31 mV. Figure 14C shows the frequency, while Figure 14D plots the mean activations, which exactly match the deterministic activations: m³h for the Na- and n⁴ for the K-current. The curves in Figures 14C, 14D, and 14F depend only on the rate functions {α_x, β_x : x = m, h, n} in equations 2.4 to 2.6, but the relationship remains to be elucidated.
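This inverse-square-root dependence is what a simple binomial argument predicts, assuming the channels are independent at the clamped voltage (a sketch, not an analysis from the paper):

```latex
\[
Q_K \sim \mathrm{Binomial}(N_K,\, p), \quad p = n_\infty(V)^{4}
\;\Longrightarrow\;
\mathrm{CV}\!\left(\tfrac{Q_K}{N_K}\right)
  = \sqrt{\frac{1-p}{p\,N_K}}
  \;\propto\; \frac{1}{\sqrt{N_K}}
  = \frac{1}{\sqrt{\rho_K \cdot \mathrm{Area}}},
\]
\[
\text{and likewise for the Na activation with } p = m_\infty(V)^{3}\, h_\infty(V).
\]
```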
4.4 Oscillatory Histograms. In Figure 1A, there is a small subsidiary peak after the initial peak, which is seen in all ISI histograms we have examined. In some cases, a third, poorly defined peak can be discerned, and
when large numbers of channels are used, as in Figure 15, at least three sub-
sidiary peaks can be seen (but note the logarithmic scale in Figure 15B used
to make these very small peaks visible). On a gross scale, these subsidiary
peaks are simply part of the exponential tail. On a finer scale, they stand
out and can be understood in terms of our proposed mechanism. The initial
peak arises from bursts of consecutive traversals of the SLC. The second
peak arises when the trajectory leaves the SLC vicinity after one traversal
and spirals round in the vicinity of the FP for one cycle, and as the next
cycle starts, it switches back to the SLC vicinity, thus creating an ISI roughly
twice as long as the period of the SLC. Compared to the first two peaks,
the probability of a second spike occurring at time T=1.5×(SLC period)
is small; this results in the significant dip between the first two peaks. The
fact that this dip is not down to zero arises from the noise-induced phase
randomization that presumably increases with time, thus causing the sub-
sidiary peaks to be smoothed out as the time since the last spike increases.
The Na channel numbers in Figure 15 are 60,000, 120,000, and 180,000. The
CV for A =3000 is 2.8.
4.5 Gaussian Noise Has the Same Effect as Channel Noise. We in-
tegrated the deterministic HH equations with a gaussian noise term in
equation 2.1. In Figure 16, the histogram obtained with gaussian noise
and the histogram obtained with channel noise in Figure 1, both with I =
3 µA/cm², are plotted together. This shows that the effects of channel noise
and gaussian noise are essentially indistinguishable.
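A sketch of this comparison run (not the paper's code): the deterministic HH equations of section 2.1 integrated with a forward Euler step, with a gaussian noise term added to the current balance. The standard deviation 0.1 and time step 0.01 ms are taken from the caption of Figure 16; how that standard deviation maps onto the per-step noise increment is an assumption here, and the initial conditions are illustrative.

```python
# Minimal sketch: deterministic HH (Table 1 parameters) plus a gaussian noise
# term in the current-balance equation 2.1, integrated by forward Euler.
import numpy as np

C, gNa, gK, gL = 1.0, 120.0, 36.0, 0.3
VNa, VK, VL = 50.0, -77.0, -54.4

def a_m(V): return 0.1 * (V + 40) / (1 - np.exp(-(V + 40) / 10))
def b_m(V): return 4.0 * np.exp(-(V + 65) / 18)
def a_h(V): return 0.07 * np.exp(-(V + 65) / 20)
def b_h(V): return 1.0 / (1 + np.exp(-(V + 35) / 10))
def a_n(V): return 0.01 * (V + 55) / (1 - np.exp(-(V + 55) / 10))
def b_n(V): return 0.125 * np.exp(-(V + 65) / 80)

def run(I=3.0, sigma=0.1, dt=0.01, t_end=1000.0, seed=0):
    rng = np.random.default_rng(seed)
    V, m, h, n = -65.0, 0.05, 0.60, 0.32
    Vs = np.empty(int(t_end / dt))
    for k in range(Vs.size):
        I_ion = gNa * m**3 * h * (V - VNa) + gK * n**4 * (V - VK) + gL * (V - VL)
        xi = sigma * rng.standard_normal()   # gaussian noise current, SD sigma per step (assumed convention)
        V += dt * (I + xi - I_ion) / C
        m += dt * (a_m(V) * (1 - m) - b_m(V) * m)
        h += dt * (a_h(V) * (1 - h) - b_h(V) * h)
        n += dt * (a_n(V) * (1 - n) - b_n(V) * n)
        Vs[k] = V
    return Vs

V_trace = run()
crossings = np.flatnonzero(np.diff((V_trace > 0.0).astype(int)) == 1)  # upward 0 mV crossings
isis = np.diff(crossings) * 0.01                                       # ISIs in ms, for the histogram
```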
[Figure 15 graphic: probability density versus ISI (ms) for areas 1000, 2000, and 3000 µm², on linear (A) and logarithmic (B) scales.]
Figure 15: ISI histograms for large areas 1000, 2000, and 3000 µm², shown on a linear scale in A and a logarithmic scale in B.
Figure 16: An ISI histogram nearly identical to the histogram of Figure 1A is
obtained when gaussian noise is added to the HH current balance equation, 2.1.
Noise with standard deviation 0.1 was used, with integration time step 0.01 ms.
This histogram curve, in gray, is overlaid on the curve from Figure 1A, in black.
The inset graph shows the gray curve again on a log scale.
4.6 On the Existence of Ruts. Every SLC has a neighborhood—that is,
an annulus in a 2D system or a tube in a 3D system—where the vector field
transverse to the SLC is contracting everywhere, with different strengths at
different phases. A ULC has a neighborhood where the local vector field at
each point of the ULC is expanding everywhere. When a bifurcation occurs
and the SLC no longer exists, this means that somewhere along the SLC,
the width of the neighborhood of contraction has shrunk to zero. In the
geographical view of a dynamical system that depends continuously on parameters—in which an SFP is a hollow and a UFP is a rounded peak—there is a "valley" at every point of an SLC, but as the parameter I passes through a bifurcation value, at some point of the SLC the valley opens out and becomes flat. Except in a completely symmetric system, which is unlikely to occur in applications, there will still be regions close to where the SLC used to be, with transverse contracting neighborhoods still in existence. These are the ruts. In this argument, we appeal only to the continuous dependence of the vector field on the parameter I and to the disappearance of an SLC, not to the width of the interval [I_ν, I_1]. The sudden disappearance of an SLC as I decreases is sufficient to ensure the existence of ruts, independent of the width of the region with two or more attractors.
5 Discussion
We have shown that the stochastic HH model generates a bimodal ISI distri-
bution and described in qualitative terms the underlying dynamical mech-
anism. Although the description is based on a two-dimensional model, our
extensive modeling data suggest that the same dynamical mechanism will
be present in more complex models, provided a subcritical Hopf bifurcation
and a switching region are present. More formally, we conjecture that a bi-
modal ISI distribution is generated in any noisy system that passes through
a subcritical Hopf bifurcation with a switching region. The noise does not
need to be channel noise; gaussian noise will suffice (see Figure 16).
Our simulations have been done with the stochastic HH equations whose
parameters are based on squid, a cold-blooded creature. The dynamical
mechanisms described are quite general and can be expected to apply to any class II neuron, not just the HH model or squid axon. Thus, any such neuron will have a bimodal ISI distribution in the presence of channel or synaptic noise. In particular, this should apply to all class II cortical cells.
Experimentally, bimodal ISI histograms are common. They have been
observed in many mammalian neurons in spontaneous behavior or in the
presence of stimuli: spontaneous activity of a ventro-basal neuron during
sleep (Nakahama et al., 1968), ventrolateral thalamic neurons in slow-wave
and fast-wave sleep (Lamarre et al., 1971), auditory nerve fiber response
to white noise (Ruggero, 1973), adult rat primary somatosensory cortex
(Armstrong-James, 1975), pyramidal cells in rabbit (Bassant, 1976), cat
auditory cortex awake in light (Burns & Webb, 1976) and either spon-
taneous or in response to clicks (Gerstein & Kiang 1960; Kiang, 1965),
cultured hippocampal cells (Siebler et al., 1993), and rat olfactory recep-
tor neurons (Duchamp-Viret et al., 2005). Recordings that look very sim-
ilar to the traces in Figure 4, but without accompanying ISI histograms,
have been observed in cortical fast-spiking cells (Cauli, Audinat, Lambolez, Angulo, Ropert, Tsuzuki, Hestrin, & Rossier, 1997), stellate cells in medial entorhinal cortex (Klink & Alonso, 1993), and in stuttering and irregular-spiking inhibitory cortical interneurons (Markram et al., 2004). In all these observations, the initial gaussian peak is, or would be if the histogram were constructed, in the range of 10 to 30 ms, roughly in the gamma-frequency range.
in thalamic somatic sensory neurons (Poggio & Viernstein 1964). The spon-
taneous activity of frog olfactory neurons generates similar bimodal his-
tograms on a much slower timescale (3–5 Hz) (Rospars, Lansky, Vaillant,
Duchamp-Viret, & Duchamp, 1994), while a motion-sensitive neuron in
fly generates a bimodal histogram on a much faster timescale (150 Hz)
(de Ruyter van Steveninck et al. 1997).
Guttman and Barnhill (1970), using space-clamped squid axon treated
with low calcium, observed “skip runs,” which look very similar to our
model traces (see Figure 4) but were unable to explain them by modeling.
Guttman, Lewis, & Rinzel (1980), again with squid axon bathed in low cal-
cium, showed the coexistence of two states in the squid axon—repetitive
firing and subthreshold oscillations—and used small, well-chosen pertur-
bations to annihilate repetitive firing in both the axon and model. This
confirms the region of bistability in the HH bifurcation diagram (see
Figure 6).
Gerstein and Mandelbrot (1964) used a random walk model to generate ISI histograms with a nonsimple-exponential tail that fits data from several types of neurons, but not from those under investigation here, which generate bimodal ISI histograms.
Figure 17 shows an ISI histogram that closely matches the form of
the ISI histogram in Figure 1A. It was recorded by Kiang (1965) from
an auditory nerve fiber in anesthetized cat and has a timescale approxi-
mately eight times faster than the squid axon’s. Note that the characteristic
dip in the histogram between initial peak and exponential tail is clearly
visible.
On the one hand, it appears from the literature that bimodal ISI histograms occur in many places in the nervous system; on the other hand, we have shown that with the introduction of a source of stochasticity, the HH equations generate bimodal ISI histograms of the same form, that is, a high initial peak followed by an exponential tail.
It seems natural, then, to suggest that the stochastic HH model could
be used to model the firing properties of the spike initiation zone of
other classes of neurons as well as the squid axon. Obviously, the HH
[Figure 17 graphic: ISI count versus ISI length (msec).]
Figure 17: An ISI histogram recorded from spontaneous firing of an auditory
nerve fiber in cat. (Reproduced, with permission, from Kiang 1965.) Qualita-
tively, this has the same form as the histograms generated by the HH model
(see Figure 1). However, the frequency associated with the initial peak is much
higher—approximately 300 to 500 Hz.
parameters would need considerable changes to generate spikes on the
timescales of the frog olfactory neurons or motion-sensitive neurons
in fly.
In related work, Yu and Lewis (1989) showed that the response of the HH
model could be linearized by the presence of noise: using high-amplitude
noise, they showed that the simulation data in Figure 3 linearly extend to
near zero frequency. Tateno and Pakdaman (2004) investigated the noisy
class II Morris-Lecar model and described the phase-space distribution
in the same regimes we investigated; however, ISI distributions were not
studied and are not easily deduced from the phase-space distributions. Lee,
Neiman, & Kim (1998) studied coherence resonance in the HH model with
noisy synaptic inputs and computed a coherence measure from the gaussian
peak of the ISI histogram. Tiesenga, Jose, and Sejnowski (2000) investigated,
in the deterministic HH model, the effect of different kinds of input noise on
the CV and other measures of the output spike train. The work presented
here, on the other hand, investigates the effect of the intrinsic neuronal
channel noise on the distribution of the output ISIs. Analysis of stochastic
dynamical systems (Berglund & Gentz, 2003; Su, Rubin, & Terman, 2004)
can give strict estimates of, for example, the time spent in the neighborhood
of the unstable fixed point before switching back to the SLC as in our Hopf
case B, but the statement of such an estimate would take us too far afield. In
an article to which this one could be considered an extension, Schneidman,
Freedman, and Segev (1998) showed that a stochastic HH model qualita-
tively reproduced the different reliability and precision characteristics of
spike firing in response to DC and fluctuating input in neocortical neu-
rons found by Mainen and Sejnowski (1995). ISI histograms were not
examined.
In an approximate analysis, Chow and White (1996) derived an exponen-
tial distribution for the ISIs generated by channel noise in the subthreshold
HH equations. Their result bolstered the common practice in modeling
studies of assuming an exponential distribution for spontaneous ISIs.
Our work was initiated when we repeated their simulations and, using
much longer ISI time series, found the initial peak shown in the inset in
Figure 1A—the case they studied. It is hoped that modelers of neural net-
works will take into account the often bimodal character of the ISI distribu-
tion.
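To illustrate why the exponential assumption can be misleading, the following toy calculation (purely illustrative numbers, not fitted to the HH simulations) draws ISIs from a mixture of a narrow gaussian burst peak and an exponential interburst tail and shows that the coefficient of variation can greatly exceed the value of 1 that a pure exponential distribution would impose:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy bimodal ISI distribution: a narrow gaussian "burst" peak plus an
# exponential interburst tail. All parameter values are illustrative.
n = 100_000
p_burst = 0.7                       # fraction of intervals inside bursts
burst = rng.normal(15.0, 1.5, n)    # gaussian peak near 15 ms (gamma range)
tail = rng.exponential(400.0, n)    # long interburst intervals (ms)
isi = np.where(rng.random(n) < p_burst, burst, tail)

cv = isi.std() / isi.mean()
print(f"mean ISI = {isi.mean():.1f} ms, CV = {cv:.2f}")   # CV is well above 1
# A single-exponential fit forces CV = 1 and misses the initial peak entirely.
```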
Are the channel noise levels used in this study realistic in any sense?
Typically, the conductance, γNa, of single biological Na channels lies be-
tween 2 and 20 pS (Hille, 2001). The population conductance of our
model neuron, gNa, was 120 mS/cm^2, implying a channel density of 10^9
to 10^10 per cm^2. Assuming that a spike initiation zone might occupy 10^3 to
10^4 µm^2 (10^-5 to 10^-4 cm^2), one comes to a projected Na channel number,
NNa ≈ 10^4 to 10^6, whereas the maximum number used in our simulations
was 180,000 = 1.8 × 10^5. These are, of course, crude approximations. They
serve only to argue that our noise levels are not necessarily unreasonable.
In many biological neurons, the actual number of Na and K channels occu-
pying the site of spike initiation remains unknown.
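Written out, the back-of-envelope estimate above is simply

$$
N_{\mathrm{Na}} \;\approx\; \frac{\bar g_{\mathrm{Na}}}{\gamma_{\mathrm{Na}}} \times A
\;\approx\; \frac{120\ \mathrm{mS/cm^{2}}}{2\text{--}20\ \mathrm{pS}}
\times \bigl(10^{-5}\text{--}10^{-4}\ \mathrm{cm^{2}}\bigr)
\;\approx\; 10^{4}\text{--}10^{6},
$$

which brackets the 1.8 × 10^5 channels used in the largest simulations.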
6 Summary
We have shown by simulation that the stochastic HH model generates bi-
modal ISI histograms with a prominent initial peak in the gamma-frequency
range and often has a high (greater than 1) coefficient of variation in phys-
iological ranges. We introduced a stochastic version of the Morris-Lecar
model that uses perhaps the simplest possible kind of channel noise, which
may prove useful in the analysis of channel noise and its effects. We used
this Morris-Lecar model to give a simple description of the dynamical
mechanisms underlying the generation of bimodal ISI histograms and
gave numerical evidence that the dynamical mechanism is the same in the
Morris-Lecar and HH stochastic models. We conjecture that the same mech-
anism arises in any neuron model, however complex, provided only that it
undergoes a subcritical Hopf bifurcation—is class II—and has a switching
region.
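As a minimal illustration of this mechanism (not the channel-noise formulation used in this article), the following sketch integrates the class II Morris-Lecar equations with simple additive current noise in the bistable regime; the applied current, noise amplitude, and simulation settings are assumptions chosen only to exhibit noise-driven switching between subthreshold fluctuation and spiking.

```python
import numpy as np

# Generic noisy Morris-Lecar sketch (Euler-Maruyama, additive current noise).
# This is NOT the channel-noise formulation of the article. Parameters are the
# standard class II (Hopf) values of Rinzel & Ermentrout (1998); the applied
# current I_app, noise amplitude sigma, and time step are illustrative, and
# sigma may need adjustment to obtain switching between rest and firing.
C, gCa, gK, gL = 20.0, 4.4, 8.0, 2.0            # uF/cm^2, mS/cm^2
VCa, VK, VL = 120.0, -84.0, -60.0               # mV
V1, V2, V3, V4, phi = -1.2, 18.0, 2.0, 30.0, 0.04
I_app, sigma = 92.0, 20.0                       # I chosen inside the (approximate) bistable range

def m_inf(V): return 0.5 * (1.0 + np.tanh((V - V1) / V2))
def w_inf(V): return 0.5 * (1.0 + np.tanh((V - V3) / V4))
def tau_w(V): return 1.0 / (phi * np.cosh((V - V3) / (2.0 * V4)))

def simulate(T=60_000.0, dt=0.1, seed=0):
    rng = np.random.default_rng(seed)
    V, w, above, spikes = -60.0, w_inf(-60.0), False, []
    for k in range(int(T / dt)):
        I_ion = gCa * m_inf(V) * (VCa - V) + gK * w * (VK - V) + gL * (VL - V)
        V += dt * (I_ion + I_app) / C + (sigma / C) * np.sqrt(dt) * rng.standard_normal()
        w += dt * (w_inf(V) - w) / tau_w(V)
        if V > 0.0 and not above:        # upward crossing of 0 mV = one spike
            spikes.append(k * dt)
            above = True
        elif V < -10.0:
            above = False
    return np.diff(spikes)

isi = simulate()
if isi.size > 1:
    print(f"{isi.size + 1} spikes, CV of ISIs = {isi.std() / isi.mean():.2f}")
```

With a suitable noise amplitude, the resulting ISIs separate into short within-burst intervals and long interburst intervals, giving the bimodal form discussed above.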
We pointed out that class II neurons and neurons with bimodal ISI his-
tograms are widespread in the nervous system, in particular in cortex and
sensory systems, and suggested that the same underlying dynamics may
be present. The high CV often associated with bimodal histograms may
contribute to the highly irregular firing of cortical cells (Softky & Koch,
1993), in particular to the firing of the stuttering and irregular-spiking classes
of inhibitory cortical interneurons (Markram et al., 2004). Although we
worked here with channel noise, the same properties of the ISI distribution
arise with gaussian noise. We conclude that the properties of the stochastic
HH model make it a prime candidate for models of mammalian neural
networks.
Acknowledgments
I thank Robert Elson for advice at an early stage, Javier Movellan for the use
of his cluster of Mac G5s, Jochen Triesch for a crucial early comment, Terry
Sejnowski for encouragement, and two referees for constructive criticism.
Above all, I thank my partner, Nona, for her unflagging support and keeping
it all together.
References
Armstrong-James, M. (1975). Functional status and columnar organization of single
cells responding to cutaneous stimulation in neonatal rat somatosensory cortex
S1. J. Physiol., 246, 501–538.
Azouz, R., & Gray, C. (2000). Dynamic spike threshold reveals a mechanism for
synaptic coincidence detection in cortical neurons in vivo. Proc. Natl. Acad. Sci.,
97(14), 8110–8115.
Bassant, M. (1976). Analyse statistique de l’activité des cellules pyramidales de
l’hippocampe dorsal du lapin. Electroencephal. Clinical Neurophysiology, 40, 585–
603.
Berglund, N., & Gentz, B. (2003). Geometric singular perturbation theory for stochas-
tic differential equations. J. Differential Equations, 191, 1–54.
Best, E. (1979). Null space in the Hodgkin-Huxley equations. Biophysical J., 27, 87–104.
Brette, R. (2003). Reliability of spike timing is a general property of spiking model
neurons. Neural Computation, 15, 279–308.
Burns, B., & Webb, A. (1976). The spontaneous activity of neurones in the cat’s
cerebral cortex. Proc.R.Soc.Lond.B,194, 211–223.
Cauli, B., Audinat, E., Lambolez, B., Angulo, M. C., Ropert, N., Tsuzuki, K., Hes-
trin, S., & Rossier, J. (1997). Molecular and physiological diversity of cortical
nonpyramidal cells. J. Neuroscience, 17(10), 3894–3906.
Chow, C. C., & White, J. A. (1996). Spontaneous action potentials due to channel
fluctuations. Biophysical J., 71, 3013–3021.
de Ruyter van Steveninck, R., Lewen, G., Strong, S. P., Koberle, R., & Bialek, W.
(1997). Reproducibility and variability in neural spike trains. Science, 275, 1805–
1808.
Dean, A. (1981). The variability of discharge of simple cells in the cat striate cortex.
Experimental Brain Research, 44, 437–440.
Destexhe, A., Rudolph, M., Fellous, J., & Sejnowski, T. J. (2001). Fluctuating synaptic
conductances recreate in vivo–like activity in neocortical neurons. Neuroscience,
107(1), 13–24.
Dorval, A., & White, J. (2005). Channel noise is essential for perithreshold oscillations
in entorhinal stellate neurons. J. Neurosci., 25(43), 10025–10028.
Duchamp-Viret, P., Kostal, L., Chaput, M., Lansky, P., & Rospars, J.-P. (2005). Patterns
of spontaneous activity in single rat olfactory receptor neurons are different in
normally breathing and tracheotomized animals. J. Neurobiology, 65(2), 97–114.
Fitzhugh, R. (1961). Impulses and physiological states in models of nerve membrane.
Biophys. J., 1, 445–466.
Gerstein, G., & Kiang, N. Y.-S. (1960). Approach to the quantitative analysis of
electrophysiological data from single neurons. Biophys. J., 6, 15–28.
Gerstein, G., & Mandelbrot, B. (1964). Random walk models for the spike activity of
a single neuron. Biophys. J., 4, 41–68.
Gillespie, D. T. (1977). Exact stochastic simulation of coupled chemical reactions. J.
Physical Chemistry, 81(25), 2340–2361.
Guckenheimer, J., & Labouriau, I. S. (1993). Bifurcation of the Hodgkin-Huxley
equations: A new twist. Bull. Math. Biol., 55, 937–952.
Guckenheimer, J., & Oliva, R. A. (2002). Chaos in the Hodgkin-Huxley model. SIAM
J. Applied Dynamical Systems, 1(1), 105–114.
Gutkin, B. S., & Ermentrout, G. B. (1998). Dynamics of membrane excitability deter-
mine interspike interval variability: A link between spike generation mechanisms
and cortical spike train statistics. Neural Computation, 10(5), 1047–1065.
Guttman, R., & Barnhill, R. (1970). Oscillation and repetitive firing in squid axons.
Journal of General Physiology, 55, 104–118.
Guttman, R., Lewis, S., & Rinzel, J. (1980). Control of repetitive firing in squid axon
membrane as a model for a neuroneoscillator. J. Physiology, 305, 377–395.
Hassard, B. (1978). Bifurcation of periodic solutions of the Hodgkin-Huxley model
for the squid giant axon. J. Theor. Biology, 71, 401–420.
Hille, B. (2001). Ion channels of excitable membranes (3rd ed.). Sunderland, MA: Sinauer.
Hodgkin, A. (1948). The local electrical changes associated with repetitive action in
a non-medullated axon. J. Physiol., 107, 165–181.
Hodgkin, A., & Huxley, A. (1952). A quantitative description of membrane current
and its application to conduction and excitation in nerve. J. Physiol., 117, 500–544.
Holt, G., Softky, W., Koch, C., & Douglas, R. J. (1996). Comparison of discharge
variability in vitro and in vivo in cat visual cortex neurons. J. Neurophysiol., 75(5),
1806–1814.
Kiang, N. Y.-S. (1965). Discharge patterns of single fibers in the cat’s auditory nerve.
Cambridge, MA: MIT Press.
Klink, R., & Alonso, A. (1993). Ionic mechanisms for the subthreshold oscillations and
differential electroresponsiveness of medial entorhinal cortex layer II neurons.
J. Neurophysiol., 70(1), 144–157.
Kole, M., Hallermann, S., & Stuart, G. (2006). Single Ih channels in pyramidal neu-
ron dendrites: Properties, distribution, and impact on action potential output.
J. Neurosci., 26(6), 1677–1687.
Labouriau, I. S. (1985). Degenerate Hopf bifurcation and nerve impulse. SIAM
J. Math. Anal., 16(6), 1121–1133.
Lamarre, Y., Filion, M., & Cordeau, J. P. (1971). Neuronal discharges of the ventral
nucleus of the thalamus during sleep and wakefulness in the cat. I. Spontaneous
activity. Experimental Brain Research, 12, 480–498.
Lee, S.-G., Neiman, A., & Kim, S. (1998). Coherence resonance in a Hodgkin-Huxley
neuron. Physical Review E, 57(3), 3292–3297.
Lindner, B., Garcia-Ojalvo, J., Neiman, A., & Schimansky-Geier, L. (2004). Effects of
noise in excitable systems. Physics Reports, 392(6), 321–424.
Lynch, J., & Barry, P. (1989). Action potentials initiated by single channels
opening in a small neuron (rat olfactory receptor). Biophys. J., 55, 755–
768.
Mainen, Z., & Sejnowski, T. J. (1995). Reliability of spike timing in neocortical neu-
rons. Science, 268, 1503–1506.
Mann-Metzer, P., & Yarom, Y. (2002). Jittery trains induced by synaptic-like currents
in cerebellar inhibitory interneurons. J. Neurophysiol., 87, 149–156.
Markram, H., Toledo-Rodriguez, M., Wang, Y., Gupta, A., Silberberg, G., & Wu, C.
(2004). Interneurons of the neocortical inhibitory system. Nature Reviews Neu-
roscience, 5, 793–807.
Matsumoto, M., & Nishimura, T. (1998). Mersenne Twister: A 623-dimensionally
equidistributed uniform pseudorandom number generator. ACM Trans. on Mod-
eling and Computer Simulation, 8(1), 3–30.
McDonald, S. W., Grebogi, C., Ott, E., & Yorke, J. A. (1985). Fractal basin boundaries.
Physica D, 17, 125–153.
Meunier, C., & Segev, I. (2002). Playing the devil’s advocate: Is the Hodgkin-Huxley
model useful? TINS, 25(11), 558–563.
Morris, C., & Lecar, H. (1981). Voltage oscillations in the barnacle giant muscle fiber.
Biophys. J., 35, 193–213.
Nakahama, H., Suzuki, H., Yamamoto, M., & Aikawa, S. (1968). A statistical analysis
of spontaneous activity of central single neurons. Physiology and Behavior, 3(5),
745–752.
Noda, H., & Adey, R. (1970). Firing variability in cat association cortex during sleep
and wakefulness. Brain Research, 18, 513–526.
Pfeiffer, R., & Kiang, N. Y.-S. (1965). Spike discharge patterns of spontaneous and
continuously stimulated activity in the cochlear nucleus of anaesthetized cats.
Biophysical Journal, 5, 301–316.
Poggio, G., & Viernstein, J. (1964). Time series analysis of impulse sequences of
thalamic somatic sensory neurons. J. Neurophysiol., 27, 517–545.
Rinzel, J., & Ermentrout, G. B. (1998). Analysis of neural excitability and oscillations.
In C. Koch & I. Segev (Eds.), Methods in neuronal modeling (pp. 251–291). Cam-
bridge, MA: MIT Press.
Rinzel, J., & Miller, R. (1980). Numerical calculation of stable and unstable periodic
solutions to the Hodgkin-Huxley equations. Mathematical Biosciences, 49, 27–59.
Rospars, J.-P., Lansky, P., Vaillant, J., Duchamp-Viret, P., & Duchamp, A. (1994).
Spontaneous activity of first- and second-order neurons in the frog olfactory
system. Brain Research, 662, 31–44.
Rowat, P., & Elson, R. (2004). State-dependent effects of Na-channel noise on neuronal
burst generation. J. Comput. Neuroscience, 16, 87–112.
Rudolph, M., & Destexhe, A. (2002). Point-conductance models of cortical neurons
with high discharge variability. Neurocomputing, 44–46, 147–152.
Ruggero, M. (1973). Response to noise of auditory nerve fibers in the squirrel monkey.
J. Neurophysiol., 36, 569–587.
Schneidman, E., Freedman, B., & Segev, I. (1998). Ion channel stochasticity may
be critical in determining the reliability and precision of spike timing. Neural
Computation, 10(7), 1679–1703.
Shadlen, M., & Newsome, W. (1998). The variable discharge of cortical neurons: Im-
plications for connectivity, computation, and information coding. J. Neuroscience,
18(10), 3870–3896.
Siebler, M., Koller, H., Stichel, C. C., Muller, H. W., & Freund, H. J. (1993). Spontaneous
activity and recurrent inhibition in cultured hippocampal networks. Synapse, 14,
206–213.
Sigworth, F. (1980). The variance of sodium current fluctuation at the node of Ranvier.
J. Physiol., 307, 97–129.
Softky, W., & Koch, K. (1993). The highly irregular firing of cortical cells is incon-
sistent with temporal integration of random EPSPs. J. Neuroscience, 13(1), 334–
350.
Stein, R. (1965). A theoretical analysis of neuronal variability. Biophys. J., 5, 173–194.
Stevens, C. F., & Zador, A. M. (1998). Input synchrony and the irregular firing of
cortical neurons. Nature Neuroscience, 1(3), 210–217.
Stiefel, K. M., Englitz, B., & Sejnowski, T. J. (2006). The irregular firing of cortical interneu-
rons in vitro is due to fast K+-kinetics. Manuscript in preparation.
Su, J., Rubin, J., & Terman, D. (2004). Effects of noise on elliptic bursters. Nonlinearity,
17, 133–157.
Tateno, T., Harsch, A., & Robinson, H. P. C. (2004). Threshold firing frequency-current
relationships of neurons in rat somatosensory cortex: Type 1 and type 2 dynamics.
J. Neurophysiol., 92, 2283–2294.
Tateno, T., & Pakdaman, K. (2004). Random dynamics of the Morris-Lecar model.
Chaos, 14(3), 511–530.
Tiesenga, P., Jose, J., & Sejnowski, T. J. (2000). Comparison of current-driven and
conductance-driven neocortical model neurons with Hodgkin-Huxley voltage-
gated channels. Phys. Review E, 62(6), 8413–8419.
Troy, W. C. (1978). The bifurcation of periodic solutions in the Hodgkin-Huxley
equations. Quarterly Journal of Applied Mathematics, 36, 73–83.
Troyer, T., & Miller, K. (1997). Physiological gain leads to high ISI variability in a
simple model of a cortical regular spiking cell. Neural Computation, 9(5), 971–
983.
Werner, G., & Mountcastle, V. (1963). The variability of central neural activity in a
sensory system, and its implications for the central reflection of sensory events.
J. Neurophysiol., 26, 958–977.
White, J. A., Klink, R., Alonso, A., & Kay, A. P. (1998). Noise from voltage-gated ion
channels may influence neuronal dynamics in the entorhinal cortex. J. Neurophys-
iol., 80, 262–269.
White, J. A., Rubinstein, J. T., & Kay, A. P. (2000). Channel noise in neurons. Trends
in Neurosciences, 23(3), 131–137.
Whitsel, B., Schreiner, R., & Essick, G. K. (1977). An analysis of variability in so-
matosensory cortical neuron discharge. J. Neurophysiol., 40(3), 589–607.
Wilbur, W., & Rinzel, J. (1983). A theoretical basis for large coefficient of variation
and bimodality in neuronal interspike interval distribution. J. Theoretical Biology,
105, 345–368.
Yu, X., & Lewis, E. R. (1989). Studies with spike initiators: Linearization by noise
allows continuous signal modulation in neural networks. IEEE Trans. Biomedical
Eng., 36(1), 36–43.
Received February 1, 2006; accepted July 24, 2006.
Article
Inverse stochastic resonance (ISR) is a recently pronounced phenomenon that is the minimum occurrence in mean firing rate of a rhythmically firing neuron as noise level varies. Here, by using a realistic modeling approach for the noise, we investigate the ISR with concrete biophysical mechanisms. It is shown that mean firing rate of a single neuron subjected to synaptic bombardment exhibits a minimum as the spike transmission probability varies. We also demonstrate that the occurrence of ISR strongly depends on the synaptic input regime, where it is most prominent in the balanced state of excitatory and inhibitory inputs.