ARTICLE Communicated by Christal Gordon
Synaptic Dynamics in Analog VLSI
Chiara Bartolozzi
chiara@ini.phys.ethz.ch
Giacomo Indiveri
giacomo@ini.phys.ethz.ch
Institute for Neuroinformatics, UNI-ETH Zürich, Zürich, Switzerland

Neural Computation 19, 2581–2603 (2007) © 2007 Massachusetts Institute of Technology
Synapses are crucial elements for computation and information trans-
fer in both real and artificial neural systems. Recent experimental find-
ings and theoretical models of pulse-based neural networks suggest that
synaptic dynamics can play a crucial role for learning neural codes and
encoding spatiotemporal spike patterns. Within the context of hardware
implementations of pulse-based neural networks, several analog VLSI
circuits modeling synaptic functionality have been proposed. We present
an overview of previously proposed circuits and describe a novel analog
VLSI synaptic circuit suitable for integration in large VLSI spike-based
neural systems. The circuit proposed is based on a computational model
that fits the real postsynaptic currents with exponentials. We present ex-
perimental data showing how the circuit exhibits realistic dynamics and
show how it can be connected to additional modules for implementing a
wide range of synaptic properties.
1 Introduction
Synapses are highly specialized structures that, by means of complex chem-
ical reactions, allow neurons to transmit signals to other neurons. When an
action potential generated by a neuron reaches a presynaptic terminal, a
cascade of events leads to the release of neurotransmitters that give rise to
a flow of ionic currents into or out of the postsynaptic neuron’s membrane.
These excitatory or inhibitory postsynaptic currents (EPSC or IPSC, respec-
tively) have temporal dynamics with a characteristic time course that can
last up to several hundreds of milliseconds (Koch, 1999).
In computational models of neural systems, the temporal dynamics
of synaptic currents have often been neglected. In models that represent
the information with mean firing rates, synaptic transmission is typically
modeled as an instantaneous multiplier operator (Hertz, Krogh, & Palmer,
1991). Similarly, in pulse-based neural models, where the precise timing
of spikes and the dynamics of the neuron’s transfer function play an im-
portant role, synaptic currents are often reduced to simple instantaneous
charge impulses. Also in VLSI implementations of neural systems, silicon
synapses have often been reduced to simple multiplier circuits (Borgstrom,
Ismail, & Bibyk, 1990; Satyanarayana, Tsividis, & Graf, 1992) or constant cur-
rent sources activated only for the duration of the presynaptic input pulse
(Mead, 1989; Fusi, Annunziato, Badoni, Salamon, & Amit, 2000; Chicca,
Badoni, et al., 2003).
Within the context of pulse-based neural networks, modeling the de-
tailed dynamics of postsynaptic currents can be a crucial step for learn-
ing neural codes and encoding spatiotemporal patterns of spikes. Leaky
integrate-and-fire (I&F) neurons can distinguish between different tempo-
ral input spike patterns only if the synapses stimulated by the input spike
patterns exhibit dynamics with time constants comparable to the time con-
stant of the neuron's membrane potential (Gütig & Sompolinsky, 2006).
Modeling the temporal dynamics of each synapse in a network of I&F
neurons can be onerous in terms of CPU usage for software simulations
and in terms of silicon real estate for dedicated VLSI implementations. A
compromise between highly detailed models of synaptic dynamics and
no dynamics at all is to use computationally efficient models that account
for the basic properties of synaptic transmission. A very efficient model
that reproduces the macroscopic properties of synaptic transmission and
accounts for the linear summation property of postsynaptic currents is
the one based on pure exponentials proposed by Destexhe, Mainen, and
Sejnowski (1998). Here we propose a novel VLSI synaptic circuit, the diff-
pair integrator (DPI), that implements the model proposed in Destexhe et al.
(1998) as a log-domain linear temporal filter and supports a wide range of
synaptic properties, ranging from short-term depression to conductance-
based EPSC generation.
The design of the DPI synapse is inspired by a series of similar circuits
proposed in the literature that collectively share many of the advantages
of our solution but individually lack one or more of the features of our
design. In the next section, we present an overview of previously proposed
synaptic circuits and describe the DPI synapse pointing out the advantages
that the DPI offers over each of them. In section 3 we present experimental
data from a VLSI chip showing the properties of the circuit in response
to a single pulse and to sequences of spikes. In section 4 we show how
the DPI is compatible with additional circuits used to implement various
types of synaptic dynamics, and in section 5, we discuss possible uses of
the DPI circuit in massively parallel networks of I&F neurons implemented
on single or multichip neuromorphic systems.
2 Synaptic VLSI Circuits
Synaptic circuits translate presynaptic voltage pulses into postsynaptic cur-
rents injected in the membrane of their target neuron, with a gain typi-
cally referred to as the synaptic weight. The function of translating “fast”
presynaptic pulses into long-lasting postsynaptic currents, with elaborate
temporal dynamics, can be efficiently mapped onto silicon using subthresh-
old (or weak-inversion) analog VLSI (aVLSI) circuits (Liu et al., 2002). In
typical VLSI neural network architectures, the currents generated by mul-
tiple synapses are integrated by a single postsynaptic neuron circuit. The
neuron circuit carries out a weighted sum of the input signals, produces
postsynaptic potentials, and eventually generates output spikes that are
typically transmitted to synaptic circuits in further processing stages. A
common neuron model used in VLSI spike-based neural networks is the
point neuron. With this model, the spatial position of the synaptic circuits
connected to the neuron is not relevant, and the currents produced by the
synapses are summed linearly into the single neuron’s membrane capaci-
tance node. Alternatively, synaptic circuits (including the one presented in
this article) can be integrated in multicompartmental models of neurons,
and the neuron’s dendrite, comprising the spatial arrangement of VLSI
synapses connected to the neuron, implements the spatial summation of
synaptic currents (Northmore & Elias, 1998; Arthur & Boahen, 2004).
Regardless of the neuron model used, one of the main requirements for
synaptic circuits in large VLSI neural networks is compactness: the less
silicon area is used, the more synapses can be integrated on the chip. On
the other hand, implementing synaptic integrator circuits with linear re-
sponse properties and time constants of the order of tens of milliseconds
can require substantial silicon area. Therefore, designing VLSI synaptic cir-
cuits that are compact and linear and model relevant functional properties
of biological synapses is a challenging task still being actively pursued.
Several subthreshold synaptic circuit designs have been proposed (Mead,
1989; Lazzaro, 1994; Boahen, 1998; Fusi et al., 2000; Chicca, Indiveri, &
Douglas, 2003; Shi & Horiuchi, 2004a; Gordon, Farquhar, & Hasler, 2004;
Hynna & Boahen, 2006) covering a range of trade-offs between function-
ality and complexity of temporal dynamics versus circuit and layout size.
Some of the circuits proposed require floating-gate devices (Gordon et al.,
2004) or restrict the signals used to a very limited dynamic range (Hynna &
Boahen, 2006) to reproduce in great detail the physics of biological synaptic
channels. Here we focus on the synaptic circuits that implement kinetic
models of synaptic transmission functionally equivalent to the one imple-
mented by the DPI, which can be directly integrated into large arrays of
address-event-based neural networks (Lazzaro, 1994; Boahen, 1998).
2.1 Pulsed Current-Source Synapse. The pulsed current-source
synapse, originally proposed by Mead (1989) in the late 1980s, was one
of the first synaptic circuits implemented using transistors operated in the
subthreshold domain. The circuit schematics are shown in Figure 1 (left);
it consists of a voltage-controlled current source activated by an active-low
input spike. In VLSI pulsed neural networks, input spikes are typically brief
digital voltage pulses that last at most a few microseconds. The output of this circuit is a pulsed current $I_{syn}$ that lasts as long as the input spike.
Figure 1: (Left) Pulsed current-source synaptic circuit. (Right) Reset-and-discharge synapse.
Assuming that the output p-FET $M_w$ is saturated (i.e., that its $V_{ds}$ is greater than $4U_T$), the current $I_{syn}$ can be expressed as

$$I_{syn} = I_0\, e^{-\frac{\kappa}{U_T}\left(V_w - V_{dd}\right)}, \tag{2.1}$$

where $V_{dd}$ is the power supply voltage, $I_0$ the leakage current, $\kappa$ the subthreshold slope factor, and $U_T$ the thermal voltage (Liu et al., 2002).
This circuit is extremely compact but does not integrate input spikes into continuous output currents. Whenever a presynaptic spike reaches $M_{pre}$, the postsynaptic membrane potential undergoes a step increase proportional to $I_{syn}$. As integration happens only at the level of the postsynaptic I&F neuron, input spike trains with the same mean rates but different spike timing distributions cannot be distinguished. However, given its simplicity and compactness, this circuit has been used in a wide variety of VLSI implementations of pulse-based neural networks that use mean firing rates as the neural code (Murray, 1998; Fusi et al., 2000; Chicca, Badoni, et al., 2003).
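To make the timing insensitivity concrete, the following Python sketch (ours, not part of the original text; all device values are illustrative assumptions) evaluates equation 2.1 and shows that the charge this synapse delivers depends only on the spike count, never on the spike timing.

```python
import numpy as np

# Charge delivered by the pulsed current-source synapse depends only on the
# spike count, not on spike timing. All values are illustrative assumptions.
I0 = 1e-15       # leakage current (A)
kappa = 0.7      # subthreshold slope factor
UT = 0.025       # thermal voltage (V)
Vdd = 3.3        # supply voltage (V)
Vw = 2.8         # synaptic weight bias (V)
dt_pulse = 1e-6  # input pulse width (s)

# Equation 2.1: output current while the (active-low) spike is on
Isyn = I0 * np.exp(-kappa / UT * (Vw - Vdd))

# Two spike trains with the same mean rate but different timing
regular = np.arange(0.0, 0.1, 0.01)                        # 100 Hz, regular
jittered = regular + np.random.uniform(-4e-3, 4e-3, regular.size)

# The charge per spike is fixed, so only the spike count matters
for name, train in [("regular", regular), ("jittered", jittered)]:
    print(f"{name}: total charge = {Isyn * dt_pulse * train.size:.3e} C")
```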
2.2 Reset-and-Discharge Synapse. In the early 1990s, Lazzaro (1994) proposed a synaptic circuit where the duration of the output EPSC $I_{syn}(t)$ could be extended with respect to the input voltage pulse by means of a tunable exponential decay (see also Shi & Horiuchi, 2004b, for a recent application example). This circuit, shown in Figure 1 (right), comprises three p-FET transistors and one capacitor; the p-FET $M_{pre}$ is used as a digital switch that is turned on by the synapse's input spikes; the p-FET $M_\tau$ is operated in subthreshold and is used as a constant current source to linearly charge the capacitor $C_{syn}$; the output p-FET $M_{syn}$ is used to generate an EPSC that is exponentially dependent on the $V_{syn}$ node (assuming subthreshold operation and saturation):

$$I_{syn}(t) = I_0\, e^{-\frac{\kappa}{U_T}\left(V_{syn}(t) - V_{dd}\right)}. \tag{2.2}$$
At the onset of each presynaptic pulse, the node $V_{syn}$ is (re)set to the bias $V_w$. When the input pulse ends, the p-FET $M_{pre}$ is switched off, and the node $V_{syn}$ is linearly driven back to $V_{dd}$, at a rate set by $I_\tau / C_{syn}$. For subthreshold values of $(V_{dd} - V_w)$, the EPSC generated by an input spike is therefore

$$I_{syn} = I_{w0}\, e^{-\frac{t}{\tau}}, \tag{2.3}$$

where $I_{w0} = I_0\, e^{-\frac{\kappa}{U_T}\left(V_w - V_{dd}\right)}$ and $\tau = \frac{C_{syn} U_T}{\kappa I_\tau}$.
In general, given a generic spike sequence of $n$ spikes,

$$\rho(t) = \sum_i^n \delta(t - t_i), \tag{2.4}$$

the response of the reset-and-discharge synapse can be formally expressed as

$$I_{syn}(t) = I_{w0}\, e^{-\frac{t}{\tau}} \int_0^t \delta(\xi - t_n)\, e^{\frac{\xi}{\tau}}\, d\xi = I_{w0}\, e^{-\frac{(t - t_n)}{\tau}}. \tag{2.5}$$
Although this synaptic circuit produces an EPSC that lasts longer than the
duration of its input pulses and decays exponentially with time, its response
depends on only the last (nth) input spike. This nonlinear property of the
circuit fails to reproduce the linear summation property of postsynaptic
currents often desired in synaptic models and makes the theoretical analysis
of networks of neurons interconnected with this synapse intractable.
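The last-spike dependence of equation 2.5 can be illustrated with a short Python sketch (ours; parameter values are illustrative assumptions): a five-spike burst and a single spike ending at the same time produce exactly the same EPSC.

```python
import numpy as np

# Equation 2.5: the reset-and-discharge EPSC depends only on the most
# recent spike. Parameter values are illustrative assumptions.
Iw0 = 1e-9    # peak current set by Vw (A)
tau = 5e-3    # decay time constant (s)

def reset_and_discharge(t, spike_times):
    past = [ti for ti in spike_times if ti <= t]
    return Iw0 * np.exp(-(t - past[-1]) / tau) if past else 0.0

burst = [0.001, 0.002, 0.003, 0.004, 0.005]   # five-spike burst
single = [0.005]                               # one spike, same end time
t_probe = 0.008
print(reset_and_discharge(t_probe, burst))    # identical outputs: the
print(reset_and_discharge(t_probe, single))   # circuit does not sum spikes
```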
2.3 Linear Charge-and-Discharge Synapse. In Figure 2 (left), we show a modification of the reset-and-discharge synapse that has often been used by the neuromorphic engineering community and was recently presented in Arthur and Boahen (2004). Here the presynaptic pulse, applied to the input n-FET $M_{pre}$, is active high. Assuming that all transistors are saturated and operate in subthreshold, the circuit behavior is the following. During an input pulse, the node $V_{syn}(t)$ decreases linearly, at a rate set by the net current $I_w - I_\tau$, and the synapse EPSC $I_{syn}(t)$ increases exponentially (charge phase). In between spikes, the $V_{syn}(t)$ node is recharged toward $V_{dd}$ at a rate set by $I_\tau$, and $I_{syn}(t)$ decreases exponentially with time (discharge phase).

Figure 2: (Left) Linear charge-and-discharge synapse. (Right) Current mirror integrator synapse.

The circuit equations that describe this behavior are

$$I_{syn}(t) = \begin{cases} I^-_{syn}\, e^{+\frac{(t - t^-_i)}{\tau_c}} & \text{(charge phase)} \\[1ex] I^+_{syn}\, e^{-\frac{(t - t^+_i)}{\tau_d}} & \text{(discharge phase),} \end{cases} \tag{2.6}$$
where $t^-_i$ is the time at which the $i$th input spike arrives, $t^+_i$ the time at which it ends, $I^-_{syn}$ the initial condition at $t^-_i$, $I^+_{syn}$ the initial condition at $t^+_i$, $\tau_c = \frac{U_T C_{syn}}{\kappa (I_w - I_\tau)}$ the charge phase time constant, and $\tau_d = \frac{U_T C_{syn}}{\kappa I_\tau}$ the discharge phase time constant.
Assuming that each spike lasts a fixed brief period $\Delta t$, and considering two successive spikes arriving at times $t_i$ and $t_{i+1}$, we can then write

$$I_{syn}(t_{i+1}) = I_{syn}(t_i)\, e^{\Delta t \left(\frac{1}{\tau_c} + \frac{1}{\tau_d}\right)}\, e^{-\frac{(t_{i+1} - t_i)}{\tau_d}}. \tag{2.7}$$
From this recursive equation, we derive the response of the linear charge-and-discharge synapse to a generic spike sequence $\rho(t)$ of $n$ spikes,

$$I_{syn}(t) = I_0\, e^{n \Delta t \left(\frac{1}{\tau_c} + \frac{1}{\tau_d}\right)}\, e^{-\frac{t}{\tau_d}}, \tag{2.8}$$

assuming as the initial condition $V_{syn}(0) = V_{dd}$.
The EPSC dynamics depend on the total number of spikes $n$ received at time $t$ and on the circuit's time constants $\tau_c$ and $\tau_d$. If we denote the input spike train frequency at time $t$ with $f = (n/t)$, we can express equation 2.8 as

$$I_{syn}(t) = I_0\, e^{\frac{\Delta t\, f\, (\tau_c + \tau_d) - \tau_c}{\tau_c \tau_d}\, t}. \tag{2.9}$$
The major drawback of this circuit, aside from its not being a linear integrator, is that if the argument of the exponential in equation 2.9 is positive (i.e., if $f > \frac{1}{\Delta t}\frac{I_\tau}{I_w}$), the output current increases exponentially with time, and the circuit's response saturates: $V_{syn}(t)$ decreases all the way to $Gnd$, and $I_{syn}(t)$ increases to its maximum value. This can be a problem because in these conditions, the circuit's steady-state response does not encode the input frequency.
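A small Python sketch (ours; bias values are illustrative assumptions) makes the saturation condition explicit by evaluating the sign of the exponent in equation 2.9 for input rates below and above the critical frequency $f = \frac{1}{\Delta t}\frac{I_\tau}{I_w}$.

```python
# Evaluate the sign of the exponent in equation 2.9 for rates below and
# above the critical frequency f = (1/dt)*(Itau/Iw). Values illustrative.
UT, kappa, Csyn = 0.025, 0.7, 1e-12
Iw, Itau = 5e-9, 1e-9     # weight and leak currents (A)
dt = 1e-6                 # spike duration (s)

tau_c = UT * Csyn / (kappa * (Iw - Itau))  # charge-phase time constant
tau_d = UT * Csyn / (kappa * Itau)         # discharge-phase time constant

f_crit = (1.0 / dt) * (Itau / Iw)
print(f"critical input frequency: {f_crit:.0f} Hz")

for f in (0.5 * f_crit, 2.0 * f_crit):
    rate = (dt * f * (tau_c + tau_d) - tau_c) / (tau_c * tau_d)
    print(f"f = {f:.0f} Hz -> exponent rate {rate:+.2e} 1/s "
          f"({'saturates' if rate > 0 else 'decays'})")
```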
2.4 Current-Mirror-Integrator Synapse. In his doctoral dissertation, Boahen (1997) proposed a synaptic circuit that differs from the linear charge-and-discharge one by a single node connection (see Figure 2) but that has a dramatically different behavior. The two transistors $M_\tau$-$M_{syn}$ of Figure 2 (right) implement a p-type current mirror, and together with the capacitor $C_{syn}$, they form a current mirror integrator (CMI). The CMI synapse implements a nonlinear pulse integrator circuit that produces a mean output current $I_{syn}$ that increases with input firing rates and has a saturating nonlinearity whose maximum amplitude depends on the circuit's synaptic weight bias $V_w$ and on its time constant bias $V_\tau$.¹
The CMI response properties have been derived analytically in Hynna and Boahen (2001) for steady-state conditions. An explicit solution of the CMI response to a generic spike train, which does not require the steady-state assumption, was also derived in Chicca (2006). According to the analysis presented in Chicca, the CMI response to a spike arriving at $t^-_i$ and ending at $t^+_i$ is

$$I_{syn}(t) = \begin{cases} \dfrac{\alpha I_w}{1 + \left(\dfrac{\alpha I_w}{I^-_{syn}} - 1\right) e^{-\frac{(t - t^-_i)}{\tau_c}}} & \text{(charge phase)} \\[2ex] \dfrac{I_w}{\dfrac{I_w}{I^+_{syn}} + \dfrac{(t - t^+_i)}{\tau_d}} & \text{(discharge phase),} \end{cases} \tag{2.10}$$

where $t^-_i$, $t^+_i$, $I^-_{syn}$, and $I^+_{syn}$ are the same as defined in equation 2.6, $\alpha = e^{-\frac{(V_\tau - V_{dd})}{U_T}}$, $\tau_c = \frac{C_{syn} U_T}{\kappa I_w}$, and $\tau_d = \alpha \tau_c$.
¹ The CMI does not implement a linear integrator filter; therefore the term time constant is improperly used. We use it in this context to denote a parameter that controls the temporal extension of the CMI's impulse response.
Figure 3: Log-domain integrator synapse.
During the charge phase, the EPSC increases over time as a sigmoidal function, while during the discharge phase, it decreases with a $1/t$ profile. The discharge of the EPSC is therefore extremely fast compared to the typical exponential decay profiles of other synaptic circuits. The parameter $\alpha$ (set by the $V_\tau$ bias voltage) can be used to slow the EPSC response profile. However, this parameter affects both the length of the EPSC discharge profile and the maximum amplitude of the EPSC charge phase: longer response times (larger values of $\tau_d$) produce higher EPSC values. Despite these problems and although the CMI cannot be used to linearly sum postsynaptic currents, this circuit was very popular and has been extensively used by the neuromorphic engineering community (Boahen, 1998; Horiuchi & Hynna, 2001; Indiveri, 2000; Liu et al., 2001).
2.5 Log-Domain Integrator Synapse. More recently, Merolla and Boahen (2004) proposed another variant of the linear charge-and-discharge synapse that implements a true linear integrator circuit. This circuit (shown in Figure 3) exploits the logarithmic relationship between subthreshold MOSFET gate-to-source voltages and their channel currents and is therefore called a log-domain filter. The output current $I_{syn}$ of this circuit has the same exponential dependence on its gate voltage $V_{syn}$ as all other synapses presented (see equation 2.2). Therefore, we can express its derivative with respect to time as

$$\frac{d}{dt} I_{syn} = -I_{syn}\, \frac{\kappa}{U_T}\, \frac{d}{dt} V_{syn}. \tag{2.11}$$
During an input spike (charge phase), the dynamics of $V_{syn}$ are governed by the equation $C_{syn} \frac{d}{dt} V_{syn} = -(I_w - I_\tau)$. Combining this first-order differential equation with equation 2.11, we obtain

$$\tau \frac{d}{dt} I_{syn} + I_{syn} = I_{syn}\, \frac{I_w}{I_\tau}, \tag{2.12}$$

where $\tau = \frac{C_{syn} U_T}{\kappa I_\tau}$. The beauty of this circuit lies in the fact that the term $I_w$ is inversely proportional to $I_{syn}$ itself:
$$I_w = I_0\, e^{-\frac{\kappa \left(V_w - V_{syn}\right)}{U_T}} = I_0\, e^{-\frac{\kappa \left(V_w - V_{dd}\right)}{U_T}}\, e^{\frac{\kappa \left(V_{syn} - V_{dd}\right)}{U_T}} = I_{w0}\, \frac{I_0}{I_{syn}}, \tag{2.13}$$

where $I_0$ is the leakage current and $I_{w0}$ is the current flowing through $M_w$ in the initial condition, when $V_{syn} = V_{dd}$. When this expression of $I_w$ is substituted in equation 2.12, the right term of the differential equation loses the $I_{syn}$ dependence and becomes the constant factor $\frac{I_0 I_{w0}}{I_\tau}$.
Therefore, the log-domain integrator transfer function takes the form of a canonical first-order low-pass filter equation, and its response to a spike arriving at $t^-_i$ and ending at $t^+_i$ is

$$I_{syn}(t) = \begin{cases} \dfrac{I_0 I_{w0}}{I_\tau}\left(1 - e^{-\frac{(t - t^-_i)}{\tau}}\right) + I^-_{syn}\, e^{-\frac{(t - t^-_i)}{\tau}} & \text{(charge phase)} \\[1.5ex] I^+_{syn}\, e^{-\frac{(t - t^+_i)}{\tau}} & \text{(discharge phase).} \end{cases} \tag{2.14}$$
This is the only synaptic circuit of the ones described up to now that has linear filtering properties. The same silicon synapse can be shared to sum the contributions of spikes potentially arriving from different sources in a linear way. This could save significant amounts of silicon real estate in neural architectures where the synapses do not implement learning or local adaptation mechanisms and could therefore solve many of the problems that have hindered the development of large-scale VLSI multineuron chips up to now. However, this particular circuit has two drawbacks. One problem is that the VLSI layout of the schematic shown in Figure 3 requires more area than the layout of other synaptic circuits, because the $M_w$ p-FET has to live in an "isolated well" structure (Liu et al., 2002). The second, and more serious, problem is that the spike lengths used in pulse-based neural network systems, which typically last less than a few microseconds, are too short to inject enough charge into the membrane capacitor of the postsynaptic neuron to see any effect. The maximum amount of charge possible is $Q = \frac{I_0 I_{w0}}{I_\tau} \Delta t$, and $I_{w0}$ cannot be increased beyond subthreshold current limits (of the order of nanoamperes); otherwise, the log-domain properties of the filter break down (note that $I_\tau$ is also fixed, as it sets the desired time constant $\tau$). A possible solution is to increase the fast (off-chip) input pulse lengths with on-chip pulse extenders (e.g., with CMI circuits). But this solution requires additional circuitry at each input synapse and makes the layout of the overall circuit even larger (Merolla & Boahen, 2004).

Figure 4: Diff-pair integrator synapse.
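To put rough numbers on the charge-packet limitation discussed above, here is a back-of-the-envelope Python sketch (ours; all device values are illustrative assumptions). It computes the maximum charge deliverable by a 1 µs spike and compares it with the charge needed to depolarize a typical membrane capacitor by a few millivolts.

```python
# Back-of-the-envelope check of the charge-packet limit of the log-domain
# synapse. All device values are illustrative assumptions.
UT, kappa, Csyn = 0.025, 0.7, 1e-12
I0 = 1e-15                        # leakage current (A), fixed by the process
Iw0 = 1e-9                        # near the upper subthreshold limit (A)
tau = 10e-3                       # desired time constant (s)
Itau = Csyn * UT / (kappa * tau)  # leak current implied by tau

I_inf = I0 * Iw0 / Itau           # charge-phase asymptote of equation 2.14
dt = 1e-6                         # typical digital pulse width (s)
Q = I_inf * dt                    # maximum charge per brief spike

print(f"Itau = {Itau:.2e} A, charge-phase asymptote = {I_inf:.2e} A")
print(f"max charge per 1-us spike: {Q:.2e} C")
# Charge needed to move a 1 pF membrane capacitor by 10 mV, for comparison:
print(f"charge for 10 mV on 1 pF: {1e-12 * 10e-3:.2e} C")
```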
2.6 Diff-Pair Integrator Synapse. The DPI circuit that we designed solves the problems of the log-domain integrator synapse while maintaining its linear filtering properties, thus preserving the possibility of multiplexing in time spikes arriving from different sources. The schematic diagram of the DPI synapse is shown in Figure 4. This circuit comprises four n-FETs, two p-FETs, and a capacitor. The n-FETs form a differential pair whose branch current $I_{in}$ represents the input to the synapse during the charge phase. Assuming subthreshold operation and saturation regime, the diff-pair branch current $I_{in}$ can be expressed as

$$I_{in} = I_w\, \frac{e^{\frac{\kappa V_{syn}}{U_T}}}{e^{\frac{\kappa V_{syn}}{U_T}} + e^{\frac{\kappa V_{thr}}{U_T}}}, \tag{2.15}$$
and multiplying the numerator and denominator of equation 2.15 by $e^{-\frac{\kappa V_{dd}}{U_T}}$, we can express $I_{in}$ as

$$I_{in} = \frac{I_w}{1 + \frac{I_{syn}}{I_{gain}}}, \tag{2.16}$$

where the term $I_{gain} = I_0\, e^{-\frac{\kappa \left(V_{thr} - V_{dd}\right)}{U_T}}$ represents a virtual p-type subthreshold current that is not tied to any p-FET in the circuit.
As for the log-domain integrator, we can combine the $C_{syn}$ capacitor equation $C_{syn} \frac{d}{dt} V_{syn} = -(I_{in} - I_\tau)$ with equation 2.11 and write

$$\tau \frac{d}{dt} I_{syn} = -I_{syn}\left(1 - \frac{I_{in}}{I_\tau}\right), \tag{2.17}$$

where (as usual) $\tau = \frac{C_{syn} U_T}{\kappa I_\tau}$.
Replacing $I_{in}$ from equation 2.16 into equation 2.17, we obtain

$$\tau \frac{d}{dt} I_{syn} + I_{syn} = \frac{I_w}{I_\tau}\, \frac{I_{syn}}{1 + \frac{I_{syn}}{I_{gain}}}. \tag{2.18}$$
This is a first-order nonlinear differential equation; however, the steady-state condition can be solved in closed form, and its solution is

$$I_{syn} = \frac{I_{gain}}{I_\tau}\left(I_w - I_\tau\right). \tag{2.19}$$
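The closed-form steady state of equation 2.19 can be cross-checked numerically by integrating the nonlinear equation 2.18 with forward Euler. This is our sketch, with illustrative current values, not the chip's biases.

```python
# Forward-Euler integration of the DPI equation 2.18 under a step input,
# cross-checked against the closed-form steady state of equation 2.19.
# All current values are illustrative assumptions.
Iw, Itau, Igain = 5e-9, 0.5e-9, 1e-9   # weight, leak, and gain currents (A)
tau = 10e-3                             # time constant (s)

h = 1e-6                                # integration step (s)
Isyn = 1e-15                            # start near (not at) the zero fixed point
for _ in range(int(0.2 / h)):           # 200 ms of constant input
    dIdt = (-Isyn + (Iw / Itau) * Isyn / (1 + Isyn / Igain)) / tau
    Isyn += h * dIdt

print(f"simulated steady state: {Isyn:.3e} A")
print(f"equation 2.19 predicts: {Igain / Itau * (Iw - Itau):.3e} A")
```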
If $I_w \gg I_\tau$, the output current $I_{syn}$ will eventually rise to values such that $I_{syn} \gg I_{gain}$, when the circuit is stimulated with an input step signal. If $\frac{I_{syn}}{I_{gain}} \gg 1$, the $I_{syn}$ dependence in the second term of equation 2.18 cancels out, and the nonlinear differential equation simplifies to the canonical first-order low-pass filter equation:

$$\tau \frac{d}{dt} I_{syn} + I_{syn} = \frac{I_w I_{gain}}{I_\tau}. \tag{2.20}$$
In this case, the response of the DPI synapse to a spike arriving at $t^-_i$ and ending at $t^+_i$ is

$$I_{syn}(t) = \begin{cases} \dfrac{I_{gain} I_w}{I_\tau}\left(1 - e^{-\frac{(t - t^-_i)}{\tau}}\right) + I^-_{syn}\, e^{-\frac{(t - t^-_i)}{\tau}} & \text{(charge phase)} \\[1.5ex] I^+_{syn}\, e^{-\frac{(t - t^+_i)}{\tau}} & \text{(discharge phase).} \end{cases} \tag{2.21}$$
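Because both phases of equation 2.21 are linear in $I_{syn}$, the response to a spike train can be computed event by event. The following Python sketch (ours; currents are illustrative assumptions) does so and shows the linear summation that the reset-and-discharge synapse lacks: two nearby spikes produce roughly twice the EPSC of one.

```python
import numpy as np

# Event-driven evaluation of equation 2.21: each pulse of width dt drives
# Isyn toward Igain*Iw/Itau; Isyn decays with tau in between. Currents are
# illustrative assumptions.
Igain, Iw, Itau = 1e-9, 2e-9, 0.2e-9
tau, dt = 10e-3, 1e-6

def dpi_epsc(spike_times, t_probe):
    I_max = Igain * Iw / Itau
    I, t_prev = 0.0, 0.0
    for ti in spike_times:
        I *= np.exp(-(ti - t_prev) / tau)            # discharge phase
        I = I_max + (I - I_max) * np.exp(-dt / tau)  # charge phase
        t_prev = ti + dt
    return I * np.exp(-(t_probe - t_prev) / tau)

# Linear summation: two nearby spikes give ~twice the EPSC of one
print(dpi_epsc([5e-3], 10e-3))
print(dpi_epsc([5e-3, 5.1e-3], 10e-3))
```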
The solution of the DPI synapse is almost identical to the one of the log-domain integrator synapse, described in equation 2.14. The only difference is that the term $I_0$ of equation 2.14 is replaced by $I_{gain}$. This scaling factor can be used to amplify the charge phase response amplitude, therefore solving the problem of generating sufficiently large charge packets sourced into the neuron's integrating capacitor for input spikes of very brief duration, while keeping all currents in the subthreshold regime and without requiring additional pulse-extender circuits. In addition, the layout of the DPI does not require isolated well structures and can be implemented in a very compact way.

As for the log-domain integrator synapse described in section 2.5, the DPI synapse implements a low-pass filter with linear transfer function (under the realistic assumption that $I_w \gg I_\tau$). Although it is less compact than the synaptic circuits described in sections 2.1, 2.2, 2.3, and 2.4, it is the only one that can reproduce the exponential dynamics observed in excitatory and inhibitory postsynaptic currents of biological synapses (Destexhe et al., 1998), without requiring additional input pulse-extender circuits. Moreover, the DPI synapse we propose has independent control of time constant, synaptic weight, and synaptic scaling parameters. The extra degree of freedom obtained with the $V_{thr}$ parameter can be used to globally scale the efficacies of the DPI circuits that share the same $V_{thr}$ bias. This feature could in turn be employed to implement global homeostatic plasticity mechanisms complementary to local spike-based plasticity ones acting on the synaptic weight $V_w$ node (see also section 4). In the next section, we present experimental results from a VLSI chip comprising an array of DPI synapses connected to low-power leaky I&F neurons (Indiveri, Chicca, & Douglas, 2006) that validate the analytical derivations presented here.
3 Experimental Results
We fabricated a prototype chip in standard AMS 0.35 µm CMOS technology comprising the DPI circuit and additional test structures to augment the synapse's functionality. Here we present experimental results measured from the basic DPI circuit of Figure 4, while the characteristics and measurements from the additional test circuits are described in section 4. In Figure 5, we show a picture of the synaptic circuit layout. The full layout occupies an area of 1360 µm². These types of synaptic circuits can therefore be used to implement networks of spiking neurons with a very large number of synapses on a small chip area. For example, in a recent chip, we implemented a network comprising 8192 synapses and 32 neurons (256 synapses per neuron) using an area of only 12 mm² (Mitra, Fusi, & Indiveri, 2006). The silicon area occupied by the synaptic circuit can vary significantly, as it depends on the choice of layout design solutions. More conservative solutions use large transistors, have lower mismatch, and require more area. More aggressive solutions require less area, but multiple instances of the same layout cell produce currents with larger deviations. The layout of Figure 5 implements a very conservative solution.

Figure 5: Layout of the fabricated DPI synapse and additional circuits that augment the synapse's functionality. The schematic diagram and properties of the STD, NMDA, and G blocks are described in section 4.
To validate the theoretical analysis of section 2.6, we measured the DPI step response and fitted the experimental data with equation 2.21. In Figure 6 (left), we plot the circuit's step response for different synaptic weight $V_w$ bias values. The rise and decay parts of the data were fitted with the charge phase and discharge phase parts of equation 2.21 using slightly different parameters for the estimated time constant. The small differences in the time constants are most likely due to leakage currents and parasitic capacitance effects, not considered in the analytical derivations. These results, however, show that the DPI time constant does not depend on $V_w$ and can be independently tuned with $V_\tau$.
Silicon synapses are typically stimulated with trains of pulses (spikes) of very brief duration, separated by longer interspike intervals (ISIs). It can be easily shown from equation 2.20 that when the DPI is stimulated with a spike train of average frequency $f_{in}$ and pulse duration $\Delta t$, its steady-state response is

$$\langle I_{syn} \rangle = \frac{I_{gain} I_w}{I_\tau}\, \Delta t\, f_{in}. \tag{3.1}$$
Figure 6: DPI circuit response properties. (Left) Step response for three different values of $V_w$. The response is fitted with equation 2.21, and the fitting functions (dotted and dashed lines) are superimposed on the measured data. The time constants estimated by the fit are τ = 3 ms for the charge phase and τ = 4 ms for the discharge phase. (Right) Response to spike trains of increasing frequencies. The mean output current is linear in the synaptic input frequency, and its gain can be changed with the synaptic weight bias $V_w$.
We also verified this derivation by measuring the mean EPSC of the circuit in response to spike trains of increasing frequencies. In Figure 6 (right), we show the $i$-$f$ curve for typical biological spiking frequencies, ranging from 10 to 200 Hz. The mean output current is linear over a wide range of input frequencies (extending well beyond the ones shown in the plot).
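For reference, a short Python sketch (ours; the bias currents and pulse width are illustrative assumptions, not the chip's settings) evaluates equation 3.1 over the same frequency range and reproduces the linearity of Figure 6 (right).

```python
# Equation 3.1: mean steady-state EPSC versus input rate. Bias currents
# and pulse width are illustrative assumptions, not the chip's settings.
Igain, Iw, Itau = 1e-9, 2e-9, 0.2e-9   # currents (A)
dt = 1e-6                               # pulse width (s)

for f_in in (10, 50, 100, 200):         # Hz, typical biological rates
    print(f"f_in = {f_in:3d} Hz -> <Isyn> = {Igain * Iw / Itau * dt * f_in:.2e} A")
# The mean output grows linearly with f_in, as in Figure 6 (right).
```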
4 Synaptic Dynamics
The results of the previous sections showed how the DPI response models the EPSC generated by biological excitatory synapses of the AMPA type (Destexhe et al., 1998). Inhibitory (GABA$_a$) type synapses can be easily emulated by using the complementary version of the DPI circuit of Figure 4 (with a p-type diff-pair and n-type output transistor). Additional circuits can be attached to the DPI synapse to extend the model with additional features typical of biological synapses and implement various types of plasticity. For example, by adding two extra transistors, we can implement voltage-gated channels that model NMDA synapse behavior. Similarly, by using two more transistors, we can extend the synaptic model to be conductance based (Kandel, Schwartz, & Jessell, 2000). Furthermore, the DPI circuit is compatible with previously proposed circuits for implementing synaptic plasticity, both on short timescales with models of short-term depression (STD) (Rasche & Hahnloser, 2001; Boegerhausen, Suter, & Liu, 2003) and on longer timescales with spike-based learning mechanisms, such as spike-timing-dependent plasticity (STDP) (Indiveri et al., 2006). Finally, the DPI's extra degree of freedom for modifying the overall gain of the synapse, either with $V_{thr}$ or with $V_w$, allows the implementation of synaptic homeostatic mechanisms (Bartolozzi & Indiveri, 2006), such as global activity-dependent synaptic scaling (Turrigiano, Leslie, Desai, Rutherford, & Nelson, 1998).

Figure 7: Schematic diagram of the DPI connected to additional test circuits that augment the synapse's functionality. The names of the functional blocks correspond to the ones used in the layout of Figure 5: The STD block comprises the circuit modeling short-term depression of the synaptic weight, the NMDA block comprises the transistors modeling NMDA voltage-gated channels, and the G block includes transistors that render the synapse conductance based.
In Figure 7, we show the schematics of the extension circuits mentioned above implemented on the test chip (with the exception of the STDP and homeostatic circuits). In the next paragraphs, we describe the behavior of these additional circuits, characterized by measuring the membrane potential $V_{mem}$ of a low-power leaky I&F neuron (Indiveri et al., 2006) that receives the synaptic EPSC as input.
4.1 NMDA Synapse. With the DPI we reproduce phenomenologi-
cally the current flow through ionic ligand-gated membrane channels that
open and let the ions flow across the postsynaptic membrane as soon
as they sense the neurotransmitters released by the presynaptic boutons
(e.g., AMPA channels). Another important class of ligand-gated synaptic channels, the NMDA receptors, is, in addition, voltage gated; these channels open to let the ions flow only if the membrane voltage is depolarized above a given threshold while in the presence of its neurotransmitter (glutamate).

Figure 8: NMDA-type synapse response properties. (Left) Membrane potential of an I&F neuron connected to the synapse and stimulated by a constant injection current. The NMDA threshold voltage is set to $V_{nmda}$ = 400 mV. The small bumps in $V_{mem}$ represent the excitatory postsynaptic potentials (EPSPs) produced by the synapse, when $V_{mem} > V_{nmda}$, in response to the presynaptic input spikes. (Right) EPSP amplitude versus the membrane potential, for increasing values of the NMDA threshold $V_{nmda}$ and for a fixed value of $V_w$.
We can implement this behavior by exploiting the thresholding property of the differential pair circuit, as shown in Figure 7; if the node $V_{mem}$ is lower than the externally set bias $V_{nmda}$, the output current $I_{syn}$ flows through the transistor $M_{nmda}$ in the left branch of the diff-pair and has no effect on the postsynaptic depolarization. On the other hand, if $V_{mem}$ is higher than $V_{nmda}$, the current flows also into the membrane potential node, depolarizing the I&F neuron, and thus implementing the voltage gating typical of NMDA synapses.
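Behaviorally, the NMDA block steers $I_{syn}$ according to the standard diff-pair transfer characteristic. The following Python sketch (ours; all values are illustrative assumptions) evaluates this sigmoidal gating as a function of $V_{mem}$.

```python
import numpy as np

# Diff-pair steering of the DPI output current by the membrane potential,
# as in the NMDA block of Figure 7. The sigmoid below is the standard
# diff-pair transfer characteristic; all values are illustrative.
kappa, UT = 0.7, 0.025
Isyn = 1e-9       # current delivered by the DPI (A)
Vnmda = 0.4       # NMDA threshold bias (V)

def gated_current(Vmem):
    """Fraction of Isyn steered into the membrane node."""
    return Isyn / (1 + np.exp(-kappa / UT * (Vmem - Vnmda)))

for Vmem in (0.2, 0.35, 0.4, 0.45, 0.6):
    print(f"Vmem = {Vmem:.2f} V -> I = {gated_current(Vmem):.2e} A")
# Well below Vnmda the synapse has no effect; above it the full EPSC flows.
```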
In Figure 8, we show the results measured from the test circuit on the prototype chip: we stimulate the synapse with presynaptic spikes, while also injecting constant current into the neuron's membrane. The synapse's EPSC amplitude depends on the difference between the membrane potential and the NMDA threshold $V_{nmda}$. As expected, when $V_{mem}$ is smaller than $V_{nmda}$, the synaptic current is null, and the membrane potential increases solely due to the constant injection current. As $V_{mem}$ increases above $V_{nmda}$, the contribution of the synaptic current injected with each presynaptic spike becomes visible. The time constant of the DPI circuit used in this way can be easily extended to hundreds of milliseconds (values typical of NMDA-type synaptic dynamics) by increasing the $V_\tau$ bias voltage of Figure 7. This allows us to faithfully reproduce both the voltage-gated and temporal dynamic properties of real NMDA synapses. It is important to be able to implement these properties in our VLSI devices because there is evidence that they play an important role in detecting coincidence between presynaptic activity and postsynaptic depolarization for inducing long-term potentiation (LTP) (Morris, Davis, & Butcher, 1990). Furthermore, the NMDA synapse's stabilizing role, hypothesized by computational studies within the context of working memory (Wang, 1999), could be useful for stabilizing persistent activity of recurrent VLSI networks of spiking neurons.
4.2 Conductance-Based Synapse. So far, we have reproduced the total current flowing through the synaptic channels independent of the postsynaptic membrane potential. However, in real synapses, the current is proportional to the difference between the postsynaptic membrane voltage and the synaptic ion reversal potential $E_{ion}$:

$$I_{syn} = g_{syn}\left(V_{mem} - E_{ion}\right). \tag{4.1}$$
Exploiting once more the properties of the differential pair circuit, we can model this dependence with just two more transistors (see the G block of Figure 7) and obtain a behavior that, to a first-order approximation, is equivalent to that described by equation 4.1. Formally, the conductance-based synapse output is

$$I'_{syn} = I_{syn}\, \frac{1}{1 + e^{\frac{\kappa}{U_T}\left(V_{mem} - V_{gthr}\right)}}, \tag{4.2}$$
so if we consider the first-order term of the Taylor expansion around $V_{mem} = V_{gthr}$, we obtain

$$I'_{syn} = \frac{I_{syn}}{2} - g_{syn}\left(V_{mem} - V_{gthr}\right), \tag{4.3}$$

where the conductance term $g_{syn} = \frac{\kappa}{4 U_T} I_{syn}$.
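The following Python sketch (ours; values are illustrative assumptions) compares the full sigmoidal output of equation 4.2 with the linearization of equation 4.3 near $V_{mem} = V_{gthr}$; within a few tens of millivolts the two agree closely.

```python
import numpy as np

# Conductance-based output (equation 4.2) versus its linearization
# (equation 4.3) around Vmem = Vgthr. Values are illustrative assumptions.
kappa, UT = 0.7, 0.025
Isyn = 1e-9       # DPI output current (A)
Vgthr = 0.6       # reversal-potential bias (V)

def I_prime(Vmem):
    return Isyn / (1 + np.exp(kappa / UT * (Vmem - Vgthr)))

gsyn = kappa / (4 * UT) * Isyn    # small-signal conductance, equation 4.3
for dV in (-0.02, -0.01, 0.0, 0.01, 0.02):
    exact = I_prime(Vgthr + dV)
    linear = Isyn / 2 - gsyn * dV
    print(f"dV = {dV:+.2f} V   exact: {exact:.3e} A   linear: {linear:.3e} A")
```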
In Figure 9, we plot the EPSPs measured from the I&F neuron connected to the conductance-based synapse for different values of $V_{gthr}$. These experimental results show that our synapse can reproduce the behavior of conductance-based synapses. This behavior is especially relevant in inhibitory synapses, where the dependence expressed in equation 4.1 results in shunting inhibition. Computational and biological studies have attributed different roles to shunting inhibition, such as logical AND-NOT (Koch, Poggio, & Torre, 1983) and normalization (Carandini, Heeger, & Movshon, 1997) functions. Evidence for these and other hypotheses continues to be the subject of further investigation (Anderson, Carandini, & Ferster, 2000; Chance, Abbott, & Reyes, 2002). The implementation of shunting inhibition in large arrays of VLSI synapses and spiking neurons provides an additional means for exploring the role of this computational primitive.

Figure 9: Conductance-based synapse. (Left) Membrane potential of the I&F neuron stimulated by the synapse for different values of the synaptic reversal potential $V_{gthr}$. (Right) EPSP amplitude as a function of $V_{mem}$ for different values of $V_{gthr}$.
4.3 Synaptic Plasticity. In the previous sections, we showed that our circuit can model biologically realistic synaptic current dynamics. The main feature of synapses exploited in neural networks, though, is plasticity: the ability to change the synaptic efficacy in order to learn and adapt to the environment. In neural networks with large arrays of synapses and neurons (Indiveri et al., 2006; Mitra et al., 2006; Arthur & Boahen, 2004; Shi & Horiuchi, 2004b), all the synapses belonging to one population usually share the same bias that sets their initial weight.² In addition, each synapse can be connected to a local circuit for the short- and/or long-term modification of its weight. Our silicon synapse supports all of the short-term and long-term plasticity mechanisms for inducing long-term potentiation (LTP) and long-term depression (LTD) in the synaptic weight that have been proposed in the literature. Specifically, the possibility of biasing $M_w$ with subthreshold voltages on the order of hundreds of mV makes the DPI compatible with many of the spike-timing-dependent plasticity circuits previously proposed (Indiveri et al., 2006; Mitra et al., 2006; Arthur & Boahen, 2006; Bofill, Murray, & Thompson, 2002).
Similarly, the DPI synapse is naturally extended with the short-term depression circuit proposed by Rasche and Hahnloser (2001), where the synaptic weight decreases with an increasing number of input spikes and recovers during periods of presynaptic inactivity. From the computational point of view, STD is a nonlinear mechanism that plays an important role in implementing selectivity to transient stimuli and contrast adaptation (Chance, Nelson, & Abbott, 1998). In Figure 10, we show the EPSPs of the I&F neuron connected to the synapse, having activated the STD block of Figure 7. These results confirm the compatibility between the DPI and the STD circuits and show qualitatively the effect of short-term depression. Quantitative considerations and comparisons to short-term depression computational models have already been presented elsewhere (Rasche & Hahnloser, 2001; Boegerhausen et al., 2003).

² The initial weight $V_w$ can be set by an external voltage reference or by on-chip bias generators.

Figure 10: Short-term depression: Membrane potential of the leaky I&F neuron, when the short-term depressing synapse is stimulated with a regular spike train at 50 Hz. The different traces of the membrane potential correspond to different values of the leakage current of the neuron. Note how (from the second spike on) the EPSP amplitude decreases with each input spike.
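For intuition only, here is a generic phenomenological sketch of short-term depression in Python (ours; it is not the Rasche & Hahnloser circuit model, and the constants are illustrative assumptions): each spike multiplicatively depresses the effective weight, which then recovers exponentially, qualitatively reproducing the decreasing EPSP amplitudes of Figure 10.

```python
import numpy as np

# Generic phenomenological short-term depression (not the circuit equations):
# each spike multiplies the effective weight by d < 1; the weight recovers
# toward 1 with time constant tau_rec between spikes. Constants illustrative.
d = 0.7          # per-spike depression factor
tau_rec = 0.2    # recovery time constant (s)

w, t_prev = 1.0, 0.0
for ti in np.arange(0.0, 0.2, 0.02):    # regular 50 Hz spike train
    w = 1.0 - (1.0 - w) * np.exp(-(ti - t_prev) / tau_rec)  # recovery
    print(f"spike at {ti*1e3:5.1f} ms: effective weight = {w:.3f}")
    w *= d                                                   # depression
    t_prev = ti
```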
Another valuable property of biological synapses is the homeostatic mechanism known as activity-dependent synaptic scaling (Turrigiano et al., 1998). It acts by scaling the synaptic weights in order to keep the neurons' firing rate within a functional range in the face of chronic changes in their activity level, while preserving the relative differences between individual synapses. As demonstrated in section 2.6 and Figure 11, we can scale the total synaptic efficacy of the DPI by independently varying either $I_w$ or $I_{gain}$ (see also Figure 6, left). We can exploit these two independent degrees of freedom by learning the synaptic weight $V_w$ with "fast" spike-based learning rules, while adapting the bias $V_{thr}$ to implement homeostatic synaptic scaling on much slower timescales. A control algorithm that exploits the properties of the DPI to implement the activity-dependent synaptic scaling homeostatic mechanism has been recently proposed by Bartolozzi and Indiveri (2006).

Figure 11: Independent scaling of EPSC amplitude by adjusting either $V_{thr}$ or $V_w$. The plots show the time course of the mean and standard deviation (over 10 repetitions of the same experiment) of the current $I_{syn}$, in response to a single input voltage pulse. In both plots, the lower EPSC traces share the same set of $V_{thr}$ and $V_w$ biases; the higher EPSC is obtained (left) by increasing $V_w$ and (right) by decreasing $V_{thr}$, with respect to the initial bias set. Superimposed on the experimental data, we plot theoretical fits of the decay from equation 2.21. The time constant of all plots is the same and equal to 5 ms.
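A minimal Python sketch (ours; illustrative current values) of the two scaling knobs: per equation 2.21, the EPSC amplitude is proportional to $I_{gain} I_w / I_\tau$, so doubling either $I_w$ (the local, learned weight) or $I_{gain}$ (the global, homeostatic gain) doubles the amplitude while leaving the time constant untouched.

```python
# Two independent knobs on the DPI EPSC amplitude Igain*Iw/Itau (equation
# 2.21): Iw (local, learned weight) and Igain (global, homeostatic gain).
# The time constant tau = Csyn*UT/(kappa*Itau) is unaffected by either.
# Current values are illustrative assumptions.
Iw, Igain, Itau = 2e-9, 1e-9, 0.2e-9

base = Igain * Iw / Itau
via_weight = Igain * (2 * Iw) / Itau   # double Iw (e.g., raise Vw's effect)
via_gain = (2 * Igain) * Iw / Itau     # double Igain (e.g., lower Vthr globally)
print(f"base: {base:.1e} A, via Iw: {via_weight:.1e} A, via Igain: {via_gain:.1e} A")
```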
5 Conclusion
We have proposed a new analog VLSI synapse circuit (the DPI of section 2.6)
useful for implementing postsynaptic currents in neuromorphic VLSI net-
works of spiking neurons with biologically realistic temporal dynamics.
We showed in analytical derivations and experimental data that the circuit
proposed matches detailed computational models of synapses. We com-
pared our VLSI synapse to previously proposed circuits that implement an
equivalent functionality and derived analytically their transfer functions.
Our analysis showed that the DPI circuit incorporates most of the strengths
of previously proposed circuits, while providing additional favorable prop-
erties. Specifically, the DPI implements a linear integrator circuit with
two independent tunable gain parameters and one independently tunable
time constant parameter. The circuit's mean output current encodes linearly
the input frequency of the presynaptic spike train. As the DPI performs
linear temporal summation of its input spikes, it can be used for processing
multiple spike trains generated by different sources multiplexed together,
modeling the contribution of many different synapses that share the same
weight.
In addition to being linear and compact, this circuit is compatible with existing implementations of both short-term and long-term plasticity. The favorable
features of linearity, compactness, and compatibility with existing synaptic
circuit elements make it an ideal building block for constructing adaptive
dynamic synapses and implementing dense and massively parallel net-
works of spiking neurons capable of processing spatiotemporal signals in
real time. The VLSI implementation of such networks constitutes a power-
ful tool for exploring the computational role of each element described in
this work, from the voltage-gated NMDA channels, to shunting inhibition,
and homeostasis, using real-world stimuli while observing the network’s
behavior in real time.
Acknowledgments
This work was supported in part by the EU Grants ALAVLSI (IST-
2001-38099) and DAISY (FP6-2005-015803) and in part by the ETH TH
under Project 0-20174-04. The chip was fabricated via the EUROPRACTICE
service. We thank Pratap Kumar for fruitful discussions about biological
synapses.
References
Anderson, J., Carandini, M., & Ferster, D. (2000). Orientation tuning of input conductance, excitation, and inhibition in cat primary visual cortex. Journal of Neurophysiology, 84, 909–926.
Arthur, J., & Boahen, K. (2004, July). Recurrently connected silicon neurons with
active dendrites for one-shot learning. In IEEE International Joint Conference on
Neural Networks (Vol. 3, pp. 1699–1704). Piscataway, NJ: IEEE.
Arthur, J., & Boahen, K. (2006). Learning in silicon: Timing is everything. In Y. Weiss,
B. Sch
¨
olkopf, & J. Platt (Eds.), Advances in neural information processing systems, 18.
Cambridge, MA: MIT Press.
Bartolozzi, C., & Indiveri, G. (2006). Silicon synaptic homeostasis. In Brain Inspired
Cognitive Systems 2006.[CD]
Boahen, K. A. (1997). Retinomorphic vision systems: Reverse engineering the vertebrate
retina. Unpublished doctoral dissertation, California Institute of Technology.
Boahen, K. (1998). Communicating neuronal ensembles between neuromorphic
chips. In T. S. Lande (Ed.), Neuromorphic systems engineering (pp. 229–259). Nor-
well, MA: Kluwer Academic.
Boegerhausen, M., Suter, P., & Liu, S.-C. (2003). Modeling short-term synaptic de-
pression in silicon. Neural Computation, 15(2), 331–348.
Bofill, A., Murray, A., & Thompson, D. (2002). Circuits for VLSI implementa-
tion of temporally asymmetric Hebbian learning. In T. G. Dietlerich, S. Becker,
& Z. Ghahramani (Eds.), Advances in neural information processing systems, 14.
Cambridge, MA: MIT Press.
Borgstrom, T., Ismail, M., & Bibyk, S. (1990). Programmable current-mode neural
network for implementation in analogue MOS VLSI. IEE Proceedings G, 137(2),
175–184.
Carandini, M., Heeger, D. J., & Movshon, J. A. (1997). Linearity and normalization in
simple cells of the macaque primary visual cortex. Journal of Neuroscience, 17(21),
8621–8644.
Chance, F., Abbott, L., & Reyes, A. (2002). Gain modulation from background synap-
tic input. Neuron, 35, 773–782.
Chance, F. S., Nelson, S. B., & Abbott, L. F. (1998). Synaptic depression and the temporal response characteristics of V1 cells. Journal of Neuroscience, 18(12), 4785–4799.
Chicca, E. (2006). A neuromorphic VLSI system for modeling spike-based cooperative com-
petitive neural networks. Unpublished doctoral dissertation, ETH Zurich, Zurich,
Switzerland.
Chicca, E., Badoni, D., Dante, V., D’Andreagiovanni, M., Salina, G., Fusi, S., et al.
(2003). A VLSI recurrent network of integrate-and-fire neurons connected by
plastic synapses with long term memory. IEEE Transactions on Neural Networks,
14(5), 1297–1307.
Chicca, E., Indiveri, G., & Douglas, R. (2003). An adaptive silicon synapse. In Proc.
IEEE International Symposium on Circuits and Systems (pp. I-81–I-84). Piscataway,
NJ: IEEE.
Destexhe, A., Mainen, Z., & Sejnowski, T. (1998). Kinetic models of synaptic trans-
mission. In C. Koch & I. Segev (Eds.), Methods in neuronal modelling, from ions to
networks (pp. 1–25). Cambridge, MA: MIT Press.
Fusi, S., Annunziato, M., Badoni, D., Salamon, A., & Amit, D. J. (2000). Spike-driven
synaptic plasticity: Theory, simulation, VLSI implementation. Neural Computation,
12, 2227–2258.
Gordon, C., Farquhar, E., & Hasler, P. (2004, May). A family of floating-gate adapting synapses based upon transistor channel models. In 2004 IEEE International Symposium on Circuits and Systems (Vol. 1, pp. I-317–I-320). Piscataway, NJ: IEEE.
Gütig, R., & Sompolinsky, H. (2006). The tempotron: A neuron that learns spike timing-based decisions. Nature Neuroscience, 9, 420–428.
Hertz, J., Krogh, A., & Palmer, R. G. (1991). Introduction to the theory of neural compu-
tation. Reading, MA: Addison-Wesley.
Horiuchi, T., & Hynna, K. (2001). A VLSI-based model of azimuthal echolocation in
the big brown bat. Autonomous Robots, 11(3), 241–247.
Hynna, K., & Boahen, K. (2001). Space-rate coding in an adaptive silicon neuron.
Neural Networks, 14, 645–656.
Hynna, K. M., & Boahen, K. (2006, May). Neuronal ion-channel dynamics in silicon.
In 2006 IEEE International Symposium on Circuits and Systems (pp. 3614–3617).
Piscataway, NJ: IEEE.
Indiveri, G. (2000). Modeling selective attention using a neuromorphic analog VLSI
device. Neural Computation, 12(12), 2857–2880.
Indiveri, G., Chicca, E., & Douglas, R. (2006). A VLSI array of low-power spik-
ing neurons and bistable synapses with spike-timing dependent plasticity. IEEE
Transactions on Neural Networks, 17(1), 211–221.
Kandel, E. R., Schwartz, J., & Jessell, T. M. (2000). Principles of neural science. New York: McGraw-Hill.
Koch, C. (1999). Synaptic input. In M. Stryker (Ed.), Biophysics of computation: In-
formation processing in single neurons (pp. 85–116). New York: Oxford University
Press.
Koch, C., Poggio, T., & Torre, V. (1983). Nonlinear interactions in a dendritic
tree: Localization, timing, and role in information processing. PNAS, 80, 2799–
2802.
Lazzaro, J. P. (1994). Low-power silicon axons, neurons, and synapses. In M. E.
Zaghloul, J. L. Meador, & R. W. Newcomb (Eds.), Silicon implementation of pulse
coded neural networks (pp. 153–164). Norwell, MA: Kluwer.
Liu, S.-C., Kramer, J., Indiveri, G., Delbruck, T., Burg, T., & Douglas, R. (2001).
Orientation-selective aVLSI spiking neurons. Neural Networks, 14(6/7), 629–
643.
Liu, S.-C., Kramer, J., Indiveri, G., Delbrück, T., & Douglas, R. (2002). Analog VLSI: Circuits and principles. Cambridge, MA: MIT Press.
Mead, C. (1989). Analog VLSI and neural systems. Reading, MA: Addison-Wesley.
Merolla, P., & Boahen, K. (2004). A recurrent model of orientation maps with simple
and complex cells. In S. Thr
¨
un,L.K.Saul,&B.Sch
¨
olkopf (Eds.), Advances in neural
information processing systems, 16 (pp. 995–1002). MIT Press. Cambridge, MA:
Mitra, S., Fusi, S., & Indiveri, G. (2006, May). A VLSI spike-driven dynamic synapse
which learns. In Proceedings of IEEE International Symposium on Circuits and Systems
(pp. 2777–2780). Piscataway, NJ: IEEE.
Morris, R., Davis, S., & Butcher, S. (1990). Hippocampal synaptic plasticity and
NMDA receptors: A role in information storage? Philosophical Transactions: Bio-
logical Sciences, 329(1253), 187–204.
Murray, A. F. (1998). Pulse-based computation in VLSI neural networks. In W. Maass
& C. M. Bishop (Eds.), Pulsed neural networks (pp. 87–109). Cambridge, MA: MIT
Press.
Northmore, D. P. M., & Elias, J. G. (1998). Building silicon nervous systems with
dendritic tree neuromorphs. In W. Maass & C. M. Bishop (Eds.), Pulsed neural
networks (pp. 135–156). Cambridge, MA: MIT Press.
Rasche, C., & Hahnloser, R. (2001). Silicon synaptic depression. Biological Cybernetics,
84(1), 57–62.
Satyanarayana, S., Tsividis, Y., & Graf, H. (1992). A reconfigurable VLSI neural
network. IEEE J. Solid-State Circuits, 27(1), 67–81.
Shi, R., & Horiuchi, T. (2004a). A summating, exponentially-decaying CMOS synapse
for spiking neural systems. In S. Thr
¨
un,L.Saul,&B.Sch
¨
olkopf (Eds.), Advances
in neural information processing systems, 16. Cambridge, MA: MIT Press.
Shi, R., & Horiuchi, T. (2004b). A VLSI model of the bat lateral superior olive for az-
imuthal echolocation. In Proceedings of the 2004 International Symposium on Circuits
and Systems (ISCAS04) (Vol. 4, pp. 900–903). Piscataway, NJ: IEEE.
Turrigiano, G., Leslie, K., Desai, N., Rutherford, L., & Nelson, S. (1998). Activity-
dependent scaling of quantal amplitude in neocortical neurons. Nature, 391, 892–
896.
Wang, X. (1999). Synaptic basis of cortical persistent activity: The Importance of
NMDA receptors to working memory. Journal of Neuroscience, 19, 9587–9603.
Received May 11, 2006; accepted September 27, 2006.
... Several CMOS synaptic (connections between two neurons) circuits are presented in [6,19] and [20]. These models convert a presynaptic voltage obtained from a source neuron into a postsynaptic current injected to a target neuron [19]. ...
... Several CMOS synaptic (connections between two neurons) circuits are presented in [6,19] and [20]. These models convert a presynaptic voltage obtained from a source neuron into a postsynaptic current injected to a target neuron [19]. Synapses are also designed with memristors (a non-linear passive electronic memory element) in hardware [21,22]. ...
... Two neurons are connected by synapses [19]. In this subsection, circuits are designed to mimic two biological synapses, viz. ...
Article
Full-text available
Objective. This study aims to introduce a novel approach for integrating the post-inhibitory rebound excitation (PIRE) phenomenon into a neuronal circuit. Excitatory and inhibitory synapses are designed to establish a connection between two such hardware neurons, effectively forming a network. The model demonstrates the occurrence of PIRE under strong inhibitory input. Emphasizing the significance of incorporating PIRE in neuromorphic circuits, the study showcases the generation of persistent activity within cyclic and recurrent spiking neuronal networks. Approach. The neuronal and synaptic circuits are designed and simulated in Cadence Virtuoso using TSMC 180 nm technology. The operating mechanism of the PIRE phenomenon integrated into a hardware neuron is discussed. The proposed circuit encompasses several parameters for effectively controlling different electrophysiological features of a neuron. Main results. The neuronal circuit has been tuned to match the response of a biological neuron. The efficiency of this circuit is evaluated by computing the average power dissipation and energy consumption per spike through simulation. The sustained firing of neural spikes is observed till 1.7 seconds using the two neuronal networks. Significance. Persistent activity has significant implications for various cognitive functions such as working memory, decision-making, and attention. Further, functions like attention are used in the recent development of neural networks and algorithms. Therefore, hardware implementation of these functions will require our PIRE-integrated model. Such energy-efficient neuromorphic systems are useful in many artificial intelligence applications, including human-machine interaction, IoT devices, autonomous systems, and brain-computer interfaces.
... These neurons measure the time to travel of a visual stimulus between two visual locations. TDE neurons can be realised with specialised circuits, such as Milde et al.'s circuit [21] based on the Differential Pair Integrator (DPI) [22] which encodes time to travel in the firing rate of output spike bursts. Due to the absence of such a circuit on the Dynap-SE processor, we implement a delay chain to encode time to travel for speed detection. ...
... This condition is met when the time difference between two consecutive input spikes aligns with the time it takes a spike to propagate along the delay chain. Consequently, the Encoding (TDE) unit [20][21][22]. The input neuron disinhibits the output neuron and triggers a delay chain. ...
Preprint
Full-text available
Recent advances in computer vision and deep learning have led to a surge of interest in the field of AI-generated art, including digital image creation and robot-assisted painting. Traditional painting machines rely on static images and offline processing to incorporate visual feedback into their painting process. However, this approach does not consider the dynamic nature of painting and fails to decompose complex overlapping patterns into individual strokes. As an alternative to frame-based RGB cameras, neuromorphic cameras capture changes in light intensity within a scene via asynchronous event streams, promising to overcome some of the inherent limitations of traditional computer vision techniques. In this project, a robotic system for physical painting is presented which utilizes event-based visual input from a Dynamic Vision Sensor (DVS) camera. To take advantage of the camera's ultra-low latency and sparse encoding, the proposed system also employs event-based information processing, implemented with spiking neural networks on the neuromorphic DynapSE-1 processor. The robotic system receives DVS sensory data which represents the trajectory of a brush stroke and computes the required joint velocities to recreate the stroke with a 6-DOF robotic arm in a closed-loop manner. The controller additionally integrates tactile feedback from a force-torque sensor to dynamically adjust the end-effector’s distance towards the canvas depending on the brush’s deformation. Within the scope of the project, it was further demonstrated how speed information about a perceived brush stroke can be extracted from DVS data. The system was tested in a real-world setting and successfully generated a collection of physical brush strokes. The proposed network is a first step towards a fully spiking robotic controller with the ability to seamlessly incorporate event-based sensory feedback, providing ultra-low latency responsiveness. Beyond its utility in robot-assisted painting, the developed network is applicable to any robotic task requiring real-time adaptive control.
... Comprehensive reviews published in recent years have explored various facets of neuromorphic computing, covering device physics [5][6][7], circuit design [8,9], and network integration [10,11]. Training neuromorphic systems, a form of physical learning, requires modifying physical elements to produce the desired computational outcomes. ...
... It is observed that the nature of the steady state, as described in Eq. (9), is dictated by the ratios of viscous resistances, namely R1/R3 and R2/R4. To illustrate the system's behavior, our analysis is delineated into two distinct scenarios: the first involves identical ratios. ...
Preprint
Artificial neural networks (ANNs), which are inspired by the brain, are a central pillar in the ongoing breakthrough in artificial intelligence. In recent years, researchers have examined mechanical implementations of ANNs, denoted as Physical Neural Networks (PNNs). PNNs offer the opportunity to view common materials and physical phenomena as networks, and to associate computational power with them. In this work, we incorporated mechanical bistability into PNNs, enabling memory and a direct link between computation and physical action. To achieve this, we consider an interconnected network of bistable liquid-filled chambers. We first map all possible equilibrium configurations or steady states, and then examine their stability. Building on these maps, both global and local algorithms for training multistable PNNs are implemented. These algorithms enable us to systematically examine the network's capability to achieve stable output states and thus the network's ability to perform computational tasks. By incorporating PNNs and multistability, we can design structures that mechanically perform tasks typically associated with electronic neural networks, while directly obtaining physical actuation. The insights gained from our study pave the way for the implementation of intelligent structures in smart tech, metamaterials, medical devices, soft robotics, and other fields.
... In the biological synapse, chemical reactions allow the pre-synaptic neuron to transmit electrical signals across the synapse, releasing neurotransmitters that give rise to a flow of ionic current into or out of the post-synaptic neuron. Excitatory or inhibitory postsynaptic currents (EPSCs or IPSCs, respectively) have temporal dynamics with a characteristic time course that can last up to several hundreds of milliseconds [1]. Silicon synapses translate pre-synaptic voltage pulses (spikes) into a post-synaptic current, which is then integrated by the membrane of the target neuron, with a gain typically referred to as the synaptic weight. ...
... This section discusses the synaptic circuits [1] along with a transient analysis of four key circuit parameters, namely the presynaptic voltage (Vpre_syn), presynaptic current (Iw), synaptic voltage (Vsyn), and postsynaptic current (Isyn). The circuits have been designed and analyzed with a maximum synaptic current of 30 nA and a 10 ms input spike duration, for fair comparison (a numerical sketch of this synapse model follows the abstract below). ...
... Furthermore, neuronal AND and OR operators are simulated in Cadence Virtuoso (TSMC 180 nm) using the hardware neuron circuit presented in [16] and the synapse circuit presented in [46]. N_A, N_B, and N_C are each set to eight. ...
Article
Spiking Neural Networks (SNNs), together with neuromorphic computing, have the advantage of low power dissipation compared to traditional von Neumann architectures. Most studies related to SNNs have been influenced by the major advances in artificial neural networks (ANN) and deep learning (DL) over the last two decades. However, the deterministic floating-point operations that underpin the success of deep-learning-based solutions are vastly different from the information-processing mechanism of a biological brain, whose intelligence and cognition are acquired through probabilistic operations. In this work a novel probabilistic computational approach suitable for SNNs is presented. The realization of Boolean logic was the first major breakthrough in the artificial-neuron paradigm. Here, a single spiking neuron is modeled as a probabilistic Boolean operator, and a stochastic time-to-first-spike (TTFS) encoding scheme is adopted. A common framework for realizing several Boolean operators is presented. A physical variable q, related to the amplitude of the post-synaptic potential, is shown to be a key variable for controlling the probability of a particular Boolean operation, and a novel relationship between q and the probability of firing a logic HIGH is established. The present framework of implementing Boolean operations with stochastic TTFS encoding is shown to improve power efficiency compared to traditional rate-coding-based approaches, and should be useful for readdressing problems in SNNs by incorporating probabilistic algorithms efficiently in a neuromorphic platform.
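The silicon-synapse behaviour quoted in the snippets above, a post-synaptic current that jumps on each input spike and then decays exponentially, can be sketched numerically. Below is a minimal Python model; the 30 nA ceiling echoes the value quoted in the snippet, while the time constant, step size, and spike timing are illustrative assumptions.

```python
import numpy as np

# Minimal jump-and-decay model of a silicon synapse: each pre-synaptic
# spike increments the post-synaptic current I_syn, which then decays
# exponentially with time constant TAU. The 30 nA ceiling echoes the
# snippet above; TAU, DT, and the spike time are illustrative.

TAU = 20e-3        # synaptic time constant (s), assumed
I_JUMP = 30e-9     # per-spike current increment (A), assumed
I_MAX = 30e-9      # saturation level quoted in the snippet (A)
DT = 1e-4          # simulation step (s)

t = np.arange(0.0, 0.2, DT)
spike = np.zeros_like(t)
spike[int(0.05 / DT)] = 1.0        # single input spike at t = 50 ms

i_syn = np.zeros_like(t)
for k in range(1, len(t)):
    decay = -i_syn[k - 1] / TAU * DT
    i_syn[k] = min(i_syn[k - 1] + decay + I_JUMP * spike[k], I_MAX)

print(f"peak EPSC: {i_syn.max() * 1e9:.1f} nA")                      # -> 30.0 nA
print(f"EPSC one tau later: {i_syn[int(0.07 / DT)] * 1e9:.1f} nA")   # ~11 nA
```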
... For this purpose, a Complementary Metal Oxide Semiconductor (CMOS) circuit could be developed to decode tactile stimuli. Implementing this would be straightforward, given the documented existence of CMOS equivalents for all the constituent building blocks [21][22][23][24]. In this scenario, the integrated circuits could be embedded in robots or prosthetics, enhancing the ability of artificial agents to discern textures, alongside other sensory stimuli such as vision and audio [25], while maintaining extremely low power consumption and minimal latency [26]. ...
Preprint
Neural spikes can encode a rich set of information, ranging from the perceived intensity of light sources to the likelihood associated with decisions made in the cortex. Among these capabilities, previous studies demonstrated that spikes can also simultaneously encode multiple frequencies in their activity, such as those generated by skin vibrations during texture scanning. However, the mechanism responsible for decoding spikes containing multiple frequencies is yet to be uncovered. In this paper, we introduce a novel spiking neural network model tailored for frequency decomposition of spike trains. Our model mimics neural microcircuits hypothesized in the somatosensory cortex, making it a biologically plausible candidate for decoding spike trains observed in tactile peripheral nerves. We showcase the ability of simple neurons and synapses to replicate the functionality of a phase-locked loop (PLL) and delve into the emergent properties when multiple spiking phase-locked loops (sPLLs) interact with diverse inputs. Furthermore, we demonstrate how these sPLLs can decode textures by leveraging the spectral features of spike trains generated in peripheral nerves. By harnessing our model's frequency decomposition capabilities, we achieve significant performance enhancements over state-of-the-art approaches on a Multifrequency Spike Train (MST) dataset. Our findings underscore the potential of sPLLs in elucidating the mechanisms behind texture decoding in the brain, while also showcasing their potential to outperform conventional SNNs in handling spike trains with multiple frequencies. We believe this study sheds light on the neuronal mechanisms behind texture decoding, while presenting a practical framework for augmenting the capabilities of artificial neural networks in intricate pattern recognition tasks.
... The neuron circuits in the DYNAP-SE implement a model equivalent to the Adaptive Exponential Integrate-and-Fire (AdExp-I&F) [5], [50], whose parameters can be configured to behave like LIF neurons. Synapses and biophysically realistic synapse dynamics are implemented using the current-mode Differential Pair Integrator (DPI) log-domain filter [51], which can be configured to give rise to four possible synapse types: AMPA (fast, excitatory), NMDA (slow, excitatory), GABA-B (subtractive inhibitory) and GABA-A (shunting inhibitory). ...
Preprint
The need to process at the edge the increasing amount of data produced by multitudes of sensors has led to a demand for more power-efficient computational systems, prompting the exploration of alternative computing paradigms and technologies. Neuromorphic engineering is a promising approach that can address this need by developing electronic systems that faithfully emulate the computational properties of animal brains. In particular, the hippocampus stands out as one of the most relevant brain regions for implementing auto-associative memories capable of learning large amounts of information quickly and recalling it efficiently. In this work, we present a computational spike-based memory model inspired by the hippocampus that takes advantage of the features of analog electronic circuits: energy efficiency, compactness, and real-time operation. This model can learn memories, recall them from a partial fragment, and forget. It has been implemented as a Spiking Neural Network directly on a mixed-signal neuromorphic chip. We describe the details of the hardware implementation and demonstrate its operation via a series of benchmark experiments, showing how this research prototype paves the way for the development of future robust and low-power mixed-signal neuromorphic processing systems.
... It uses forward-Euler updates to predict the time-dependent dynamics and solves the characteristic circuit transfer functions in time. Specifically, a 'DynapSim' neuron solves the silicon neuron [16] and silicon synapse [17] circuit equations, making use of assumptions and simplifications from [18]; a schematic Euler step of this kind is sketched after the abstract below. Further details of the application and implementation can be found in [15]. ...
Article
Mixed-signal neuromorphic processors provide extremely low-power operation for edge inference workloads, taking advantage of sparse asynchronous computation within spiking neural networks (SNNs). However, deploying robust applications to these devices is complicated by limited controllability over analog hardware parameters, as well as unintended parameter and dynamical variations of analog circuits due to fabrication non-idealities. Here we demonstrate a novel methodology for offline training and deployment of SNNs to the mixed-signal neuromorphic processor DYNAP-SE2. Our methodology applies gradient-based training to a differentiable simulation of the mixed-signal device, coupled with an unsupervised weight quantization method to optimize the network’s parameters. Parameter noise injection during training provides robustness to the effects of quantization and device mismatch, making the method a promising candidate for real-world applications under hardware constraints and non-idealities. This work extends Rockpool, an open-source deep-learning library for SNNs, with support for accurate simulation of mixed-signal SNN dynamics. Our approach simplifies the development and deployment process for the neuromorphic community, making mixed-signal neuromorphic processors more accessible to researchers and developers.
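To make the snippet's forward-Euler description concrete, here is a schematic Python step for a simplified first-order silicon-synapse equation, tau * dI/dt = -I + I_in. This is a sketch under assumed constants and a simplified DPI low-pass form, not the actual Rockpool/DynapSim implementation.

```python
# Schematic forward-Euler update in the style described in the snippet
# above: the continuous circuit equation tau * dI/dt = -I + I_in
# (a simplified DPI low-pass form) stepped in discrete time.
# Constants are placeholders, not Rockpool/DynapSim internals.

TAU_SYN = 10e-3   # synapse time constant (s), assumed
DT = 1e-4         # integration step (s), assumed

def euler_step(i_syn: float, i_in: float) -> float:
    """One forward-Euler step of tau * dI/dt = -I + I_in."""
    return i_syn + DT / TAU_SYN * (i_in - i_syn)

i = 0.0
for step in range(1000):                 # 100 ms of simulated time
    i_in = 1e-9 if step < 500 else 0.0   # 1 nA drive for the first 50 ms
    i = euler_step(i, i_in)
print(f"current after 100 ms: {i:.2e} A")
```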
... Furthermore, this approach fits particularly well with the intended use of SHIP. Analytically solvable models (the leaky integrate-and-fire (LIF) neuron, the nth-order leaky synapse with n ≥ 0) can be used to mimic artificial neuron and synapse components with good approximation (see, e.g., Bartolozzi and Indiveri, 2007; Chicca et al., 2014; Brivio et al., 2019; Yang et al., 2020; Fang et al., 2022b). We anticipate that we will also explore an example using a non-solved ODE system, the Izhikevich model (Izhikevich, 2003), which is implemented by way of the forward-Euler approach (see Section 3.2.3); a standard Euler sketch of this model follows the abstract below. ...
Article
Investigations in the field of spiking neural networks (SNNs) encompass diverse, yet overlapping, scientific disciplines. Examples range from purely neuroscientific investigations, through research on computational aspects of neuroscience, to application-oriented studies aiming to improve SNN performance or to develop artificial hardware counterparts. However, the simulation of SNNs is a complex task that cannot be adequately addressed with a single platform applicable to all scenarios. The optimization of a simulation environment to meet specific metrics often entails compromises in other aspects. This computational challenge has led to an apparent dichotomy of approaches, with model-driven algorithms dedicated to the detailed simulation of biological networks, and data-driven algorithms designed for efficient processing of large input datasets. Nevertheless, material scientists, device physicists, and neuromorphic engineers who develop new technologies for spiking neuromorphic hardware solutions would benefit from a simulation environment that borrows aspects from both approaches, thus facilitating the modeling, analysis, and training of prospective SNN systems. This manuscript explores the numerical challenges deriving from the simulation of spiking neural networks, and introduces SHIP, Spiking (neural network) Hardware In PyTorch, a numerical tool that supports the investigation and/or validation of materials, devices, and small circuit blocks within SNN architectures. SHIP facilitates the algorithmic definition of the models for the components of a network, the monitoring of states and outputs of the modeled systems, and the training of the synaptic weights of the network, by way of user-defined unsupervised learning rules or supervised training techniques derived from conventional machine learning. SHIP offers a valuable tool for researchers and developers in the field of hardware-based spiking neural networks, enabling efficient simulation and validation of novel technologies.
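The Izhikevich model referenced in the snippet above is well documented (Izhikevich, 2003), so a forward-Euler sketch is straightforward. The regular-spiking parameters below come from the original paper; the step size and drive current are illustrative choices, not values used by SHIP.

```python
# Forward-Euler sketch of the Izhikevich (2003) neuron mentioned in
# the snippet above, with standard regular-spiking parameters.
# The 0.5 ms step and the constant drive are illustrative choices.

a, b, c, d = 0.02, 0.2, -65.0, 8.0   # regular-spiking parameters
DT = 0.5                              # integration step, ms
v, u = -65.0, b * -65.0               # initial membrane and recovery state
spike_times = []

for step in range(2000):              # 1 s of simulated time
    I = 10.0                          # constant input current (model units)
    v += DT * (0.04 * v * v + 5.0 * v + 140.0 - u + I)
    u += DT * a * (b * v - u)
    if v >= 30.0:                     # spike cutoff and reset
        spike_times.append(step * DT)
        v, u = c, u + d

print(f"{len(spike_times)} spikes in 1 s of simulated time")
```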
Article
This research demonstrates an OTS-based temperature-sensing afferent neuron that features low power consumption and a compact circuit structure.
Chapter
Most practical applications of artificial neural networks are based on a computational model involving the propagation of continuous variables from one processing unit to the next. In recent years, data from neurobiological experiments have made it increasingly clear that biological neural networks, which communicate through pulses, use the timing of the pulses to transmit information and perform computation. This realization has stimulated significant research on pulsed neural networks, including theoretical analyses and model development, neurobiological modeling, and hardware implementation. This book presents the complete spectrum of current research in pulsed neural networks and includes the most important work from many of the key scientists in the field. Terrence J. Sejnowski's foreword, "Neural Pulse Coding," presents an overview of the topic. The first half of the book consists of longer tutorial articles spanning neurobiology, theory, algorithms, and hardware. The second half contains a larger number of shorter research chapters that present more advanced concepts. The contributors use consistent notation and terminology throughout the book. Contributors: Peter S. Burge, Stephen R. Deiss, Rodney J. Douglas, John G. Elias, Wulfram Gerstner, Alister Hamilton, David Horn, Axel Jahnke, Richard Kempter, Wolfgang Maass, Alessandro Mortara, Alan F. Murray, David P. M. Northmore, Irit Opher, Kostas A. Papathanasiou, Michael Recce, Barry J. P. Rising, Ulrich Roth, Tim Schönauer, Terrence J. Sejnowski, John Shawe-Taylor, Max R. van Daalen, J. Leo van Hemmen, Philippe Venier, Hermann Wagner, Adrian M. Whatley, Anthony M. Zador. Published under the Bradford Books imprint.
Book
1 Introduction: the kinetic interpretation of ion channel gating. The remarkably successful quantitative description of the action potential introduced by Hodgkin and Huxley (1952) is still widely used more than 40 years after its introduction. The classical Hodgkin-Huxley description was not only accurate, it was also readily extensible ...
Chapter
The small number of input-output connections available with standard chip-packaging technology, and the small number of routing layers available in VLSI technology, place severe limitations on the degree of intra- and interchip connectivity that can be realized in multichip neuromorphic systems. Inspired by the success of time-division multiplexing in communications [16] and computer networks [19], many researchers have adopted multiplexing to solve the connectivity problem [12, 67, 17]. Multiplexing is an effective way of leveraging the five-order-of-magnitude difference in bandwidth between a neuron (hundreds of Hz) and a digital bus (tens of megahertz), enabling us to replace dedicated point-to-point connections among thousands of neurons with a handful of high-speed connections and thousands of switches (transistors). This approach pays off in VLSI technology because transistors occupy far less area than wires, and are becoming relatively more and more compact as the fabrication process scales down to deep submicron feature sizes.
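The bandwidth argument above reduces to a one-line calculation. The sketch below uses round numbers in the ranges the chapter quotes (tens of megahertz for the bus, hundreds of hertz for a neuron); the exact figures are illustrative assumptions.

```python
# Back-of-envelope capacity of a shared address-event bus, using
# round numbers in the ranges quoted above; exact figures are
# illustrative assumptions, not values from the cited chapter.

bus_events_per_s = 20e6     # digital bus: 20 MHz event rate
neuron_rate_hz = 200        # typical neuron: 200 spikes/s
neurons_per_bus = bus_events_per_s / neuron_rate_hz
print(f"one bus can time-multiplex ~{neurons_per_bus:.0f} neurons")
# -> ~100000 neurons, i.e. the five orders of magnitude in the text
```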
Article
In a passive dendritic tree, inhibitory synaptic inputs activating ionic conductances with an equilibrium potential near the resting potential can effectively veto excitatory inputs. Analog interactions of this type can be very powerful if the inputs are appropriately timed and occur at certain locations. We examine with computer simulations the precise conditions required for strong and specific interactions in the case of a delta-like ganglion cell of the cat retina. We find some critical conditions to be that (i) the peak inhibitory conductance changes must be sufficiently large (approximately 50 nS or more), (ii) inhibition must lie on the direct path from the location of excitation to the soma, and (iii) the time courses of excitation and inhibition must substantially overlap. Analog AND-NOT operations realized by satisfying these conditions may underlie direction selectivity in ganglion cells.
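The veto conditions above can be illustrated with a deliberately simplified single-compartment sketch; the cited study uses detailed multi-compartment dendritic simulations, and the capacitance, conductances, and pulse timing below are illustrative assumptions. Because the inhibitory reversal potential sits at rest, the inhibition is silent on its own but divides down the excitatory depolarization when the two overlap in time.

```python
# Single-compartment sketch of the shunting-inhibition "AND-NOT" veto.
# Inhibition reverses at the resting potential (E_I = E_L), so alone it
# moves the membrane nowhere; overlapping an excitatory input, it
# shunts the depolarization. All values are illustrative assumptions.

C, G_L = 100e-12, 5e-9                 # capacitance (F), leak (S)
E_L, E_E, E_I = -70e-3, 0.0, -70e-3    # E_I at rest -> pure shunting
DT = 1e-5                              # Euler step (s)

def peak_depolarization(g_e_peak, g_i_peak):
    """Peak membrane voltage for overlapping 10 ms conductance pulses."""
    v, peak = E_L, E_L
    for k in range(int(0.05 / DT)):
        t = k * DT
        g_e = g_e_peak if 0.01 < t < 0.02 else 0.0   # excitatory pulse
        g_i = g_i_peak if 0.01 < t < 0.02 else 0.0   # coincident shunt
        i_m = G_L * (E_L - v) + g_e * (E_E - v) + g_i * (E_I - v)
        v += DT * i_m / C
        peak = max(peak, v)
    return peak

print(f"excitation alone : {peak_depolarization(2e-9, 0.0) * 1e3:.1f} mV")
print(f"with 50 nS shunt : {peak_depolarization(2e-9, 50e-9) * 1e3:.1f} mV")
```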
Book
1. A Neural Processor for Maze Solving.
2. Resistive Fuses: Analog Hardware for Detecting Discontinuities in Early Vision.
3. CMOS Integration of Herault-Jutten Cells for Separation of Sources.
4. Circuit Models of Sensory Transduction in the Cochlea.
5. Issues in Analog VLSI and MOS Techniques for Neural Computing.
6. Design and Fabrication of VLSI Components for a General Purpose Analog Neural Computer.
7. A Chip that Focuses an Image on Itself.
8. A Foveated Retina-Like Sensor Using CCD Technology.
9. Cooperative Stereo Matching Using Static and Dynamic Image Features.
10. Adaptive Retina.
Book
Neural network research often builds on the fiction that neurons are simple linear threshold units, completely neglecting the highly dynamic and complex nature of synapses, dendrites, and voltage-dependent ionic currents. Biophysics of Computation: Information Processing in Single Neurons challenges this notion, using richly detailed experimental and theoretical findings from cellular biophysics to explain the repertoire of computational functions available to single neurons. The author shows how individual nerve cells can multiply, integrate, or delay synaptic inputs and how information can be encoded in the voltage across the membrane, in the intracellular calcium concentration, or in the timing of individual spikes. Key topics covered include the linear cable equation; cable theory as applied to passive dendritic trees and dendritic spines; chemical and electrical synapses and how to treat them from a computational point of view; nonlinear interactions of synaptic input in passive and active dendritic trees; the Hodgkin-Huxley model of action potential generation and propagation; phase space analysis; linking stochastic ionic channels to membrane-dependent currents; calcium and potassium currents and their role in information processing; the role of diffusion, buffering and binding of calcium, and other messenger systems in information processing and storage; short- and long-term models of synaptic plasticity; simplified models of single cells; stochastic aspects of neuronal firing; the nature of the neuronal code; and unconventional models of sub-cellular computation. Biophysics of Computation: Information Processing in Single Neurons serves as an ideal text for advanced undergraduate and graduate courses in cellular biophysics, computational neuroscience, and neural networks, and will appeal to students and professionals in neuroscience, electrical and computer engineering, and physics.
Conference Paper
We present a simple silicon circuit for modelling voltage-dependent ion channels found within neural cells, capturing both the gating particle's sigmoidal activation (or inactivation) and the bell-shaped time constant. In its simplest form, our ion-channel analog consists of two MOS transistors and a unity-gain inverter. We present equations describing its nonlinear dynamics and measurements from a chip fabricated in a 0.25 μm CMOS process. The channel analog's simplicity allows tens of thousands to be built on a single chip, facilitating the implementation of biologically realistic models of neural computation.
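For reference, the two signatures the circuit captures follow directly from the standard Hodgkin-Huxley gating-particle formalism, sketched below in Python. The exponential rate functions and constants are generic textbook assumptions, not parameters of the cited chip.

```python
import numpy as np

# Sketch of the two signatures the circuit reproduces: a sigmoidal
# steady-state activation x_inf = alpha/(alpha+beta) and a bell-shaped
# voltage dependence of the time constant tau = 1/(alpha+beta), using
# standard exponential rate functions. All constants are generic
# assumptions, not values from the cited chip.

V_HALF, K = -40.0, 8.0    # mV; sigmoid midpoint and slope (assumed)
A0, TAU_MIN = 0.5, 0.1    # base rate (1/ms) and tau floor (ms), assumed

def alpha(v):  # opening rate, grows with depolarization
    return A0 * np.exp((v - V_HALF) / (2 * K))

def beta(v):   # closing rate, grows with hyperpolarization
    return A0 * np.exp(-(v - V_HALF) / (2 * K))

v = np.linspace(-80.0, 0.0, 5)
x_inf = alpha(v) / (alpha(v) + beta(v))      # sigmoidal in v
tau = 1.0 / (alpha(v) + beta(v)) + TAU_MIN   # bell-shaped, peak at V_HALF
for vi, xi, ti in zip(v, x_inf, tau):
    print(f"V={vi:6.1f} mV  x_inf={xi:.2f}  tau={ti:.2f} ms")
```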