Reconfigurable 2D-ferroelectric platform
for neuromorphic computing
Cite as: Appl. Phys. Rev. 10, 011408 (2023); doi: 10.1063/5.0131838
Submitted: 25 October 2022; Accepted: 6 January 2023; Published Online: 6 February 2023
Yongbiao Zhai,1 Peng Xie,2 Jiahui Hu,1 Xue Chen,2 Zihao Feng,3 Ziyu Lv,1 Guanglong Ding,3 Kui Zhou,3 Ye Zhou,3 and Su-Ting Han1,a)
AFFILIATIONS
1College of Electronics and Information Engineering, Shenzhen University, Shenzhen 518060, People's Republic of China
2Institute of Microscale Optoelectronics, Shenzhen University, Shenzhen 518060, People's Republic of China
3Institute for Advanced Study, Shenzhen University, Shenzhen 518060, People's Republic of China
a)Author to whom correspondence should be addressed: sutinghan@szu.edu.cn
ABSTRACT
To meet the requirements of data-intensive computing in the data-explosive era, brain-inspired neuromorphic computing has been widely investigated over the last decade. However, incompatible preparation processes severely hinder the cointegration of synaptic and neuronal devices on a single chip, which limits energy efficiency and scalability. Developing a reconfigurable device that provides both synaptic and neuronal functions on a single chip with homotypic materials and structures is therefore highly desired. Based on the room-temperature out-of-plane and in-plane intercorrelated polarization effect of 2D α-In2Se3, we designed a reconfigurable hardware platform that can switch from continuously modulated conductance for emulating synapses to spiking behavior for mimicking neurons. More crucially, we demonstrate the application of such proof-of-concept reconfigurable 2D ferroelectric devices in a spiking neural network with an accuracy of 95.8% and in a self-adaptive grow-when-required network with an accuracy of 85% while dynamically shrinking its nodes by 72%, which exhibits more powerful learning ability and efficiency than a static neural network.
Published under an exclusive license by AIP Publishing. https://doi.org/10.1063/5.0131838
INTRODUCTION
Inspired by the human brain, a tremendously dense neural network of roughly 10^11 neurons connected by 10^15 synapses with parallel information processing, memorizing, and learning capability [Fig. 1(a)], software-based neural networks have achieved huge breakthroughs in artificial intelligence and the internet-of-things over the past few decades.1–5 However, since such neural networks are still implemented on the von Neumann architecture, the large amount of data shuffled between the central processing unit and main memory during training and inference induces latency and low power efficiency.6 To approach the capability of the human brain, which operates at petaflop scale with a power consumption of less than 20 W, implementing neural networks in hardware with constrained power and chip area is highly desired.
Hardware-implemented neural networks with the anticipated level of computational complexity rely strongly on neuromorphic devices with various switching characteristics.7–10 For example, a spiking neural network [SNN, Fig. 1(b)] consists of spiking neurons with a volatile nature that encode analog input information into spike trains and interconnecting synapses with a nonvolatile property that propagate the spike trains in either an excitatory or an inhibitory way.11,12 Reservoir computing combines a volatile reservoir that maps input information into high-dimensional features with a nonvolatile synapse-based readout layer for classification.13–15 However, the incompatibility of the fabrication processes usually hinders combining different types of devices on a monolithic circuit.16–18 Reconfigurable devices, which provide both neuronal and synaptic functions in a single device with homotypic materials and structures, enable compact and energy-efficient neuromorphic computing on a single chip.19–22
Constructing reconfigurable neuromorphic devices poses challenges for traditional complementary metal–oxide–semiconductor (CMOS) technology owing to the limited tunability of the fundamental silicon transistor. Recently, ferroelectric field-effect transistors (Fe-FETs) with nondestructive operation, nonvolatility, and high-density integration have attracted extensive interest as a promising building block for high-performance analog computing.23–26 In a Fe-FET, a ferroelectric material is utilized as the gate insulator, and the conductance of the semiconductor channel is modulated by the polarization switching of this ferroelectric dielectric layer under an external electric field. The short retention time, which originates from the gate leakage current and depolarization field-induced charge trapping, is the main obstacle to the commercialization of Fe-FETs. Additionally, highly integrated hardware-implemented neuromorphic computing demands the design of computing elements toward miniaturization.
Particularly, 2D layered α-In2Se3 with atomic thickness possesses the potential for continuous scaling and exhibits robust, long-term ferroelectric polarization even at the atomic scale.27–30 Recently, ferroelectric semiconductor field-effect transistors (FeS-FETs) employing α-In2Se3 as the channel material have been reported to show large memory windows of 70 V, a high on/off ratio of 10^8, and a fast write speed of 40 ns.31,32 Moreover, α-In2Se3 exhibits a unique in-plane (IP) and out-of-plane (OOP) intercorrelated polarization effect due to centrosymmetry breaking, and the interlocked IP and OOP polarizations can be simultaneously switched by an electric field applied along the IP or OOP direction, which provides a degree of freedom for the flexible design of α-In2Se3 reconfigurable devices.27,33–35

FIG. 1. The reconfigurable device concept. (a) The human brain is composed of 10^11 neurons and 10^15 synapses and possesses parallel information processing, memorizing, and learning capability. (b) Schematic diagram of the spiking neural network framework. (c) A reconfigurable device can be transformed between a nonvolatile synapse and a volatile neuron for a highly integrated all-hardware-driven neuromorphic computing system by applying/removing gate pulses. (d) Schematic of the dynamic grow-when-required network framework. When a category is added to or removed from the input information flow, the network spontaneously grows or shrinks its node number to adapt to this change.
Although the α-In2Se3 FeS-FET has been applied to neuromorphic computing, completely reconfigurable neuromorphic functions have not been achieved yet. In this report, we present a reconfigurable FeS-FET employing 2D layered α-In2Se3 as the channel material. Based on the room-temperature OOP and IP intercorrelated polarization effect, a single post-fabricated device can be switched by a single pulse operation from continuously modulated conductance with nonvolatility for emulating a synapse to spiking behavior with volatility for mimicking a neuron [Fig. 1(c)]. Based on the neuron and synapse characteristics obtained from these reconfigurable dynamic elements, a proof-of-concept monolithic neural network consisting of six reconfigurable α-In2Se3 FeS-FETs, with a 2 × 2 array of nonvolatile-operated synapses and two volatile-operated neurons, is experimentally demonstrated with spatiotemporal dynamic behavior, verifying the feasibility of interface cointegration. In addition, to further validate the reconfigurability of α-In2Se3 FeS-FETs for artificial-intelligence applications, a spiking neural network for pattern recognition with 95.8% face recognition accuracy on the Yale Face dataset and a grow-when-required network [GWR, a kind of self-adaptive dynamic neural network, Fig. 1(d)] with an accuracy of 85% while dynamically shrinking its nodes by 72% were constructed by simulation based on the experimental data from our reconfigurable FeS-FET. We observed that such networks provide more powerful learning ability and efficiency than their static counterparts by dynamically creating or removing network nodes.
RESULTS AND DISCUSSION
The structure of the reconfigurable 2D α-In2Se3 FeS-FET device is sketched in Fig. 2(a). We fabricated the device with an α-In2Se3 semiconducting channel and Cr/Au source/drain electrodes by mechanically exfoliating layered α-In2Se3 onto a p-Si substrate with a 100 nm SiO2 capping layer, followed by photolithography, 5 nm Cr/50 nm Au deposition, and lift-off for source–drain electrode fabrication. The channel length and width of the reconfigurable device are 2 and 2.5 μm, respectively. The false-color atomic force microscopy (AFM) image is shown in Fig. 2(b). The detailed thickness and composition of the α-In2Se3 (35 nm) were obtained by aberration-corrected cross-sectional high-resolution transmission electron microscopy (HRTEM) with corresponding energy dispersive x-ray spectroscopy (EDS) element mapping [Fig. 2(c)]. Ferroelectric α-In2Se3 possesses two crystal structures according to its layer stacking (the 2H phase with a hexagonal structure and the 3R phase with a rhombohedral structure), both of which exhibit IP and OOP ferroelectricity owing to their non-centrosymmetry.36,37 X-ray diffraction and Raman spectroscopy were employed to confirm that the α-In2Se3 crystal used in this work is the 2H phase [Figs. 2(d) and 2(e)].

FIG. 2. The characterization of 2D In2Se3 ferroelectrics and device structure. (a) Schematic diagram of the as-prepared FeS-FET device. (b) False-color AFM image of the 2D α-In2Se3 FeS-FET device. (c) Aberration-corrected high-resolution transmission electron microscope (TEM) cross-sectional image and corresponding energy dispersive x-ray spectroscopy (EDS) element mapping. (d) X-ray diffraction pattern of the 2D α-In2Se3. (e) Raman spectrum of the 2D In2Se3. (f) OOP phase (left) and corresponding IP phase (right) images of 32 nm α-In2Se3. (g) and (h) PFM amplitude and PFM phase hysteresis loops under external electric field based on a metal-semiconductor-metal (MSM) structure. (i) and (j) PFM amplitude and PFM phase hysteresis loops under external electric field based on a metal-oxide-semiconductor (MOS) structure.
To investigate the intercorrelated ferroelectric polarization, α-In2Se3 was transferred onto an Au-coated Si substrate for piezoresponse force microscopy (PFM). After programming two square patterns with opposite tip voltages (−8 and +8 V) on the α-In2Se3/Au/Si, the OOP and IP ferroelectricity of the α-In2Se3 were simultaneously collected by PFM with a driving frequency of 30 kHz and a driving voltage of 8 V. As shown in Fig. 2(f), the OOP phase change of α-In2Se3 is synchronized with the variation of the IP phase, suggesting a close intercorrelation between the OOP and IP polarizations. The butterfly-like PFM amplitude hysteresis loop [Fig. 2(g)] and the sharp phase change (180°) in the PFM phase hysteresis loop [Fig. 2(h)] demonstrate clear ferroelectric polarization switching under an externally applied voltage.27,35 However, free electrons in the α-In2Se3 semiconductor may partially shield the electric field or prevent it from penetrating into the body of the α-In2Se3, resulting in unpredictable ferroelectric polarization switching behavior in the metal–oxide–semiconductor (MOS) structure. After depositing a 10 nm Al2O3 layer on the α-In2Se3, PFM amplitude and phase measurements were carried out. The similar ferroelectric butterfly-like amplitude hysteresis loop [Fig. 2(i)] and phase hysteresis loop [Fig. 2(j)] suggest that switchable polarization still exists in the MOS structure. Thus, it is feasible to employ α-In2Se3 as the ferroelectric channel for FeS-FET devices.
Electrical reconfiguration of the α-In2Se3 FeS-FET is summarized in Fig. 1(c). The reconfigurable device can perform two functions: (i) FET operation for emulating synaptic behavior, where the conductance state of the device, with long-term retention capability, is well controlled by applying positive/negative pulses to the gate electrode to modulate the OOP polarization; and (ii) lateral memristor operation for mimicking a neuron with integrate-and-fire properties, where switching between the high resistance state (HRS) and low resistance state (LRS) is obtained by applying a pulse voltage to the drain electrode to modulate the IP polarization, inducing threshold switching and firing characteristics.
First, the FET operation of the device with continuous conductance modulation was investigated. Figure 3(a) shows the transfer characteristics of the as-fabricated α-In2Se3 transistor obtained by applying a gate voltage Vg swept from −40 to 40 V at a fixed source–drain voltage (Vds) of 0.5 V. The clockwise hysteresis loop enlarges with increasing gate voltage sweeping range, which originates from the accumulation of ferroelectric polarization switching along the OOP direction of the channel. The conductance state can be retained for more than 2 × 10^4 s with a high ON/OFF ratio (>10^3), as shown in Fig. 3(b), indicating the stable ferroelectric polarization of α-In2Se3. This is mainly attributed to the semiconducting nature of the 2D α-In2Se3 ferroelectric material, in which the movable charge carriers create a built-in electric field that consolidates the polarization of the ferroelectric dipoles and improves the endurance.31 The capability of the 2D α-In2Se3 FeS-FET to mimic synaptic functions was further validated by defining the gate/drain electrodes as the pre-/post-synaptic terminals, respectively, and taking the source–drain channel current as the post-synaptic current (PSC). The PSC increased after the application of a series of short negative pulses to the gate electrode, demonstrating excitatory synaptic behavior. Moreover, the devices exhibit gradually incremental PSC with increasing amplitude of the Vg pulses, as shown in Fig. 3(c), and the PSC returns to its initial state quickly, representing typical biological short-term plasticity (STP). Likewise, inhibitory behavior was also observed by applying a positive gate pulse, as shown in Fig. S1. In addition, the response of the device to gate pulses with varying pulse width and pulse interval was investigated. The conductance of the device can be modulated by adjusting the width and interval of the gate voltage pulses, thus enabling a large dynamic range. Typically, a pulse with a longer duration gives rise to a larger excitatory post-synaptic current (EPSC) or inhibitory post-synaptic current (IPSC) [Figs. 3(d) and S2]. By shortening the pulse interval from 350 to 50 ms, the drain current gradually fails to relax back to its initial value, representing a transition from short-term plasticity to long-term plasticity, as shown in Fig. S3.
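To make the pulse-interval dependence concrete, the sketch below (Python) models the measured PSC as a pulse-driven conductance that relaxes exponentially between gate pulses. This is a behavioral illustration rather than the authors' model; the decay time constant and per-pulse increment are assumed values, while the 50 and 350 ms intervals echo the experiment.

```python
import numpy as np

# Behavioral sketch of short-term plasticity: each gate pulse increments the
# post-synaptic conductance, which then relaxes exponentially toward its
# resting value before the next pulse. The decay time constant and per-pulse
# increment are illustrative assumptions, not fitted device parameters.
def psc_after_train(n_pulses=5, interval_ms=50.0, dw_per_pulse=1.0, tau_ms=120.0):
    """Residual conductance (above rest) right after an equally spaced pulse train."""
    w = 0.0
    for _ in range(n_pulses):
        w += dw_per_pulse                       # potentiation by one gate pulse
        w *= np.exp(-interval_ms / tau_ms)      # relaxation until the next pulse / readout
    return w

# Short intervals leave a residual conductance (transition toward long-term
# plasticity); long intervals let the device relax back between pulses.
print(f"after 5 pulses, 50 ms interval:  {psc_after_train(interval_ms=50.0):.2f}")
print(f"after 5 pulses, 350 ms interval: {psc_after_train(interval_ms=350.0):.2f}")
```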
Long-term potentiation (LTP) and long-term depression (LTD) are two essential synaptic functions for implementing memory and learning in the human brain. In Fig. 3(e), by applying a gate pulse sequence consisting of 50 consecutive negative pulses (−5 V, 100 ms) and 50 positive pulses (5 V, 100 ms), the device exhibits a repeatable and stable PSC response. During the application of the negative pulse train, the PSC progressively increases, mimicking the LTP process and representing an increasingly strengthened synaptic connection. By contrast, the application of the positive pulse train results in a gradual decrease in PSC, emulating the LTD function with a weakened synaptic connection. The shortest pulse stimulation that can drive the device is 100 μs (Fig. S4), which is comparable to previously reported synaptic transistors and beneficial for faster learning and reasoning. Spike timing-dependent plasticity (STDP), an essential function for synaptic learning, was implemented to reflect the relationship between the change of synaptic weight and the relative timing of the pre-/post-synaptic spikes. By adjusting the pulse timing interval according to the inset of Fig. 3(f), a positive change in synaptic weight was obtained when Δt > 0, while Δt < 0 led to a negative synaptic weight change. Time constants of 19.33 and 19.85 ms for the potentiation and depression responses were estimated through exponential fitting.
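The exponential STDP window quoted above can be written compactly as a weight-update rule; a minimal Python sketch is given below. The time constants (19.33 and 19.85 ms) are the fitted values from the text, whereas the amplitude prefactors a_plus and a_minus are illustrative placeholders, not measured device quantities.

```python
import numpy as np

# Exponential STDP window: positive Δt (pre before post) potentiates,
# negative Δt depresses. Time constants follow the fits quoted in the text;
# the amplitudes a_plus/a_minus are placeholder values.
TAU_POT_MS = 19.33
TAU_DEP_MS = 19.85

def stdp_weight_change(dt_ms, a_plus=1.0, a_minus=1.0):
    dt_ms = np.asarray(dt_ms, dtype=float)
    return np.where(dt_ms > 0,
                    a_plus * np.exp(-dt_ms / TAU_POT_MS),     # potentiation branch
                    -a_minus * np.exp(dt_ms / TAU_DEP_MS))    # depression branch

if __name__ == "__main__":
    for dt in (-40, -10, 10, 40):
        print(f"Δt = {dt:+d} ms -> Δw = {stdp_weight_change(dt):+.3f}")
```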
In addition to the tunable OOP ferroelectric switching characteristics used for synaptic emulation in the FET operation mode, the three-terminal α-In2Se3 device can also be operated as a lateral memristor that exploits the IP ferroelectricity for mimicking neuron behavior. In the lateral memristor operation mode, the source–drain electrodes function as the electrodes interfaced with the α-In2Se3 switching layer, and the gate electrode functions as an additional control terminal. Figure 3(g) shows typical I–V curves of the lateral memristor obtained by applying a dual drain-voltage sweep from 0 to −10 to 0 V at a fixed gate voltage of 40 V. The device initially exhibits an HRS when the magnitude of Vds is lower than 4.4 V. Nevertheless, the device is switched into the low resistance state (LRS) with an ON/OFF ratio of 10^3 when Vds exceeds the threshold voltage (Vth). Moreover, the memristor returns to the HRS during the Vds sweep from −10 back to 0 V. The repeatable unipolar scans reveal that the resistive switching loops trace the previous scan, demonstrating a threshold resistive switching phenomenon with volatile behavior.
FIG. 3. The performance of the reconfigurable synaptic and neuronal device. (a) Transfer characteristic curves of the 2D In2Se3-based FeS-FET device. (b) Retention characteristics of the 2D FeS-FET device with a high LRS/HRS ratio (>10^3). (c) The post-synaptic current under a short pulse gradually increases with increasing pulse amplitude. (d) The post-synaptic current can also be controlled by the pulse width. (e) LTP and LTD characteristics of a synapse obtained by applying 50 consecutive negative Vg pulses (−5 V, 100 ms) and 50 positive Vg pulses (5 V, 100 ms). The reading voltage is Vds = 0.5 V. (f) Synaptic weight change as a function of the time interval (Δt) between pre-/post-synaptic pulses. (g) Five consecutive cycles of unipolar Vds scans on the negative side, indicating a volatile property. (h) Firing threshold voltage as a function of gate pulse amplitude. Resistive switching under trains of pulses with pulse amplitudes of (i) −2 V and (j) −6 V. (k) Typical spiking behavior under consecutive pulses. (l) The spiking probability as a function of pulse amplitude. (m) Schematic diagram of the α-In2Se3 FeS-FET device and the corresponding energy bands under negative or positive gate voltage with a high effective oxide thickness (EOT). When a negative voltage is applied to the gate electrode, the polarization direction is downward and positive polarization bound charges are distributed on the bottom surface; movable electrons are thus induced at the bottom, resulting in an LRS. Conversely, when the polarization is upward, negative bound charges are located on the bottom surface; owing to the depletion of movable charges, the conductance state is the HRS. (n) Energy band diagrams illustrating the working mechanism of the reconfigurable device as a neuron.
The value of Vth can be tuned by controlling the OOP polarization strength of the channel [Fig. 3(h)], which is critical for implementing a reliable neuromorphic system. It is worth noting that the role of the gate tuning here is only to keep the channel (α-In2Se3) in a high-resistance state. We investigated the lateral memristor in detail with the channel (α-In2Se3) in the HRS and in the LRS, respectively. The results show that the obviously volatile nature and threshold firing phenomenon are found only when the channel (α-In2Se3) is in the HRS. Otherwise, Ids flows regardless of Vds when the channel is in the LRS, as shown in Fig. S5.
The volatile nature of the device was further investigated by performing single-pulse measurements with different amplitudes, each followed by a read pulse with a low amplitude of −0.5 V to record the conductance state. When a programming pulse with an amplitude of −2 V and a width of 10 ms was first applied to the drain, the device could not be switched, and the current remained at the ground level [Fig. S6(a)]. In contrast, a single voltage pulse with an amplitude of −6 V switched on the device with a short delay in the current response. No obvious current change was detected during the read operation, indicating that the device relaxes to the low conductance state after removal of the programming voltage [Fig. S6(b)].
The combination of the volatile nature and threshold firing opens a way to implement the neural function of "leaky integrate and fire," ensuring the implementation of basic spiking neuron behavior for analog neuromorphic computing. Therefore, a pulse train characterization with six pulses, an amplitude of −2 V, a width of 10 ms, and an interval of 20 ms was carried out on the device, as illustrated in Fig. 3(i). Because the pulse amplitude of −2 V is smaller in magnitude than Vc (−4.4 V), the drain current (Ids) does not flow from drain to source, and thus charges accumulate in the parasitic capacitor (at the interface between the drain electrode and the α-In2Se3). In this integration process, once the voltage across the capacitor reaches Vc, the device switches from the HRS to the LRS. Therefore, under such a pulse train, the conductance of the device is initially in the HRS, and an abrupt switch from HRS to LRS is triggered by the sixth pulse (Nfire = 6), mimicking the firing behavior of a spiking neuron. By comparison, when the pulse amplitude gradually increases beyond Vc, the number of pulses required to induce firing gradually decreases; the device can even switch directly from HRS to LRS upon the first pulse (Nfire = 1) when the pulse amplitude is −6 V [Fig. 3(j)]. By adjusting the pulse amplitude, width, and interval, an abrupt change in the current (spiking behavior) can be observed in Fig. 3(k), which is a critical function for mimicking a neuron, and a typical spiking probability curve is shown in Fig. 3(l).
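A behavioral leaky integrate-and-fire sketch of this pulse-counting experiment is given below (Python). The 4.4 V threshold magnitude, the 10 ms pulse width, and the 20 ms interval follow the text; the leak time constant and the per-pulse voltage gain are assumptions chosen so that a −2 V train fires near the sixth pulse and a −6 V pulse fires immediately, qualitatively matching Figs. 3(i) and 3(j). It is not a device-physics model.

```python
import numpy as np

# Behavioral leaky integrate-and-fire sketch of the lateral-memristor neuron.
# Each drain pulse deposits charge on the parasitic node; the stored voltage
# leaks between pulses; the device "fires" (HRS -> LRS) when the accumulated
# voltage magnitude reaches |Vc| = 4.4 V. The leak constant and per-pulse
# gain below are illustrative, not measured values.
V_TH = 4.4            # firing threshold magnitude (V), from the text
TAU_LEAK_MS = 70.0    # assumed leak time constant of the parasitic node
GAIN = 0.85           # assumed fraction of the pulse amplitude integrated per pulse

def n_pulses_to_fire(pulse_amplitude_v, n_pulses=6,
                     width_ms=10.0, interval_ms=20.0):
    """Return the pulse index (1-based) at which the neuron fires, or None."""
    v = 0.0
    period_ms = width_ms + interval_ms
    for n in range(1, n_pulses + 1):
        v = v * np.exp(-period_ms / TAU_LEAK_MS)   # leak between pulses
        v += GAIN * abs(pulse_amplitude_v)         # integrate one pulse
        if v >= V_TH:                              # threshold reached: fire
            return n
    return None

for amp in (-2.0, -4.0, -6.0):
    print(f"amplitude {amp:+.1f} V -> fires at pulse {n_pulses_to_fire(amp)}")
```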
To clearly demonstrate the working mechanism of the reconfigurable α-In2Se3 FeS-FET device, detailed energy band diagrams corresponding to synaptic and neuronal operation are depicted in Figs. 3(m) and 3(n), respectively. The nonvolatile or volatile response of the reconfigurable FeS-FET device depends mainly on both the polarization direction and the polarization strength. For nonvolatile synaptic emulation based on FET operation, the vertical OOP polarization plays a critical role in the weight update: the magnitude, width, and number of gate pulses determine the polarization intensity, which in turn affects the induced movable charges and finally leads to the update of the synaptic weight. In contrast, the volatile neuronal operation depends both on the Schottky barrier and on the lateral IP polarization, which directly give rise to the integration and firing processes.
When a negative voltage is applied to the gate, the polarization direction is downward and positive bound charges are distributed on the bottom surface of the α-In2Se3 channel [Fig. 3(m-I)]. The energy band thus bends upward and mobile negative charges (owing to the semiconducting nature) accumulate at the bottom surface, resulting in a high-conductance state, as shown in Fig. 3(m-II). Conversely, in the polarization-up state [Fig. 3(m-III)], when a positive voltage is applied to the gate, negative bound charges are distributed on the bottom surface and the energy band bends downward with mobile charges depleted at the bottom surface, thus inducing a high channel resistance [Fig. 3(m-IV)]. It is worth noting that because the dielectric layer (SiO2) is 100 nm thick, the electric field across the semiconductor is not strong enough to penetrate through the ferroelectric semiconductor, and only the part of the α-In2Se3 channel near the oxide/semiconductor interface switches. Thus, the drain current (Ids) of the single transistor is determined by the bottom surface of the semiconductor.31
The unusual volatile resistive switching observed in the lateral memristor operation can be rationalized by modeling the device as back-to-back Schottky diodes with a built-in electric field pointing from the semiconductor toward the metal.35 Briefly, the hysteresis and volatile nature observed in the I–V curves mainly stem from the Vds-tuned IP polarization and the Schottky barrier. Since the channel is initialized to the high resistance state by applying a positive gate pulse, a large barrier forms in the Au/α-In2Se3/Au diode, as shown in Fig. 3(n-I). When Vds sweeps from 0 V toward negative polarity and its magnitude is smaller than the threshold firing voltage, the polarization direction points to the right and the polarization strength is small. The high barrier blocks the injection and flow of carriers from drain to source, maintaining the HRS [phase (II) in Fig. 3(n)]. In this state, carriers accumulate at the interface between the drain electrode and the α-In2Se3, corresponding to an integration process. When Vds exceeds the threshold voltage, the IP polarization becomes the dominant factor; a parallel orientation and a lower barrier are generated to discharge the accumulated carriers, which switches the device from HRS to LRS, as shown in Fig. 3(n-III). This state can be regarded as the firing process. Finally, in Fig. 3(n-IV), as the drain voltage decreases back to 0, the large barrier is recovered and again blocks carrier flow from drain to source, thus inducing the relaxation back to the HRS.
In a neuromorphic computing system, artificial synapses and neurons work together to realize information processing. In other words, the synapses modulate the information passing through them via continuous weight updates, while the neurons integrate the modulated information and trigger a firing action once a threshold value is reached. We therefore experimentally demonstrate the interactions between artificial neurons and synapses, which serve as the basis of learning in all biological neural systems. Spatial and temporal summation in artificial neurons is vitally important for computation and memory in neuromorphic hardware. We designed the circuit diagram in Fig. 4(a) to emulate this function, and Fig. 4(b) is a schematic of the integration of two presynaptic inputs under simultaneous stimuli (VS1 = −4 V and VS2 = −4 V; the pulse width is 10 ms and the pulse interval is 10 ms), where the two presynaptic inputs are integrated at the postsynaptic neuron. It is obvious that when only one input (V1 or V2) is applied, the neuron performs temporal summation and fires a spike after the application of seven and six pulses, respectively, as shown in the left and middle panels of Fig. 4(c). However, when the two presynaptic inputs are applied simultaneously, the neuron fires after the application of only one pulse, as shown in the right panel of Fig. 4(c), indicating spatial summation. In addition, spatial summation can also be observed by varying the pulse amplitude of both presynaptic inputs (from −2 to −3 V). Similarly, spatiotemporal integration was realized according to the schematic circuit diagram in Fig. 4(d) by adjusting the pulse peak positions. The collective effect of applying two input voltages yields a higher firing frequency at the postsynaptic neuron than that of a single pulse input [Fig. 4(e)].
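The spatial summation experiment can be illustrated with the same integrate-and-fire abstraction: two simultaneous presynaptic trains deposit roughly twice the charge per period, so the threshold is reached after far fewer pulses. In the hypothetical sketch below, the threshold magnitude follows the text, while the leak constant and gain are assumptions; with these values a single −4 V input fires only after several pulses, whereas the combined input fires on the first pulse, qualitatively matching Fig. 4(c).

```python
import numpy as np

# Spatial summation sketch: two presynaptic pulse trains drive one
# integrate-and-fire node. The threshold magnitude (4.4 V) follows the text;
# the leak constant and per-pulse gain are illustrative assumptions.
V_TH = 4.4
TAU_LEAK_MS = 25.0
GAIN = 0.61

def fire_index(amplitudes_v, n_pulses=10, period_ms=20.0):
    """amplitudes_v: presynaptic pulse amplitudes applied simultaneously each period."""
    v = 0.0
    drive = sum(abs(a) for a in amplitudes_v)
    for n in range(1, n_pulses + 1):
        v = v * np.exp(-period_ms / TAU_LEAK_MS) + GAIN * drive
        if v >= V_TH:
            return n
    return None

print("S1 alone (-4 V):    fires at pulse", fire_index([-4.0]))
print("S2 alone (-4 V):    fires at pulse", fire_index([-4.0]))
print("S1 and S2 together: fires at pulse", fire_index([-4.0, -4.0]))
```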
To further demonstrate the interactions between artificial neurons and synapses, a 2 × 2 synapse array was fabricated and interconnected with two artificial neurons, one at each output, as shown in Figs. 4(f) and 4(h). Notably, all the synapses were initialized to small weights with some variation due to their stochastic nature. A train of rectangular spikes was applied only to the first row of synapses, while the second row was kept at nearly zero bias, as shown in Fig. 4(f). As a result, neuron 2 (N2), connected to the right-hand column, fires because synapse S12 has a slightly larger initial weight [Fig. 4(g)]. The firing of N2 further pulls down the voltage of S12, resulting in a large spike across S12 that further enhances its weight. Next, we simultaneously applied a train of rectangular voltage pulses to the first and second rows of synapses, as shown in Fig. 4(i). As a result, both neuron 1 (N1) and neuron 2 (N2) fire.

FIG. 4. Experimental demonstration of spatial and spatiotemporal integration in a network constructed from synapses and neurons. (a) Circuit diagram of the presynaptic inputs and post-neurons. (b) Schematic diagram of spatial integration through different pulse inputs. (c) Neuron response to a single S1 input (left), a single S2 input (middle), and both S1 and S2 inputs (right). The input pulse amplitude is −4 V and the pulse width is 10 ms. (d) Schematic diagram of spatiotemporal integration with different pulse timings. (e) Neuron response triggered by a single S1 input (left), a single S2 input (middle), and both S1 and S2 inputs (right). The pulse width is 5 ms, the pulse interval is 15 ms, and Δt = 5 ms. (f) and (h) Schematic diagrams of circuits with the 2 × 2 artificial synapse array and two artificial neurons. Inset: the resistance maps of the synapse array before and after training. (g) and (i) The measured neuron signals under different inputs.
To demonstrate the great application potential of the α-In2Se3-based FeS-FET device in constructing a full spiking neural network (SNN), an on-chip learning simulation was performed based on the experimental results. First, we designed a fully connected three-layer neural network, as shown in Fig. 5(a), which consists of 1024 input neurons, 100 neurons in the middle layer, and six output neurons. The 1024 input neurons and six output neurons correspond to the Yale Face image size of 32 × 32 pixels and to six different face/expression classes (faces 1 to 6), respectively. The performance of the as-designed SNN was evaluated through the accuracy of facial recognition and expression classification, and the on-chip learning process was trained with the back-propagation method based on the experimental results of the reconfigurable device operating as synapses and neurons. Subsequently, all the face images were input to the SNN: each 32 × 32 pixel image was converted into a 1 × 1024 vector, and each pixel generated a random spike voltage. It should be emphasized that the spike intensity is proportional to the pixel value, as depicted in Fig. 5(b). The whole simulation lasts for 500 time steps with a 1 × 1024 spike vector at each step. The resulting spike trains flowed into the synaptic devices and were converted into weighted current sums along the columns. A row of trans-impedance amplifiers was required to convert the currents into analog voltages. The postsynaptic neurons integrated these analog voltages and generated a spike pulse when the voltage reached the firing threshold. Finally, we counted the spiking number of the output neurons and obtained the prediction result. To better handle the error problem, a soft activation function was applied to replace the step function during the backward propagation of the error, and this training was carried out for 100 epochs.
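A minimal sketch of the rate-encoding and spike-counting readout described above is given below in Python/NumPy; it is not the authors' simulation code. The layer sizes (1024/100/6), the 500 time steps, and the pixel-proportional spike probability follow the text, whereas the random weights, unit firing thresholds, and the simple integrate-and-fire reset are illustrative assumptions (the back-propagation training loop with the soft activation surrogate is omitted).

```python
import numpy as np

rng = np.random.default_rng(0)

# Network sizes and simulation length follow the text; weights and thresholds
# are random/illustrative stand-ins for the trained, device-derived values.
N_IN, N_HID, N_OUT, T_STEPS = 1024, 100, 6, 500
W1 = rng.normal(0.0, 0.05, (N_IN, N_HID))
W2 = rng.normal(0.0, 0.05, (N_HID, N_OUT))
V_TH_HID, V_TH_OUT = 1.0, 1.0

def encode(image_32x32):
    """Rate encoding: each pixel spikes with probability proportional to its value."""
    p = image_32x32.reshape(-1) / max(image_32x32.max(), 1e-9)
    return rng.random((T_STEPS, N_IN)) < p          # boolean spike trains

def infer(image_32x32):
    """Integrate-and-fire forward pass; returns the output neuron that spikes most."""
    spikes_in = encode(image_32x32)
    v_hid = np.zeros(N_HID)
    v_out = np.zeros(N_OUT)
    out_counts = np.zeros(N_OUT, dtype=int)
    for t in range(T_STEPS):
        v_hid += spikes_in[t].astype(float) @ W1     # column-wise weighted sums
        hid_spikes = v_hid >= V_TH_HID
        v_hid[hid_spikes] = 0.0                      # reset fired hidden neurons
        v_out += hid_spikes.astype(float) @ W2
        out_spikes = v_out >= V_TH_OUT
        v_out[out_spikes] = 0.0                      # reset fired output neurons
        out_counts += out_spikes
    return int(np.argmax(out_counts))                # predicted face class

print(infer(rng.random((32, 32))))                   # demo on a random "image"
```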
Face recognition was evaluated on 60 different face images, with ten different expressions per person [Fig. 5(c)]. After training for 100 epochs, the spiking numbers of the six output neurons are shown in Fig. 5(d) and correspond to the category numbers themselves. The accuracy of the three-layer SNN reaches 71.6% after 100 epochs for expression classification, and the face recognition accuracy of the simulated network is as high as 95.8% on the Yale Face dataset [Fig. 5(e)]. Furthermore, confusion matrices of the testing results for face recognition and expression classification are displayed in Figs. 5(f) and 5(g). As a measure of classification accuracy, the rows of the confusion matrix represent the input images (expected results) and the columns display the classification results (obtained images) after 500 tests, in which the color depth depicts the number of obtained images. Consequently, we can directly observe the distribution of the responses of the trained output neurons, suggesting that the input signals were better identified after training.

FIG. 5. The simulation of the three-layer spiking neural network. (a) Schematic of the spiking neural network for facial and expression classification. (b) The designed pulse scheme for facial and expression classification. (c) 40 test images for facial and expression recognition in the Yale Face dataset. (d) The obtained spiking number for the six output neurons over 100 epochs on the testing dataset. (e) The accuracy of facial recognition and expression classification after 100 testing epochs. Confusion matrices of face recognition (f) and expression classification (g) for the testing results.
Cointegrating synaptic and neuronal functions into a single device with the same structure is helpful for developing an energy-efficient and compact neuromorphic computing system. Moreover, the multifunctional capability of reconfigurable devices has been widely studied in the context of emerging dynamic self-adaptive GWR networks.20,38,39 Compared with a static neural network (such networks are trained on specific or static data, so the addition of new data classes interferes with previously learned results and eventually leads to poor performance), the dynamic GWR network can self-adaptively create or remove network nodes in an unsupervised manner according to the number of input classes, so it can efficiently manage network nodes and reduce energy consumption without degrading the recognition accuracy. Following the GWR algorithm,38,39 we simulated the GWR neural network based on the reconfigurable device dynamics. The MNIST (Modified National Institute of Standards and Technology) hand-written digit dataset40 was employed as the training dataset for the GWR network.
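For readers unfamiliar with grow-when-required dynamics, the highly simplified sketch below (Python) illustrates how such a network grows nodes for poorly matched inputs and lets unused nodes drop out when input classes disappear. It follows the general idea of the GWR algorithm (Ref. 38) rather than the simulation used in this work; the activity threshold, learning rate, edge-aging rule, and the synthetic data stream are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

class TinyGWR:
    """Highly simplified grow-when-required network (illustrative only)."""
    def __init__(self, dim, a_thresh=0.35, eps=0.1, max_age=50):
        self.w = [rng.random(dim), rng.random(dim)]   # start with two nodes
        self.age = {}                                  # edge ages, keyed by node pair
        self.a_thresh = a_thresh                       # activity threshold for growth
        self.eps = eps                                 # learning rate for node movement
        self.max_age = max_age                         # edges older than this are pruned

    def step(self, x):
        d = [np.linalg.norm(x - wi) for wi in self.w]
        order = np.argsort(d)
        b, s = int(order[0]), int(order[1])            # best and second-best nodes
        activity = np.exp(-d[b])
        if activity < self.a_thresh:                   # poorly matched input: grow
            self.w.append((x + self.w[b]) / 2.0)
            self.age[(b, len(self.w) - 1)] = 0
        else:                                          # well matched: adapt the winner
            self.w[b] = self.w[b] + self.eps * (x - self.w[b])
        self.age[(min(b, s), max(b, s))] = 0           # refresh the winning edge
        self.age = {e: a + 1 for e, a in self.age.items() if a + 1 <= self.max_age}

    @property
    def n_nodes(self):
        connected = {n for e in self.age for n in e}
        return len(connected)                          # isolated nodes count as removed

# Grow on a 5-cluster stream, then shrink when the stream narrows to 2 clusters.
centers = rng.random((5, 16))
net = TinyGWR(dim=16)
for _ in range(2000):
    net.step(centers[rng.integers(5)] + 0.05 * rng.normal(size=16))
grown = net.n_nodes
for _ in range(2000):
    net.step(centers[rng.integers(2)] + 0.05 * rng.normal(size=16))
shrunk = net.n_nodes
print(f"nodes after 5-class stream: {grown}, after narrowing to 2 classes: {shrunk}")
```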
The dynamic response capability of the GWR network to changes in the input classes is shown visually in Fig. 6(a). In stage 1, only five MNIST digit classes ("0" to "4"), comprising 20 thousand samples, are input and trained; the GWR network gradually grows and begins to learn. The recognition rate for digits 0 to 4 at this stage can reach 85%, but the network is unable to recognize the five classes "5"–"9", as shown in Fig. 6(b). Next, in stage 2, all ten MNIST digit classes are input and trained in the GWR network. The network size increases further to accommodate the additional five MNIST classes (5–9). The recognition accuracy indicates that the GWR network can not only identify the first five classes (0–4) but also recognize the additional five classes [5–9, as shown in Fig. 6(c)] by dynamically adding network nodes. Furthermore, the GWR network gradually shrinks its nodes when the input classes are reduced from ten to five, as shown in stage 3 of Fig. 6(a). The accuracy results in Fig. 6(d) also demonstrate that the removed five classes have become inactive.
To highlight the advantages of the GWR network more intuitively, we made a detailed comparison between the dynamic GWR neural network and a static network, in which both networks utilize the Hebbian learning rule. In the incremental learning, the number of nodes in the dynamic network gradually grows as new classes are trained, as shown in Fig. 6(e). Meanwhile, the number of nodes in the static network is designed to equal the maximum number of nodes required by the GWR network. This design rule ensures that the difference in performance between the dynamic and static networks is not caused by the size of the network but by the self-adaptive learning ability of the dynamic GWR network. By evaluating the recognition accuracy of ten datasets trained by the different networks, we observed that the dynamic GWR network maintains better learning ability than the static network as more and more new digit classes are input. The accuracy results in Fig. 6(f) indicate that the dynamic neural network avoids catastrophic degradation in performance (a 180% higher accuracy is obtained) as the number of classes is increased.

FIG. 6. Simulation performance of the dynamic GWR network based on the reconfigurable ferroelectric semiconductor FET. (a) The dynamic response capability of the GWR network to the change in input MNIST digit classes over time. (b) The accuracy of the first five digit classes (0–4) after the training of stage 1. (c) The accuracy of the ten digit classes after the training of stage 2, in which five new classes (5–9) were dynamically added to the network. (d) The accuracy of all the classes after the training of stage 3, indicating that the added five classes (5–9) have been dynamically removed and the network has shrunk its size. (e) The dynamic change in the number of nodes with the increase in the input MNIST classes in the GWR network. (f) Comparison of the accuracy between the dynamic GWR network and the static CNN. (g) The dynamic change in the number of nodes when MNIST classes are removed from the input in the GWR network. (h) Similar accuracies were obtained, while the GWR network achieved this result after dynamically shrinking its nodes by 72% compared with its static counterpart.
Additionally, we also investigated the dynamic response capability of the GWR network in adapting to changes in the input classes. Initially, we input all ten digit classes into the GWR network for training; the size of the GWR network grows and eventually saturates. Afterward, when half of the MNIST categories are removed, the GWR network spontaneously reduces its number of nodes [as shown in Fig. 6(g)]. Similarly, we again set the number of static network nodes to the maximum number of GWR network nodes. As a result, very close accuracies (the difference is around 5%) were obtained for the GWR and static networks. However, the GWR network exhibits higher efficiency because it can dynamically shrink its nodes by 72%, as shown on the right side of Fig. 6(h).
CONCLUSION
In this work, we have demonstrated that a 2D In2Se3-based FeS-FET can reconfigure between synaptic and neuronal functions on demand for a brain-like neuromorphic computing system on a single chip. This reconfiguration ability is mainly attributed to the unique OOP and IP intercorrelated polarization effects in the 2D In2Se3 material. In comparison to previous nonvolatile or volatile memristors, such reconfigurable devices can be used not only in a static neural network (such as an SNN or a convolutional neural network, CNN) but also in a self-adaptive dynamic neural network (such as a GWR network). For the SNN simulation, accuracies of 71.6% for expression classification and 95.8% for face recognition were obtained. Meanwhile, the dynamic recognition rate for MNIST digit images reaches 85% while the self-adaptive GWR network dynamically shrinks its nodes by 72%. More crucially, the GWR network clearly surpasses its static counterpart in learning power and efficiency.
SUPPLEMENTARY MATERIAL
See the supplementary material for details on the experimental
methods and additional data.
ACKNOWLEDGMENTS
This work was supported by the National Natural Science Foundation of China (Grant Nos. 62122055, 62074104, 52003162, and 61974093), the Guangdong Provincial Department of Science and Technology (Grant Nos. 2019A1515111090, 2021A1515012588, and 2020A1515110883), the Science and Technology Innovation Commission of Shenzhen (Grant Nos. RCYX20200714114524157, JCYJ20220818100206013, and 20200804172625001), and the NTUT-SZU Joint Research Program.
AUTHOR DECLARATIONS
Conflict of Interest
The authors have no conflicts to disclose.
Author Contributions
Yongbiao Zhai: Data curation (equal); Formal analysis (equal); Funding acquisition (supporting); Investigation (lead); Writing – original draft (lead). Su-Ting Han: Conceptualization (lead); Funding acquisition (lead); Supervision (lead); Writing – review & editing (lead). Peng Xie: Data curation (equal); Software (lead). Jiahui Hu: Formal analysis (supporting). Xue Chen: Methodology (supporting). Zihao Feng: Investigation (supporting). Ziyu Lv: Investigation (supporting). Guanglong Ding: Methodology (supporting). Kui Zhou: Data curation (supporting). Ye Zhou: Funding acquisition (supporting); Investigation (supporting).
DATA AVAILABILITY
The data that support the findings of this study are available
from the corresponding author upon reasonable request.
REFERENCES
1. J. Zhu, T. Zhang, Y. Yang, and R. Huang, Appl. Phys. Rev. 7, 011312 (2020).
2. G. C. Adam, A. Khiat, and T. Prodromakis, Nat. Commun. 9, 5267 (2018).
3. G. Milano, M. Aono, L. Boarino, U. Celano, T. Hasegawa, M. Kozicki, S. Majumdar, M. Menghini, E. Miranda, C. Ricciardi, S. Tappertzhofen, K. Terabe, and I. Valov, Adv. Mater. 34, e2201248 (2022).
4. S. Chen, M. R. Mahmoodi, Y. Shi, C. Mahata, B. Yuan, X. Liang, C. Wen, F. Hui, D. Akinwande, D. B. Strukov, and M. Lanza, Nat. Electron. 3, 638 (2020).
5. C. Liu, H. Chen, S. Wang, Q. Liu, Y. G. Jiang, D. W. Zhang, M. Liu, and P. Zhou, Nat. Nanotechnol. 15, 545 (2020).
6. D. Silver, A. Huang, C. J. Maddison, A. Guez, L. Sifre, G. van den Driessche, J. Schrittwieser, I. Antonoglou, V. Panneershelvam, M. Lanctot, S. Dieleman, D. Grewe, J. Nham, N. Kalchbrenner, I. Sutskever, T. Lillicrap, M. Leach, K. Kavukcuoglu, T. Graepel, and D. Hassabis, Nature 529, 484 (2016).
7. S. Choi, J. Yang, and G. Wang, Adv. Mater. 32, e2004659 (2020).
8. J. Tang, F. Yuan, X. Shen, Z. Wang, M. Rao, Y. He, Y. Sun, X. Li, W. Zhang, Y. Li, B. Gao, H. Qian, G. Bi, S. Song, J. J. Yang, and H. Wu, Adv. Mater. 31, e1902761 (2019).
9. Q. Duan, Z. Jing, X. Zou, Y. Wang, K. Yang, T. Zhang, S. Wu, R. Huang, and Y. Yang, Nat. Commun. 11, 3399 (2020).
10. Z. Wang, S. Joshi, S. Savel'ev, W. Song, R. Midya, Y. Li, M. Rao, P. Yan, S. Asapu, Y. Zhuo, H. Jiang, P. Lin, C. Li, J. H. Yoon, N. K. Upadhyay, J. Zhang, M. Hu, J. P. Strachan, M. Barnell, Q. Wu, H. Wu, R. S. Williams, Q. Xia, and J. J. Yang, Nat. Electron. 1, 137 (2018).
11. S. Subbulakshmi Radhakrishnan, S. Chakrabarti, D. Sen, M. Das, T. F. Schranghamer, A. Sebastian, and S. Das, Adv. Mater. 34, e2202535 (2022).
12. Y. Zhang, Z. Wang, J. Zhu, Y. Yang, M. Rao, W. Song, Y. Zhuo, X. Zhang, M. Cui, L. Shen, R. Huang, and J. J. Yang, Appl. Phys. Rev. 7, 011308 (2020).
13. C. Du, F. Cai, M. A. Zidan, W. Ma, S. H. Lee, and W. D. Lu, Nat. Commun. 8, 2204 (2017).
14. Y. Zhong, J. Tang, X. Li, B. Gao, H. Qian, and H. Wu, Nat. Commun. 12, 408 (2021).
15. J. Moon, W. Ma, J. H. Shin, F. Cai, C. Du, S. H. Lee, and W. D. Lu, Nat. Electron. 2, 480 (2019).
16. D. Marković, A. Mizrahi, D. Querlioz, and J. Grollier, Nat. Rev. Phys. 2, 499 (2020).
17. Z. Lv, Y. Wang, J. Chen, J. Wang, Y. Zhou, and S. T. Han, Chem. Rev. 120, 3941 (2020).
18. Y. Zhu, Y. Zhu, H. Mao, Y. He, S. Jiang, L. Zhu, C. Chen, C. Wan, and Q. Wan, J. Phys. D: Appl. Phys. 55, 053002 (2021).
19. R. A. John, Y. Demirag, Y. Shynkarenko, Y. Berezovska, N. Ohannessian, M. Payvand, P. Zeng, M. I. Bodnarchuk, F. Krumeich, G. Kara, I. Shorubalko, M. V. Nair, G. A. Cooke, T. Lippert, G. Indiveri, and M. V. Kovalenko, Nat. Commun. 13, 2074 (2022).
20. H. T. Zhang, T. J. Park, A. Islam, D. S. J. Tran, S. Manna, Q. Wang, S. Mondal, H. Yu, S. Banik, S. Cheng, H. Zhou, S. Gamage, S. Mahapatra, Y. Zhu, Y. Abate, N. Jiang, S. Sankaranarayanan, A. Sengupta, C. Teuscher, and S. Ramanathan, Science 375, 533 (2022).
21. Y. Chen, R. Zhang, Y. Kan, S. Yang, and Y. Nakashima, IEEE Trans. Neural Networks Learn. Syst. (published online 2022).
22. Y. Fu, Y. Zhou, X. Huang, B. Dong, F. Zhuge, Y. Li, Y. He, Y. Chai, and X. Miao, Adv. Funct. Mater. 32, 2111996 (2022).
23. M. Yan, Q. Zhu, S. Wang, Y. Ren, G. Feng, L. Liu, H. Peng, Y. He, J. Wang, P. Zhou, X. Meng, X. Tang, J. Chu, B. Dkhil, B. Tian, and C. Duan, Adv. Electron. Mater. 7, 2001276 (2021).
24. J. Wang, F. Wang, Z. Wang, W. Huang, Y. Yao, Y. Wang, J. Yang, N. Li, L. Yin, R. Cheng, X. Zhan, C. Shan, and J. He, Sci. Bull. 66, 2288 (2021).
25. A. Chanthbouala, V. Garcia, R. O. Cherifi, K. Bouzehouane, S. Fusil, X. Moya, S. Xavier, H. Yamada, C. Deranlot, N. D. Mathur, M. Bibes, A. Barthelemy, and J. Grollier, Nat. Mater. 11, 860 (2012).
26. M. K. Kim, I. J. Kim, and J. S. Lee, Sci. Adv. 7, eabe1341 (2021).
27. C. Cui, W. J. Hu, X. Yan, C. Addiego, W. Gao, Y. Wang, Z. Wang, L. Li, Y. Cheng, P. Li, X. Zhang, H. N. Alshareef, T. Wu, W. Zhu, X. Pan, and L. J. Li, Nano Lett. 18, 1253 (2018).
28. M. Wu, ACS Nano 15, 9229 (2021).
29. Y. Zhou, D. Wu, Y. Zhu, Y. Cho, Q. He, X. Yang, K. Herrera, Z. Chu, Y. Han, M. C. Downer, H. Peng, and K. Lai, Nano Lett. 17, 5508 (2017).
30. W. Ding, J. Zhu, Z. Wang, Y. Gao, D. Xiao, Y. Gu, Z. Zhang, and W. Zhu, Nat. Commun. 8, 14956 (2017).
31. M. Si, A. K. Saha, S. Gao, G. Qiu, J. Qin, Y. Duan, J. Jian, C. Niu, H. Wang, W. Wu, S. K. Gupta, and P. D. Ye, Nat. Electron. 2, 580 (2019).
32. S. Wang, L. Liu, L. Gan, H. Chen, X. Hou, Y. Ding, S. Ma, D. W. Zhang, and P. Zhou, Nat. Commun. 12, 53 (2021).
33. F. Xue, X. He, Z. Wang, J. R. D. Retamal, Z. Chai, L. Jing, C. Zhang, H. Fang, Y. Chai, T. Jiang, W. Zhang, H. N. Alshareef, Z. Ji, L. J. Li, J. H. He, and X. Zhang, Adv. Mater. 33, e2008709 (2021).
34. F. Xue, X. He, J. R. D. Retamal, A. Han, J. Zhang, Z. Liu, J. K. Huang, W. Hu, V. Tung, J. H. He, L. J. Li, and X. Zhang, Adv. Mater. 31, e1901300 (2019).
35. L. Wang, X. Wang, Y. Zhang, R. Li, T. Ma, K. Leng, Z. Chen, I. Abdelwahab, and K. P. Loh, Adv. Funct. Mater. 30, 2004609 (2020).
36. J. Li, H. Li, X. Niu, and Z. Wang, ACS Nano 15, 18683 (2021).
37. Z. Lu, G. P. Neupane, G. Jia, H. Zhao, D. Qi, Y. Du, Y. Lu, and Z. Yin, Adv. Funct. Mater. 30, 2001127 (2020).
38. S. Marsland, J. Shapiro, and U. Nehmzow, Neural Networks 15, 1041 (2002).
39. L. Sun, Y. Xie, and N. Yu, J. Syst. Simul. 19, 3749 (2007).
40. Y. Lecun, L. Bottou, Y. Bengio, and P. Haffner, Proc. IEEE 86, 2278 (1998).
... As the displacement current limit can define the amount of switched polarization, a single programming pulse with a designed displacement current limit for a specific polarization state is required. Thus, when the DCC method is used for multilevel programming of ferroelectric transistors, fast programming operation with a small distribution can be achieved, which is also beneficial for next-generation memory devices with high memory density and neuromorphic applications (40)(41)(42)(43)(44). Here, it should be noted that displacement current limit was applied as a form of compliance current, but in real cases, other circuit components such as bipolar junction transistor (BJT) or n-or p-MOS can be used to limit the current flowing through the IG and CG (37). ...
Article
Ferroelectric transistors based on hafnia-based ferroelectrics exhibit tremendous potential as next-generation memories owing to their high-speed operation and low power consumption. Nevertheless, these transistors face limitations in terms of memory window, which directly affects their ability to support multilevel characteristics in memory devices. Furthermore, the absence of an efficient operational technique capable of achieving multilevel characteristics has hindered their development. To address these challenges, we present a gate stack engineering method and an efficient operational approach for ferroelectric transistors to achieve 16-level data per cell operation. By using the suggested engineering method, we demonstrate the attainment of a substantial memory window of 10 V without increasing the device area. Additionally, we propose a displacement current control method, facilitating one-shot programming to the desired state. Remarkably, we suggest the compatibility of these proposed methods with three-dimensional (3D) structures. This study underscores the potential of ferroelectric transistors for next-generation 3D memory applications.
... However, the memristor-based neurons are often subjected to reliability issues such as large spatial and temporal variations 26 . A more reliable candidate may be the recently emerging ferroelectric neurons [27][28][29][30][31][32][33] , whose operation is based on the intrinsic polarization dynamics. However, the ferroelectric neurons still suffer from inefficiencies in implementing the firing and reset functions. ...
Preprint
Full-text available
Neuromorphic computing has attracted great attention for its massive parallelism and high energy efficiency. As the fundamental components of neuromorphic computing systems, artificial neurons play a key role in information processing. However, the development of artificial neurons that can simultaneously incorporate low hardware overhead, high reliability, high speed, and low energy consumption remains a challenge. To address this challenge, we propose and demonstrate a piezoelectric neuron with a simple circuit structure, consisting of a piezoelectric cantilever, a parallel capacitor, and a series resistor. It operates through the synergy between the converse piezoelectric effect and the capacitive charging/discharging. Thanks to this efficient and robust mechanism, the piezoelectric neuron not only implements critical leaky integrate-and-fire functions (including leaky integration, threshold-driven spiking, all-or-nothing response, refractory period, strength-modulated firing frequency, and spatiotemporal integration), but also demonstrates small cycle-to-cycle and device-to-device variations (~1.9% and ~10.0%, respectively), high endurance (10⁷), high speed (integration/firing: ~9.6/~0.4 μs), and low energy consumption (~13.4 nJ/spike). Furthermore, spiking neural networks based on piezoelectric neurons are constructed, showing capabilities to implement both supervised and unsupervised learning. This study therefore demonstrates the piezoelectric neuron as a simple yet reliable, fast, and energy-efficient artificial neuron, and also showcases its applicability in neuromorphic computing.
Article
Full-text available
Neurons and other excitable systems can release energy suddenly given a small stimulus. Excitability has recently drawn increasing interest in optics, as it is key to realize all-optical artificial neurons enabling speed-of-light information processing. However, the realization of all-optical excitable units and networks remains challenging. Here we demonstrate how laser-driven optical cavities with memory in their nonlinear response can sustain excitability beyond the constraints of memoryless systems. First we demonstrate different classes of excitability and spiking, and their control in a single cavity with memory. This single-cavity excitability is limited to a narrow range of memory times commensurate with the linear dissipation time. To overcome this limitation, we explore coupled cavities with memory. We demonstrate that this system can exhibit excitability for arbitrarily long memory times, even when the intercavity coupling rate is smaller than the dissipation rate. Our coupled-cavity system also sustains spike trains—a hallmark of neurons—that spontaneously break mirror symmetry. Our predictions can be readily tested in thermo-optical cavities, where thermal dynamics effectively give memory to the nonlinear optical response. The huge separation between thermal and optical timescales in such cavities is promising for the realization of artificial neurons that can self-organize to the edge of a phase transition, like many biological systems do.
Article
Full-text available
Realization of higher-order multistates with mutual interstate switching in ferroelectric materials is a perpetual drive for high-density storage devices and beyond-Moore technologies. Here we demonstrate experimentally that antiferroelectric van der Waals CuInP2S6 films can be controllably stabilized into double, quadruple, and sextuple polarization states, and a system harboring polarization order of six is also reversibly tunable into order of four or two. Furthermore, for a given polarization order, mutual interstate switching can be achieved via moderate electric field modulation. First-principles studies of CuInP2S6 multilayers help to reveal that the double, quadruple, and sextuple states are attributable to the existence of respective single, double, and triple ferroelectric domains with antiferroelectric interdomain coupling and Cu ion migration. These findings offer appealing platforms for developing multistate ferroelectric devices, while the underlining mechanism is transformative to other non-volatile material systems.
Article
The von Neumann architecture has been the foundation of modern computing systems. Still, its limitations in handling large volumes of data and in parallel processing have become more apparent as computing requirements increase. Neuromorphic computing, inspired by the architecture of the human brain, has emerged as a promising solution for developing next-generation computing and memory devices with unprecedented computational power and significantly lower energy consumption. In particular, the development of optoelectronic artificial synaptic devices has made significant progress toward emulating the functionality of biological synapses in the brain. Among them, the potential to mimic the function of the biological eye also paves the way for advancements in robot vision and artificial intelligence. This review focuses on the emerging field of optoelectronic artificial synapses and memristors based on low-dimensional nanomaterials. The unique photoelectric properties of these materials make them ideal for use in neuromorphic and optoelectronic storage devices, with advantages including high carrier mobility, size-tunable optical properties, and low resistor–capacitor circuit delay. The working mechanisms, device structure designs, and applications of these devices are also summarized, with the aim of achieving truly sense-storage-compute integrated optoelectronic artificial synapses.
Article
In this big data era, the explosive growth of information places ultra-high demands on data storage and computing, such as high computing power, low energy consumption, and excellent stability. Facing this challenge, the traditional von Neumann computing system falls short because its memory and data-processing units are separated. One of the most effective solutions is to build brain-inspired computing systems with in-memory computing and parallel-processing ability based on neuromorphic devices. There is therefore a research trend toward memristors, which can be used to build neuromorphic computing systems owing to their large switching ratio, high storage density, low power consumption, and high stability. Two-dimensional (2D) ferroelectric materials, as a novel class of functional materials, show great potential for the preparation of memristors because of their atomic-scale thickness, high carrier mobility, mechanical flexibility, and thermal stability. 2D ferroelectric materials can realize resistive switching (RS) because they possess natural dipoles whose direction can be flipped by the applied electric field, producing different polarization states and thus making them powerful candidates for future data storage and computing. In this review article, we introduce the physical mechanisms, characterization, and synthetic methods of 2D ferroelectric materials, and then summarize their applications in memristors for memory and synaptic devices. Finally, we discuss the advantages and future challenges of 2D ferroelectric materials for memristive devices.
Article
Full-text available
Representation of external stimuli in the form of action potentials, or spikes, constitutes the basis of the energy-efficient neural computation that emerging spiking neural networks (SNNs) aspire to imitate. With recent evidence suggesting that information in the brain is more often represented by the explicit firing times of neurons than by mean firing rates, it is imperative to develop novel hardware that can accelerate sparse and spike-timing-based encoding. Here we introduce a medium-scale integrated (MSI) circuit comprising two cascaded three-stage inverters and one XOR logic gate, fabricated using a total of 21 memtransistors based on photosensitive two-dimensional (2D) monolayer MoS2, for spike-timing-based encoding of visual information. We show that different illumination intensities can be encoded into sparse spiking, with the time to first spike representing the illumination information, i.e., higher intensities invoke earlier spikes and vice versa. In addition, the non-volatile and analog programmability of our photo encoder is exploited for adaptive photo encoding, which allows expedited spiking under scotopic (low-light) and deferred spiking under photopic (bright-light) conditions, respectively. Finally, the low energy expenditure of less than 1 μJ by the 2D memtransistor-based photo encoder highlights the benefits of in-sensor and bio-inspired design, which can be transformative for the acceleration of SNNs.
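A minimal sketch of the time-to-first-spike idea described above: brighter pixels fire earlier, dimmer pixels later, and an adjustable gain stands in for the adaptive (scotopic/photopic) behavior. The latency law, gain values, and function names are illustrative assumptions, not the circuit's actual transfer characteristic.

```python
import numpy as np

def ttfs_encode(image, t_max=1.0, gain=1.0, eps=1e-6):
    """Hypothetical time-to-first-spike encoder: latency falls with intensity."""
    intensity = np.clip(image * gain, 0.0, 1.0)
    return t_max * (1.0 - intensity) + eps    # high intensity -> early spike

# Adaptive encoding caricature: raise the gain under scotopic (low-light)
# conditions so dim scenes still spike early, lower it under photopic ones.
dim_scene    = ttfs_encode(np.array([0.05, 0.10]), gain=8.0)   # expedited spiking
bright_scene = ttfs_encode(np.array([0.70, 0.90]), gain=0.5)   # deferred spiking
print(dim_scene, bright_scene)
```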
Article
Full-text available
Many in-memory computing frameworks demand electronic devices with specific switching characteristics to achieve the desired level of computational complexity. Existing memristive devices cannot be reconfigured to meet the diverse volatile and non-volatile switching requirements, and hence rely on tailored material designs specific to the targeted application, limiting their universality. “Reconfigurable memristors” that combine both ionic diffusive and drift mechanisms could address these limitations, but they remain elusive. Here we present a reconfigurable halide perovskite nanocrystal memristor that achieves on-demand switching between diffusive/volatile and drift/non-volatile modes by controllable electrochemical reactions. Judicious selection of the perovskite nanocrystals and organic capping ligands enables state-of-the-art endurance performance in both modes: volatile (2 × 10⁶ cycles) and non-volatile (5.6 × 10³ cycles). We demonstrate the relevance of such proof-of-concept perovskite devices on a benchmark reservoir network with volatile recurrent and non-volatile readout layers, based on 19,900 measurements across 25 dynamically configured devices.
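The volatile-recurrent/non-volatile-readout split maps naturally onto an echo-state-style reservoir: the recurrent state decays unless driven, while the readout is trained once and then held fixed. The sketch below is a generic stand-in under assumed sizes, spectral scaling, leak rate, and ridge parameter; it is not the benchmark network or data used in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
n_in, n_res = 4, 100
W_in  = rng.uniform(-0.5, 0.5, (n_res, n_in))
W_res = rng.uniform(-0.5, 0.5, (n_res, n_res))
W_res *= 0.9 / max(abs(np.linalg.eigvals(W_res)))     # echo-state spectral scaling

def run_reservoir(inputs, leak=0.3):
    """Volatile recurrent dynamics: the state leaks away unless driven."""
    x, states = np.zeros(n_res), []
    for u in inputs:
        x = (1 - leak) * x + leak * np.tanh(W_in @ u + W_res @ x)
        states.append(x.copy())
    return np.array(states)

# Non-volatile readout: fit once by ridge regression, then keep it fixed.
X = run_reservoir(rng.normal(size=(200, n_in)))
Y = rng.normal(size=(200, 2))                          # placeholder targets
W_out = np.linalg.solve(X.T @ X + 1e-2 * np.eye(n_res), X.T @ Y)
```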
Article
Full-text available
Quantum effects in novel functional materials and new device concepts represent a potential breakthrough for the development of new information processing technologies based on quantum phenomena. Among the emerging technologies, memristive elements whose resistive switching relies on the electrochemical formation/rupture of conductive nanofilaments exhibit quantum conductance effects at room temperature. Although the underlying resistive switching mechanism has been exploited for the realization of next-generation memories and neuromorphic computing architectures, the potential of quantum effects in memristive devices is still rather unexplored. Here, we present a comprehensive review of memristive quantum devices in which quantum conductance effects can be observed by coupling ionics with electronics. Fundamental electrochemical and physicochemical phenomena underlying device functionalities are introduced, together with the fundamentals of ballistic electron transport in nanofilaments. Quantum conductance effects including quantum mode splitting, stability, and random telegraph noise are analyzed, with a report of the experimental techniques and the challenges of nanoscale metrology for the characterization of memristive phenomena. Finally, potential applications and future perspectives are envisioned, including how memristive devices with controllable atomic-sized conductive filaments can represent not only suitable platforms for the investigation of quantum phenomena but also promising building blocks for the realization of integrated quantum systems working in air at room temperature.
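Since the review centers on quantized conductance in atomic-scale filaments, it helps to recall the basic relation: each fully open ballistic channel contributes one conductance quantum G0 = 2e²/h (about 77.5 μS), so a filament with n open channels conducts in steps of n·G0. The short snippet below simply evaluates the first few quantized levels from the physical constants.

```python
# Conductance quantization in ballistic nanofilaments: G = n * G0, G0 = 2e^2/h.
E = 1.602176634e-19      # elementary charge (C)
H = 6.62607015e-34       # Planck constant (J s)
G0 = 2 * E**2 / H        # ~7.748e-5 S

for n in range(1, 5):
    print(f"{n} open channel(s): {n * G0 * 1e6:.1f} uS")
```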
Article
Full-text available
The fully memristive neural network, consisting of threshold switching (TS) material-based electronic neurons and resistive switching (RS) material-based synapses, shows potential for revolutionizing the energy and area efficiency of neuromorphic computing, yet it is confronted with challenges such as reliability and process compatibility between memristive synaptic and neuronal devices. Here, a spiking convolutional neural network (SCNN) is constructed with forming- and annealing-free V/VOx/HfWOx/Pt memristive devices. Specifically, both highly reliable RS (endurance >10¹⁰, on-off ratio >10³) and TS (endurance >10¹²) are found in the same device by setting it to RRAM or selector mode, with either the HfWOx or the naturally oxidized VOx layer dominating the conductance tuning. Such reconfigurability enables the emulation of both synaptic and nonpolar neuronal behaviors within the same device. A V/VOx/HfWOx/Pt-based hardware system is thus experimentally demonstrated at much lower process complexity and higher reliability, in which typical neural dynamics including synaptic plasticity and nonpolar neuronal spiking responses are imitated. At the network level, a fully memristive SCNN incorporating nonpolar neurons is proposed for the first time. System-level simulation shows competency in pattern recognition with dramatically reduced hardware consumption, paving the way for implementing fully memristive intelligent systems. In this work, highly reliable V/VOx/HfWOx/Pt memristors, which deliver reconfigurable resistive and threshold switching, are utilized to demonstrate both synaptic and nonpolar neuronal dynamics for the first time, and a nonpolar spiking convolutional neural network with doubling coding capability is constructed based on this multifunctional device, while synaptic and neuronal hardware overheads are drastically reduced.
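The central idea, one device acting as either a synapse (non-volatile RS mode storing an analog weight) or a nonpolar neuron (volatile TS mode that integrates and fires), can be caricatured in a few lines. The class name, threshold, and leak factor below are invented for illustration and do not reflect the device's measured characteristics.

```python
class ReconfigurableDevice:
    """Schematic sketch of one device used in two modes: 'RS' (synapse) or 'TS' (neuron)."""
    def __init__(self, mode="RS"):
        self.mode, self.weight, self.state = mode, 0.5, 0.0

    def synapse(self, pre_spike):
        assert self.mode == "RS"
        return self.weight * pre_spike          # non-volatile weight scales the input

    def neuron(self, current, v_th=1.0, leak=0.1):
        assert self.mode == "TS"
        self.state = (1 - leak) * self.state + current
        if self.state >= v_th:                  # volatile threshold switching fires
            self.state = 0.0                    # and relaxes back (nonpolar reset)
            return 1
        return 0

syn, neu = ReconfigurableDevice("RS"), ReconfigurableDevice("TS")
spikes = [neu.neuron(syn.synapse(1.0)) for _ in range(10)]
```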
Article
Synaptic plasticity, divided into long-term and short-term categories, is regarded as the origin of memory and learning, and it also inspires the construction of neuromorphic systems. However, it is difficult to mimic the two behaviors monolithically, owing to the lack of approaches for tailoring the plasticity timescale of a given synaptic device. In this Letter, indium-gallium-zinc-oxide (IGZO) nanofiber-based photoelectric transistors are proposed for realizing photoelectric synaptic plasticity that is tunable through the indium composition ratio. Notably, a short-term to long-term plasticity transition can be realized by increasing the indium ratio in the IGZO channel layer. Spatiotemporal dynamic logic and low energy consumption (<100 fJ/spike) are obtained in devices with a low indium ratio. Moreover, symmetric spike-timing-dependent plasticity is achieved by exploiting customized light and electric pulse schemes. Photoelectric long-term plasticity, multi-level characteristics, and high recognition accuracy (93.5%) are emulated in devices with a high indium ratio. Our results indicate that such a composition-ratio-modulated method could enrich the applications of IGZO nanofiber neuromorphic transistors in photoelectric neuromorphic systems.
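As a concrete reference point for the symmetric spike-timing-dependent plasticity mentioned above, the sketch below uses a common symmetric kernel in which the weight change depends only on the magnitude of the pre-post spike interval. The amplitude and time constant are illustrative; the paper's measured learning window may differ.

```python
import numpy as np

def symmetric_stdp(dt_spike, A=0.05, tau=20e-3):
    """Symmetric STDP kernel: update depends on |t_pre - t_post| only."""
    return A * np.exp(-abs(dt_spike) / tau)

# Near-coincident spikes potentiate strongly; widely separated ones barely at all.
for dt in (1e-3, 10e-3, 50e-3):
    print(f"Δt = {dt * 1e3:.0f} ms -> Δw = {symmetric_stdp(dt):.4f}")
```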
Article
A hardware-friendly bisection neural network (BNN) topology is proposed in this work for approximately implementing a massive number of complex functions in arbitrary on-chip configurations. Instead of the conventional reconfigurable fully connected neural network (FC-NN) circuit topology, the proposed hardware-friendly topology performs NN behaviors in a bisection structure, in which each neuron includes two constant synapse connections for both its inputs and outputs. Compared with the FC-NN topology, the reconfiguration of the BNN circuit topology eliminates a remarkable number of dummy synapse connections in hardware; a rough count of this saving is sketched below. As the main target application, this work aims at building a general-purpose BNN circuit topology that offers a great number of NN regressions. To achieve this target, we prove that the NN behaviors of FC-NN circuit topologies can be migrated equivalently to BNN circuit topologies. We introduce two approaches, a refining training algorithm and an inverted-pyramidal strategy, to further reduce the number of neurons and synapses. Finally, we conduct an inaccuracy-tolerance analysis to suggest guidelines for ultra-efficient hardware implementations. Compared with the state-of-the-art FC-NN circuit topology-based TrueNorth baseline, the proposed design achieves a 17.8–22.2× hardware reduction with less than 1% inaccuracy.
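To make the hardware-saving argument tangible, the toy comparison below counts synapse connections for a fully connected layer versus a bisection-style wiring in which each neuron keeps only two fixed connections, as the abstract states. It is a back-of-the-envelope count under that assumption, not the paper's exact topology or reduction figure.

```python
def fc_synapses(n_in, n_out):
    """Fully connected layer: every input is wired to every output."""
    return n_in * n_out

def bnn_synapses(n_neurons):
    """Bisection-style wiring: each neuron keeps two fixed connections."""
    return 2 * n_neurons

print(fc_synapses(256, 256))    # 65536 programmable synapse connections
print(bnn_synapses(512))        # 1024 fixed connections for the same neuron count
```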
Article
Reconfigurable devices offer the ability to program electronic circuits on demand. In this work, we demonstrated on-demand creation of artificial neurons, synapses, and memory capacitors in post-fabricated perovskite NdNiO3 devices that can be simply reconfigured for a specific purpose by single-shot electric pulses. The sensitivity of electronic properties of perovskite nickelates to the local distribution of hydrogen ions enabled these results. With experimental data from our memory capacitors, simulation results of a reservoir computing framework showed excellent performance for tasks such as digit recognition and classification of electrocardiogram heartbeat activity. Using our reconfigurable artificial neurons and synapses, simulated dynamic networks outperformed static networks for incremental learning scenarios. The ability to fashion the building blocks of brain-inspired computers on demand opens up new directions in adaptive networks.
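The claim that dynamically reconfigurable networks beat static ones at incremental learning can be pictured with a toy prototype-based classifier that grows a new node only when no existing node explains a sample well. This is a generic grow-on-demand sketch with invented activation and learning-rate parameters; it is not the network simulated in the paper.

```python
import numpy as np

class DynamicPrototypeNet:
    """Toy dynamic network: adapt a matching node, or grow a new one on demand."""
    def __init__(self, act_threshold=0.6, lr=0.1):
        self.protos, self.labels = [], []
        self.act_threshold, self.lr = act_threshold, lr

    def learn(self, x, label):
        if self.protos:
            d = [np.linalg.norm(x - p) for p in self.protos]
            best = int(np.argmin(d))
            if np.exp(-d[best]) > self.act_threshold and self.labels[best] == label:
                # Good match: adapt the existing node instead of growing.
                self.protos[best] += self.lr * (x - self.protos[best])
                return
        self.protos.append(x.astype(float).copy())   # grow a new node
        self.labels.append(label)

net = DynamicPrototypeNet()
for x, y in [(np.array([0.0, 0.0]), 0), (np.array([0.1, 0.0]), 0), (np.array([3.0, 3.0]), 1)]:
    net.learn(x, y)
print(len(net.protos))   # 2 nodes: the second sample adapted the first node
```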
Article
Exploring materials with multiple properties that can endow a simple device with integrated functionalities has attracted enormous attention in the microelectronics field. One reason is the pressing demand for processors with continuously higher performance and entirely new architectures. Combining ferroelectric and semiconducting properties is a promising solution. Here, we show that logic, in-memory computing, and optoelectrical logic and non-volatile computing functionalities can be integrated into a single transistor with ferroelectric semiconducting α-In2Se3 as the channel. Two-input AND, OR, and non-volatile NOR and NAND logic operations with current on/off ratios of up to five orders of magnitude, good endurance (1000 operation cycles), and fast operating speed (10 μs) are realized. In addition, optoelectrical OR logic and non-volatile implication (IMP) operations, as well as ternary-input optoelectrical logic and in-memory computing functions, are achieved by introducing light as an additional input signal. Our work highlights the potential of integrating complex logic functions and new types of computing into a simple device based on emerging ferroelectric semiconductors.
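Among the listed operations, material implication (IMP) is the least familiar; logically it is simply p IMP q = (NOT p) OR q. The tiny truth-table script below prints the two-input operations named in the abstract purely at the Boolean level, abstracting away the ferroelectric device physics that realizes them.

```python
def imp(p, q):
    """Material implication: p IMP q == (not p) or q."""
    return int((not p) or q)

print(" p q | AND OR NAND NOR IMP")
for p in (0, 1):
    for q in (0, 1):
        print(f" {p} {q} |  {p & q}   {p | q}   {int(not (p & q))}    "
              f"{int(not (p | q))}   {imp(p, q)}")
```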
Article
Recent breakthroughs in two-dimensional (2D) van der Waals ferroelectrics have been impressive, with a series of 2D ferroelectrics having been realized experimentally. The discovery of ferroelectric order in atom-thick layers not only is important for exploring the interplay between dimensionality and ferroelectric order but may also enable ultra-high-density memory, which has attracted significant interest. However, understanding of 2D ferroelectrics goes beyond simply their atomic-scale thickness. In this Perspective, I suggest possible innovations that may resolve a number of conventional issues and greatly transform the roles of ferroelectrics in nanoelectronics. The major obstacles in the commercialization of nanoelectronic devices based on current ferroelectrics involve their insulating and interfacial issues, which hinder their combination with semiconductors in nanocircuits and reduce their efficiency in data reading/writing. In comparison, the excellent semiconductor performance of many 2D ferroelectrics may enable computing-in-memory architectures or efficient ferroelectric photovoltaics. In addition, their clean van der Waals interfaces can greatly facilitate their integration into silicon chips, as well as the popularization of nondestructive data reading and indefatigable data writing. Two-dimensional ferroelectrics also give rise to new physics such as interlayer sliding ferroelectricity, Moiré ferroelectricity, switchable metallic ferroelectricity, and unconventional robust multiferroic couplings, which may provide high-speed energy-saving data writing and efficient data-reading strategies. The emerging 2D ferroelectric candidates for optimization will help resolve some current issues (e.g., weak vertical polarizations), and further exploitation of the aforementioned advantages may open a new era of nanoferroelectricity.