The pattern of a neuron action potential

Source publication
Article
Full-text available
An electronic circuit that implements a neural network architecture with spiking neurons was proposed, studied, and evaluated, primarily with respect to energy consumption. CMOS transistors were used to implement the neurons, memristors serve as the synapses, and the proposed network has a spike‐timing‐dependent plasticity (STDP) learnin...

Citations

... The third class uses the in-memory computing paradigm, which makes it more biologically plausible. The synapses that hold the weights are stored in non-volatile (NV) technology devices [13][14] in a crossbar array between the neurons. However, the vast majority of the aforementioned proposals don't support on-… ...

... Introduction of multistate synapses by exploiting the inherent stochasticity of parallel-connected MTJs. ...
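The crossbar arrangement described in this snippet performs its weighted sums directly in the analog domain: each column current is the conductance-weighted sum of the row voltages (Ohm's and Kirchhoff's laws). A minimal numerical sketch of that dot product, with illustrative values rather than measured device conductances:

```python
# Sketch of the analog dot product a resistive crossbar performs: the
# current collected on each output column is the conductance-weighted
# sum of the input row voltages. All values below are illustrative.
def crossbar_currents(voltages, conductances):
    """I_j = sum_i V_i * G[i][j] for a rows-by-columns conductance matrix."""
    cols = len(conductances[0])
    return [sum(v * row[j] for v, row in zip(voltages, conductances))
            for j in range(cols)]

V = [0.1, 0.2, 0.0]             # input spikes encoded as row voltages (V)
G = [[1e-3, 2e-3],              # NV device conductances (siemens)
     [4e-3, 1e-3],
     [2e-3, 3e-3]]
print(crossbar_currents(V, G))  # column currents in amperes
```

In a hardware crossbar this sum is free: it happens by current summation on the column wire, which is what makes the scheme attractive for in-memory computing.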
Article
Full-text available
Spiking Neural Networks (SNNs) are Artificial Neural Networks which promise to mimic biological brain processing, with unsupervised online learning capability for various cognitive tasks. However, SNN hardware implementation with online learning support is not trivial and can prove highly inefficient. This paper proposes an energy-efficient hardware implementation for SNN synapses. The implementation is based on parallel-connected Magnetic Tunnel Junction (MTJ) devices and exploits their inherent stochasticity. In addition, it uses a dedicated unsupervised learning rule based on optimized Spike-Timing-Dependent Plasticity (STDP). To facilitate the design, training, and evaluation of the SNN, an open-source Python-based platform is developed; it takes as input the SNN parameters and discrete circuit components, automatically generates the associated full netlist in SPICE, and extracts the simulation results, making them available in Python for evaluation and manipulation. Unlike conventional neuromorphic hardware that relies on simple weight mapping after off-line training, our approach emphasizes continuous, unsupervised learning, achieving an energy efficiency of 11.2 nW per synaptic update during training and as low as 109 fJ/spike during inference.
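The multistate synapse built from parallel-connected stochastic MTJs can be sketched as a handful of binary devices whose per-pulse switching is probabilistic, so the total conductance takes one of N+1 discrete levels. The two conductance levels and the switching probability below are illustrative assumptions, not values from the paper:

```python
import random

# Toy model of a multistate synapse made of N parallel binary MTJs.
# G_P, G_AP, and P_SW are illustrative assumptions, not measured values.
G_P, G_AP = 1.0, 0.2   # arbitrary units: parallel / antiparallel conductance
P_SW = 0.3             # per-pulse stochastic switching probability

class ParallelMTJSynapse:
    def __init__(self, n, rng=None):
        self.rng = rng or random.Random()
        self.states = [0] * n          # 0 = antiparallel, 1 = parallel

    def conductance(self):
        """Parallel devices: total conductance is the sum of branch conductances."""
        return sum(G_P if s else G_AP for s in self.states)

    def potentiate(self):
        """One SET pulse: each AP junction switches to P with probability P_SW."""
        self.states = [1 if s or self.rng.random() < P_SW else 0
                       for s in self.states]

    def depress(self):
        """One RESET pulse: each P junction switches to AP with probability P_SW."""
        self.states = [0 if not s or self.rng.random() < P_SW else 1
                       for s in self.states]

syn = ParallelMTJSynapse(n=8, rng=random.Random(42))
before = syn.conductance()
for _ in range(5):
    syn.potentiate()
print(before, syn.conductance())   # conductance drifts upward across SET pulses
```

The stochastic per-device switching is what turns an inherently binary junction into a usable analog-like weight: repeated pulses move the expected conductance gradually, which is the behavior an STDP rule needs.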
... The knowledge from references [5][6][7][8][9][10][11][12][13][14][15][16][17][18][19][20] with their applications helped us to gain experience with the simulation environment. The knowledge from [21][22][23][24][25][26][27][28][29][30][31][32][33][34] made us aware of the neuroscience domain. ...
Article
Full-text available
The importance of spinal reflexes is connected to rehabilitation processes in neural prostheses and to the neuromuscular junction. In order to model neuron networks as electronic circuits, a simulation environment such as LTSpice XVII or PSpice can be used to create a complete electronic description. There are four types of neurons employed in spinal reflexes: α-motoneurons, sensitive neurons, excitatory interneurons, and inhibitory interneurons. Many methods have been proposed for emulating neurons with electronic circuits. In this paper, a single internal neuron model is considered sufficient to simulate all four types of neurons involved in the control loops. The main contribution of this paper is to propose the modeling of neurons using electronic circuits designed with either bipolar or CMOS transistors for the input and output circuit stages. In this way, it is possible to mimic the circulation of neural pulses along the loops of the spinal reflexes and to verify the accuracy of the simulation results against biological signals collected from the bibliographic materials.
... There is a plethora of work dedicated to the memristor, ever since its concept was proposed by Leon Chua [1]. Specifically, the hysteresis effect observed in memristive devices allows for in-situ learning and memory, breaking the von Neumann bottleneck present in traditional computing architectures [2,3]. The link between resistive switching devices and memristors was first demonstrated by HP Labs in 2008 [4], and since then many different devices have been created, exhibiting a variety of switching properties. ...
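The hysteresis this snippet refers to can be reproduced with the classic HP linear-drift memristor model from the 2008 demonstration. A minimal sketch, using illustrative parameter values rather than any of the devices surveyed here:

```python
import math

# Minimal sketch of the HP linear-drift memristor model. R_ON, R_OFF, D,
# and MU_V are illustrative textbook-scale values, not from a real device.
R_ON, R_OFF = 100.0, 16e3      # ohms: fully doped / fully undoped resistance
D = 10e-9                       # m: device thickness
MU_V = 1e-14                    # m^2 s^-1 V^-1: dopant mobility

def simulate(v_amp=1.0, freq=1.0, dt=1e-4, cycles=1):
    """Euler-integrate the state w (doped-region width) under a sine drive."""
    w, t, history = D / 2, 0.0, []
    for _ in range(int(cycles / (freq * dt))):
        v = v_amp * math.sin(2 * math.pi * freq * t)
        r = R_ON * (w / D) + R_OFF * (1 - w / D)   # series memristance model
        i = v / r
        w += MU_V * (R_ON / D) * i * dt            # linear dopant drift
        w = min(max(w, 0.0), D)                    # hard window keeps w in [0, D]
        history.append((v, i, r))
        t += dt
    return history

hist = simulate()
resistances = [r for _, _, r in hist]
print(min(resistances), max(resistances))   # resistance swings during the sweep
```

Plotting `i` against `v` from `history` would show the pinched hysteresis loop: current is zero whenever voltage is zero, but the loop encloses area because the memristance depends on the charge history.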
Article
Full-text available
Memristive devices being applied in neuromorphic computing are envisioned to significantly improve the power consumption and speed of future computing platforms. The materials used to fabricate such devices will play a significant role in their viability. Graphene is a promising material, with superb electrical properties and the ability to be produced in large volumes. In this paper, we demonstrate that a graphene-based memristive device could potentially be used as synapses within spiking neural networks (SNNs) to realise spike-timing-dependent plasticity for unsupervised learning in an efficient manner. Specifically, we verify the operation of two SNN architectures tasked with single-digit (0–9) classification: (i) a single-layer network, where inputs are presented at 5×5 pixel resolution, and (ii) a larger network capable of classifying the dataset. Our work presents the first investigation and large-scale simulation of the use of graphene memristive devices to perform a complex pattern classification task. In favour of reproducible research, we will make our code and data publicly available. This can pave the way for future research in using graphene devices with memristive capabilities in neuromorphic computing architectures.
... Due to this high output power and low oscillation frequency, monolithic integration with CMOS technology is less challenging. The required footprint is about 0.017 μm² per MRAM and 0.09 μm² per STNO, which is very small compared to CMOS-based artificial neurons, which can have a footprint of 8 μm² but are usually far above this value [4,43]. Based on this proof-of-principle of a WSTNO system, our long-term vision is to develop a unique platform for next-generation complex NCSs with improved performance compared to existing CMOS computing systems, filling the gap between the capability of current computers and the brain. ...
Article
Full-text available
Neuromorphic computing is a promising strategy to overcome fundamental limitations, such as enormous power consumption, by massive parallel data processing, similar to the brain. Here we demonstrate a proof-of-principle implementation of the weighted spin torque nano-oscillator (WSTNO) as a programmable building block for the next-generation neuromorphic computing systems (NCS). The WSTNO is a spintronic circuit composed of two spintronic devices made of magnetic tunnel junctions (MTJs): non-volatile magnetic memories acting as synapses and non-linear spin torque nano-oscillator (STNO) acting as a neuron. The non-linear output based on the weighted sum of the inputs is demonstrated using three MTJs. The STNO shows an output power above 3 µW and frequencies of 240 MHz. Both MTJ types are fabricated from a multifunctional MTJ stack in a single fabrication process, which reduces the footprint, is compatible with monolithic integration on top of CMOS technology and paves ways to fabricate more complex neuromorphic computing systems.
... Because of the fascinating capabilities of the human brain, such as parallel processing, error resilience (working with approximate data), and, most remarkably, its learning capacity, bio-inspired computational systems have become highly successful [1][2][3]. Bio-inspired computational systems show great potential to overcome the inefficiency of von Neumann computers in processing complex tasks such as image processing and pattern association [4][5][6][7]. Given these features, hardware implementation of such systems has also become one of today's hottest research topics [6,8]. ...
Article
Full-text available
Significant progress has been made in manufacturing emerging technologies in recent years. This progress has enabled in-memory computing and neural networks, two of today's hottest research topics. Over time, the need to process complex tasks has increased, driving the emergence of intelligent processors. A nonvolatile associative memory is proposed, based on spintronic synapses utilising magnetic tunnel junctions (MTJs) and on neurons built from carbon nanotube field-effect transistors (CNTFETs). The proposed design uses the MTJ device for its attractive features, such as reliable reconfiguration and nonvolatility, while the CNTFET overcomes conventional complementary metal-oxide-semiconductor shortcomings such as the short-channel effect, drain-induced barrier lowering, and poor hole mobility. The design is simulated in the presence of process variations and aims to increase the number of weights generated in the synapse for higher memory capacity and accuracy. The effect of different tunnel magnetoresistance (TMR) values (100%, 200%, and 300%) on the performance and accuracy of the design has also been investigated, showing that it performs well even at a low TMR value, which is remarkable from a fabrication point of view.
... There have been works implementing the spiking network structure in different configurations, among which the CMOS-memristor neuron-synapse structure is the most common [13][14]. Neurons are implemented in several configurations [3]: the Axon Hillock circuit [15], the low-power Axon Hillock circuit, the integrate-and-fire model, and the leaky integrate-and-fire model [16]. To implement the synapse of the neural circuit, different memristor specifications and configurations are used. ...
Book
In this work, a CMOS neuron model and a twin-memristor synapse were used to implement circuits realizing STDP learning and an associative learning technique. Incorporating elements that properly and truthfully mimic their biological counterparts lends this study more authenticity than studies that use generated signals for neuron spikes or simple switching connections for synapse modelling. The implementation of STDP learning points to potential future use in SNNs, and the implementation of associative learning also shows promising output with a simpler and more compact circuit.
... The spike-timing-dependent plasticity (STDP) learning rule modulates synaptic weights based on the time difference between pre- and postsynaptic spike arrivals [36]. This type of learning rule can be achieved using memristors [18], [38] and can be verified using SPICE-level models [39]. As Fig. 1 shows, a synaptic memristor is interposed between two neurons, where the pre- and postsynaptic spikes generate a voltage across the memristor that causes the memristance to be updated. ...
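The pair-based STDP window this snippet describes is commonly written as an exponential function of the spike-time difference: causal pairings (post after pre) potentiate, anti-causal pairings depress. A minimal sketch with textbook-style illustrative amplitudes and time constants, not the rule from any specific cited work:

```python
import math

# Pair-based exponential STDP window. A_PLUS, A_MINUS, and the time
# constants are illustrative values, not fitted to any cited device.
A_PLUS, A_MINUS = 0.01, 0.012      # LTP / LTD amplitudes
TAU_PLUS, TAU_MINUS = 20.0, 20.0   # ms: decay constants of the window

def stdp_dw(t_post, t_pre):
    """Weight change for one spike pair; dt > 0 means post fired after pre."""
    dt = t_post - t_pre
    if dt > 0:      # causal pairing: potentiation (LTP)
        return A_PLUS * math.exp(-dt / TAU_PLUS)
    if dt < 0:      # anti-causal pairing: depression (LTD)
        return -A_MINUS * math.exp(dt / TAU_MINUS)
    return 0.0

print(stdp_dw(25.0, 20.0))   # post 5 ms after pre -> positive weight change
print(stdp_dw(20.0, 25.0))   # post 5 ms before pre -> negative weight change
```

In the memristive implementations above, this curve is not computed explicitly: the overlap of the pre- and postsynaptic pulse waveforms across the device produces an equivalent voltage-driven memristance update.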
Preprint
Full-text available
We present a fully memristive spiking neural network (MSNN) consisting of physically-realizable memristive neurons and memristive synapses to implement an unsupervised Spike-Timing-Dependent Plasticity (STDP) learning rule. The system is fully memristive in that both neuronal and synaptic dynamics can be realized using memristors. The neuron is implemented using the SPICE-level memristive integrate-and-fire (MIF) model, which consists of the minimal number of circuit elements necessary to achieve distinct depolarization, hyperpolarization, and repolarization voltage waveforms. The proposed MSNN uniquely implements STDP learning through cumulative weight changes in the memristive synapses, derived from the voltage waveform changes across the synapses that arise from the presynaptic and postsynaptic spiking voltage signals during training. Two types of MSNN architectures are investigated: 1) a biologically plausible memory retrieval system, and 2) a multi-class classification system. Our circuit simulation results verify the MSNN's unsupervised learning efficacy by replicating biological memory retrieval mechanisms and achieving 97.5% accuracy on a 4-pattern recognition problem in a large-scale discriminative MSNN.
Article
Full-text available
In this study, we explore spintronic synapses composed of several Magnetic Tunnel Junctions (MTJs), leveraging their attractive characteristics such as endurance, nonvolatility, stochasticity, and energy efficiency for hardware implementation of unsupervised neuromorphic systems. Spiking Neural Networks (SNNs) running on dedicated hardware are suitable for edge computing and IoT devices where continuous online learning and energy efficiency are important characteristics. We focus in this work on synaptic plasticity by conducting comprehensive electrical simulations to optimize the MTJ-based synapse design and find the accurate neuronal pulses that are responsible for the Spike Timing Dependent Plasticity (STDP) behavior. Most proposals in the literature are based on hardware-independent algorithms that require the network to store the spiking history to be able to update the weights accordingly. In this work, we developed a new learning rule, the Bi-Sigmoid STDP (B2STDP), which originates from the physical properties of MTJs. This rule enables immediate synaptic plasticity based on neuronal activity, leveraging in-memory computing. Finally, the integration of this learning approach within an SNN framework leads to a 91.71% accuracy in unsupervised image classification, demonstrating the potential of MTJ-based synapses for effective online learning in hardware-implemented SNNs.
Article
Full-text available
Apart from simulating biological synapses, memristors can also be used for secure encryption by exploiting their inherently random resistive switching (RS) properties. In this work, a nonvolatile Ta/BiFeO3/ZrO2/Pt memristor is fabricated with a 2 nm BiFeO3 insertion layer. At a high compliance current of 10 mA, it presents gradual RS characteristics during the set process, resulting in 30 conductance states. Synaptic behavior can be successfully mimicked by precisely modulating the conductance under pulsed electrical stimulation. Under a low compliance current of 500 µA, by contrast, it exhibits abrupt RS behavior with a set voltage ranging from 0.3–0.8 V, which can serve effectively as an entropy source for a true random number generator (TRNG). The Shannon entropy of 300 bits is 0.99987, and the Hamming weights and intra‐Hamming distances tend toward 50%. Furthermore, software encryption and decryption of a cat image is demonstrated using keys generated by the TRNG. The controllable electrical properties of the Ta/BiFeO3/ZrO2/Pt device can be modulated by the compliance current to meet either computing or security requirements, providing a solid foundation for integrating computation and security at the hardware level.
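The randomness metrics quoted in this abstract (per-bit Shannon entropy and Hamming weight) are straightforward to compute. The sketch below applies them to a stand-in pseudorandom bitstream, since the memristor set-voltage entropy source itself is not reproducible here:

```python
import math
import random

# Randomness metrics for a TRNG bitstream. random.getrandbits is a
# stand-in for the memristor set-voltage entropy source of the paper.
def shannon_entropy(bits):
    """Entropy in bits per bit of a binary sequence (1.0 = ideally random)."""
    if not bits:
        return 0.0
    p1 = sum(bits) / len(bits)
    if p1 in (0.0, 1.0):
        return 0.0
    p0 = 1 - p1
    return -(p0 * math.log2(p0) + p1 * math.log2(p1))

def hamming_weight(bits):
    """Fraction of ones; ~0.5 for an unbiased source."""
    return sum(bits) / len(bits)

random.seed(0)
stream = [random.getrandbits(1) for _ in range(300)]  # 300 bits, as in the abstract
print(shannon_entropy(stream), hamming_weight(stream))
```

An ideal source gives entropy 1.0 and Hamming weight 0.5; the paper's reported 0.99987 entropy and ~50% Hamming weight are close to these ideals, which is what makes the abrupt-switching regime usable as a TRNG.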
Chapter
Advanced memory technologies are shaping the information era and represent a vibrant research area of huge interest to the electronics industry. The demand for data storage, computing performance, and energy efficiency is increasing exponentially and will exceed the capabilities of current information technologies. Alternatives to traditional silicon technology and novel memory principles are expected to meet the needs of modern data-intensive applications such as "big data" and artificial intelligence (AI). Functional materials and methodologies may play a key role in building novel, high-speed, low-power computing and data storage systems. This book covers functional materials and devices in the data storage area, alongside electronic devices opening new possibilities for future computing, from neuromorphic next-generation AI to in-memory computing. By summarizing different memory materials and devices with an emphasis on future applications, it enables graduate students and researchers to systematically learn the design, material characteristics, device operation principles, specialized device applications, and mechanisms of the latest reported memory materials and devices.