Figure 4: Morphology of neurons and the action potential. (A) Sketch of a neuron with annotated morphology. The tree-like dendritic structure receives input from other neurons, which is then integrated in the soma, where the action potential is generated. The action potential travels along the axon to the synaptic terminals, where it triggers synaptic transmission to the receiving neurons. Image adapted from Jarosz [2009]. (B) Sketch of the temporal course of the action potential. If enough input arrives, the spike-generating mechanism is activated. The resulting action potential follows a stereotyped shape: a strong depolarization followed by a hyperpolarization. During the latter, spike generation is suppressed, which is why this interval is called the refractory period. Image adapted from Chris73 [2007].
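The integrate-and-spike behaviour described in the caption is commonly formalized as a leaky integrate-and-fire (LIF) neuron, the model also used in the theses below. The following minimal Python sketch is illustrative only (parameter values and function names are assumptions, not taken from the source): input is integrated at a leaky soma, a threshold crossing emits a stereotyped spike, and spiking is suppressed during the refractory period.

```python
import numpy as np

# Minimal LIF sketch of the dynamics in the caption. All parameter values
# are illustrative assumptions, not values from the thesis.
tau_m   = 20.0    # membrane time constant (ms)
v_rest  = -70.0   # resting potential (mV)
v_th    = -55.0   # spike threshold (mV)
v_reset = -75.0   # reset (hyperpolarized) potential (mV)
t_ref   = 2.0     # refractory period (ms)
dt      = 0.1     # integration step (ms)

def simulate_lif(input_current, r_m=10.0):
    """Integrate input current; return membrane trace and spike times."""
    v, refractory = v_rest, 0.0
    trace, spikes = [], []
    for step, i_in in enumerate(input_current):
        t = step * dt
        if refractory > 0.0:
            refractory -= dt          # spike generation suppressed
        else:
            # leaky integration of synaptic input toward the resting potential
            v += dt / tau_m * (v_rest - v + r_m * i_in)
            if v >= v_th:             # threshold crossing: emit a spike
                spikes.append(t)
                v = v_reset           # hyperpolarization after the spike
                refractory = t_ref
        trace.append(v)
    return np.array(trace), spikes

# Constant suprathreshold drive for 100 ms produces regular spiking.
trace, spikes = simulate_lif(np.full(1000, 2.0))
print(f"{len(spikes)} spikes, first at (ms): {spikes[:5]}")
```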


Source publication
Thesis
Inspired by the remarkable properties of the human brain, the fields of machine learning, computational neuroscience, and neuromorphic engineering have achieved significant synergistic progress in the last decade. Powerful neural network models rooted in machine learning have been proposed as models for neuroscience and for applications in neuromorphic ...

Citations

Thesis
Neuromorphic hardware enables novel modes of computation. We present two innovative learning strategies. First, we perform spike-based deep learning with LIF neurons in a Time-To-First-Spike coding scheme that aims to achieve classification with as few spikes as early as possible. This is critical for biological agents operating under environmental pressure, which require quick reflexes while conserving energy. Deriving exact learning rules, we perform backpropagation on the spike times of LIF neurons both in software and on the BrainScaleS hardware platform. Second, we present fast, energy-efficient analog inference on BrainScaleS-2. In this non-spiking mode, we use convolutional neural networks to check medical ECG traces for atrial fibrillation. The newly commissioned BrainScaleS-2 Mobile system successfully participated in the "Energy-efficient AI system" competition held by the German Federal Ministry of Education and Research and proved to operate reliably. Developing these new computing paradigms from the ground up is a Herculean effort in terms of the work required and the people involved. We therefore introduce tooling methods to facilitate collaborative scientific software development and deployment. In particular, we explicitly track disjoint sets of software dependencies via Spack, an existing package manager aimed at high-performance computing, and deploy them as monolithic Singularity containers on a rolling-release schedule after thorough verification. These practices enable us to confidently advance our neuromorphic platform while fostering reproducibility of experiments, a still unsolved problem in software-aided science. By introducing quiggeldy, a micro-scheduling service that interleaves experiment steps from different users, we achieve better hardware interactivity, stability, and experiment throughput.
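The key property exploited by spike-time backpropagation is that, for a LIF neuron under suitable input, the first-spike time is an analytic, differentiable function of the input. The sketch below illustrates this principle for the simplest case of a constant input current; it is a schematic of the idea with assumed parameter values, not the exact learning rules derived in the thesis.

```python
import numpy as np

# For a LIF neuron driven by a constant current I from rest, the membrane
# follows v(t) = v_rest + R*I * (1 - exp(-t/tau)), so the first threshold
# crossing has the closed form below. Its gradient w.r.t. the input can be
# chained through a network, enabling backpropagation on spike times.
# Parameter values are illustrative assumptions.
tau, r_m, theta = 20.0, 10.0, 15.0   # time constant (ms), resistance, threshold above rest (mV)

def first_spike_time(i_in):
    """Closed-form first-spike time for constant input; inf if subthreshold."""
    drive = r_m * i_in
    if drive <= theta:
        return np.inf                 # never reaches threshold
    return -tau * np.log(1.0 - theta / drive)

def d_spike_time_d_input(i_in):
    """Analytic gradient dt*/dI, the quantity spike-time backprop relies on."""
    drive = r_m * i_in
    return -tau * theta * r_m / (drive * (drive - theta))

for i_in in (2.0, 3.0, 5.0):
    print(i_in, first_spike_time(i_in), d_spike_time_d_input(i_in))
```

The gradient is negative, reflecting the coding scheme's intuition: stronger input produces an earlier first spike.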
Thesis
Despite the recent success of deep learning, the mammalian brain remains unrivaled when it comes to interpreting complex, high-dimensional data streams such as visual, auditory, and somatosensory stimuli. However, the underlying computational principles that allow the brain to deal with unreliable, high-dimensional, and often incomplete data at a power consumption on the order of a few watts are still mostly unknown. In this work, we investigate how specific functionalities emerge from simple structures observed in the mammalian cortex, and how these might be utilized in non-von-Neumann devices such as "neuromorphic hardware". First, we show that an ensemble of deterministic spiking neural networks can be shaped by a simple, local learning rule to perform sampling-based Bayesian inference. This suggests a coding scheme in which spikes (or "action potentials") represent samples of a posterior distribution, constrained by sensory input, without the need for any source of stochasticity. Second, we introduce a top-down framework in which neuronal and synaptic dynamics are derived from a least-action principle and gradient-based minimization. In combination, these neurosynaptic dynamics approximate real-time error backpropagation and can be mapped to mechanistic components of cortical networks, whose dynamics can in turn be described within the proposed framework. The presented models narrow the gap between well-defined functional algorithms and their biophysical implementation, improving our understanding of the computational principles the brain might employ. Furthermore, such models translate naturally to hardware that mimics the massively parallel neural structure of the brain, promising a strongly accelerated and energy-efficient implementation of powerful learning and inference algorithms, which we demonstrate on the physical model system BrainScaleS-1.
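The notion of "spikes as samples" can be made concrete with an abstract reference model: Gibbs sampling over binary units of a Boltzmann distribution, where an active (refractory) neuron corresponds to state 1, so that the network state over time forms a sequence of samples. The sketch below shows only this abstract target process with assumed parameters; the thesis' contribution, approximating such sampling with deterministic spiking networks, is not reproduced here.

```python
import numpy as np

# Gibbs sampling from a Boltzmann distribution p(z) ~ exp(z^T W z / 2 + b^T z)
# over binary units z_k in {0, 1}. In neural sampling, z_k = 1 corresponds to
# neuron k being active (refractory), so each network state is one sample.
# Weights, biases, and step counts are illustrative assumptions.
rng = np.random.default_rng(0)
n = 5
W = rng.normal(0.0, 0.5, (n, n))
W = (W + W.T) / 2.0                  # symmetric couplings
np.fill_diagonal(W, 0.0)             # no self-connections
b = rng.normal(0.0, 0.5, n)

def gibbs_sample(steps=10000, burn_in=1000):
    """Sequentially resample each unit from its conditional distribution."""
    z = rng.integers(0, 2, n).astype(float)
    samples = []
    for step in range(steps):
        for k in range(n):
            u_k = W[k] @ z + b[k]    # analogous to a "membrane potential"
            z[k] = float(rng.random() < 1.0 / (1.0 + np.exp(-u_k)))
        if step >= burn_in:
            samples.append(z.copy())
    return np.array(samples)

samples = gibbs_sample()
print("mean activity per unit:", samples.mean(axis=0))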