
Table of contents

Volume 3

Number 3, September 2023



Editorial/Other

Other Editorial Matter

030402
The following article is Open access


Focus Issue on Algorithms for Neuromorphic Computing

Neuromorphic computing provides a promising energy-efficient alternative to von-Neumann-type computing and learning architectures. However, the best neuromorphic hardware is useless without suitable inference and learning algorithms that can fully exploit its advantages. Such algorithms often have to deal with challenging constraints posed by neuromorphic hardware, such as massive parallelism, sparse asynchronous communication, and analog and/or unreliable computing elements. This Focus Issue presents advances in various aspects of algorithms for neuromorphic computing. The collection of articles covers a wide range, from fundamental questions about the computational properties of the basic computing elements in neuromorphic systems, through algorithms for continual learning, semantic segmentation, and novel efficient learning paradigms, to algorithms for specific application domains.

030404

Focus Issue on Bioinspired Adaptive Intelligent Robots

The NCE Focus Issue on Bioinspired Adaptive Intelligent Robots aims to collect evidence of the different forms of biomimicry in robotics, from soft robotics and embodiment to neuromorphic sensing, computation, and control, as enabling approaches to intelligent and adaptive robots.

Perspective

033001

Focus Issue on Photonic Neuromorphic Engineering and Neuron-Inspired Processing

Nanophotonic spiking neural networks (SNNs) based on neuron-like excitable subwavelength (submicrometre) devices are of key importance for realizing brain-inspired, power-efficient artificial intelligence (AI) systems with a high degree of parallelism and energy efficiency. Despite significant advances in neuromorphic photonics, compact and efficient nanophotonic elements for spiking signal emission and detection, as required for spike-based computation, remain largely unexplored. In this invited perspective, we outline the main challenges, early achievements, and opportunities toward a key enabling photonic neuro-architecture using III–V/Si integrated spiking nodes based on nanoscale resonant tunnelling diodes (nanoRTDs) with folded negative differential resistance. We utilize nanoRTDs as nonlinear artificial neurons capable of spiking at high speeds. We discuss the prospects for monolithic integration of nanoRTDs with nanoscale light-emitting diodes, nanolaser diodes, and nanophotodetectors to realize neuron emitter and receiver spiking nodes, respectively. Such a layout would have a small footprint, fast operation, and low power consumption, all key requirements for efficient nano-optoelectronic spiking operation. We discuss how silicon photonic interconnects, integrated photorefractive interconnects, and 3D polymeric waveguide interconnections can be used to interconnect the emitter and receiver spiking photonic neural nodes. Finally, using numerical simulations of artificial neuron models, we present spike-based spatio-temporal learning methods for relevant AI-based functional tasks, such as image pattern recognition, edge detection, and SNNs for inference and learning. Future developments in neuromorphic spiking photonic nanocircuits, as outlined here, will significantly boost the processing and transmission capabilities of next-generation nanophotonic spike-based neuromorphic architectures for energy-efficient AI applications.
This perspective paper is a result of the European Union funded research project ChipAI in the frame of the Horizon 2020 Future and Emerging Technologies Open programme.

Papers

034001

Focus Issue on In-Memory Computing

To build neuromorphic hardware with self-assembled memristive networks, it is necessary to determine how the functional connectivity between electrodes can be adjusted under the application of external signals. In this work, we analyse a model of a disordered memristor-resistor network within the framework of graph theory. Such a model is well suited for simulating physical self-assembled neuromorphic materials, where impurities are likely to be present. Two primary mechanisms that modulate the collective dynamics are investigated: the strength of interaction, i.e. the ratio of the two limiting conductance states of the memristive components, and the role of disorder in the form of the density of Ohmic conductors (OCs) diluting the network. We consider the case where a fraction of the network edges has memristive properties, while the remaining part shows pure Ohmic behaviour, and treat both poor and good OCs. Both the role of the interaction strength and the presence of OCs are investigated in relation to trace formation between electrodes at the fixed point of the dynamics, which is analysed through an ideal observer approach. Network entropy is thus used to understand the self-reinforcement of a conducting path and the cooperative inhibition of other memristive elements, which together result in the formation of a winner-take-all path. Both low interaction strength and dilution of the memristive fraction in a network reduce the steep non-linearity in the network conductance under the application of a steady input voltage. Entropy analysis shows enhanced robustness of selective trace formation to the applied voltage for heterogeneous networks of memristors diluted by poor OCs in the vicinity of the percolation threshold. The input voltage controls the diversity in trace formation.
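
As an illustrative aside (not code from the article), the network-entropy measure used above can be sketched as the Shannon entropy of the normalized edge-conductance distribution: a low entropy signals that conductance has concentrated onto a few edges, i.e. a winner-take-all path. The function name and example values below are hypothetical.

```python
import numpy as np

# Illustrative sketch: Shannon entropy of the normalized edge-conductance
# distribution of a memristive network. Low entropy indicates conductance
# concentrated on a few edges, i.e. a winner-take-all current path.

def conductance_entropy(g):
    """Shannon entropy (in nats) of normalized edge conductances g."""
    g = np.asarray(g, dtype=float)
    p = g / g.sum()
    p = p[p > 0]                      # drop zero-conductance edges
    return float(-(p * np.log(p)).sum())

uniform = conductance_entropy(np.ones(8))              # maximal: log(8)
focused = conductance_entropy([100, 1, 1, 1, 1, 1, 1, 1])
```

Here `uniform` equals log 8 while `focused` is much smaller, mirroring the entropy drop used in the paper to detect selective trace formation.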

034002

Neuromorphic processing systems implementing spiking neural networks with mixed-signal analog/digital electronic circuits and/or memristive devices represent a promising technology for edge computing applications that require low power and low latency, and that cannot connect to the cloud for off-line processing, either due to lack of connectivity or for privacy concerns. However, these circuits are typically noisy and imprecise, because they are affected by device-to-device variability and operate with extremely small currents. Achieving reliable computation and high accuracy following this approach therefore remains an open challenge that has hampered progress on the one hand and limited widespread adoption of this technology on the other. By construction, these hardware processing systems have many constraints that are biologically plausible, such as heterogeneity and non-negativity of parameters. Growing evidence shows that applying such constraints to artificial neural networks, including those used in artificial intelligence, promotes robustness in learning and improves their reliability. Here we draw further on neuroscience and present network-level brain-inspired strategies that further improve reliability and robustness in these neuromorphic systems: we quantify, with chip measurements, to what extent population averaging is effective in reducing variability in neural responses; we demonstrate experimentally how the neural coding strategies of cortical models allow silicon neurons to produce reliable signal representations; and we show how to robustly implement essential computational primitives, such as selective amplification, signal restoration, working memory, and relational networks, by exploiting such strategies. We argue that these strategies can be instrumental in guiding the design of robust and reliable ultra-low-power electronic neural processing systems implemented using noisy and imprecise computing substrates such as subthreshold neuromorphic circuits and emerging memory technologies.
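
As an illustrative aside (not the chip measurements reported above), the population-averaging argument can be sketched numerically: for independent neuron noise, the trial-to-trial standard deviation of the population mean shrinks roughly as 1/sqrt(N). All names and parameter values below are hypothetical.

```python
import numpy as np

# Toy sketch of population averaging: averaging the responses of N noisy
# "neurons" reduces the trial-to-trial standard deviation of the population
# estimate roughly as 1/sqrt(N) when the noise is independent.

def population_std(n_neurons, n_trials=20000, noise=0.2, seed=0):
    rng = np.random.default_rng(seed)
    # each trial: N neurons encode the same signal with independent noise
    responses = 1.0 + noise * rng.standard_normal((n_trials, n_neurons))
    return responses.mean(axis=1).std()

std_1 = population_std(1)
std_16 = population_std(16)   # expected to be roughly std_1 / 4
```

The 1/sqrt(N) scaling is the idealized limit; correlated noise on a real chip reduces the benefit, which is why the paper quantifies the effect with measurements.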

034003

Magnetic textures are promising candidates for unconventional computing due to their non-linear dynamics. We propose to investigate the rich variety of seemingly trivial lamellar magnetic phases, e.g. the helical, spiral, or stripy phases, or other one-dimensional soliton lattices. These are the natural, stray-field-free ground states of almost every magnet. The order parameters of these phases may be of potential interest for both classical and unconventional computing, which we refer to as helitronics. For the particular case of a chiral magnet and its helical phase, we use micromagnetic simulations to demonstrate the working principles of all-electrical (i) classical binary memory cells and (ii) memristors and artificial synapses, based on the orientation of the helical stripes.

034004

Focus Issue on Hardware Optimization for Neuromorphic Computing

Digital electronics based on the von Neumann architecture is reaching its limits for solving large-scale problems, essentially due to memory fetching. Recent efforts to bring memory closer to computation have instead enabled highly parallel computations at low energy cost. Oscillatory neural networks (ONNs) are one example of an in-memory analog computing paradigm, consisting of coupled oscillating neurons. When implemented in hardware, ONNs naturally perform gradient descent on an energy landscape, which makes them particularly suited to solving optimization problems. Although the computational capability of ONNs and their link with the Ising model have been known for decades, implementing a large-scale ONN remains difficult. Beyond the oscillators' variations, there are still design challenges such as compact, programmable synapses and a modular architecture for solving large problem instances. In this paper, we propose a mixed-signal architecture named Saturated Kuramoto ONN (SKONN) that leverages both the analog and digital domains for efficient ONN hardware implementation. SKONN computes in the analog phase domain while propagating information digitally to facilitate scaling up the ONN size. SKONN's separation between computation and propagation enhances robustness and enables feed-forward phase propagation, which is showcased for the first time. Moreover, the SKONN architecture leads to unique binarizing dynamics that are particularly suitable for solving NP-hard combinatorial optimization problems such as finding the weighted Max-cut of a graph. We find that SKONN's accuracy is as good as that of the Goemans–Williamson 0.878-approximation algorithm for Max-cut, whereas SKONN's computation time only grows logarithmically. We report on weighted Max-cut experiments using a 9-neuron SKONN proof-of-concept on a printed circuit board (PCB). Finally, we present a low-power 16-neuron SKONN integrated circuit and illustrate SKONN's feed-forward ability while computing the XOR function.
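
As an illustrative aside (not the SKONN design itself), the link between phase-domain oscillator dynamics and Max-cut can be sketched with a plain Kuramoto-style gradient flow whose phases binarize into a graph partition; all names and parameters below are hypothetical.

```python
import numpy as np

# Illustrative sketch: repulsively coupled Kuramoto phases perform gradient
# descent on E = sum_ij W_ij * cos(theta_i - theta_j); connected nodes drift
# toward anti-phase, so binarized phases approximate a Max-cut partition.

def kuramoto_maxcut(W, steps=2000, dt=0.05, seed=0):
    rng = np.random.default_rng(seed)
    n = W.shape[0]
    theta = rng.uniform(0.0, 2.0 * np.pi, n)
    for _ in range(steps):
        diff = theta[:, None] - theta[None, :]
        # gradient descent on E: d(theta_i)/dt = sum_j W_ij sin(theta_i - theta_j)
        theta = theta + dt * (W * np.sin(diff)).sum(axis=1)
    # binarize phases relative to oscillator 0 into two partitions
    spins = np.where(np.cos(theta - theta[0]) >= 0.0, 1, -1)
    cut = 0.0
    for i in range(n):
        for j in range(i + 1, n):
            if spins[i] != spins[j]:
                cut += W[i, j]
    return spins, cut

# 4-cycle graph: bipartite, so the optimal cut contains all 4 edges
W = np.zeros((4, 4))
for i, j in [(0, 1), (1, 2), (2, 3), (3, 0)]:
    W[i, j] = W[j, i] = 1.0
spins, cut = kuramoto_maxcut(W)
```

On this bipartite toy graph the phases settle into two anti-phase clusters and the recovered cut is optimal; on general graphs such a flow only finds local optima, which motivates SKONN's engineered binarizing dynamics.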

034005

Focus on Benchmarks for Neuromorphic Computing

A critical enabler for progress in neuromorphic computing research is the ability to transparently evaluate different neuromorphic solutions on important tasks and to compare them to state-of-the-art conventional solutions. The Intel Neuromorphic Deep Noise Suppression Challenge (Intel N-DNS Challenge), inspired by the Microsoft DNS Challenge, tackles a ubiquitous and commercially relevant task: real-time audio denoising. Audio denoising is likely to reap the benefits of neuromorphic computing due to its low-bandwidth, temporal nature and its relevance for low-power devices. The Intel N-DNS Challenge consists of two tracks: a simulation-based algorithmic track to encourage algorithmic innovation, and a neuromorphic hardware (Loihi 2) track to rigorously evaluate solutions. For both tracks, we specify an evaluation methodology based on energy, latency, and resource consumption in addition to output audio quality. We make the Intel N-DNS Challenge dataset scripts and evaluation code freely accessible, encourage community participation with monetary prizes, and release a neuromorphic baseline solution which shows promising audio quality, high power efficiency, and low resource consumption when compared to Microsoft NsNet2 and a proprietary Intel denoising model used in production. We hope the Intel N-DNS Challenge will hasten innovation in neuromorphic algorithms research, especially in the area of training tools and methods for real-time signal processing. We expect the winners of the challenge will demonstrate that for problems like audio denoising, significant gains in power and resources can be realized on neuromorphic devices available today compared to conventional state-of-the-art solutions.

034006

Topological-soliton-based devices, like the ferromagnetic domain-wall device, have been proposed as non-volatile memory (NVM) synapses in electronic crossbar arrays for fast and energy-efficient implementation of on-chip learning of neural networks (NNs). High linearity and symmetry in the synaptic weight-update characteristics of the device (long-term potentiation (LTP) and long-term depression (LTD)) are important requirements for obtaining high classification/regression accuracy in such an on-chip learning scheme. However, obtaining such linear and symmetric LTP and LTD characteristics in the ferromagnetic domain-wall device has remained a challenge. Here, we first carry out micromagnetic simulations of the device to show that incorporating defects at the edges of the device, with the defects having higher perpendicular magnetic anisotropy than the rest of the ferromagnetic layer, leads to a substantial improvement in the linearity and symmetry of the LTP and LTD characteristics. This is because these defects act as pinning centres for the domain wall and prevent it from moving during the delay time between two consecutive programming current pulses, which is not the case in a defect-free device. Next, we carry out system-level simulations of two crossbar arrays incorporating the synaptic characteristics of domain-wall synapse devices: one without such defects and one with them. For on-chip learning of both long short-term memory networks (using a regression task) and fully connected NNs (using a classification task), we show improved performance when the domain-wall synapse devices have defects at the edges. We also estimate the energy consumption in these synaptic devices and project their scaling with respect to on-chip learning in the corresponding crossbar arrays.
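
As an illustrative aside (not the authors' micromagnetic model), the linearity requirement can be made concrete with a standard phenomenological LTP curve in which a nonlinearity parameter controls how strongly the conductance update saturates; all names and values below are hypothetical.

```python
import numpy as np

# Illustrative sketch of a common phenomenological LTP model: conductance
# after n of N potentiation pulses, with nonlinearity parameter nu.
# nu -> 0 approaches the ideal linear weight update; large nu saturates
# early, which is the behaviour that degrades on-chip learning accuracy.

def ltp_conductance(n, n_pulses, g_min, g_max, nu):
    """Conductance after n potentiation pulses (saturating exponential)."""
    frac = (1.0 - np.exp(-nu * n / n_pulses)) / (1.0 - np.exp(-nu))
    return g_min + (g_max - g_min) * frac

n = np.arange(0, 101)
linear_like = ltp_conductance(n, 100, 0.0, 1.0, nu=0.01)  # nearly linear
nonlinear = ltp_conductance(n, 100, 0.0, 1.0, nu=5.0)     # strongly saturating
```

Halfway through the pulse train, the near-linear curve sits close to half scale while the saturating one is already near its ceiling; the edge defects described above move a real device toward the former regime.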

034007

Focus Issue on Hardware Optimization for Neuromorphic Computing

In-memory computing with emerging non-volatile memory devices (eNVMs) has shown promising results in accelerating matrix-vector multiplications. However, activation function calculations are still being implemented with general-purpose processors or large and complex neuron peripheral circuits. Here, we present the integration of Ag-based conductive bridge random access memory (Ag-CBRAM) crossbar arrays with Mott rectified linear unit (ReLU) activation neurons for scalable, energy- and area-efficient hardware (HW) implementation of deep neural networks. We develop Ag-CBRAM devices that can achieve a high ON/OFF ratio and multi-level programmability. Compact and energy-efficient Mott ReLU neuron devices implementing the ReLU activation function are directly connected to the columns of Ag-CBRAM crossbars to compute the output from the weighted sum current. We implement convolution filters and activations for VGG-16 using our integrated HW and demonstrate the successful generation of feature maps for CIFAR-10 images in HW. Our approach paves the way toward building a highly compact and energy-efficient eNVM-based in-memory computing system.

034008

Focus Issue on Ionic Phenomena in Materials for Neuromorphic Computing and Engineering

Artificial synapses capable of mimicking the fundamental functionalities of biological synapses are critical to the building of efficient neuromorphic systems. We have developed a HxWO3-based artificial synapse that replicates such synaptic functionalities via an all-solid-state redox transistor mechanism. The synaptic-HxWO3 transistor, which operates by current pulse control, exhibits excellent synaptic properties, including good linearity, low update variation, and suitable conductance modulation characteristics. We investigated the performance of the device under various operating conditions and the impact of its characteristics on artificial neural network computing. Although the synaptic-HxWO3 transistor showed an insufficient recognition accuracy of 66% for a handwritten digit recognition task with voltage pulse control, it achieved an excellent accuracy of 88% with current pulse control, approaching the 93% accuracy of an ideal synaptic device. This result suggests that the performance of any redox-transistor-type artificial synapse can be dramatically improved by current pulse control, which in turn paves the way for further exploration and evolution of advanced neuromorphic systems, with the potential to revolutionize the artificial intelligence domain. It further marks a significant stride towards the realization of high-performance, low-power-consumption computing devices.

034009

Spiking neural networks (SNNs) have emerged as a hardware-efficient architecture for classification tasks. The challenge of spike-based encoding has been the lack of a universal training mechanism performed entirely using spikes. There have been several attempts to adopt the powerful backpropagation (BP) technique used in non-spiking artificial neural networks (ANNs): (1) SNNs can be trained by externally computed numerical gradients. (2) A major advancement towards native spike-based learning has been the use of approximate BP using spike-timing-dependent plasticity with phased forward/backward passes. However, the transfer of information between such phases for gradient and weight update calculation necessitates external memory and computational access, which is a challenge for standard neuromorphic hardware implementations. In this paper, we propose a stochastic SNN-based backpropagation (SSNN-BP) algorithm that utilizes a composite neuron to simultaneously compute the forward-pass activations and backward-pass gradients explicitly with spikes. Although signed gradient values are a challenge for spike-based representation, we tackle this by splitting the gradient signal into positive and negative streams. The composite neuron encodes information in the form of stochastic spike trains and converts BP weight updates into temporally and spatially local spike coincidence updates compatible with hardware-friendly resistive processing units. Furthermore, we characterize the quantization effect of discrete spike-based weight updates to show that our method approaches the BP ANN baseline with sufficiently long spike trains. Finally, we show that the well-performing softmax cross-entropy loss function can be implemented through inhibitory lateral connections enforcing a winner-take-all rule. Our two-layer SNN shows excellent generalization, with performance comparable to ANNs with equivalent architecture and regularization parameters on static image datasets such as MNIST, Fashion-MNIST, and Extended MNIST, and on temporally encoded image datasets such as Neuromorphic MNIST. Thus, SSNN-BP enables BP compatible with purely spike-based neuromorphic hardware.
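
As an illustrative aside (not the SSNN-BP implementation itself), the positive/negative stream splitting described above can be sketched with Bernoulli spike trains: a signed value is encoded as two non-negative firing rates and recovered as their difference. All names and values below are hypothetical.

```python
import numpy as np

# Illustrative sketch: encode a signed gradient value as two non-negative
# stochastic spike trains and recover it as the difference of their rates.

def encode_signed(value, n_steps, rng):
    """Split a value in [-1, 1] into positive/negative Bernoulli spike trains."""
    pos = rng.random(n_steps) < max(value, 0.0)   # fires at rate max(v, 0)
    neg = rng.random(n_steps) < max(-value, 0.0)  # fires at rate max(-v, 0)
    return pos, neg

def decode_signed(pos, neg):
    """Estimate the value as the rate difference of the two streams."""
    return pos.mean() - neg.mean()

rng = np.random.default_rng(0)
pos, neg = encode_signed(-0.4, n_steps=20000, rng=rng)
est = decode_signed(pos, neg)
```

The estimate converges to the encoded value as the spike trains lengthen, mirroring the quantization trade-off the paper characterizes: shorter trains are cheaper but noisier.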

034010

Focus on Adaptive Materials and Devices for Brain-Inspired Electronics

The frequency of vanadium dioxide (VO2) oscillators is a fundamental figure of merit for the realization of neuromorphic circuits called oscillatory neural networks (ONNs), since high oscillator frequency ensures low-power-consuming, real-time computing ONNs. In this study, we perform electrothermal 3D technology computer-aided design (TCAD) simulations of a VO2 relaxation oscillator. We find that there exists an upper limit to its operating frequency, a limit that is not predicted by a purely circuital model of the VO2 oscillator. We investigate the intrinsic physical mechanisms that give rise to this upper limit. Our TCAD simulations show that, below certain threshold values of $C_{\mathrm{ext}}$, the points of the current-versus-voltage curve of the VO2 device corresponding to the insulator-to-metal transition (IMT) and metal-to-insulator transition (MIT) during oscillation become frequency dependent. This implies that the condition for the self-oscillatory regime may be satisfied by a given load line in the low-frequency range but no longer at higher frequencies, with consequent suppression of oscillations. We note that this variation of the IMT/MIT points below some threshold values of $C_{\mathrm{ext}}$ is due to a combination of factors: the intermediate resistive states achievable by the VO2 channel and the interplay between frequency and heat-transfer rate. Although the upper limit on the frequency that we extract is linked to the specific VO2 device we simulate, our findings apply qualitatively to any VO2 oscillator. Overall, our study elucidates the link between electrical and thermal behavior in VO2 devices that sets a constraint on the upper values of the operating frequency of any VO2 oscillator.
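
As an illustrative aside, the purely circuital model referred to above is commonly written as a capacitor charged through a series resistor and discharged through a hysteretically switching VO2 resistance. The sketch below integrates that lumped model with invented parameter values; by construction it shows no intrinsic frequency limit of the kind the TCAD study reveals.

```python
# Minimal circuit-level VO2 relaxation oscillator (lumped model, illustrative
# parameter values): the VO2 device is a two-state resistor with hysteresis,
# becoming metallic above V_IMT and insulating again below V_MIT.

def simulate_vo2_oscillator(v_dd=5.0, r_s=10e3, r_ins=100e3, r_met=1e3,
                            v_imt=3.0, v_mit=0.5, c_ext=1e-9,
                            dt=50e-9, t_end=200e-6):
    v_c, metallic, switches = 0.0, False, 0
    for _ in range(int(t_end / dt)):
        r_vo2 = r_met if metallic else r_ins
        # KCL at the device node: the series resistor charges C_ext,
        # the VO2 channel discharges it.
        dv = ((v_dd - v_c) / r_s - v_c / r_vo2) / c_ext
        v_c += dv * dt
        if not metallic and v_c >= v_imt:      # insulator-to-metal transition
            metallic, switches = True, switches + 1
        elif metallic and v_c <= v_mit:        # metal-to-insulator transition
            metallic = False
    return switches

n_cycles = simulate_vo2_oscillator()  # number of IMT events in t_end
```

With these values the insulating steady state (about 4.5 V) lies above V_IMT and the metallic one (about 0.45 V) below V_MIT, so the load line sustains oscillation; the TCAD result above is precisely that this condition can fail at high frequency for electrothermal reasons this lumped model cannot capture.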

034011

Neurons with internal memory have been proposed for biological and bio-inspired neural networks, adding important functionality. We introduce an internal, time-limited, charge-based memory into a III–V nanowire (NW) based optoelectronic neural node circuit designed for handling optical signals in a neural network. The new circuit can receive inhibiting and exciting light signals, store them, perform a non-linear evaluation, and emit a light signal. Using experimental values from the performance of individual III–V NWs, we create a realistic computational model of the complete artificial neural node circuit. We then create a flexible neural network simulation that uses these circuits as neuronal nodes and light for communication between the nodes. This model can simulate combinations of nodes with different hardware-derived memory properties and variable interconnects. Using the full model, we simulate the hardware implementation for two types of neural networks. First, we show that intentional variations in the memory decay time of the nodes can significantly improve the performance of a reservoir network. Second, we simulate the implementation in an anatomically constrained functioning model of the central complex network of the insect brain and find that it realizes an important functionality of the network even with significant variations in node performance. Our work demonstrates the advantages of an internal memory in a concrete nanophotonic neural node. The use of variable memory time constants in neural nodes is a general hardware-derived feature and could be used in a broad range of implementations.

034012

Focus Issue on Photonic Neuromorphic Engineering and Neuron-Inspired Processing

This work reports a nanostructure resonant tunnelling diode-photodetector (RTD-PD) device and demonstrates its operation as a controllable, optically triggered excitable spike generator. The top contact layer of the device is designed with a nanopillar structure (500 nm in diameter) to restrain the injection current, therefore yielding lower-energy operation for spike generation. We demonstrate experimentally the deterministic optical triggering of controllable and repeatable neuron-like spike patterns in the nanostructure RTD-PDs. Moreover, we show the device's ability to deliver spiking responses when biased in either of the two regions adjacent to the negative differential conductance region, the so-called 'peak' and 'valley' points of the current–voltage (IV) characteristic. This work also demonstrates experimentally key neuron-like dynamical features in the nanostructure RTD-PD, such as a well-defined threshold (in input optical intensity) for spike firing, as well as the presence of a spike-firing refractory time. The optoelectronic and chip-scale character of the proposed system, together with the deterministic, repeatable and well-controllable nature of the optically elicited spiking responses, renders this nanostructure RTD-PD element a highly promising solution for high-speed, energy-efficient optoelectronic artificial spiking neurons for novel light-enabled neuromorphic computing hardware.
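
As an illustrative aside (not a model of the RTD-PD device), the threshold behaviour described above is the hallmark of an excitable system and can be reproduced generically with the FitzHugh–Nagumo equations, shown here with textbook parameters and a hypothetical brief input pulse.

```python
# Generic excitable-neuron sketch (FitzHugh-Nagumo, textbook parameters):
# a brief input pulse above threshold elicits a full spike, while a weaker
# pulse produces only a small deflection from rest.

def fhn_response(pulse_amp, dt=0.01, t_end=100.0):
    a, b, eps = 0.7, 0.8, 0.08
    v, w = -1.1994, -0.6243          # resting state for zero input
    v_max = v
    for step in range(int(t_end / dt)):
        t = step * dt
        i_ext = pulse_amp if 10.0 <= t < 11.0 else 0.0  # brief input pulse
        dv = v - v**3 / 3.0 - w + i_ext
        dw = eps * (v + a - b * w)
        v, w = v + dv * dt, w + dw * dt
        v_max = max(v_max, v)
    return v_max

spike = fhn_response(1.0)      # suprathreshold: full spike excursion
no_spike = fhn_response(0.1)   # subthreshold: stays near rest
```

The all-or-nothing response to the pulse amplitude mirrors the well-defined optical intensity threshold reported for the RTD-PD; the slow recovery variable plays the role of the refractory mechanism.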

034013

The first generation of BrainScaleS, also referred to as BrainScaleS-1, is a neuromorphic system for emulating large-scale networks of spiking neurons. Following a 'physical modeling' principle, its VLSI circuits are designed to emulate the dynamics of biological examples: analog circuits implement neurons and synapses with time constants that arise from their electronic components' intrinsic properties. It operates in continuous time, with dynamics typically matching an acceleration factor of 10 000 compared to the biological regime. A fault-tolerant design allows it to achieve wafer-scale integration despite unavoidable analog variability and component failures. In this paper, we present the commissioning process of a BrainScaleS-1 wafer module, providing a short description of the system's physical components, illustrating the steps taken during its assembly, and describing the measures taken to operate it. Furthermore, we reflect on the system's development process and the lessons learned, and conclude with a demonstration of its functionality by emulating a wafer-scale synchronous firing chain, the largest spiking network emulation run with analog components and individual synapses to date.

034014

Focus on Neuromorphic Circuits and Systems using Emerging Devices

Brain-inspired computing proposes a set of algorithmic principles that hold promise for advancing artificial intelligence. These principles endow systems with self-learning capabilities, efficient energy usage, and high storage capacity. A core concept that lies at the heart of brain computation is sequence learning and prediction. This form of computation is essential for almost all our daily tasks, such as movement generation, perception, and language. Understanding how the brain performs such a computation is not only important for advancing neuroscience, but also for paving the way to new technological brain-inspired applications. A previously developed spiking neural network implementation of sequence prediction and recall learns complex, high-order sequences in an unsupervised manner using local, biologically inspired plasticity rules. An emerging type of hardware that may efficiently run this type of algorithm is neuromorphic hardware, which emulates the way the brain processes information and maps neurons and synapses directly onto a physical substrate. Memristive devices have been identified as potential synaptic elements in neuromorphic hardware. In particular, redox-induced resistive random access memory (ReRAM) devices stand out in many respects: they permit scalability, are energy efficient and fast, and can implement biological plasticity rules. In this work, we study the feasibility of using ReRAM devices as a replacement for the biological synapses in the sequence learning model. We implement and simulate the model, including the ReRAM plasticity, using the neural network simulator NEST. We investigate two types of ReRAM memristive devices: (i) a gradual, analog switching device, and (ii) an abrupt, binary switching device. We study the effect of different device properties on the performance characteristics of the sequence learning model and demonstrate that, in contrast to many other artificial neural networks, this architecture is resilient with respect to changes in the on-off ratio and the conductance resolution, device variability, and device failure.