SPICE-Level Demonstration of Unsupervised Learning With Spintronic Synapses in Spiking Neural Networks


Bibliographic Details
Main Authors: Salah Daddinounou, Anteneh Gebregiorgis, Said Hamdioui, Elena-Ioana Vatajelu
Format: Article
Language: English
Published: IEEE 2025-01-01
Series: IEEE Access
Online Access: https://ieeexplore.ieee.org/document/10551821/
Description
Summary: Spiking Neural Networks (SNNs) are artificial neural networks that promise to mimic biological brain processing, with unsupervised online learning capability for various cognitive tasks. However, SNN hardware implementation with online learning support is not trivial and may prove highly inefficient. This paper proposes an energy-efficient hardware implementation for SNN synapses. The implementation is based on parallel-connected Magnetic Tunnel Junction (MTJ) devices and exploits their inherent stochasticity. In addition, it uses a dedicated unsupervised learning rule based on optimized Spike-Timing-Dependent Plasticity (STDP). To facilitate the design, training, and evaluation of the SNN, an open-source Python-based platform is developed: it takes the SNN parameters and discrete circuit components as input, automatically generates the full associated SPICE netlist, and launches the simulation; it then extracts the simulation results and makes them available in Python for evaluation and manipulation. Unlike conventional neuromorphic hardware that relies on simple weight mapping after offline training, our approach emphasizes continuous, unsupervised learning, achieving 11.2 nW per synaptic update during training and as low as 109 fJ/spike during inference.
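As a rough illustration of the netlist-generation step such a platform performs, the sketch below builds a SPICE deck for a synaptic crossbar in which each synapse is a group of parallel MTJ branches. All names, the single-resistor MTJ model, and the parameter values are hypothetical simplifications for illustration, not the authors' actual platform or device model:

```python
# Hypothetical sketch: generate a SPICE netlist for a crossbar of
# parallel-MTJ synapses. Each MTJ branch is crudely modeled as a fixed
# resistor; a real flow would instantiate a calibrated MTJ subcircuit
# and include stochastic switching behavior.

def mtj_branch(name: str, pre: str, post: str, resistance: float) -> str:
    """One MTJ branch as a resistor card between pre- and post-neuron nodes."""
    return f"R{name} {pre} {post} {resistance}"

def generate_netlist(n_pre: int, n_post: int,
                     mtj_per_synapse: int = 4, r_parallel: float = 5e3) -> str:
    """Build a full crossbar netlist: n_pre x n_post synapses, each made of
    mtj_per_synapse parallel branches, all initialized here to the
    low-resistance (parallel) state."""
    lines = ["* auto-generated SNN synaptic crossbar (illustrative)"]
    for i in range(n_pre):
        for j in range(n_post):
            for k in range(mtj_per_synapse):
                lines.append(
                    mtj_branch(f"_{i}_{j}_{k}", f"pre{i}", f"post{j}", r_parallel)
                )
    lines.append(".end")
    return "\n".join(lines)

netlist = generate_netlist(2, 2)
print(netlist.splitlines()[1])  # first device card
```

In the full flow described in the abstract, the generated netlist would then be handed to a SPICE engine (for example by invoking the simulator as a subprocess) and the resulting waveforms parsed back into Python for evaluation.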
ISSN: 2169-3536