Efficient Hardware Implementation of a Multi-Layer Gradient-Free Online-Trainable Spiking Neural Network on FPGA

This paper presents an efficient hardware implementation of the recently proposed Optimised Deep Event-driven Spiking Neural Network Architecture (ODESA). ODESA is the first network to have end-to-end multi-layer online local supervised training without using gradients and has the combined adaptation of weights and thresholds in an efficient hierarchical structure. This research shows that the network architecture and the online training of weights and thresholds can be implemented efficiently on a large scale in hardware. The implementation consists of a multi-layer Spiking Neural Network (SNN) and individual training modules for each layer that enable online self-learning without using back-propagation. By using simple local adaptive selection thresholds, a Winner-Take-All (WTA) constraint on each layer, and a modified weight update rule that is more amenable to hardware, the trainer module allocates neuronal resources optimally at each layer without having to pass high-precision error measurements across layers. All elements in the system, including the training module, interact using event-based binary spikes. The hardware-optimised implementation is shown to preserve the performance of the original algorithm across multiple spatial-temporal classification problems with significantly reduced hardware requirements.

Bibliographic Details
Main Authors: Ali Mehrabi, Yeshwanth Bethi, Andre van Schaik, Andrew Wabnitz, Saeed Afshar
Format: Article
Language: English
Published: IEEE 2024-01-01
Series: IEEE Access
Subjects: Spiking neural networks; supervised learning; neuromorphic hardware; field programmable gate array (FPGA)
Online Access: https://ieeexplore.ieee.org/document/10755039/
_version_ 1832592894204051456
author Ali Mehrabi
Yeshwanth Bethi
Andre van Schaik
Andrew Wabnitz
Saeed Afshar
author_facet Ali Mehrabi
Yeshwanth Bethi
Andre van Schaik
Andrew Wabnitz
Saeed Afshar
author_sort Ali Mehrabi
collection DOAJ
description This paper presents an efficient hardware implementation of the recently proposed Optimised Deep Event-driven Spiking Neural Network Architecture (ODESA). ODESA is the first network to have end-to-end multi-layer online local supervised training without using gradients and has the combined adaptation of weights and thresholds in an efficient hierarchical structure. This research shows that the network architecture and the online training of weights and thresholds can be implemented efficiently on a large scale in hardware. The implementation consists of a multi-layer Spiking Neural Network (SNN) and individual training modules for each layer that enable online self-learning without using back-propagation. By using simple local adaptive selection thresholds, a Winner-Take-All (WTA) constraint on each layer, and a modified weight update rule that is more amenable to hardware, the trainer module allocates neuronal resources optimally at each layer without having to pass high-precision error measurements across layers. All elements in the system, including the training module, interact using event-based binary spikes. The hardware-optimised implementation is shown to preserve the performance of the original algorithm across multiple spatial-temporal classification problems with significantly reduced hardware requirements.
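The abstract above describes layers trained with a Winner-Take-All constraint, local adaptive thresholds, and a local weight update that needs no cross-layer error signals. A minimal sketch of that general idea is below. This is an illustrative toy, not the authors' exact ODESA rule: the class name, the learning rate `eta`, the initial threshold value, and the specific update equations are all assumptions made for the sketch.

```python
import numpy as np

class WTALayer:
    """Toy sketch of one WTA layer with local weight/threshold adaptation.

    Not the published ODESA rule; constants and update forms are hypothetical,
    chosen only to illustrate gradient-free, layer-local learning on binary
    spike vectors.
    """

    def __init__(self, n_in, n_out, seed=0):
        rng = np.random.default_rng(seed)
        self.w = rng.random((n_out, n_in))   # synaptic weights in [0, 1)
        self.theta = np.full(n_out, 0.5)     # per-neuron adaptive thresholds

    def forward(self, x):
        """x is a binary spike vector; at most one neuron fires (WTA)."""
        act = self.w @ x
        winner = int(np.argmax(act))
        out = np.zeros(self.theta.shape)
        if act[winner] >= self.theta[winner]:
            out[winner] = 1.0
        return out, winner, act[winner]

    def train_step(self, x, target_spike, eta=0.1):
        """Local update: no gradients, no error signals across layers.

        A supervisory binary spike reinforces the winner by moving its
        weights toward the input and its threshold toward its activation;
        otherwise the winner's threshold is raised to suppress it.
        """
        out, winner, act = self.forward(x)
        if target_spike:
            self.w[winner] += eta * (x - self.w[winner])
            self.theta[winner] += eta * (act - self.theta[winner])
        else:
            self.theta[winner] += eta
        return out
```

Because every quantity exchanged between layers in such a scheme is a binary spike and each update touches only one layer's local state, the structure maps naturally onto per-layer trainer modules in hardware, which is the property the paper exploits.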
format Article
id doaj-art-db7972714ad2482a85c89b990c6f0938
institution Kabale University
issn 2169-3536
language English
publishDate 2024-01-01
publisher IEEE
record_format Article
series IEEE Access
spelling doaj-art-db7972714ad2482a85c89b990c6f0938 (2025-01-21T00:02:31Z)
Language: eng
Publisher: IEEE
Journal: IEEE Access, ISSN 2169-3536, vol. 12, pp. 170980-170993, 2024-01-01
DOI: 10.1109/ACCESS.2024.3500134
Document: 10755039
Title: Efficient Hardware Implementation of a Multi-Layer Gradient-Free Online-Trainable Spiking Neural Network on FPGA
Authors:
Ali Mehrabi (https://orcid.org/0000-0003-3984-5361), International Centre for Neuromorphic Systems, MARCS Institute for Brain and Behaviour, Western Sydney University, Penrith, NSW, Australia
Yeshwanth Bethi (https://orcid.org/0000-0002-0713-0903), International Centre for Neuromorphic Systems, MARCS Institute for Brain and Behaviour, Western Sydney University, Penrith, NSW, Australia
Andre van Schaik (https://orcid.org/0000-0001-6140-017X), International Centre for Neuromorphic Systems, MARCS Institute for Brain and Behaviour, Western Sydney University, Penrith, NSW, Australia
Andrew Wabnitz, Department of Defence, Defence Science and Technology Group, Canberra, ACT, Australia
Saeed Afshar (https://orcid.org/0000-0002-2695-3745), International Centre for Neuromorphic Systems, MARCS Institute for Brain and Behaviour, Western Sydney University, Penrith, NSW, Australia
URL: https://ieeexplore.ieee.org/document/10755039/
Subjects: Spiking neural networks; supervised learning; neuromorphic hardware; field programmable gate array (FPGA)
spellingShingle Ali Mehrabi
Yeshwanth Bethi
Andre van Schaik
Andrew Wabnitz
Saeed Afshar
Efficient Hardware Implementation of a Multi-Layer Gradient-Free Online-Trainable Spiking Neural Network on FPGA
IEEE Access
Spiking neural networks
supervised learning
neuromorphic hardware
field programmable gate array (FPGA)
title Efficient Hardware Implementation of a Multi-Layer Gradient-Free Online-Trainable Spiking Neural Network on FPGA
title_full Efficient Hardware Implementation of a Multi-Layer Gradient-Free Online-Trainable Spiking Neural Network on FPGA
title_fullStr Efficient Hardware Implementation of a Multi-Layer Gradient-Free Online-Trainable Spiking Neural Network on FPGA
title_full_unstemmed Efficient Hardware Implementation of a Multi-Layer Gradient-Free Online-Trainable Spiking Neural Network on FPGA
title_short Efficient Hardware Implementation of a Multi-Layer Gradient-Free Online-Trainable Spiking Neural Network on FPGA
title_sort efficient hardware implementation of a multi layer gradient free online trainable spiking neural network on fpga
topic Spiking neural networks
supervised learning
neuromorphic hardware
field programmable gate array (FPGA)
url https://ieeexplore.ieee.org/document/10755039/
work_keys_str_mv AT alimehrabi efficienthardwareimplementationofamultilayergradientfreeonlinetrainablespikingneuralnetworkonfpga
AT yeshwanthbethi efficienthardwareimplementationofamultilayergradientfreeonlinetrainablespikingneuralnetworkonfpga
AT andrevanschaik efficienthardwareimplementationofamultilayergradientfreeonlinetrainablespikingneuralnetworkonfpga
AT andrewwabnitz efficienthardwareimplementationofamultilayergradientfreeonlinetrainablespikingneuralnetworkonfpga
AT saeedafshar efficienthardwareimplementationofamultilayergradientfreeonlinetrainablespikingneuralnetworkonfpga