Loss shaping enhances exact gradient learning with Eventprop in spiking neural networks
Event-based machine learning promises more energy-efficient AI on future neuromorphic hardware. Here, we investigate how the recently discovered Eventprop algorithm for gradient descent on exact gradients in spiking neural networks (SNNs) can be scaled up to challenging keyword recognition benchmarks. We implemented Eventprop in the GPU-enhanced neural networks framework (GeNN) and used it for training recurrent SNNs on the Spiking Heidelberg Digits (SHD) and Spiking Speech Commands (SSC) datasets. We found that learning depended strongly on the loss function and extended Eventprop to a wider class of loss functions to enable effective training. We then tested a large number of data augmentations and regularisations, as well as exploring different network structures and heterogeneous and trainable timescales. We found that when combined with two specific augmentations, the right regularisation and a delay line input, Eventprop networks with one recurrent layer achieved state-of-the-art performance on SHD and good accuracy on SSC. In comparison to a leading surrogate-gradient-based SNN training method, our GeNN Eventprop implementation is 3× faster and uses 4× less memory. This work is a significant step towards a low-power neuromorphic alternative to current machine learning paradigms.
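The abstract's central methodological point is that the choice of loss function strongly affects learning and that Eventprop was extended to a wider family of losses so that a well-"shaped" loss can be used. As a purely illustrative, hypothetical sketch (not the paper's GeNN/Eventprop implementation; every name and parameter value below is invented), the following NumPy snippet shows one plausible member of such a family: a cross-entropy accumulated from the output neurons' leaky membrane voltages at every time step of a trial, rather than evaluated only once at the end.

```python
# Hypothetical NumPy sketch -- not the paper's GeNN/Eventprop code.
import numpy as np

rng = np.random.default_rng(0)

n_out, n_steps, dt, tau_mem = 20, 100, 1.0, 20.0   # invented sizes and constants
alpha = np.exp(-dt / tau_mem)                       # LIF membrane decay per step
true_class = 3                                      # invented target label

# Invented synaptic input to non-spiking leaky-integrator readout neurons
input_current = rng.normal(0.0, 0.1, size=(n_steps, n_out))

v = np.zeros(n_out)          # readout membrane voltages
loss_integral = 0.0
for t in range(n_steps):
    v = alpha * v + input_current[t]                # leaky integration of the input
    # "Shaped" loss: accumulate a cross-entropy of the softmax of the
    # instantaneous voltages at every time step, instead of evaluating the
    # loss only once at the end of the trial.
    p = np.exp(v - v.max())
    p /= p.sum()
    loss_integral += -np.log(p[true_class] + 1e-12) * dt

loss = loss_integral / (n_steps * dt)               # time-averaged loss for the trial
print(f"time-averaged cross-entropy loss: {loss:.4f}")
```

In the paper itself, training is performed with GeNN on GPUs and the gradient of the chosen loss is obtained with Eventprop's event-based backward pass; the sketch above only evaluates a forward loss of that general shape.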
Main Authors: | Thomas Nowotny, James P Turner, James C Knight |
---|---|
Format: | Article |
Language: | English |
Published: | IOP Publishing, 2025-01-01 |
Series: | Neuromorphic Computing and Engineering |
Subjects: | spiking neural network; loss shaping; Eventprop; gradient descent; keyword recognition; Spiking Heidelberg Digits |
Online Access: | https://doi.org/10.1088/2634-4386/ada852 |
---|---|
author | Thomas Nowotny; James P Turner; James C Knight |
collection | DOAJ |
description | Event-based machine learning promises more energy-efficient AI on future neuromorphic hardware. Here, we investigate how the recently discovered Eventprop algorithm for gradient descent on exact gradients in spiking neural networks (SNNs) can be scaled up to challenging keyword recognition benchmarks. We implemented Eventprop in the GPU-enhanced neural networks framework (GeNN) and used it for training recurrent SNNs on the Spiking Heidelberg Digits (SHD) and Spiking Speech Commands (SSC) datasets. We found that learning depended strongly on the loss function and extended Eventprop to a wider class of loss functions to enable effective training. We then tested a large number of data augmentations and regularisations, as well as exploring different network structures and heterogeneous and trainable timescales. We found that when combined with two specific augmentations, the right regularisation and a delay line input, Eventprop networks with one recurrent layer achieved state-of-the-art performance on SHD and good accuracy on SSC. In comparison to a leading surrogate-gradient-based SNN training method, our GeNN Eventprop implementation is 3× faster and uses 4× less memory. This work is a significant step towards a low-power neuromorphic alternative to current machine learning paradigms. |
format | Article |
id | doaj-art-8913dbb94d0b4809ac15554bd20d990d |
institution | Kabale University |
issn | 2634-4386 |
language | English |
publishDate | 2025-01-01 |
publisher | IOP Publishing |
record_format | Article |
series | Neuromorphic Computing and Engineering |
spelling | doaj-art-8913dbb94d0b4809ac15554bd20d990d (2025-01-21T13:22:01Z); English; IOP Publishing; Neuromorphic Computing and Engineering, ISSN 2634-4386, 2025-01-01, vol. 5, no. 1, art. 014001, doi:10.1088/2634-4386/ada852; "Loss shaping enhances exact gradient learning with Eventprop in spiking neural networks"; Thomas Nowotny (https://orcid.org/0000-0002-4451-915X), School of Engineering and Informatics, University of Sussex, Brighton BN1 9QJ, United Kingdom; James P Turner, Information & Communication Technologies, Imperial College London, London SW7 2AZ, United Kingdom; James C Knight (https://orcid.org/0000-0003-0577-0074), School of Engineering and Informatics, University of Sussex, Brighton BN1 9QJ, United Kingdom; abstract as in the description field above; https://doi.org/10.1088/2634-4386/ada852; keywords: spiking neural network, loss shaping, Eventprop, gradient descent, keyword recognition, Spiking Heidelberg Digits |
title | Loss shaping enhances exact gradient learning with Eventprop in spiking neural networks |
topic | spiking neural network; loss shaping; Eventprop; gradient descent; keyword recognition; Spiking Heidelberg Digits |
url | https://doi.org/10.1088/2634-4386/ada852 |