Knowledge distillation for spiking neural networks: aligning features and saliency
Spiking neural networks (SNNs) are noted for their energy efficiency and biological fidelity, but their widespread adoption is hindered by challenges in training, primarily the non-differentiability of spiking activations and limited representational capacity. Existing approaches, such as artificial...
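The record's abstract is truncated and gives no implementation details. As a rough illustration of what "aligning features and saliency" between a teacher network and an SNN student can look like, the sketch below combines a standard feature-matching loss with an attention-transfer-style saliency loss in PyTorch. All names, loss weights, and the choice of activation-energy saliency are assumptions for illustration, not the paper's method.

```python
import torch
import torch.nn.functional as F


def feature_alignment_loss(student_feat: torch.Tensor, teacher_feat: torch.Tensor) -> torch.Tensor:
    """MSE between L2-normalized per-sample feature vectors (a common KD alignment choice)."""
    s = F.normalize(student_feat.flatten(1), dim=1)
    t = F.normalize(teacher_feat.flatten(1), dim=1)
    return F.mse_loss(s, t)


def saliency_alignment_loss(student_feat: torch.Tensor, teacher_feat: torch.Tensor) -> torch.Tensor:
    """Attention-transfer-style saliency: per-location activation energy,
    normalized into a spatial distribution, then matched with MSE."""
    s_map = F.normalize(student_feat.pow(2).mean(dim=1).flatten(1), p=1, dim=1)
    t_map = F.normalize(teacher_feat.pow(2).mean(dim=1).flatten(1), p=1, dim=1)
    return F.mse_loss(s_map, t_map)


# Hypothetical usage: teacher_feat would come from a pretrained teacher,
# student_feat from the SNN (e.g., firing rates averaged over time steps).
if __name__ == "__main__":
    student_feat = torch.randn(8, 64, 16, 16, requires_grad=True)  # SNN features
    teacher_feat = torch.randn(8, 64, 16, 16)                      # teacher features
    logits = torch.randn(8, 10, requires_grad=True)
    labels = torch.randint(0, 10, (8,))

    alpha, beta = 1.0, 0.5  # illustrative loss weights, not from the paper
    loss = (
        F.cross_entropy(logits, labels)
        + alpha * feature_alignment_loss(student_feat, teacher_feat)
        + beta * saliency_alignment_loss(student_feat, teacher_feat)
    )
    loss.backward()
    print(f"total loss: {loss.item():.4f}")
```

In practice the student's spiking features would be detached from the surrogate-gradient pipeline only on the teacher side; the alignment terms backpropagate into the SNN alongside the task loss.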
| Main Authors: | Yifan Hu, Guoqi Li, Lei Deng |
|---|---|
| Format: | Article |
| Language: | English |
| Published: | IOP Publishing, 2025-01-01 |
| Series: | Neuromorphic Computing and Engineering |
| Online Access: | https://doi.org/10.1088/2634-4386/ade821 |
Similar Items
- Neuromorphic Wireless Split Computing With Multi-Level Spikes
  by: Dengyu Wu, et al.
  Published: (2025-01-01)
- Autocorrelation Matrix Knowledge Distillation: A Task-Specific Distillation Method for BERT Models
  by: Kai Zhang, et al.
  Published: (2024-10-01)
- Spike-Based Neuromorphic Model of Spasticity for Generation of Affected Neural Activity
  by: Jin Yan, et al.
  Published: (2025-01-01)
- Aligning to the teacher: multilevel feature-aligned knowledge distillation
  by: Yang Zhang, et al.
  Published: (2025-08-01)
- Understanding the functional roles of modelling components in spiking neural networks
  by: Huifeng Yin, et al.
  Published: (2024-01-01)