Principled neuromorphic reservoir computing
Abstract: Reservoir computing advances the intriguing idea that a nonlinear recurrent neural circuit—the reservoir—can encode spatio-temporal input signals to enable efficient ways to perform tasks like classification or regression. However, recently the idea of a monolithic reservoir network that simultaneously buffers input signals and expands them into nonlinear features has been challenged. (Full abstract in the description field below.)
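The core principle named in the abstract is that randomized representations can approximate polynomial kernels without explicitly materializing higher-order features. Below is a minimal sketch of that idea, not the paper's actual construction: the dimensions `d` and `D` and the ±1 projection matrices `Phi`/`Psi` are illustrative assumptions. The elementwise product of two independent random projections yields features whose inner product is an unbiased estimate of the squared kernel ⟨x, y⟩².

```python
# Sketch of the kernel-approximation principle, assuming +/-1 random
# projections; d and D are illustrative sizes, not values from the paper.
import numpy as np

rng = np.random.default_rng(0)
d, D = 10, 20000                      # input dimension, expanded dimension

Phi = rng.choice([-1.0, 1.0], size=(D, d))   # two independent random projections
Psi = rng.choice([-1.0, 1.0], size=(D, d))

def sigma_pi_features(x):
    # 'Sigma-Pi' operation: weighted sums (the projections) followed by
    # an elementwise product of the two projected views
    return (Phi @ x) * (Psi @ x)

x, y = rng.standard_normal(d), rng.standard_normal(d)

exact = np.dot(x, y) ** 2                              # degree-2 polynomial kernel
approx = np.dot(sigma_pi_features(x), sigma_pi_features(y)) / D
print(f"exact {exact:.4f}  approx {approx:.4f}")       # approx -> exact as D grows
```

The multiplication step is what the abstract's Sigma-Pi neurons contribute; a purely linear (summation-only) reservoir cannot produce such higher-order terms. A companion sketch combining a memory buffer with this expansion appears after the record fields below.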
Main Authors: | Denis Kleyko, Christopher J. Kymn, Anthony Thomas, Bruno A. Olshausen, Friedrich T. Sommer, E. Paxon Frady |
---|---|
Format: | Article |
Language: | English |
Published: | Nature Portfolio, 2025-01-01 |
Series: | Nature Communications |
Online Access: | https://doi.org/10.1038/s41467-025-55832-y |
author | Denis Kleyko; Christopher J. Kymn; Anthony Thomas; Bruno A. Olshausen; Friedrich T. Sommer; E. Paxon Frady |
author_sort | Denis Kleyko |
collection | DOAJ |
description | Abstract: Reservoir computing advances the intriguing idea that a nonlinear recurrent neural circuit—the reservoir—can encode spatio-temporal input signals to enable efficient ways to perform tasks like classification or regression. However, recently the idea of a monolithic reservoir network that simultaneously buffers input signals and expands them into nonlinear features has been challenged. A representation scheme in which memory buffer and expansion into higher-order polynomial features can be configured separately has been shown to significantly outperform traditional reservoir computing in prediction of multivariate time-series. Here we propose a configurable neuromorphic representation scheme that provides competitive performance on prediction, but with significantly better scaling properties than directly materializing higher-order features as in prior work. Our approach combines the use of randomized representations from traditional reservoir computing with mathematical principles for approximating polynomial kernels via such representations. While the memory buffer can be realized with standard reservoir networks, computing higher-order features requires networks of ‘Sigma-Pi’ neurons, i.e., neurons that enable both summation and multiplication of inputs. Finally, we provide an implementation of the memory buffer and Sigma-Pi networks on Loihi 2, an existing neuromorphic hardware platform. |
format | Article |
id | doaj-art-f0d77d90ee664896af12e6d9d69de726 |
institution | Kabale University |
issn | 2041-1723 |
language | English |
publishDate | 2025-01-01 |
publisher | Nature Portfolio |
record_format | Article |
series | Nature Communications |
spelling | Denis Kleyko (Centre for Applied Autonomous Sensor Systems, Örebro University); Christopher J. Kymn, Anthony Thomas, Bruno A. Olshausen, Friedrich T. Sommer (Redwood Center for Theoretical Neuroscience, University of California); E. Paxon Frady (Neuromorphic Computing Lab, Intel). Principled neuromorphic reservoir computing. Nature Communications, vol. 16, no. 1, pp. 1-15 (2025-01-01). Nature Portfolio. ISSN 2041-1723. https://doi.org/10.1038/s41467-025-55832-y |
title | Principled neuromorphic reservoir computing |
url | https://doi.org/10.1038/s41467-025-55832-y |
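The description field above separates a linear memory buffer (realizable with standard reservoir networks) from the Sigma-Pi expansion into higher-order features. The sketch below, referenced from the abstract snippet at the top of this record, wires the two together on a toy one-step-ahead prediction task. Assumptions to flag: the cyclic-shift buffer with decay `lam`, the toy input signal, all sizes, and the ridge-regression readout are illustrative stand-ins, not the paper's experimental setup; only the overall structure (linear buffer, then Hadamard products of random projections, then linear readout) follows the abstract.

```python
# Sketch: shift-register memory buffer + randomized Sigma-Pi expansion
# + ridge readout. All parameter choices here are illustrative.
import numpy as np

rng = np.random.default_rng(1)
D, T, lam = 400, 3000, 0.9            # buffer size, steps, fading-memory decay

u = np.sin(0.1 * np.arange(T)) * np.sin(0.03 * np.arange(T))  # toy signal

enc = rng.choice([-1.0, 1.0], size=D)        # random encoding of the scalar input
Phi = rng.choice([-1.0, 1.0], size=(D, D))   # projections for Sigma-Pi pairs
Psi = rng.choice([-1.0, 1.0], size=(D, D))

# Memory buffer: permutation (cyclic-shift) recurrence with fading decay
M = np.zeros((T, D))
m = np.zeros(D)
for t in range(T):
    m = lam * np.roll(m, 1) + enc * u[t]
    M[t] = m

# Sigma-Pi expansion: elementwise products of two random views of the buffer,
# approximating degree-2 features without enumerating all pairs explicitly
Z = np.hstack([M, (M @ Phi.T) * (M @ Psi.T) / D])

# Ridge-regression readout for one-step-ahead prediction
X, y = Z[:-1], u[1:]
n_train = 2000
A = X[:n_train]
w = np.linalg.solve(A.T @ A + 1e-3 * np.eye(A.shape[1]), A.T @ y[:n_train])
pred = X[n_train:] @ w
print("test NRMSE:", np.sqrt(np.mean((pred - y[n_train:]) ** 2)) / np.std(y[n_train:]))
```

Note the scaling point the abstract makes: the second-order block stays D-dimensional (400 features here), whereas explicitly materializing all pairwise products of the buffer state would require D(D+1)/2 ≈ 80,000 features.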