Evolutionary learning in neural networks by heterosynaptic plasticity

Summary: Training biophysical neuron models provides insights into brain circuits’ organization and problem-solving capabilities. Traditional training methods like backpropagation face challenges with complex models due to instability and gradient issues. We explore evolutionary algorithms (EAs) com...

Full description

Bibliographic Details
Main Authors: Zedong Bi, Ruiqi Fu, Guozhang Chen, Dongping Yang, Yu Zhou, Liang Tian
Format: Article
Language: English
Published: Elsevier 2025-05-01
Series: iScience
Subjects:
Online Access: http://www.sciencedirect.com/science/article/pii/S2589004225006017
_version_ 1849700289803714560
author Zedong Bi
Ruiqi Fu
Guozhang Chen
Dongping Yang
Yu Zhou
Liang Tian
author_facet Zedong Bi
Ruiqi Fu
Guozhang Chen
Dongping Yang
Yu Zhou
Liang Tian
author_sort Zedong Bi
collection DOAJ
description Summary: Training biophysical neuron models provides insights into brain circuits’ organization and problem-solving capabilities. Traditional training methods like backpropagation face challenges with complex models due to instability and gradient issues. We explore evolutionary algorithms (EAs) combined with heterosynaptic plasticity as a gradient-free alternative. Our EA models agents with distinct neuron information routes, evaluated via alternating gating, and guided by dopamine-driven plasticity. This model draws inspiration from various biological mechanisms, such as dopamine function, dendritic spine meta-plasticity, memory replay, and cooperative synaptic plasticity within dendritic neighborhoods. Neural networks trained with this model recapitulate brain-like dynamics during cognition. Our method effectively trains spiking and analog neural networks in both feedforward and recurrent architectures, and it achieves performance comparable to gradient-based methods on tasks such as MNIST classification and Atari games. Overall, this research extends training approaches for biophysical neuron models, offering a robust alternative to traditional algorithms.
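The abstract describes a gradient-free, population-based training scheme: candidate agents are evaluated, the fittest are selected, and their parameters are perturbed to form the next generation. The sketch below illustrates that general evolutionary-algorithm idea only; it does not implement the paper's heterosynaptic gating or dopamine-driven plasticity. The toy XOR task, network size, population size, and mutation scale are all illustrative assumptions.

```python
import numpy as np

# Generic gradient-free evolutionary training loop (illustrative sketch,
# NOT the paper's heterosynaptic-plasticity method).
rng = np.random.default_rng(0)

# Toy task: XOR, a minimal problem that backprop-free methods can still solve.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([0.0, 1.0, 1.0, 0.0])

def forward(params, x):
    """Tiny 2-4-1 analog network: tanh hidden layer, sigmoid output."""
    W1, b1, W2, b2 = params
    h = np.tanh(x @ W1 + b1)
    return 1.0 / (1.0 + np.exp(-(h @ W2 + b2)))

def fitness(params):
    """Negative mean squared error: higher is better."""
    pred = forward(params, X)
    return -np.mean((pred - y) ** 2)

def random_params():
    return [rng.normal(0, 1, (2, 4)), np.zeros(4),
            rng.normal(0, 1, 4), 0.0]

def mutate(params, sigma=0.3):
    """Gaussian perturbation of every parameter (the only 'learning' step)."""
    return [p + rng.normal(0, sigma, np.shape(p)) for p in params]

# Evaluate a population, keep an elite, and refill with mutated elite copies.
pop = [random_params() for _ in range(40)]
for gen in range(300):
    pop.sort(key=fitness, reverse=True)
    elite = pop[:8]
    pop = elite + [mutate(elite[rng.integers(8)]) for _ in range(32)]

best = max(pop, key=fitness)
```

Selection plus mutation replaces the gradient entirely, which is why such schemes stay stable on models whose dynamics make backpropagation ill-conditioned; the paper's contribution is a biologically grounded way to realize the evaluation and variation steps via gating and heterosynaptic plasticity.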
format Article
id doaj-art-e0f7836a4e9b4de0808e96f290e6d02a
institution DOAJ
issn 2589-0042
language English
publishDate 2025-05-01
publisher Elsevier
record_format Article
series iScience
spelling doaj-art-e0f7836a4e9b4de0808e96f290e6d02a2025-08-20T03:18:19ZengElsevieriScience2589-00422025-05-0128511234010.1016/j.isci.2025.112340Evolutionary learning in neural networks by heterosynaptic plasticityZedong Bi0Ruiqi Fu1Guozhang Chen2Dongping Yang3Yu Zhou4Liang Tian5Lingang Laboratory, Shanghai 200031, China; Corresponding authorDepartment of Physics, Hong Kong Baptist University, Hong Kong, ChinaNational Key Laboratory for Multimedia Information Processing, School of Computer Science, Peking University, Beijing, ChinaResearch Institute of Artificial Intelligence, Zhejiang Lab, Hangzhou 311121, ChinaSchool of Life Sciences and Health, University of Health and Rehabilitation Sciences, Qingdao, Shandong 266011, ChinaDepartment of Physics, Hong Kong Baptist University, Hong Kong, China; Institute of Computational and Theoretical Studies, Hong Kong Baptist University, Hong Kong, China; Institute of Systems Medicine and Health Sciences, Hong Kong Baptist University, Hong Kong, China; Corresponding authorSummary: Training biophysical neuron models provides insights into brain circuits’ organization and problem-solving capabilities. Traditional training methods like backpropagation face challenges with complex models due to instability and gradient issues. We explore evolutionary algorithms (EAs) combined with heterosynaptic plasticity as a gradient-free alternative. Our EA models agents with distinct neuron information routes, evaluated via alternating gating, and guided by dopamine-driven plasticity. This model draws inspiration from various biological mechanisms, such as dopamine function, dendritic spine meta-plasticity, memory replay, and cooperative synaptic plasticity within dendritic neighborhoods. Neural networks trained with this model recapitulate brain-like dynamics during cognition. Our method effectively trains spiking and analog neural networks in both feedforward and recurrent architectures, and it achieves performance comparable to gradient-based methods on tasks such as MNIST classification and Atari games. Overall, this research extends training approaches for biophysical neuron models, offering a robust alternative to traditional algorithms.http://www.sciencedirect.com/science/article/pii/S2589004225006017Biological sciencesNeuroscienceBiophysics
spellingShingle Zedong Bi
Ruiqi Fu
Guozhang Chen
Dongping Yang
Yu Zhou
Liang Tian
Evolutionary learning in neural networks by heterosynaptic plasticity
iScience
Biological sciences
Neuroscience
Biophysics
title Evolutionary learning in neural networks by heterosynaptic plasticity
title_full Evolutionary learning in neural networks by heterosynaptic plasticity
title_fullStr Evolutionary learning in neural networks by heterosynaptic plasticity
title_full_unstemmed Evolutionary learning in neural networks by heterosynaptic plasticity
title_short Evolutionary learning in neural networks by heterosynaptic plasticity
title_sort evolutionary learning in neural networks by heterosynaptic plasticity
topic Biological sciences
Neuroscience
Biophysics
url http://www.sciencedirect.com/science/article/pii/S2589004225006017
work_keys_str_mv AT zedongbi evolutionarylearninginneuralnetworksbyheterosynapticplasticity
AT ruiqifu evolutionarylearninginneuralnetworksbyheterosynapticplasticity
AT guozhangchen evolutionarylearninginneuralnetworksbyheterosynapticplasticity
AT dongpingyang evolutionarylearninginneuralnetworksbyheterosynapticplasticity
AT yuzhou evolutionarylearninginneuralnetworksbyheterosynapticplasticity
AT liangtian evolutionarylearninginneuralnetworksbyheterosynapticplasticity