Evolutionary learning in neural networks by heterosynaptic plasticity

Bibliographic Details
Main Authors: Zedong Bi, Ruiqi Fu, Guozhang Chen, Dongping Yang, Yu Zhou, Liang Tian
Format: Article
Language:English
Published: Elsevier 2025-05-01
Series:iScience
Subjects:
Online Access:http://www.sciencedirect.com/science/article/pii/S2589004225006017
Description
Summary: Training biophysical neuron models provides insights into brain circuits' organization and problem-solving capabilities. Traditional training methods like backpropagation face challenges with complex models due to instability and gradient issues. We explore evolutionary algorithms (EAs) combined with heterosynaptic plasticity as a gradient-free alternative. Our EA models agents as distinct neuron information routes, evaluated via alternating gating and guided by dopamine-driven plasticity. This model draws inspiration from various biological mechanisms, such as dopamine function, dendritic spine meta-plasticity, memory replay, and cooperative synaptic plasticity within dendritic neighborhoods. Neural networks trained with this model recapitulate brain-like dynamics during cognition. Our method effectively trains spiking and analog neural networks in both feedforward and recurrent architectures, and it achieves performance comparable to gradient-based methods on tasks like MNIST classification and Atari games. Overall, this research extends training approaches for biophysical neuron models, offering a robust alternative to traditional algorithms.
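To make the gradient-free idea in the abstract concrete, the sketch below shows a minimal evolutionary loop in which several candidate parameter "routes" are gated one at a time, scored on a task, and the best-scoring candidate is consolidated. This is an illustrative assumption only: the function names, fitness measure, and update rule are hypothetical and do not reproduce the authors' heterosynaptic-plasticity implementation.

```python
# Hypothetical sketch of a gradient-free evolutionary loop: candidate routes
# (parameter variants) are gated in alternation, scored by task reward, and
# the best one is kept. Illustrative only; not the authors' implementation.
import numpy as np

rng = np.random.default_rng(0)

def task_reward(weights, inputs, targets):
    """Toy fitness: negative mean squared error of a linear readout."""
    preds = inputs @ weights
    return -np.mean((preds - targets) ** 2)

def evolve(inputs, targets, n_routes=8, n_generations=200, noise_scale=0.1):
    base = rng.normal(scale=0.1, size=(inputs.shape[1], targets.shape[1]))
    for _ in range(n_generations):
        # Gate one candidate route at a time: each route is the base weights
        # plus a distinct perturbation, evaluated on the same task.
        routes = [base + noise_scale * rng.normal(size=base.shape)
                  for _ in range(n_routes)]
        rewards = [task_reward(w, inputs, targets) for w in routes]
        # Consolidate the best-performing route (a stand-in for the
        # reward-guided plasticity step described in the abstract).
        base = routes[int(np.argmax(rewards))]
    return base

# Usage on synthetic data
X = rng.normal(size=(64, 10))
true_W = rng.normal(size=(10, 2))
Y = X @ true_W
W = evolve(X, Y)
print("final reward:", task_reward(W, X, Y))
```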
ISSN:2589-0042