Recurrent neural networks with transient trajectory explain working memory encoding mechanisms

Abstract: Whether working memory (WM) is encoded by persistent activity using attractors or by dynamic activity using transient trajectories has been debated for decades in both experimental and modeling studies, and no consensus has been reached. Although many recurrent neural networks (RNNs) have been proposed to simulate WM, most are designed to match particular experimental observations and show either transient or persistent activity. The few that consider networks with both activity patterns have not directly compared their memory capabilities. In this study, we build transient-trajectory-based RNNs (TRNNs) and compare them to vanilla RNNs with more persistent activity. The TRNN incorporates biologically plausible modifications, including self-inhibition, sparse connectivity, and hierarchical topology. Besides producing activity patterns that resemble animal recordings and retaining versatility to variable encoding times, TRNNs perform better in delayed-choice and spatial-memory reinforcement learning tasks. This study therefore provides evidence, from a model-design point of view, supporting the transient-activity theory of WM encoding.


Bibliographic Details
Main Authors: Chenghao Liu, Shuncheng Jia, Hongxing Liu, Xuanle Zhao, Chengyu T. Li, Bo Xu, Tielin Zhang
Format: Article
Language:English
Published: Nature Portfolio 2025-01-01
Series:Communications Biology
Online Access:https://doi.org/10.1038/s42003-024-07282-3
collection DOAJ
description Whether working memory (WM) is encoded by persistent activity using attractors or by dynamic activity using transient trajectories has been debated for decades in both experimental and modeling studies, and no consensus has been reached. Although many recurrent neural networks (RNNs) have been proposed to simulate WM, most are designed to match particular experimental observations and show either transient or persistent activity. The few that consider networks with both activity patterns have not directly compared their memory capabilities. In this study, we build transient-trajectory-based RNNs (TRNNs) and compare them to vanilla RNNs with more persistent activity. The TRNN incorporates biologically plausible modifications, including self-inhibition, sparse connectivity, and hierarchical topology. Besides producing activity patterns that resemble animal recordings and retaining versatility to variable encoding times, TRNNs perform better in delayed-choice and spatial-memory reinforcement learning tasks. This study therefore provides evidence, from a model-design point of view, supporting the transient-activity theory of WM encoding.
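The description above names two of the TRNN's architectural modifications concretely: an inhibitory self-connection and sparse recurrent connectivity. A minimal rate-model sketch can illustrate why these favor transient over persistent coding; this is an illustrative reconstruction, not the authors' implementation, and the network size, sparsity level, self-inhibition strength, and time constants are all assumed values.

```python
import math
import random

def make_trnn_weights(n, sparsity=0.8, self_inhibition=-0.5, seed=0):
    """Sparse random recurrent weights with an inhibitory self-connection
    on the diagonal (all parameter values are assumptions for illustration)."""
    rng = random.Random(seed)
    w = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(n):
            if i == j:
                w[i][j] = self_inhibition      # self-inhibition damps persistent activity
            elif rng.random() > sparsity:      # keep ~20% of off-diagonal connections
                w[i][j] = rng.gauss(0.0, 1.0 / math.sqrt(n))
    return w

def step(w, r, inp, dt=0.1, tau=1.0):
    """One Euler step of a standard rate model: tau * dr/dt = -r + tanh(W r + input)."""
    n = len(r)
    out = []
    for i in range(n):
        drive = sum(w[i][j] * r[j] for j in range(n)) + inp[i]
        out.append(r[i] + dt / tau * (-r[i] + math.tanh(drive)))
    return out

# A brief stimulus followed by a delay: with self-inhibition and weak sparse
# recurrence, population activity decays during the delay instead of settling
# into a persistent fixed point, so any memory must live in the trajectory.
n = 20
w = make_trnn_weights(n)
r = [0.0] * n
stim = [1.0] * n
for _ in range(10):            # encoding period: stimulus on
    r = step(w, r, stim)
peak = max(abs(x) for x in r)  # activity at the end of encoding
for _ in range(200):           # delay period: stimulus off
    r = step(w, r, [0.0] * n)
late = max(abs(x) for x in r)  # activity late in the delay (decayed)
```

In this sketch `late` ends up far below `peak`, the opposite of an attractor network, whose delay activity would remain near the encoding level. The hierarchical topology mentioned in the description is omitted here for brevity.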
id doaj-art-22724d0e77064c45b1b7037293924247
institution Kabale University
issn 2399-3642
spelling Nature Portfolio, Communications Biology, ISSN 2399-3642, 2025-01-01, https://doi.org/10.1038/s42003-024-07282-3. Author affiliations: Institute of Automation, Chinese Academy of Sciences (Chenghao Liu, Shuncheng Jia, Hongxing Liu, Xuanle Zhao, Bo Xu, Tielin Zhang); Lingang Laboratory (Chengyu T. Li).
title Recurrent neural networks with transient trajectory explain working memory encoding mechanisms
url https://doi.org/10.1038/s42003-024-07282-3