Should We Reconsider RNNs for Time-Series Forecasting?

Bibliographic Details
Main Authors: Vahid Naghashi, Mounir Boukadoum, Abdoulaye Banire Diallo
Format: Article
Language: English
Published: MDPI AG 2025-04-01
Series: AI
Online Access: https://www.mdpi.com/2673-2688/6/5/90
Description
Summary: (1) Background: In recent years, Transformer-based models have dominated the time-series forecasting domain, overshadowing recurrent neural networks (RNNs) such as Long Short-Term Memory (LSTM) and Gated Recurrent Unit (GRU). While Transformers demonstrate superior performance, their high computational cost limits their practical application in resource-constrained settings. (2) Methods: In this paper, we reconsider RNNs, specifically the GRU architecture, as an efficient alternative for time-series forecasting, leveraging the architecture's sequential representation capability to effectively capture cross-channel dependencies. Our model also applies a feed-forward layer right after the GRU module to represent temporal dependencies, and aggregates its output with that of the GRU layers to predict future values of a given time series. (3) Results and conclusions: Our extensive experiments on diverse real-world datasets show that our inverted GRU (iGRU) model achieves promising results in terms of error metrics and memory efficiency, challenging or surpassing state-of-the-art models on various benchmarks.
ISSN: 2673-2688
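
The abstract describes the iGRU design only at a high level: a GRU whose sequential representation captures cross-channel dependencies, a feed-forward layer after the GRU for temporal dependencies, and an aggregation of the two for prediction. The PyTorch sketch below shows one plausible reading of that description; the class name InvertedGRUSketch, the layer sizes, and the additive aggregation are illustrative assumptions, not the authors' published implementation.

import torch
import torch.nn as nn

class InvertedGRUSketch(nn.Module):
    """Hypothetical sketch of the 'inverted GRU' idea from the abstract."""

    def __init__(self, lookback: int, horizon: int, d_model: int = 128):
        super().__init__()
        # Embed each channel's entire lookback window as one token.
        self.embed = nn.Linear(lookback, d_model)
        # The GRU recurs over channels (not time steps), so its sequential
        # representation captures cross-channel dependencies.
        self.gru = nn.GRU(d_model, d_model, batch_first=True)
        # Feed-forward branch placed right after the GRU module to model
        # temporal dependencies within each channel token.
        self.ffn = nn.Sequential(
            nn.Linear(d_model, d_model), nn.ReLU(), nn.Linear(d_model, d_model)
        )
        self.head = nn.Linear(d_model, horizon)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, lookback, channels) -> (batch, channels, lookback)
        tokens = self.embed(x.transpose(1, 2))
        gru_out, _ = self.gru(tokens)
        # Aggregate the two branches; a residual sum is assumed here.
        fused = gru_out + self.ffn(gru_out)
        y = self.head(fused)            # (batch, channels, horizon)
        return y.transpose(1, 2)        # (batch, horizon, channels)

# Usage: forecast 96 steps ahead for 7 channels from a 336-step window.
model = InvertedGRUSketch(lookback=336, horizon=96)
print(model(torch.randn(8, 336, 7)).shape)  # torch.Size([8, 96, 7])

One consequence of this inverted layout is that the recurrence length equals the number of channels rather than the lookback window, which is consistent with the memory-efficiency framing in the abstract.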