Deep Reinforcement Learning-Based Speed Predictor for Distributionally Robust Eco-Driving

Bibliographic Details
Main Authors: Rajan Chaudhary, Nalin Kumar Sharma, Rahul Kala, Sri Niwas Singh
Format: Article
Language: English
Published: IEEE, 2025-01-01
Series: IEEE Access
Online Access: https://ieeexplore.ieee.org/document/10843212/
Description
Summary: This paper proposes an eco-driving technique for an ego vehicle operating behind a non-communicating leading Heavy-Duty Vehicle (HDV), aimed at minimizing energy consumption while maintaining a safe inter-vehicle distance. A novel data-driven approach based on Deep Reinforcement Learning (DRL) is developed to predict the future speed trajectory of the leading HDV from simulated speed profiles and road-slope information. The Deep Q-Network (DQN)-based speed predictor achieves prediction accuracies of 95.4% and 93.2% in Driving Cycles 1 and 2, respectively. The predicted speed is then used to optimize the ego vehicle's speed plan through a distributionally robust Model Predictive Controller (MPC), which accounts for uncertainty in the prediction to ensure operational safety. Compared with traditional leading-vehicle speed-prediction methods, the proposed approach achieves energy savings of 12.5% in Driving Cycle 1 and 8.6% in Driving Cycle 2. Validated through case studies on simulated and real-world driving cycles, the solution is scalable, computationally efficient, and suitable for real-time applications in Intelligent Transportation Systems (ITS), making it a viable approach for enhancing sustainability in non-communicating vehicle environments.
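
To make the method description concrete, the sketch below illustrates a DQN-style speed predictor of the kind the abstract describes: a small Q-network mapping a window of recent leader speeds plus a road-slope preview to a discretized one-step speed change, rolled forward greedily to produce a predicted trajectory. The window lengths, action discretization, and network size are illustrative assumptions; the paper's exact architecture, reward design, and training procedure are not given in this record.

    import torch
    import torch.nn as nn

    # Assumed discretization of one-step speed changes (m/s); the paper's
    # actual action set is not specified in this record.
    SPEED_DELTAS = torch.linspace(-1.0, 1.0, steps=9)
    HIST, PREVIEW = 10, 10  # assumed window lengths for speed history and slope

    class QNet(nn.Module):
        """Q-network: (recent HDV speeds + slope preview) -> Q-values over deltas."""
        def __init__(self, hidden=64):
            super().__init__()
            self.net = nn.Sequential(
                nn.Linear(HIST + PREVIEW, hidden), nn.ReLU(),
                nn.Linear(hidden, hidden), nn.ReLU(),
                nn.Linear(hidden, len(SPEED_DELTAS)),
            )

        def forward(self, x):
            return self.net(x)

    def predict_leader_speeds(qnet, speed_hist, slope_preview, horizon=10):
        """Roll the greedy policy forward to predict the leader's speed profile."""
        assert len(speed_hist) >= HIST
        speeds, traj = list(speed_hist), []
        with torch.no_grad():
            for k in range(horizon):
                slope = list(slope_preview[k:k + PREVIEW])
                slope += [0.0] * (PREVIEW - len(slope))  # pad past the preview
                x = torch.tensor(speeds[-HIST:] + slope, dtype=torch.float32)
                a = int(qnet(x).argmax())
                v_next = max(0.0, speeds[-1] + float(SPEED_DELTAS[a]))
                speeds.append(v_next)
                traj.append(v_next)
        return traj

In the paper, such a predictor is trained on simulated speed profiles and slope data; only the inference roll-out is sketched here.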
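
The downstream controller can be sketched in a similarly hedged way. The quadratic program below plans ego speeds against the predicted leader profile while inflating the minimum-gap constraint by a margin that grows with the spread of the prediction error; this constraint tightening is a simplified stand-in for the paper's distributionally robust MPC, whose ambiguity set and exact formulation this record does not specify. The function name, cost weights, and square-root margin schedule are illustrative assumptions.

    import numpy as np
    import cvxpy as cp

    def robust_mpc_plan(v0, gap0, v_lead_pred, err_std, dt=1.0,
                        d_min=10.0, radius=1.0, a_max=2.0, w_track=0.1):
        """Plan ego speeds over the prediction horizon.

        The required gap is inflated by radius * err_std * sqrt(k) * dt at
        step k -- an assumed uncertainty margin, not the paper's exact
        distributionally robust constraint.
        """
        H = len(v_lead_pred)
        v_ref = np.asarray(v_lead_pred, dtype=float)
        v = cp.Variable(H + 1)   # ego speed trajectory
        a = cp.Variable(H)       # ego acceleration inputs
        lead_pos = np.cumsum(v_ref) * dt
        margin = radius * err_std * np.sqrt(np.arange(1, H + 1)) * dt

        cons = [v[0] == v0, v >= 0, cp.abs(a) <= a_max]
        for k in range(H):
            cons.append(v[k + 1] == v[k] + a[k] * dt)
            ego_pos = dt * cp.sum(v[1:k + 2])  # ego displacement after step k
            cons.append(gap0 + lead_pos[k] - ego_pos >= d_min + margin[k])

        # Smooth accelerations (energy proxy) while loosely tracking the leader.
        cost = cp.sum_squares(a) + w_track * cp.sum_squares(v[1:] - v_ref)
        cp.Problem(cp.Minimize(cost), cons).solve()
        return v.value, a.value

A receding-horizon loop would call predict_leader_speeds, pass the result and a prediction-error estimate into robust_mpc_plan, apply the first acceleration, and repeat at the next step.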
ISSN: 2169-3536