Deep Reinforcement Learning-Based Speed Predictor for Distributionally Robust Eco-Driving
This paper proposes an eco-driving technique for an ego vehicle operating behind a non-communicating leading Heavy-Duty Vehicle (HDV), aimed at minimizing energy consumption while ensuring inter-vehicle distance. A novel data-driven approach based on Deep Reinforcement Learning (DRL) is developed to...
Main Authors: | Rajan Chaudhary, Nalin Kumar Sharma, Rahul Kala, Sri Niwas Singh |
---|---|
Format: | Article |
Language: | English |
Published: | IEEE, 2025-01-01 |
Series: | IEEE Access |
Subjects: | Deep reinforcement learning; distributionally robust; eco-driving; heavy-duty vehicles; leading vehicle observer; model predictive control |
Online Access: | https://ieeexplore.ieee.org/document/10843212/ |
author | Rajan Chaudhary; Nalin Kumar Sharma; Rahul Kala; Sri Niwas Singh |
author_sort | Rajan Chaudhary |
collection | DOAJ |
description | This paper proposes an eco-driving technique for an ego vehicle operating behind a non-communicating leading Heavy-Duty Vehicle (HDV), aimed at minimizing energy consumption while ensuring a safe inter-vehicle distance. A novel data-driven approach based on Deep Reinforcement Learning (DRL) is developed to predict the future speed trajectory of the leading HDV using simulated speed profiles and road slope information. The DQN-based speed predictor achieves a prediction accuracy of 95.4% and 93.2% in Driving Cycles 1 and 2, respectively. This predicted speed is then used to optimize the ego vehicle’s speed plan through a distributionally robust Model Predictive Controller (MPC), which accounts for uncertainties in the prediction, ensuring operational safety. The proposed method demonstrates energy savings of 12.5% in Driving Cycle 1 and 8.6% in Driving Cycle 2, compared to traditional leading vehicle speed prediction methods. Validated through case studies across simulated and real-world driving cycles, the solution is scalable, computationally efficient, and suitable for real-time applications in Intelligent Transportation Systems (ITS), making it a viable approach for enhancing sustainability in non-communicating vehicle environments. |
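The abstract describes an RL agent that predicts a leading vehicle's future speed from its observed speed and the road slope. The paper's actual DQN and distributionally robust MPC are not reproduced here; the following is only a toy sketch of the prediction idea, using tabular Q-learning (a stand-in for a DQN) over a synthetic driving cycle. Every name, bucket size, and parameter below is a hypothetical illustration, not the authors' method.

```python
# Toy sketch only: tabular Q-learning predicts a leading vehicle's next
# speed bucket from its current (speed, slope) bucket. All values are
# synthetic; the paper uses a DQN and real/simulated driving cycles.
import random

random.seed(0)

N_SPEED = 10               # speed discretized into 10 buckets (0..9)
N_SLOPE = 3                # slope buckets: downhill / flat / uphill
ACTIONS = range(N_SPEED)   # action = predicted next speed bucket

Q = {}  # Q[(speed, slope)] -> list of action values

def synthetic_cycle(steps=2000):
    """Toy driving cycle: speed drifts with road slope plus noise."""
    v, samples = 5, []
    for t in range(steps):
        slope = (t // 50) % N_SLOPE            # repeating road profile
        drift = {0: 1, 1: 0, 2: -1}[slope]     # downhill speeds up, etc.
        v_next = min(N_SPEED - 1, max(0, v + drift + random.choice((-1, 0, 1))))
        samples.append(((v, slope), v_next))
        v = v_next
    return samples

def train(samples, alpha=0.3, eps=0.1):
    for state, v_next in samples:
        q = Q.setdefault(state, [0.0] * N_SPEED)
        a = (random.randrange(N_SPEED) if random.random() < eps
             else max(ACTIONS, key=lambda i: q[i]))
        reward = -abs(a - v_next)          # penalize prediction error
        q[a] += alpha * (reward - q[a])    # one-step value update

def predict(state):
    q = Q.get(state)
    return max(ACTIONS, key=lambda i: q[i]) if q else N_SPEED // 2

data = synthetic_cycle()
train(data * 5)                            # several passes over the cycle
hits = sum(abs(predict(s) - v) <= 1 for s, v in data)
accuracy = hits / len(data)                # fraction within one bucket
```

In the paper, a prediction of this kind feeds a distributionally robust MPC that plans the ego vehicle's speed under prediction uncertainty; that controller is beyond the scope of this sketch.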
format | Article |
id | doaj-art-cedaf0dc8815467191c3e4f8db0de040 |
institution | Kabale University |
issn | 2169-3536 |
language | English |
publishDate | 2025-01-01 |
publisher | IEEE |
record_format | Article |
series | IEEE Access |
spelling | Record ID: doaj-art-cedaf0dc8815467191c3e4f8db0de040 (indexed 2025-01-25) |
doi | 10.1109/ACCESS.2025.3530087 |
citation | IEEE Access, vol. 13, pp. 13904–13918, 2025-01-01; IEEE document 10843212 |
authors_detail | Rajan Chaudhary (ORCID 0000-0002-0275-5496; Department of Electrical and Electronics Engineering, ABV-Indian Institute of Information Technology and Management, Gwalior, India); Nalin Kumar Sharma (ORCID 0000-0001-7494-757X; Department of Electrical Engineering, Indian Institute of Technology Jammu, Jammu, India); Rahul Kala (ORCID 0000-0003-0421-5028; Center for Autonomous Systems, ABV-Indian Institute of Information Technology and Management, Gwalior, India); Sri Niwas Singh (ORCID 0000-0002-2451-5303; Department of Electrical and Electronics Engineering, ABV-Indian Institute of Information Technology and Management, Gwalior, India) |
title | Deep Reinforcement Learning-Based Speed Predictor for Distributionally Robust Eco-Driving |
topic | Deep reinforcement learning; distributionally robust; eco-driving; heavy-duty vehicles; leading vehicle observer; model predictive control |
url | https://ieeexplore.ieee.org/document/10843212/ |