Reinforcement learning based route optimization model to enhance energy efficiency in internet of vehicles
Abstract The Internet of Vehicles (IoV) is transforming the automobile industry by connecting vehicles to communication infrastructure that improves traffic control, safety, and information and entertainment services. However, several issues remain, such as data protection, privacy, compatibility with other protocols and systems, and the availability of stable and continuous connections.
Main Authors: | Quadeer Hussain, Ahmad Shukri Mohd Noor, Muhammad Mukhtar Qureshi, Jianqiang Li, Atta-ur Rahman, Aghiad Bakry, Tariq Mahmood, Amjad Rehman
---|---
Format: | Article
Language: | English
Published: | Nature Portfolio, 2025-01-01
Series: | Scientific Reports
Online Access: | https://doi.org/10.1038/s41598-025-86608-5
_version_ | 1832585868389384192 |
---|---|
author | Quadeer Hussain; Ahmad Shukri Mohd Noor; Muhammad Mukhtar Qureshi; Jianqiang Li; Atta-ur Rahman; Aghiad Bakry; Tariq Mahmood; Amjad Rehman
author_facet | Quadeer Hussain; Ahmad Shukri Mohd Noor; Muhammad Mukhtar Qureshi; Jianqiang Li; Atta-ur Rahman; Aghiad Bakry; Tariq Mahmood; Amjad Rehman
author_sort | Quadeer Hussain |
collection | DOAJ |
description | Abstract The Internet of Vehicles (IoV) is transforming the automobile industry by connecting vehicles to communication infrastructure that improves traffic control, safety, and information and entertainment services. However, several issues remain, such as data protection, privacy, compatibility with other protocols and systems, and the availability of stable and continuous connections. Specific problems relate to the energy consumed in transmitting information, distributing energy loads across a vehicle’s sensors and communication units, and designing energy-efficient approaches to processing received data and making decisions in the IoV environment. For this setting, we propose OptiE2ERL, an advanced Reinforcement Learning (RL)-based model designed to optimize energy efficiency and routing. Our model leverages a reward matrix and the Bellman equation to determine the optimal path from source to destination while keeping communication overhead manageable. The model considers critical parameters such as Remaining Energy Level (REL), Bandwidth and Interference Level (BIL), Mobility Pattern (MP), Traffic Condition (TC), and Network Topological Arrangement (NTA), ensuring a comprehensive approach to route optimization. Extensive simulations conducted in NS2 and Python demonstrate that OptiE2ERL significantly outperforms existing models such as LEACH, PEGASIS, and EER-RL across various performance metrics. Specifically, our model extends the network lifetime, delays the occurrence of the first dead node, and maintains a higher residual energy rate. Furthermore, OptiE2ERL enhances network scalability and robustness, making it a superior choice for IoV applications. The simulation results highlight the effectiveness of our model in achieving energy-efficient routing while maintaining network performance under different scenarios. By incorporating a diverse set of parameters and utilizing RL techniques, OptiE2ERL provides a robust solution to the challenges faced in IoV networks. This research contributes to the field by presenting a model that optimizes energy consumption and ensures reliable and efficient communication in dynamic vehicular environments. |
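The abstract describes route selection via a reward matrix and the Bellman equation over the five link metrics (REL, BIL, MP, TC, NTA), but gives no implementation detail. The following is a minimal illustrative sketch of that idea, not the authors' OptiE2ERL code: the toy topology, metric values, weights, hop penalty, destination bonus, and discount factor are all assumptions made for demonstration.

```python
"""
Sketch of reward-matrix routing with a Bellman-style update, loosely
following the OptiE2ERL description. All constants below are illustrative
assumptions, not the authors' implementation.
"""
import numpy as np

# Assumed weights for the five link metrics named in the abstract:
# Remaining Energy Level (REL), Bandwidth and Interference Level (BIL),
# Mobility Pattern (MP), Traffic Condition (TC), Network Topological
# Arrangement (NTA). All metrics are normalised to [0, 1].
W = {"REL": 0.35, "BIL": 0.25, "MP": 0.15, "TC": 0.15, "NTA": 0.10}

def link_reward(metrics: dict) -> float:
    """Composite quality score for one vehicle-to-vehicle link."""
    return sum(W[k] * metrics[k] for k in W)

N_NODES = 5
DEST = 4            # destination node (e.g. a roadside unit)
HOP_PENALTY = 1.0   # discourages needlessly long routes
GOAL_BONUS = 10.0   # terminal reward for delivering to the destination
GAMMA = 0.8         # discount factor in the Bellman update

# Toy symmetric topology with synthetic per-link metrics.
links = {
    (0, 1): {"REL": 0.9, "BIL": 0.7, "MP": 0.6, "TC": 0.8, "NTA": 0.7},
    (0, 2): {"REL": 0.5, "BIL": 0.9, "MP": 0.8, "TC": 0.6, "NTA": 0.6},
    (1, 3): {"REL": 0.8, "BIL": 0.6, "MP": 0.7, "TC": 0.7, "NTA": 0.8},
    (2, 3): {"REL": 0.6, "BIL": 0.8, "MP": 0.5, "TC": 0.9, "NTA": 0.7},
    (3, 4): {"REL": 0.9, "BIL": 0.8, "MP": 0.8, "TC": 0.8, "NTA": 0.9},
}

# Reward matrix: -inf where no link exists; each hop pays a penalty and
# the hop that reaches DEST earns the terminal bonus.
R = np.full((N_NODES, N_NODES), -np.inf)
for (i, j), m in links.items():
    base = link_reward(m) - HOP_PENALTY
    R[i, j] = base + (GOAL_BONUS if j == DEST else 0.0)
    R[j, i] = base + (GOAL_BONUS if i == DEST else 0.0)

# Bellman value iteration: Q[i, j] = R[i, j] + gamma * max_k Q[j, k],
# with the episode terminating once the destination is reached.
Q = np.where(np.isfinite(R), 0.0, -np.inf)
for _ in range(200):
    Q_prev = Q.copy()
    for i in range(N_NODES):
        for j in range(N_NODES):
            if not np.isfinite(R[i, j]):
                continue
            future = 0.0 if j == DEST else np.max(Q_prev[j][np.isfinite(R[j])], initial=0.0)
            Q[i, j] = R[i, j] + GAMMA * future
    if np.allclose(Q[np.isfinite(R)], Q_prev[np.isfinite(R)]):
        break

# Greedy route extraction from source node 0 to the destination.
route, node = [0], 0
while node != DEST and len(route) < N_NODES:
    node = int(np.argmax(np.where(np.isfinite(Q[node]), Q[node], -1e9)))
    route.append(node)
print("Selected route:", route)  # expected: [0, 1, 3, 4]
```

The sketch freezes one snapshot of the topology for clarity; in a mobile IoV setting the reward matrix would need to be rebuilt as vehicles move and link metrics change, which is where the paper's treatment of mobility patterns and topological arrangement comes in.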
format | Article |
id | doaj-art-b497483be57a408f83934df698717c81 |
institution | Kabale University |
issn | 2045-2322 |
language | English |
publishDate | 2025-01-01 |
publisher | Nature Portfolio |
record_format | Article |
series | Scientific Reports |
spelling | doaj-art-b497483be57a408f83934df698717c81; 2025-01-26T12:28:18Z; eng; Nature Portfolio; Scientific Reports; 2045-2322; 2025-01-01; 15; 1; 1; 24; 10.1038/s41598-025-86608-5; Reinforcement learning based route optimization model to enhance energy efficiency in internet of vehicles; Quadeer Hussain (Faculty of Information Technology, Beijing University of Technology); Ahmad Shukri Mohd Noor (Faculty of Ocean Engineering and Informatics, Universiti Malaysia Terengganu); Muhammad Mukhtar Qureshi (Faculty of Ocean Engineering and Informatics, Universiti Malaysia Terengganu); Jianqiang Li (Faculty of Information Technology, Beijing University of Technology); Atta-ur Rahman (Department of Computer Science, College of Computer Science and Information Technology, Imam Abdulrahman Bin Faisal University); Aghiad Bakry (Department of Computer Science, College of Computer Science and Information Technology, Imam Abdulrahman Bin Faisal University); Tariq Mahmood (Artificial Intelligence and Data Analytics (AIDA) Lab, CCIS Prince Sultan University); Amjad Rehman (Artificial Intelligence and Data Analytics (AIDA) Lab, CCIS Prince Sultan University); https://doi.org/10.1038/s41598-025-86608-5 |
spellingShingle | Quadeer Hussain; Ahmad Shukri Mohd Noor; Muhammad Mukhtar Qureshi; Jianqiang Li; Atta-ur Rahman; Aghiad Bakry; Tariq Mahmood; Amjad Rehman; Reinforcement learning based route optimization model to enhance energy efficiency in internet of vehicles; Scientific Reports
title | Reinforcement learning based route optimization model to enhance energy efficiency in internet of vehicles |
title_full | Reinforcement learning based route optimization model to enhance energy efficiency in internet of vehicles |
title_fullStr | Reinforcement learning based route optimization model to enhance energy efficiency in internet of vehicles |
title_full_unstemmed | Reinforcement learning based route optimization model to enhance energy efficiency in internet of vehicles |
title_short | Reinforcement learning based route optimization model to enhance energy efficiency in internet of vehicles |
title_sort | reinforcement learning based route optimization model to enhance energy efficiency in internet of vehicles |
url | https://doi.org/10.1038/s41598-025-86608-5 |
work_keys_str_mv | AT quadeerhussain reinforcementlearningbasedrouteoptimizationmodeltoenhanceenergyefficiencyininternetofvehicles AT ahmadshukrimohdnoor reinforcementlearningbasedrouteoptimizationmodeltoenhanceenergyefficiencyininternetofvehicles AT muhammadmukhtarqureshi reinforcementlearningbasedrouteoptimizationmodeltoenhanceenergyefficiencyininternetofvehicles AT jianqiangli reinforcementlearningbasedrouteoptimizationmodeltoenhanceenergyefficiencyininternetofvehicles AT attaurrahman reinforcementlearningbasedrouteoptimizationmodeltoenhanceenergyefficiencyininternetofvehicles AT aghiadbakry reinforcementlearningbasedrouteoptimizationmodeltoenhanceenergyefficiencyininternetofvehicles AT tariqmahmood reinforcementlearningbasedrouteoptimizationmodeltoenhanceenergyefficiencyininternetofvehicles AT amjadrehman reinforcementlearningbasedrouteoptimizationmodeltoenhanceenergyefficiencyininternetofvehicles |