Combining Software-Defined and Delay-Tolerant Networking Concepts With Deep Reinforcement Learning Technology to Enhance Vehicular Networks

Bibliographic Details
Main Authors: Olivia Nakayima, Mostafa I. Soliman, Kazunori Ueda, Samir A. Elsagheer Mohamed
Format: Article
Language:English
Published: IEEE 2024-01-01
Series:IEEE Open Journal of Vehicular Technology
Subjects:
Online Access:https://ieeexplore.ieee.org/document/10518068/
_version_ 1832582335630934016
author Olivia Nakayima
Mostafa I. Soliman
Kazunori Ueda
Samir A. Elsagheer Mohamed
author_facet Olivia Nakayima
Mostafa I. Soliman
Kazunori Ueda
Samir A. Elsagheer Mohamed
author_sort Olivia Nakayima
collection DOAJ
description Ensuring reliable data transmission in all Vehicular Ad-hoc Network (VANET) segments is paramount in modern vehicular communications. Vehicular operations face unpredictable network conditions that affect the adaptiveness of routing protocols. Several solutions have addressed these challenges, but each has notable shortcomings. This work proposes a centralised-controller multi-agent (CCMA) algorithm based on Software-Defined Networking (SDN) and Delay-Tolerant Networking (DTN) principles to enhance VANET performance using Reinforcement Learning (RL). The algorithm is trained and validated in a simulation environment modelling the network nodes, routing protocols and buffer schedules. It optimally deploys DTN routing protocols (Spray and Wait, Epidemic, and PRoPHETv2) and buffer schedules (Random, Defer, Earliest Deadline First, First In First Out, and Largest/Smallest Bundle First) based on network state information (i.e., traffic pattern, buffer size variance, node and link uptime, bundle Time To Live (TTL), link loss and capacity). These are implemented in three environment types: Advanced Technological Regions, Limited Resource Regions and Opportunistic Communication Regions. The study assesses the performance of the multi-protocol approach using TTL, buffer management, link quality, delivery ratio, latency and overhead scores as metrics for optimal network performance. Comparative analysis with single-protocol VANETs, simulated using the Opportunistic Network Environment (ONE), demonstrates improved performance of the proposed algorithm in all VANET scenarios.
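
The description above outlines the CCMA idea at a high level: a central controller observes network-state features and selects a DTN routing protocol and a buffer schedule for each region, guided by a reward built from delivery, latency and overhead scores. The sketch below illustrates that control loop with a minimal tabular Q-learning controller. It is an illustrative assumption only: the class and function names, the state discretisation, and the epsilon-greedy/tabular update are not the authors' implementation, which uses deep RL trained against a network simulator.

# Minimal sketch (assumed, not the paper's CCMA code): an epsilon-greedy
# controller that maps discretised network-state features to a
# (routing protocol, buffer schedule) pair and updates a Q-table from a
# simulated episode reward.
import random
from collections import defaultdict

PROTOCOLS = ["SprayAndWait", "Epidemic", "PRoPHETv2"]
SCHEDULES = ["Random", "Defer", "EDF", "FIFO", "LargestFirst", "SmallestFirst"]
ACTIONS = [(p, s) for p in PROTOCOLS for s in SCHEDULES]

def discretise(state):
    """Bucket raw network-state features (traffic pattern, buffer variance,
    node/link uptime, bundle TTL, link loss, capacity) into a hashable key."""
    return tuple(round(v, 1) for v in state)

class CentralController:
    def __init__(self, epsilon=0.1, alpha=0.5, gamma=0.9):
        self.q = defaultdict(float)   # Q[(state_key, action)] -> value
        self.epsilon, self.alpha, self.gamma = epsilon, alpha, gamma

    def act(self, state):
        """Epsilon-greedy choice of a protocol/schedule pair for one region."""
        key = discretise(state)
        if random.random() < self.epsilon:
            return random.choice(ACTIONS)
        return max(ACTIONS, key=lambda a: self.q[(key, a)])

    def update(self, state, action, reward, next_state):
        """One tabular Q-learning step; the reward could be a weighted score
        of delivery ratio, latency and overhead reported by the simulator."""
        k, k_next = discretise(state), discretise(next_state)
        best_next = max(self.q[(k_next, a)] for a in ACTIONS)
        self.q[(k, action)] += self.alpha * (
            reward + self.gamma * best_next - self.q[(k, action)]
        )

if __name__ == "__main__":
    ctrl = CentralController()
    state = [0.6, 0.2, 0.9, 0.5, 0.1, 0.7]   # made-up normalised features
    action = ctrl.act(state)
    ctrl.update(state, action, reward=0.8, next_state=state)
    print("chosen protocol/schedule:", action)

In practice the controller would be queried once per decision epoch for each region type (Advanced Technological, Limited Resource, Opportunistic Communication) and the chosen protocol/schedule pair pushed to the nodes via the SDN control plane; the deep-RL version replaces the Q-table with a neural approximator.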
format Article
id doaj-art-c87e04b5d6a04a0e8b04194d7ab56217
institution Kabale University
issn 2644-1330
language English
publishDate 2024-01-01
publisher IEEE
record_format Article
series IEEE Open Journal of Vehicular Technology
spelling doaj-art-c87e04b5d6a04a0e8b04194d7ab56217
2025-01-30T00:04:32Z
eng
IEEE
IEEE Open Journal of Vehicular Technology
ISSN 2644-1330
2024-01-01
Vol. 5, pp. 721-736
DOI 10.1109/OJVT.2024.3396637
Article no. 10518068
Combining Software-Defined and Delay-Tolerant Networking Concepts With Deep Reinforcement Learning Technology to Enhance Vehicular Networks
Olivia Nakayima (https://orcid.org/0009-0008-8378-7369), Department of Computer Science and Engineering, Egypt–Japan University of Science and Technology, New Borg El-Arab City, Egypt
Mostafa I. Soliman (https://orcid.org/0000-0002-4386-8235), Department of Computer Science and Engineering, Egypt–Japan University of Science and Technology, New Borg El-Arab City, Egypt
Kazunori Ueda (https://orcid.org/0000-0002-3424-1844), Department of Computer Science and Engineering, Waseda University, Tokyo, Japan
Samir A. Elsagheer Mohamed (https://orcid.org/0000-0003-4388-1998), Department of Computer Science and Engineering, Egypt–Japan University of Science and Technology, New Borg El-Arab City, Egypt
https://ieeexplore.ieee.org/document/10518068/
Delay-tolerant networks
performance analysis
reinforcement learning
simulator
software-defined networking
vehicular ad-hoc networks
spellingShingle Olivia Nakayima
Mostafa I. Soliman
Kazunori Ueda
Samir A. Elsagheer Mohamed
Combining Software-Defined and Delay-Tolerant Networking Concepts With Deep Reinforcement Learning Technology to Enhance Vehicular Networks
IEEE Open Journal of Vehicular Technology
Delay-tolerant networks
performance analysis
reinforcement learning
simulator
software-defined networking
vehicular ad-hoc networks
title Combining Software-Defined and Delay-Tolerant Networking Concepts With Deep Reinforcement Learning Technology to Enhance Vehicular Networks
title_full Combining Software-Defined and Delay-Tolerant Networking Concepts With Deep Reinforcement Learning Technology to Enhance Vehicular Networks
title_fullStr Combining Software-Defined and Delay-Tolerant Networking Concepts With Deep Reinforcement Learning Technology to Enhance Vehicular Networks
title_full_unstemmed Combining Software-Defined and Delay-Tolerant Networking Concepts With Deep Reinforcement Learning Technology to Enhance Vehicular Networks
title_short Combining Software-Defined and Delay-Tolerant Networking Concepts With Deep Reinforcement Learning Technology to Enhance Vehicular Networks
title_sort combining software defined and delay tolerant networking concepts with deep reinforcement learning technology to enhance vehicular networks
topic Delay-tolerant networks
performance analysis
reinforcement learning
simulator
software-defined networking
vehicular ad-hoc networks
url https://ieeexplore.ieee.org/document/10518068/
work_keys_str_mv AT olivianakayima combiningsoftwaredefinedanddelaytolerantnetworkingconceptswithdeepreinforcementlearningtechnologytoenhancevehicularnetworks
AT mostafaisoliman combiningsoftwaredefinedanddelaytolerantnetworkingconceptswithdeepreinforcementlearningtechnologytoenhancevehicularnetworks
AT kazunoriueda combiningsoftwaredefinedanddelaytolerantnetworkingconceptswithdeepreinforcementlearningtechnologytoenhancevehicularnetworks
AT samiraelsagheermohamed combiningsoftwaredefinedanddelaytolerantnetworkingconceptswithdeepreinforcementlearningtechnologytoenhancevehicularnetworks