Distributed Deep Reinforcement Learning Via Split Computing For Connected Autonomous Vehicles

This paper proposes the application of split computing paradigms to deep reinforcement learning through distributed computation between Connected Autonomous Vehicles (CAVs) and edge servers. While this approach has been explored in computer vision, it remains largely unexplored for reinforcement learning. We introduce a novel autoencoder trained directly through Deep Q-Network (DQN) rewards: the autoencoder layers are optimized using the DQN reward function while all other layers are kept frozen. Experimental results demonstrate that the proposed approach outperforms baseline methods, reducing the data offloaded to the edge server by up to 98.7%. The method not only decreases the data transmission burden but also achieves comparable rewards, and in certain configurations it even improves performance by up to 9.65%. The primary objective of this research is to reduce latency in deep reinforcement learning tasks for autonomous vehicles; the proposed approach reduces latency by up to 66.5% compared to baseline methods. These findings indicate that partial offloading through split computing offers significant benefits over both full offloading and complete on-device computation for CAVs.
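The training scheme described in the abstract can be made concrete with a short sketch. The following is a minimal PyTorch illustration, not the authors' implementation: a DQN is cut at a split point between the vehicle and the edge server, a bottleneck autoencoder is inserted there (encoder on the CAV, decoder on the server), the pretrained DQN layers are frozen, and only the encoder/decoder parameters receive gradients from the standard DQN temporal-difference loss, so the reward signal alone shapes the compressed representation that crosses the wireless link. All dimensions, layer sizes, and hyperparameters below are illustrative assumptions.

# Hedged sketch (not the paper's code): split DQN with a bottleneck
# autoencoder trained only through the DQN TD loss.
import torch
import torch.nn as nn

STATE_DIM, LATENT_DIM, N_ACTIONS = 32, 4, 5  # assumed dimensions

class SplitDQN(nn.Module):
    def __init__(self):
        super().__init__()
        # On-vehicle part: first DQN layers plus the encoder (runs on the CAV).
        self.head = nn.Sequential(nn.Linear(STATE_DIM, 64), nn.ReLU())
        self.encoder = nn.Sequential(nn.Linear(64, LATENT_DIM), nn.ReLU())
        # Edge-side part: decoder plus the remaining DQN layers (runs on the server).
        self.decoder = nn.Sequential(nn.Linear(LATENT_DIM, 64), nn.ReLU())
        self.tail = nn.Sequential(nn.Linear(64, 64), nn.ReLU(),
                                  nn.Linear(64, N_ACTIONS))

    def forward(self, state):
        z = self.encoder(self.head(state))  # z is all that is offloaded
        return self.tail(self.decoder(z))

net, target_net = SplitDQN(), SplitDQN()
target_net.load_state_dict(net.state_dict())

# Freeze the original DQN layers; only the autoencoder is trainable.
for p in list(net.head.parameters()) + list(net.tail.parameters()):
    p.requires_grad = False
opt = torch.optim.Adam(
    list(net.encoder.parameters()) + list(net.decoder.parameters()), lr=1e-4)

def td_step(s, a, r, s_next, done, gamma=0.99):
    """One DQN update: the reward signal reaches only the autoencoder."""
    q = net(s).gather(1, a.unsqueeze(1)).squeeze(1)
    with torch.no_grad():
        q_target = r + gamma * (1 - done) * target_net(s_next).max(1).values
    loss = nn.functional.smooth_l1_loss(q, q_target)
    opt.zero_grad(); loss.backward(); opt.step()
    return loss.item()

# Dummy transition batch standing in for CAV experience-replay samples.
s = torch.randn(8, STATE_DIM); s2 = torch.randn(8, STATE_DIM)
a = torch.randint(0, N_ACTIONS, (8,)); r = torch.randn(8)
d = torch.zeros(8)
print(td_step(s, a, r, s2, d))

Because only the low-dimensional latent z is transmitted per step instead of the full intermediate activations, this kind of split would shrink the offloaded payload substantially, which is consistent with the offloading reduction the paper reports.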

Bibliographic Details
Main Authors: Rauch Robert, Gazda Juraj
Format: Article
Language: English
Published: Sciendo 2025-06-01
Series: Acta Electrotechnica et Informatica, Vol. 25, No. 2 (2025), pp. 21-29
Subjects: connected autonomous vehicles, control theory, deep reinforcement learning, edge computing, split computing
Online Access: https://doi.org/10.2478/aei-2025-0008
ISSN: 1338-3957
Collection: DOAJ (OA Journals)
Record ID: doaj-art-e5f261b0a78b45daa01b2e71a5d0fa4d
Author Affiliation (both authors): Department of Computers and Informatics, Faculty of Electrical Engineering and Informatics, Technical University of Košice, Letná 9, 042 00 Košice, Slovak Republic, Tel. +421 55 602 3175