Power Controlled Resource Allocation and Task Offloading via Optimized Deep Reinforcement Learning in D2D Assisted Mobile Edge Computing

Bibliographic Details
Main Authors: Sambi Reddy Gottam, Udit Narayana Kar
Format: Article
Language: English
Published: IEEE 2025-01-01
Series: IEEE Access
Online Access: https://ieeexplore.ieee.org/document/10850906/
Description
Summary: Device-to-device (D2D) technology enables continuous communication between devices, effectively addressing the challenge of limited bandwidth resources in 5G communication systems. However, shared resources among multiple D2D user pairs can result in significant interference. In advanced 5G networks and beyond, mobile edge computing (MEC) has emerged as a promising technology for reducing power consumption in cloud data centers while ensuring reliability and real-time access for end devices. Nonetheless, the inherent complexity and variability of MEC networks present significant challenges for task offloading ($T_{o}$) solutions. This study introduces an optimized deep reinforcement learning (DRL) approach for D2D-assisted task offloading ($T_{o}$) and resource allocation ($R_{a}$) to address these challenges. The process begins with the construction of the MEC network scenario, followed by the implementation of a human evolutionary optimization-aided DRL (HEOp-DRL) model that handles task offloading and resource allocation jointly. The HEOp method minimizes time and power consumption constraints while efficiently allocating resources across end devices. The proposed DRL model uses a Markov decision process (MDP) to facilitate collaborative task offloading between end devices and MEC servers, deriving an optimal policy for offloading and resource allocation. The system is simulated on the MATLAB platform, and key performance metrics such as latency, average energy consumption, and sum rate (SR) are analyzed and compared with those of existing methods. The results demonstrate enhanced robustness in addressing resource allocation and task offloading challenges in D2D-MEC systems, with minimized overall latency and reduced energy consumption.
ISSN: 2169-3536
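
Note: the record above contains only the abstract, so no implementation details of the HEOp-DRL model are available here. As a minimal, purely illustrative sketch of how the collaborative offloading decision described in the summary can be framed as an MDP (state, action, reward), the following Python toy uses entirely hypothetical latency and energy models and a greedy one-step rule in place of the paper's HEOp-DRL agent; none of the constants, rates, or formulas below are taken from the article.

# Illustrative MDP sketch for D2D-assisted MEC task offloading.
# All names, constants, and cost models are assumptions for illustration only;
# they do not reproduce the HEOp-DRL method or its simulation settings.
import random

ACTIONS = ("local", "d2d_peer", "mec_server")  # hypothetical offloading targets

class OffloadEnv:
    """Toy environment: one end device chooses where to execute each arriving task."""

    def __init__(self, seed=0):
        self.rng = random.Random(seed)

    def reset(self):
        return self._observe()

    def _observe(self):
        # State: task size [bits], required CPU cycles, D2D and cellular
        # channel gains (normalized), and MEC server load in [0, 1].
        return {
            "task_bits": self.rng.uniform(0.5, 5.0) * 1e6,
            "task_cycles": self.rng.uniform(50, 500) * 1e6,
            "d2d_gain": self.rng.uniform(0.1, 1.0),
            "cell_gain": self.rng.uniform(0.1, 1.0),
            "mec_load": self.rng.uniform(0.0, 1.0),
        }

    def cost(self, state, action):
        # Hypothetical latency/energy models (simple ratio forms).
        f_local, f_peer, f_mec = 1e9, 2e9, 10e9       # CPU frequencies [Hz]
        r_d2d = 50e6 * state["d2d_gain"]              # D2D data rate [bit/s]
        r_cell = 20e6 * state["cell_gain"]            # cellular rate [bit/s]
        if action == "local":
            latency = state["task_cycles"] / f_local
            energy = 1e-27 * f_local ** 2 * state["task_cycles"]   # k * f^2 * C
        elif action == "d2d_peer":
            latency = state["task_bits"] / r_d2d + state["task_cycles"] / f_peer
            energy = 0.1 * state["task_bits"] / r_d2d              # tx power 0.1 W
        else:  # "mec_server"
            slowdown = 1.0 + 4.0 * state["mec_load"]               # load penalty
            latency = state["task_bits"] / r_cell + slowdown * state["task_cycles"] / f_mec
            energy = 0.2 * state["task_bits"] / r_cell             # tx power 0.2 W
        return latency, energy

    def step(self, state, action):
        latency, energy = self.cost(state, action)
        # Reward: negative weighted sum of latency and energy (weights are guesses).
        reward = -(0.6 * latency + 0.4 * energy)
        return self._observe(), reward

# Greedy one-step baseline: pick the action with the lowest immediate cost.
env = OffloadEnv()
state = env.reset()
for _ in range(3):
    best = min(ACTIONS, key=lambda a: sum(env.cost(state, a)))
    state, r = env.step(state, best)
    print(best, round(r, 4))

In the paper's setting, a DRL agent would replace the greedy rule by learning a policy over such states and actions, with the reward capturing the latency, energy, and sum-rate objectives reported in the abstract.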