Enhanced Reinforcement Learning Algorithm-Based Transmission Parameter Selection for Optimization of Energy Consumption and Packet Delivery Ratio in LoRa Wireless Networks

Bibliographic Details
Main Authors: Batyrbek Zholamanov, Askhat Bolatbek, Ahmet Saymbetov, Madiyar Nurgaliyev, Evan Yershov, Kymbat Kopbay, Sayat Orynbassar, Gulbakhar Dosymbetova, Ainur Kapparova, Nurzhigit Kuttybay, Nursultan Koshkarbay
Format: Article
Language: English
Published: MDPI AG 2024-12-01
Series: Journal of Sensor and Actuator Networks
Subjects:
Online Access: https://www.mdpi.com/2224-2708/13/6/89
Description
Summary: Wireless sensor network (WSN) technologies are pivotal for the successful deployment of the Internet of Things (IoT). Among them, long-range (LoRa) and long-range wide-area network (LoRaWAN) technologies have been widely adopted because they provide long-distance communication, low energy consumption (EC), and cost-effectiveness. A critical issue in deploying such networks is selecting transmission parameters that minimize EC while maximizing the packet delivery ratio (PDR). This study introduces a reinforcement learning (RL) algorithm, Double Deep Q-Network with Prioritized Experience Replay (DDQN-PER), designed to optimize the selection of network transmission parameters, particularly the spreading factor (SF) and transmission power (TP). The research evaluates a variety of network scenarios characterized by different numbers of devices and simulation times. The proposed approach achieves the best performance, with a 17.2% increase in PDR compared to the traditional Adaptive Data Rate (ADR) algorithm and an improvement of 6.2–8.11% over other existing RL- and machine-learning-based approaches.
ISSN: 2224-2708
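
For readers unfamiliar with the technique named in the abstract, the following is a minimal, hypothetical Python sketch of a Double DQN agent with proportional prioritized experience replay choosing a (spreading factor, transmission power) pair. The environment model, reward shaping, state features, network sizes, and all hyperparameters below are illustrative assumptions and do not reproduce the authors' simulation setup or results.

# Hypothetical sketch: DDQN with proportional prioritized experience replay (PER)
# selecting a (spreading factor, transmission power) pair for a toy LoRa-like link.
# The link model and reward below are illustrative stand-ins, not the paper's setup.
import random
import numpy as np
import torch
import torch.nn as nn

SF_CHOICES = [7, 8, 9, 10, 11, 12]      # LoRa spreading factors
TP_CHOICES = [2, 5, 8, 11, 14]          # transmission power levels (dBm)
ACTIONS = [(sf, tp) for sf in SF_CHOICES for tp in TP_CHOICES]

class QNet(nn.Module):
    """Small MLP mapping a state vector to Q-values over all (SF, TP) actions."""
    def __init__(self, state_dim, n_actions):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(state_dim, 64), nn.ReLU(),
                                 nn.Linear(64, 64), nn.ReLU(),
                                 nn.Linear(64, n_actions))
    def forward(self, x):
        return self.net(x)

class PERBuffer:
    """Proportional prioritized replay: sample transitions with prob ~ priority^alpha."""
    def __init__(self, capacity=10_000, alpha=0.6):
        self.capacity, self.alpha = capacity, alpha
        self.data, self.priorities = [], []
    def push(self, transition):
        p = max(self.priorities, default=1.0)        # new samples get current max priority
        if len(self.data) >= self.capacity:
            self.data.pop(0); self.priorities.pop(0)
        self.data.append(transition); self.priorities.append(p)
    def sample(self, batch_size, beta=0.4):
        probs = np.array(self.priorities) ** self.alpha
        probs /= probs.sum()
        idx = np.random.choice(len(self.data), batch_size, p=probs)
        weights = (len(self.data) * probs[idx]) ** (-beta)   # importance-sampling weights
        weights /= weights.max()
        batch = [self.data[i] for i in idx]
        return idx, batch, torch.tensor(weights, dtype=torch.float32)
    def update(self, idx, td_errors):
        for i, e in zip(idx, td_errors):
            self.priorities[i] = abs(e) + 1e-5

def train_step(online, target, buffer, optimizer, batch_size=32, gamma=0.99):
    """One DDQN update: the online net selects the next action, the target net evaluates it."""
    if len(buffer.data) < batch_size:
        return
    idx, batch, w = buffer.sample(batch_size)
    s, a, r, s2 = map(lambda x: torch.tensor(np.array(x), dtype=torch.float32), zip(*batch))
    q = online(s).gather(1, a.long().unsqueeze(1)).squeeze(1)
    with torch.no_grad():
        best_next = online(s2).argmax(1, keepdim=True)        # Double DQN action selection
        target_q = r + gamma * target(s2).gather(1, best_next).squeeze(1)
    td = target_q - q
    loss = (w * td.pow(2)).mean()                             # priority-weighted MSE
    optimizer.zero_grad(); loss.backward(); optimizer.step()
    buffer.update(idx, td.detach().numpy())

# Toy usage: state = [normalized distance, last PDR]; reward trades delivery against energy.
online, target = QNet(2, len(ACTIONS)), QNet(2, len(ACTIONS))
target.load_state_dict(online.state_dict())
opt = torch.optim.Adam(online.parameters(), lr=1e-3)
buf = PERBuffer()
state = np.array([0.5, 0.0], dtype=np.float32)
for step in range(200):
    eps = max(0.05, 1.0 - step / 150)                         # epsilon-greedy exploration
    if random.random() < eps:
        action = random.randrange(len(ACTIONS))
    else:
        with torch.no_grad():
            action = online(torch.tensor(state)).argmax().item()
    sf, tp = ACTIONS[action]
    pdr = min(1.0, 0.4 + 0.05 * (sf - 7) + 0.02 * tp)         # crude stand-in link model
    reward = pdr - 0.01 * tp - 0.005 * (sf - 7)               # deliver packets, save energy
    next_state = np.array([state[0], pdr], dtype=np.float32)
    buf.push((state, action, reward, next_state))
    train_step(online, target, buf, opt)
    if step % 50 == 0:
        target.load_state_dict(online.state_dict())           # periodic target-network sync
    state = next_state

In a full implementation, the stand-in link model would be replaced by a LoRaWAN network simulator and the reward would incorporate measured per-transmission energy consumption alongside packet delivery.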