Drone Landing and Reinforcement Learning: State-of-Art, Challenges and Opportunities
Unmanned aerial vehicles, and especially multirotor drones, have shown great relevance in a plethora of missions that require high affordance, field of view, and precision. Their limited payload capacity and autonomy make their landing a crucial task. Despite many attempts in the literature to address dr...
Main Authors: | Jose Amendola, Linga Reddy Cenkeramaddi, Ajit Jha
---|---
Format: | Article
Language: | English
Published: | IEEE, 2024-01-01
Series: | IEEE Open Journal of Intelligent Transportation Systems
Subjects: | Deep reinforcement learning; drones; autonomous landing
Online Access: | https://ieeexplore.ieee.org/document/10637701/
_version_ | 1832590310912294912 |
---|---|
author | Jose Amendola; Linga Reddy Cenkeramaddi; Ajit Jha |
author_facet | Jose Amendola; Linga Reddy Cenkeramaddi; Ajit Jha |
author_sort | Jose Amendola |
collection | DOAJ |
description | Unmanned aerial vehicles, and especially multirotor drones, have shown great relevance in a plethora of missions that require high affordance, field of view, and precision. Their limited payload capacity and autonomy make their landing a crucial task. Despite many attempts in the literature to address drone landing, challenges and open gaps still exist. Reinforcement Learning has gained notoriety in a variety of control problems, with recent proposals for drone landing applications. This work presents a systematic literature review of works employing Deep Reinforcement Learning for multirotor drone landing on both static and dynamic platforms. It also revisits Reinforcement Learning algorithms and the main frameworks and simulators adopted for specific landing operations. The comprehensive analysis of the reviewed works reveals that important challenges remain unaddressed regarding wind disturbances, unpredictability of moving landing targets, sensor latency, and the sim-to-real gap. Finally, we present our critical analysis of how recent state-of-the-art deep learning concepts can be combined with reinforcement learning to leverage the latter in addressing the open gaps in future works. |
format | Article |
id | doaj-art-1e687038c8064b7bb55de5fe957aa5aa |
institution | Kabale University |
issn | 2687-7813 |
language | English |
publishDate | 2024-01-01 |
publisher | IEEE |
record_format | Article |
series | IEEE Open Journal of Intelligent Transportation Systems |
spelling | doaj-art-1e687038c8064b7bb55de5fe957aa5aa; 2025-01-24T00:02:58Z; eng; IEEE; IEEE Open Journal of Intelligent Transportation Systems; 2687-7813; 2024-01-01; vol. 5, pp. 520-539; 10.1109/OJITS.2024.3444487; 10637701; Drone Landing and Reinforcement Learning: State-of-Art, Challenges and Opportunities; Jose Amendola (https://orcid.org/0000-0002-9374-4724), Department of Engineering Sciences, University of Agder, Kristiansand, Norway; Linga Reddy Cenkeramaddi (https://orcid.org/0000-0002-1023-2118), Department of Information and Communication Technology, University of Agder, Kristiansand, Norway; Ajit Jha (https://orcid.org/0000-0003-1435-9260), Department of Engineering Sciences, University of Agder, Kristiansand, Norway; https://ieeexplore.ieee.org/document/10637701/; Deep reinforcement learning; drones; autonomous landing |
spellingShingle | Jose Amendola; Linga Reddy Cenkeramaddi; Ajit Jha; Drone Landing and Reinforcement Learning: State-of-Art, Challenges and Opportunities; IEEE Open Journal of Intelligent Transportation Systems; Deep reinforcement learning; drones; autonomous landing |
title | Drone Landing and Reinforcement Learning: State-of-Art, Challenges and Opportunities |
title_full | Drone Landing and Reinforcement Learning: State-of-Art, Challenges and Opportunities |
title_fullStr | Drone Landing and Reinforcement Learning: State-of-Art, Challenges and Opportunities |
title_full_unstemmed | Drone Landing and Reinforcement Learning: State-of-Art, Challenges and Opportunities |
title_short | Drone Landing and Reinforcement Learning: State-of-Art, Challenges and Opportunities |
title_sort | drone landing and reinforcement learning state of art challenges and opportunities |
topic | Deep reinforcement learning; drones; autonomous landing |
url | https://ieeexplore.ieee.org/document/10637701/ |
work_keys_str_mv | AT joseamendola dronelandingandreinforcementlearningstateofartchallengesandopportunities AT lingareddycenkeramaddi dronelandingandreinforcementlearningstateofartchallengesandopportunities AT ajitjha dronelandingandreinforcementlearningstateofartchallengesandopportunities |