Safety-Critical Trajectory Tracking Control with Safety-Enhanced Reinforcement Learning for Autonomous Underwater Vehicle
Format: Article
Language: English
Published: MDPI AG, 2025-01-01
Series: Drones
Online Access: https://www.mdpi.com/2504-446X/9/1/65
Summary: This paper investigates a novel reinforcement learning (RL)-based quadratic programming (QP) method for the safety-critical trajectory tracking control of autonomous underwater vehicles (AUVs). The proposed approach addresses the substantial challenge posed by model uncertainty, which may undermine the safety and performance of AUVs operating in complex underwater environments. The RL framework learns the inherent model uncertainties that affect the constraints in Control Barrier Functions (CBFs) and Control Lyapunov Functions (CLFs). These learned uncertainties are then integrated to formulate a novel RL-CBF-CLF Quadratic Programming (RL-CBF-CLF-QP) controller. Simulations are conducted under diverse trajectory tracking scenarios with high levels of model uncertainty. The results show that the proposed RL-CBF-CLF-QP controller significantly improves the safety and accuracy of the AUV's tracking performance.
ISSN: 2504-446X
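To make the CBF-CLF-QP idea in the summary concrete, the sketch below applies it to a 1-D toy system rather than AUV dynamics. Everything here is an illustrative assumption, not the paper's formulation: the scalar dynamics `x_dot = u + d(x)`, the drift term `d`, its "learned" estimate `d_hat` (standing in for the RL-learned uncertainty), and all gains are placeholders. For a single scalar input, the QP's min-norm solution reduces to clipping the CLF-based nominal control against the CBF bound.

```python
import math

def clf_cbf_step(x, x_des, x_lim, d_hat, k=2.0, alpha=3.0):
    """One control step of a CLF controller filtered by a CBF (toy 1-D case).

    Assumed dynamics: x_dot = u + d(x), with d_hat an estimate of the
    unknown drift d(x) (the role the RL model plays in the paper).
    CLF: V = 0.5*(x - x_des)^2, nominal control drives V down exponentially.
    CBF: h = x_lim - x >= 0, safety needs h_dot >= -alpha*h.
    """
    # CLF-based nominal control, compensating the learned drift estimate
    u_nom = -k * (x - x_des) - d_hat
    # CBF constraint: h_dot = -(u + d) >= -alpha*h  =>  u <= alpha*h - d_hat
    u_max = alpha * (x_lim - x) - d_hat
    # With one scalar input, the QP solution is the clipped nominal control
    return min(u_nom, u_max)

def simulate(steps=2000, dt=0.005):
    x = 0.0
    x_des, x_lim = 2.0, 1.5          # target deliberately beyond the barrier
    traj = []
    for _ in range(steps):
        d_true = 0.3 * math.sin(x)   # unknown drift (stand-in for model error)
        d_hat = 0.3 * math.sin(x)    # assume the learned model matches it well
        u = clf_cbf_step(x, x_des, x_lim, d_hat)
        x += dt * (u + d_true)       # Euler integration of the true dynamics
        traj.append(x)
    return traj

traj = simulate()
# The state settles at the barrier x_lim = 1.5, never crossing it, even
# though the tracking target x_des = 2.0 lies on the unsafe side.
print(max(traj), traj[-1])
```

The design choice this illustrates is the one the abstract describes: safety (the CBF bound) overrides tracking (the CLF objective) whenever the two conflict, and the accuracy of the learned drift estimate `d_hat` determines how tightly the safe set is respected.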