A proximal policy optimization based deep reinforcement learning framework for tracking control of a flexible robotic manipulator
Main Authors:
Format: Article
Language: English
Published: Elsevier, 2025-03-01
Series: Results in Engineering
Subjects:
Online Access: http://www.sciencedirect.com/science/article/pii/S2590123025002646
Summary: This paper puts forward a policy-feedback-based deep reinforcement learning (DRL) control scheme for a partially observable system by leveraging the potential of the proximal policy optimization (PPO) algorithm and a convolutional neural network (CNN). Although several DRL algorithms have been investigated for fully observable systems, there have been few studies on devising DRL control for a partially observable system with uncertain dynamics. Moreover, a major limitation of existing policy-gradient-based DRL techniques is that they are computationally expensive and scale poorly to complex higher-order systems. Hence, this study adopts the PPO technique, which relies on first-order optimization to reduce computational complexity, and devises a DRL scheme for a partially observable flexible-link robotic manipulator system. Specifically, to improve the stability and convergence of PPO, the study introduces a collaborative policy approach in the value-function update and presents a collaborative proximal policy optimization (CPPO) algorithm that addresses the tracking control and vibration suppression problems in a partially observable robotic manipulator system. After identifying the optimal DRL hyper-parameters via grid search, we exploit the capability of a CNN within the actor-critic architecture to extract spatial dependencies in the state sequences of the dynamical system and boost DRL performance. To further improve convergence, the study adopts a Lyapunov-based reward shaping technique. Experimental validation on the robotic manipulator system through hardware-in-the-loop (HIL) testing substantiates that the proposed framework offers faster convergence and better vibration suppression than state-of-the-art policy gradient and actor-critic techniques.
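The abstract combines three concrete ingredients: PPO's clipped first-order surrogate objective, a CNN actor-critic operating on state sequences, and Lyapunov-based reward shaping. The sketch below (PyTorch) shows one plausible way these pieces fit together. It is a minimal illustration under stated assumptions: the network sizes, the quadratic Lyapunov candidate V(s) = sᵀs, and all hyper-parameter values are illustrative choices, and the collaborative value-function update that defines the paper's CPPO algorithm is not reproduced here.

```python
# Hedged sketch: PPO-style clipped update with a CNN actor-critic over
# state sequences and Lyapunov-based reward shaping. Architecture and
# hyper-parameters are assumptions, not the paper's CPPO implementation.
import torch
import torch.nn as nn

class CNNActorCritic(nn.Module):
    """Shared Conv1d trunk over a window of past states, with actor and critic heads."""
    def __init__(self, state_dim, seq_len, action_dim, hidden=64):
        super().__init__()
        self.trunk = nn.Sequential(
            nn.Conv1d(state_dim, hidden, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.Conv1d(hidden, hidden, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.Flatten(),  # -> (batch, hidden * seq_len)
        )
        self.actor_mean = nn.Linear(hidden * seq_len, action_dim)
        self.log_std = nn.Parameter(torch.zeros(action_dim))
        self.critic = nn.Linear(hidden * seq_len, 1)

    def forward(self, state_seq):  # state_seq: (batch, state_dim, seq_len)
        z = self.trunk(state_seq)
        dist = torch.distributions.Normal(self.actor_mean(z), self.log_std.exp())
        return dist, self.critic(z).squeeze(-1)

def shaped_reward(r, s, s_next, gamma=0.99):
    """Potential-based shaping with an assumed quadratic Lyapunov candidate
    V(s) = s^T s: decreases in V are rewarded, steering the agent toward
    the equilibrium without changing the optimal policy."""
    V = lambda x: (x ** 2).sum(-1)
    return r + gamma * (-V(s_next)) - (-V(s))

def ppo_loss(model, state_seq, action, old_logp, advantage, v_target,
             clip_eps=0.2, vf_coef=0.5):
    """PPO clipped surrogate: a first-order objective, avoiding the
    second-order trust-region machinery of TRPO."""
    dist, value = model(state_seq)
    logp = dist.log_prob(action).sum(-1)
    ratio = (logp - old_logp).exp()
    clipped = torch.clamp(ratio, 1.0 - clip_eps, 1.0 + clip_eps)
    policy_loss = -torch.min(ratio * advantage, clipped * advantage).mean()
    value_loss = (value - v_target).pow(2).mean()
    return policy_loss + vf_coef * value_loss
```

The grid search mentioned in the abstract would, in this framing, simply sweep candidate values of quantities such as `clip_eps`, `hidden`, and the learning rate and keep the combination with the best return; the CNN trunk replaces the MLP used in vanilla PPO so that the policy can exploit correlations across a window of recent states, a common workaround for partial observability.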
ISSN: 2590-1230