Control of Magnetic Manipulator Using Reinforcement Learning Based on Incrementally Adapted Local Linear Models

Reinforcement learning (RL) agents can learn to control a nonlinear system without using a model of the system. However, having a model brings benefits, mainly in terms of a reduced number of unsuccessful trials before achieving acceptable control performance. Several modelling approaches have been used in the RL domain, such as neural networks, local linear regression, or Gaussian processes. In this article, we focus on techniques that have so far seen little use in this context: symbolic regression (SR) based on genetic programming, and local modelling. Using measured data, symbolic regression yields a nonlinear, continuous-time analytic model. We benchmark two state-of-the-art methods, SNGP (single-node genetic programming) and MGGP (multigene genetic programming), against a standard incremental local regression method called RFWR (receptive field weighted regression). We introduce modifications to the RFWR algorithm to better suit the low-dimensional continuous-time systems we are mostly dealing with. The benchmark is a nonlinear, dynamic magnetic manipulation system. The results show that, using the RL framework and a suitable approximation method, a stable controller for such a complex system can be designed without resorting to haphazard trial-and-error learning. While all of the approximation methods were successful, MGGP achieved the best results, at the cost of higher computational complexity.

Bibliographic Details
Main Authors: Martin Brablc, Jan Žegklitz, Robert Grepl, Robert Babuška
Format: Article
Language: English
Published: Wiley 2021-01-01
Series: Complexity
Online Access: http://dx.doi.org/10.1155/2021/6617309
_version_ 1832551580669313024
author Martin Brablc
Jan Žegklitz
Robert Grepl
Robert Babuška
author_facet Martin Brablc
Jan Žegklitz
Robert Grepl
Robert Babuška
author_sort Martin Brablc
collection DOAJ
description Reinforcement learning (RL) agents can learn to control a nonlinear system without using a model of the system. However, having a model brings benefits, mainly in terms of a reduced number of unsuccessful trials before achieving acceptable control performance. Several modelling approaches have been used in the RL domain, such as neural networks, local linear regression, or Gaussian processes. In this article, we focus on techniques that have so far seen little use in this context: symbolic regression (SR) based on genetic programming, and local modelling. Using measured data, symbolic regression yields a nonlinear, continuous-time analytic model. We benchmark two state-of-the-art methods, SNGP (single-node genetic programming) and MGGP (multigene genetic programming), against a standard incremental local regression method called RFWR (receptive field weighted regression). We introduce modifications to the RFWR algorithm to better suit the low-dimensional continuous-time systems we are mostly dealing with. The benchmark is a nonlinear, dynamic magnetic manipulation system. The results show that, using the RL framework and a suitable approximation method, a stable controller for such a complex system can be designed without resorting to haphazard trial-and-error learning. While all of the approximation methods were successful, MGGP achieved the best results, at the cost of higher computational complexity. Index Terms: AI-based methods, local linear regression, nonlinear systems, magnetic manipulation, model learning for control, optimal control, reinforcement learning, symbolic regression.
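The article's modified RFWR algorithm is not reproduced in this record. As a rough illustration of the underlying idea only — Gaussian receptive fields, each carrying a local linear model updated incrementally by weighted recursive least squares, blended by normalized activations — here is a minimal sketch; the field placement, widths, and the toy 1-D target function are assumptions for the demo, not taken from the paper:

```python
import numpy as np

class LocalLinearModel:
    """One receptive field: a Gaussian activation around a fixed center plus a
    local linear fit in center-relative coordinates, updated by weighted
    recursive least squares (RLS)."""

    def __init__(self, center, width, dim):
        self.c = np.asarray(center, dtype=float)
        self.width = width
        self.beta = np.zeros(dim + 1)        # [bias, slopes...] of the local fit
        self.P = np.eye(dim + 1) * 1e3       # RLS inverse-covariance estimate

    def activation(self, x):
        d = np.asarray(x, dtype=float) - self.c
        return np.exp(-0.5 * np.dot(d, d) / self.width**2)

    def update(self, x, y):
        w = self.activation(x)
        z = np.concatenate(([1.0], np.atleast_1d(x) - self.c))
        Pz = self.P @ z
        gain = w * Pz / (1.0 + w * (z @ Pz))
        self.beta += gain * (y - z @ self.beta)
        self.P -= np.outer(gain, Pz)

    def predict(self, x):
        z = np.concatenate(([1.0], np.atleast_1d(x) - self.c))
        return z @ self.beta

def rfwr_predict(models, x):
    """Blend local predictions by normalized receptive-field activations."""
    w = np.array([m.activation(x) for m in models])
    preds = np.array([m.predict(x) for m in models])
    return float(w @ preds / (w.sum() + 1e-12))

# Toy 1-D demo: learn y = sin(x) incrementally from streamed samples.
rng = np.random.default_rng(0)
models = [LocalLinearModel([c], width=0.5, dim=1) for c in np.linspace(-3, 3, 13)]
for _ in range(3000):
    x = rng.uniform(-3, 3)
    for m in models:
        if m.activation([x]) > 1e-3:         # update only sufficiently active fields
            m.update([x], np.sin(x))

# After training, rfwr_predict(models, [1.0]) approximates sin(1.0).
```

The incremental, per-sample update is what makes this family of methods attractive in the RL setting discussed in the abstract: the model can be adapted online as new trials produce data, without refitting from scratch.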
format Article
id doaj-art-096c940cdf9d49c5b732b33c5ce7535b
institution Kabale University
issn 1099-0526
language English
publishDate 2021-01-01
publisher Wiley
record_format Article
series Complexity
spelling doaj-art-096c940cdf9d49c5b732b33c5ce7535b 2025-02-03T06:01:00Z eng Wiley Complexity 1099-0526 2021-01-01 10.1155/2021/6617309 Control of Magnetic Manipulator Using Reinforcement Learning Based on Incrementally Adapted Local Linear Models
Martin Brablc (Institute of Solid Mechanics, Mechatronics and Biomechanics)
Jan Žegklitz (Czech Institute of Informatics, Robotics and Cybernetics)
Robert Grepl (Institute of Solid Mechanics, Mechatronics and Biomechanics)
Robert Babuška (Czech Institute of Informatics, Robotics and Cybernetics)
http://dx.doi.org/10.1155/2021/6617309
spellingShingle Martin Brablc
Jan Žegklitz
Robert Grepl
Robert Babuška
Control of Magnetic Manipulator Using Reinforcement Learning Based on Incrementally Adapted Local Linear Models
Complexity
title Control of Magnetic Manipulator Using Reinforcement Learning Based on Incrementally Adapted Local Linear Models
title_full Control of Magnetic Manipulator Using Reinforcement Learning Based on Incrementally Adapted Local Linear Models
title_fullStr Control of Magnetic Manipulator Using Reinforcement Learning Based on Incrementally Adapted Local Linear Models
title_full_unstemmed Control of Magnetic Manipulator Using Reinforcement Learning Based on Incrementally Adapted Local Linear Models
title_short Control of Magnetic Manipulator Using Reinforcement Learning Based on Incrementally Adapted Local Linear Models
title_sort control of magnetic manipulator using reinforcement learning based on incrementally adapted local linear models
url http://dx.doi.org/10.1155/2021/6617309
work_keys_str_mv AT martinbrablc controlofmagneticmanipulatorusingreinforcementlearningbasedonincrementallyadaptedlocallinearmodels
AT janzegklitz controlofmagneticmanipulatorusingreinforcementlearningbasedonincrementallyadaptedlocallinearmodels
AT robertgrepl controlofmagneticmanipulatorusingreinforcementlearningbasedonincrementallyadaptedlocallinearmodels
AT robertbabuska controlofmagneticmanipulatorusingreinforcementlearningbasedonincrementallyadaptedlocallinearmodels