Bayesian Reinforcement Learning for Adaptive Balancing in an Assembly Line With Human-Robot Collaboration


Bibliographic Details
Main Authors: Hyun-Rok Lee, Sanghyun Park, Jimin Lee
Format: Article
Language: English
Published: IEEE 2024-01-01
Series: IEEE Access
Subjects:
Online Access: https://ieeexplore.ieee.org/document/10756596/
Description
Summary: Reinforcement learning (RL) has frequently been used in recent years to develop intelligent robot agents that collaborate with human workers. In human-robot collaboration (HRC), the adaptation ability of automated robots is critical for efficient collaboration. However, previous studies using RL for assembly lines with HRC did not properly consider the possibility of dynamics changes in the environment when collaborating with different human workers. In this study, we propose a method for adaptive assembly line balancing that collaborates efficiently with diverse workers of different task proficiencies through Bayesian RL. We mathematically formulate a hidden-parameter Markov decision process (HiP-MDP) model to represent the task proficiency of unknown workers in sequential assembly tasks. The robotic arm learns a generalized policy for collaborating with diverse workers through meta-learning and, in practical deployment, adapts that policy by estimating each worker's task proficiency through Bayesian inference. We demonstrate the superiority of the proposed method in both virtual and real-world environments. Numerical studies show that the proposed method collaborates well with diverse new workers.
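The adaptive mechanism the abstract describes — maintaining a Bayesian posterior over an unknown worker's hidden proficiency parameter and conditioning the robot's behavior on it — can be illustrated with a minimal sketch. The discrete proficiency levels, Gaussian observation model, and all names below are illustrative assumptions, not the authors' actual implementation.

```python
import numpy as np

# Hypothetical sketch of Bayesian inference over a hidden proficiency
# parameter, in the spirit of a HiP-MDP: the worker's (unknown)
# proficiency level scales how quickly they finish assembly tasks, and
# the robot updates a posterior over levels from observed task durations.

PROFICIENCY_LEVELS = np.array([0.5, 1.0, 1.5])  # assumed speed multipliers
BASE_TASK_TIME = 10.0                           # mean task time at level 1.0
NOISE_STD = 1.0                                 # assumed Gaussian observation noise

def gaussian_likelihood(obs, mean, std):
    """Unnormalized-safe Gaussian density of obs under each candidate mean."""
    return np.exp(-0.5 * ((obs - mean) / std) ** 2) / (std * np.sqrt(2 * np.pi))

def update_posterior(prior, observed_time):
    """One Bayesian update of the belief over proficiency levels."""
    means = BASE_TASK_TIME / PROFICIENCY_LEVELS  # faster workers -> shorter times
    likelihood = gaussian_likelihood(observed_time, means, NOISE_STD)
    posterior = prior * likelihood
    return posterior / posterior.sum()

# Start from a uniform prior and observe three completions near 6.7 s,
# which is consistent with the fastest (1.5x) proficiency level.
belief = np.ones(len(PROFICIENCY_LEVELS)) / len(PROFICIENCY_LEVELS)
for duration in [6.5, 6.8, 6.7]:
    belief = update_posterior(belief, duration)

estimated_proficiency = PROFICIENCY_LEVELS[np.argmax(belief)]
print(belief, estimated_proficiency)
```

In the paper's setting, a meta-learned policy would take the posterior (or a statistic of it) as additional input, so that the same policy behaves differently for slow and fast collaborators; the sketch above only shows the inference half of that loop.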
ISSN: 2169-3536