Hierarchical Task-Parameterized Learning from Demonstration for Collaborative Object Movement
Learning from demonstration (LfD) enables a robot to emulate natural human movement instead of merely executing preprogrammed behaviors. This article presents a hierarchical LfD structure of task-parameterized models for object movement tasks, which are ubiquitous in everyday life and could benefit...
Main Authors: | Siyao Hu, Katherine J. Kuchenbecker |
---|---|
Format: | Article |
Language: | English |
Published: | Wiley, 2019-01-01 |
Series: | Applied Bionics and Biomechanics |
Online Access: | http://dx.doi.org/10.1155/2019/9765383 |
author | Siyao Hu; Katherine J. Kuchenbecker |
collection | DOAJ |
description | Learning from demonstration (LfD) enables a robot to emulate natural human movement instead of merely executing preprogrammed behaviors. This article presents a hierarchical LfD structure of task-parameterized models for object movement tasks, which are ubiquitous in everyday life and could benefit from robotic support. Our approach uses the task-parameterized Gaussian mixture model (TP-GMM) algorithm to encode sets of demonstrations in separate models that each correspond to a different task situation. The robot then maximizes its expected performance in a new situation by either selecting a good existing model or requesting new demonstrations. Compared to a standard implementation that encodes all demonstrations together for all test situations, the proposed approach offers four advantages. First, a simply defined distance function can be used to estimate test performance by calculating the similarity between a test situation and the existing models. Second, the proposed approach can improve generalization, e.g., better satisfying the demonstrated task constraints and speeding up task execution. Third, because the hierarchical structure encodes each demonstrated situation individually, a wider range of task situations can be modeled in the same framework without deteriorating performance. Last, adding or removing demonstrations incurs low computational load, and thus, the robot’s skill library can be built incrementally. We first instantiate the proposed approach in a simulated task to validate these advantages. We then show that the advantages transfer to real hardware for a task where naive participants collaborated with a Willow Garage PR2 robot to move a handheld object. For most tested scenarios, our hierarchical method achieved significantly better task performance and subjective ratings than both a passive model with only gravity compensation and a single TP-GMM encoding all demonstrations. |
format | Article |
id | doaj-art-233a6513585b494e9da004a70d61020c |
institution | Kabale University |
issn | 1176-2322; 1754-2103 |
language | English |
publishDate | 2019-01-01 |
publisher | Wiley |
record_format | Article |
series | Applied Bionics and Biomechanics |
affiliation | Department of Mechanical Engineering and Applied Mechanics and GRASP Laboratory, University of Pennsylvania, Philadelphia 19104, USA (both authors) |
title | Hierarchical Task-Parameterized Learning from Demonstration for Collaborative Object Movement |
url | http://dx.doi.org/10.1155/2019/9765383 |
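The abstract above describes encoding each demonstrated situation in its own TP-GMM and then, for a new test situation, either reusing the most similar stored model or requesting new demonstrations when none is similar enough. The Python sketch below illustrates only that selection step under stated assumptions: the names (SituationModel, situation_distance, select_or_request), the frame-origin representation of a situation, the Euclidean distance, and the threshold are illustrative placeholders, not the distance function or implementation used in the paper.

```python
import numpy as np

# Illustrative sketch only; not the authors' implementation.
# Each stored model pairs a task situation (reduced here to the
# origins of its task frames, e.g., object start and goal poses)
# with a TP-GMM trained on the demonstrations from that situation.

class SituationModel:
    def __init__(self, frame_origins, tp_gmm):
        self.frame_origins = np.asarray(frame_origins)  # shape: (n_frames, 3)
        self.tp_gmm = tp_gmm                            # trained TP-GMM (opaque here)

def situation_distance(test_frames, model):
    """Hypothetical similarity measure: mean Euclidean distance between
    corresponding frame origins. The paper defines its own distance
    function; this merely stands in for it."""
    test_frames = np.asarray(test_frames)
    return float(np.mean(np.linalg.norm(test_frames - model.frame_origins, axis=1)))

def select_or_request(test_frames, library, threshold=0.15):
    """Reuse the closest stored model if it is similar enough to the new
    situation; otherwise signal that new demonstrations are needed."""
    if not library:
        return None, "request_new_demonstrations"
    distances = [situation_distance(test_frames, m) for m in library]
    best = int(np.argmin(distances))
    if distances[best] <= threshold:
        return library[best], "reuse_existing_model"
    return None, "request_new_demonstrations"
```

Because each situation keeps its own model, adding or removing an entry in the library is a cheap list operation, which matches the abstract's point that the skill library can be built incrementally at low computational cost.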