Segment, Compare, and Learn: Creating Movement Libraries of Complex Task for Learning from Demonstration
Main Authors: | Adrian Prados, Gonzalo Espinoza, Luis Moreno, Ramon Barber |
---|---|
Format: | Article |
Language: | English |
Published: | MDPI AG, 2025-01-01 |
Series: | Biomimetics |
Subjects: | learning from demonstration; imitation learning; movement primitives; Gaussian mixture models; Gaussian process |
Online Access: | https://www.mdpi.com/2313-7673/10/1/64 |
author | Adrian Prados, Gonzalo Espinoza, Luis Moreno, Ramon Barber |
collection | DOAJ |
description | Motion primitives are a highly useful and widely employed tool in the field of Learning from Demonstration (LfD). However, obtaining a large number of motion primitives can be a tedious process, as they typically need to be generated individually for each task to be learned. To address this challenge, this work presents an algorithm for acquiring robotic skills through automatic and unsupervised segmentation. The algorithm divides tasks into simpler subtasks and generates motion primitive libraries that group common subtasks for use in subsequent learning processes. Our algorithm is based on an initial segmentation step using a heuristic method, followed by probabilistic clustering with Gaussian Mixture Models. Once the segments are obtained, they are grouped using Gaussian Optimal Transport on the Gaussian Processes (GPs) of each segment group, comparing their similarities through the energy cost of transforming one GP into another. This process requires no prior knowledge, is entirely autonomous, and supports multimodal information. The algorithm generates trajectories suitable for robotic tasks, establishing simple primitives that encapsulate the structure of the movements to be performed. Its effectiveness has been validated in manipulation tasks with a real robot, as well as through comparisons with state-of-the-art algorithms. |
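The description outlines a three-step pipeline: heuristic segmentation, probabilistic clustering with Gaussian Mixture Models, and grouping of segments by the optimal-transport cost between their Gaussian representations. For Gaussian distributions that transport cost has a well-known closed form, the 2-Wasserstein distance, which matches the "energy cost of transforming one GP into another" mentioned in the abstract. The sketch below is a minimal, self-contained illustration of the clustering and comparison steps under that reading, not the paper's implementation: the toy segments, the feature construction, and the names `make_segment` and `gaussian_w2` are hypothetical, and each segment's Gaussian process is approximated here by a single Gaussian over its points.

```python
import numpy as np
from scipy.linalg import sqrtm
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)

def gaussian_w2(m1, S1, m2, S2):
    # Closed-form 2-Wasserstein (optimal transport) distance between
    # N(m1, S1) and N(m2, S2):
    #   W2^2 = ||m1 - m2||^2 + tr(S1 + S2 - 2*(S2^1/2 S1 S2^1/2)^1/2)
    # sqrtm may return tiny imaginary parts, so keep only the real part.
    S2h = sqrtm(S2)
    bures = np.trace(S1 + S2 - 2.0 * np.real(sqrtm(S2h @ S1 @ S2h)))
    return float(np.sqrt(max(float(np.sum((m1 - m2) ** 2) + bures), 0.0)))

def make_segment(center, n=50):
    # Toy 2-D trajectory segment; in the paper these would come from the
    # heuristic segmentation of a demonstrated task.
    t = np.linspace(0.0, 1.0, n)[:, None]
    noise = 0.05 * rng.standard_normal((n, 2))
    return np.asarray(center) + np.hstack([t, np.sin(3.0 * t)]) + noise

segments = [make_segment(c) for c in ([0, 0], [0.1, 0], [5, 5], [5.2, 5])]

# One Gaussian per segment (mean and covariance of its points), used as a
# crude proxy for the Gaussian-process posterior of each segment.
stats = [(s.mean(axis=0), np.cov(s.T)) for s in segments]

# Probabilistic clustering of segment summaries with a Gaussian Mixture Model.
feats = np.array([np.hstack([m, S.ravel()]) for m, S in stats])
labels = GaussianMixture(n_components=2, covariance_type="spherical",
                         random_state=0).fit_predict(feats)
print("GMM cluster labels:", labels)

# Pairwise transport cost: a low cost suggests two segments realize the same
# movement primitive and can share one entry in the library.
for i in range(len(stats)):
    for j in range(i + 1, len(stats)):
        d = gaussian_w2(*stats[i], *stats[j])
        print(f"W2(segment {i}, segment {j}) = {d:.3f}")
```

In a library-building loop, segment pairs whose transport cost falls below a chosen threshold would be merged into a single primitive group; with this toy data, the first two and the last two segments form the natural groups.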
format | Article |
id | doaj-art-88ddc43eb63f4ffa9e5462d5e7846274 |
institution | Kabale University |
issn | 2313-7673 |
language | English |
publishDate | 2025-01-01 |
publisher | MDPI AG |
record_format | Article |
series | Biomimetics |
doi | 10.3390/biomimetics10010064 |
affiliation | RoboticsLab, Universidad Carlos III de Madrid, 28911 Madrid, Spain (all four authors) |
title | Segment, Compare, and Learn: Creating Movement Libraries of Complex Task for Learning from Demonstration |
topic | learning from demonstration; imitation learning; movement primitives; Gaussian mixture models; Gaussian process |
url | https://www.mdpi.com/2313-7673/10/1/64 |