Segment, Compare, and Learn: Creating Movement Libraries of Complex Task for Learning from Demonstration

Bibliographic Details
Main Authors: Adrian Prados, Gonzalo Espinoza, Luis Moreno, Ramon Barber
Format: Article
Language: English
Published: MDPI AG, 2025-01-01
Series: Biomimetics
Subjects:
Online Access: https://www.mdpi.com/2313-7673/10/1/64
Description
Summary: Motion primitives are a highly useful and widely employed tool in the field of Learning from Demonstration (LfD). However, obtaining a large number of motion primitives can be a tedious process, as they typically need to be generated individually for each task to be learned. To address this challenge, this work presents an algorithm for acquiring robotic skills through automatic and unsupervised segmentation. The algorithm divides tasks into simpler subtasks and generates motion primitive libraries that group common subtasks for use in subsequent learning processes. Our algorithm is based on an initial segmentation step using a heuristic method, followed by probabilistic clustering with Gaussian Mixture Models. Once the segments are obtained, they are grouped using Gaussian Optimal Transport on the Gaussian Processes (GPs) of each segment group, comparing their similarities through the energy cost of transforming one GP into another. This process requires no prior knowledge, is entirely autonomous, and supports multimodal information. The algorithm enables the generation of trajectories suitable for robotic tasks, establishing simple primitives that encapsulate the structure of the movements to be performed. Its effectiveness has been validated in manipulation tasks with a real robot, as well as through comparisons with state-of-the-art algorithms.
ISSN: 2313-7673
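
Note: The summary above describes a pipeline of heuristic pre-segmentation, probabilistic clustering with Gaussian Mixture Models, per-segment Gaussian Process models, and Gaussian Optimal Transport for grouping similar segments. The Python sketch below is illustrative only and is not the authors' implementation: as a simplifying assumption it segments a single demonstration with a Gaussian Mixture Model and summarizes each segment as one Gaussian, comparing segments with the closed-form 2-Wasserstein (Gaussian optimal transport) distance in place of the energy cost between full Gaussian Processes. The function names (segment_trajectory, gaussian_w2, segment_distance) are hypothetical.

# Hypothetical sketch, not the authors' code: GMM-based segmentation plus a
# Gaussian optimal-transport cost between segments, each summarized as a
# single Gaussian rather than a Gaussian Process.
import numpy as np
from scipy.linalg import sqrtm
from sklearn.mixture import GaussianMixture


def segment_trajectory(points, n_segments):
    # Cluster trajectory samples (T x D array) into candidate subtask segments.
    gmm = GaussianMixture(n_components=n_segments, covariance_type="full",
                          random_state=0).fit(points)
    labels = gmm.predict(points)
    return [points[labels == k] for k in range(n_segments)]


def gaussian_w2(mu1, cov1, mu2, cov2):
    # Closed-form squared 2-Wasserstein distance between two Gaussians.
    sqrt_cov1 = sqrtm(cov1)
    cross = np.real(sqrtm(sqrt_cov1 @ cov2 @ sqrt_cov1))
    bures = np.trace(cov1 + cov2 - 2.0 * cross)
    return float(np.sum((mu1 - mu2) ** 2) + max(bures, 0.0))


def segment_distance(seg_a, seg_b):
    # Transport cost between two segments; low cost suggests they belong to
    # the same motion-primitive group in the library.
    mu_a, cov_a = seg_a.mean(axis=0), np.cov(seg_a, rowvar=False)
    mu_b, cov_b = seg_b.mean(axis=0), np.cov(seg_b, rowvar=False)
    return gaussian_w2(mu_a, cov_a, mu_b, cov_b)


if __name__ == "__main__":
    # Synthetic 2-D demonstration with two distinct phases.
    rng = np.random.default_rng(0)
    demo = np.vstack([rng.normal([0.0, 0.0], 0.1, (50, 2)),
                      rng.normal([1.0, 1.0], 0.1, (50, 2))])
    segments = segment_trajectory(demo, n_segments=2)
    print("transport cost:", segment_distance(segments[0], segments[1]))

In this toy example the two phases yield a large transport cost and would therefore be kept as separate primitives; segments from different demonstrations with a small cost would instead be grouped into a shared library entry.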