Semantics-aware human motion generation from audio instructions


Bibliographic Details
Main Authors: Zi-An Wang, Shihao Zou, Shiyao Yu, Mingyuan Zhang, Chao Dong
Format: Article
Language: English
Published: Elsevier 2025-06-01
Series: Graphical Models
Online Access:http://www.sciencedirect.com/science/article/pii/S1524070325000153
Description
Summary: Recent advances in interactive technologies have highlighted the prominence of audio signals for semantic encoding. This paper explores a new task in which audio signals serve as conditioning inputs to generate motions that align with the semantics of the audio. Unlike text-based interaction, audio offers a more natural and intuitive mode of communication. However, existing methods typically focus on matching motions to music or speech rhythms, which often yields only a weak connection between the semantics of the audio and the generated motions. We propose an end-to-end framework built on a masked generative transformer, enhanced by a memory-retrieval attention module to handle sparse and lengthy audio inputs. Additionally, we enrich existing datasets by converting descriptions into a conversational style and generating corresponding audio with varied speaker identities. Experiments demonstrate the effectiveness and efficiency of the proposed framework, showing that audio instructions can convey semantics comparable to text while offering more practical and user-friendly interaction.
ISSN:1524-0703
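
Note: the abstract names a memory-retrieval attention module for conditioning motion generation on sparse, lengthy audio, but this record carries no implementation details. Below is a minimal NumPy sketch of the general idea under stated assumptions: motion tokens act as queries that retrieve context from a precomputed bank of audio features via scaled dot-product cross-attention. All names here (memory_retrieval_attention, the projection matrices, the dimensions) are illustrative and are not the paper's API.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax along the given axis.
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def memory_retrieval_attention(motion_tokens, audio_memory, Wq, Wk, Wv):
    """Cross-attention: motion tokens query a cached bank of audio features.

    motion_tokens: (T_m, d) -- queries from the motion stream
    audio_memory:  (T_a, d) -- precomputed features of a long, sparse audio clip
    Returns:       (T_m, d) -- audio context retrieved for each motion token
    """
    Q = motion_tokens @ Wq                    # (T_m, d)
    K = audio_memory @ Wk                     # (T_a, d)
    V = audio_memory @ Wv                     # (T_a, d)
    scores = Q @ K.T / np.sqrt(Q.shape[-1])   # (T_m, T_a) similarity
    weights = softmax(scores, axis=-1)        # retrieval weights over memory
    return weights @ V

# Toy usage with random features.
rng = np.random.default_rng(0)
d, T_m, T_a = 64, 16, 512                     # short motion vs. long audio memory
out = memory_retrieval_attention(
    rng.normal(size=(T_m, d)), rng.normal(size=(T_a, d)),
    rng.normal(size=(d, d)), rng.normal(size=(d, d)), rng.normal(size=(d, d)))
print(out.shape)                              # (16, 64)
```

One plausible reason for structuring the module this way: caching the audio features as a fixed memory bank lets each generation step attend over a long clip without re-encoding it, which matches the abstract's stated goal of handling sparse, lengthy audio inputs efficiently.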