Audio2DiffuGesture: Generating a diverse co-speech gesture based on a diffusion model

Bibliographic Details
Main Authors: Hongze Yao, Yingting Xu, Weitao WU, Huabin He, Wen Ren, Zhiming Cai
Format: Article
Language: English
Published: AIMS Press 2024-09-01
Series: Electronic Research Archive
Online Access: https://www.aimspress.com/article/doi/10.3934/era.2024250
Description
Summary: People use a combination of language and gestures to convey intentions, making the generation of natural co-speech gestures a challenging task. In audio-driven gesture generation, relying solely on features extracted from raw audio waveforms limits the model's ability to fully learn the joint distribution between audio and gestures. To address this limitation, we integrated key features from both raw audio waveforms and Mel-spectrograms. Specifically, we employed cascaded 1D convolutions to extract features from the audio waveform and a two-stage attention mechanism to capture features from the Mel-spectrogram. The fused features were then input into a Transformer with cross-dimension attention for sequence modeling, which mitigated accumulated non-autoregressive errors and reduced redundant information. We developed a diffusion model-based Audio to Diffusion Gesture (A2DG) generation pipeline capable of producing high-quality and diverse gestures. In extensive experiments, our method demonstrated superior performance compared to established baselines. On the TED Gesture and TED Expressive datasets, the Fréchet Gesture Distance (FGD) improved by 16.8% and 56%, respectively. Additionally, a user study validated that the co-speech gestures generated by our method are more vivid and realistic.
ISSN:2688-1594
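
The abstract describes a dual-path audio encoder: cascaded 1D convolutions over the raw waveform, an attention-based branch over the Mel-spectrogram, and a Transformer that models the fused feature sequence used to condition the diffusion model. The PyTorch sketch below illustrates that fusion idea only; all layer sizes, module names (WaveformEncoder, SpectrogramEncoder, A2DGAudioEncoder), the simplified two-pass attention, and the frame count are assumptions for illustration and do not reproduce the authors' implementation.

```python
# Hypothetical sketch of the dual-path audio feature fusion described in the
# abstract. All dimensions and module names are illustrative assumptions.
import torch
import torch.nn as nn


class WaveformEncoder(nn.Module):
    """Cascaded 1D convolutions over the raw audio waveform."""

    def __init__(self, out_dim=128):
        super().__init__()
        self.convs = nn.Sequential(
            nn.Conv1d(1, 32, kernel_size=15, stride=4, padding=7), nn.GELU(),
            nn.Conv1d(32, 64, kernel_size=15, stride=4, padding=7), nn.GELU(),
            nn.Conv1d(64, out_dim, kernel_size=15, stride=4, padding=7), nn.GELU(),
        )

    def forward(self, wav):                       # wav: (B, 1, samples)
        return self.convs(wav).transpose(1, 2)    # (B, T_wav, out_dim)


class SpectrogramEncoder(nn.Module):
    """Two self-attention passes over Mel-spectrogram frames (a stand-in for
    the paper's two-stage attention mechanism)."""

    def __init__(self, n_mels=80, dim=128, heads=4):
        super().__init__()
        self.proj = nn.Linear(n_mels, dim)
        self.attn1 = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.attn2 = nn.MultiheadAttention(dim, heads, batch_first=True)

    def forward(self, mel):                       # mel: (B, T_mel, n_mels)
        x = self.proj(mel)
        x, _ = self.attn1(x, x, x)
        x, _ = self.attn2(x, x, x)
        return x                                  # (B, T_mel, dim)


class A2DGAudioEncoder(nn.Module):
    """Fuse both branches and model the sequence with a Transformer encoder."""

    def __init__(self, dim=128, n_frames=34):
        super().__init__()
        self.wave = WaveformEncoder(dim)
        self.spec = SpectrogramEncoder(dim=dim)
        layer = nn.TransformerEncoderLayer(d_model=2 * dim, nhead=8,
                                           batch_first=True)
        self.transformer = nn.TransformerEncoder(layer, num_layers=4)
        self.n_frames = n_frames                  # target number of gesture frames

    def forward(self, wav, mel):
        # Resample both feature streams to the gesture frame rate, then fuse.
        w = self.wave(wav)
        s = self.spec(mel)
        w = nn.functional.interpolate(w.transpose(1, 2), self.n_frames).transpose(1, 2)
        s = nn.functional.interpolate(s.transpose(1, 2), self.n_frames).transpose(1, 2)
        fused = torch.cat([w, s], dim=-1)         # (B, n_frames, 2 * dim)
        return self.transformer(fused)            # conditioning for the diffusion model


if __name__ == "__main__":
    enc = A2DGAudioEncoder()
    wav = torch.randn(2, 1, 16000 * 2)            # 2 s of 16 kHz audio
    mel = torch.randn(2, 200, 80)                 # Mel-spectrogram frames
    print(enc(wav, mel).shape)                    # torch.Size([2, 34, 256])
```

In such a design, the encoder output would serve as the conditioning signal for the diffusion model's denoising network, which iteratively refines noised pose sequences into co-speech gestures; the details of that stage are not specified in the abstract and are therefore omitted here.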