Audio2DiffuGesture: Generating a diverse co-speech gesture based on a diffusion model
People use a combination of language and gestures to convey intentions, making the generation of natural co-speech gestures a challenging task. In audio-driven gesture generation, relying solely on features extracted from raw audio waveforms limits the model's ability to fully learn the joint d...
Main Authors: Hongze Yao, Yingting Xu, Weitao Wu, Huabin He, Wen Ren, Zhiming Cai
Format: Article
Language: English
Published: AIMS Press, 2024-09-01
Series: Electronic Research Archive
Online Access: https://www.aimspress.com/article/doi/10.3934/era.2024250
Similar Items
- Sense of agency in gesture-based interactions: modulated by sensory modality but not feedback meaning, by George Evangelou, et al. (2025-01-01)
- Embodied sharpness: exploring the slicing gesture in political talk shows, by Silva H. Ladewig (2025-02-01)
- Gestural Ways of Depicting Metaphors and Abstract Concepts, by Kraśnicka Izabela (2024-12-01)
- Music and Gesture—New Perspectives in Conducting and in Education, by Riccardo Lombardo (2024-12-01)
- Idiosyncratic gesture use in a mother-infant dyad in chimpanzees (Pan troglodytes schweinfurthii) in the wild, by Bas van Boekholt, et al. (2024-10-01)