Audio2DiffuGesture: Generating a diverse co-speech gesture based on a diffusion model
People use a combination of language and gestures to convey intentions, making the generation of natural co-speech gestures a challenging task. In audio-driven gesture generation, relying solely on features extracted from raw audio waveforms limits the model's ability to fully learn the joint d...
Format: Article
Language: English
Published: AIMS Press, 2024-09-01
Series: Electronic Research Archive
Online Access: https://www.aimspress.com/article/doi/10.3934/era.2024250