UniMotion-DM: Uniform Text-Motion Generation and Editing via Diffusion Model
Diffusion models have demonstrated substantial success in controllable generation for continuous modalities, positioning them as highly suitable for tasks such as human motion generation. However, existing approaches are typically limited to single-task applications, such as text-to-motion generatio...
| Main Authors: | Song Lin, Wenjun Hou |
|---|---|
| Format: | Article |
| Language: | English |
| Published: | IEEE, 2024-01-01 |
| Series: | IEEE Access |
| Subjects: | |
| Online Access: | https://ieeexplore.ieee.org/document/10802885/ |
Similar Items
- Investigating the Role of Intravoxel Incoherent Motion Diffusion-Weighted Imaging in Evaluating Multiple Sclerosis Lesions
  by: Othman I. Alomair, et al.
  Published: (2025-05-01)
- Transdiagnostic Symptom Domains Have Distinct Patterns of Association With Head Motion During Multimodal Imaging in Children
  by: Kavari Hercules, et al.
  Published: (2025-07-01)
- Bridging text and crystal structures: literature-driven contrastive learning for materials science
  by: Yuta Suzuki, et al.
  Published: (2025-01-01)
- Dynamic Tuning and Multi-Task Learning-Based Model for Multimodal Sentiment Analysis
  by: Yi Liang, et al.
  Published: (2025-06-01)
- Multimodal diffusion framework for collaborative text image audio generation and applications
  by: Junhua Wang, et al.
  Published: (2025-07-01)