Sequence-to-Sequence Text Generation with Discrete Diffusion Models
Diffusion language models are currently the most promising non-autoregressive language models, and are expected to replace autoregressive language models, whose inference is slow, to achieve efficient text generation without sacrificing quality. Sequence-to-sequence (Seq2Seq)...
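For readers unfamiliar with how a discrete diffusion model generates text non-autoregressively, the sketch below illustrates the common absorbing-state (masking) formulation: tokens are progressively replaced by a `[MASK]` symbol in the forward process, and a denoiser fills all masked positions in parallel in the reverse process. This is a generic illustration of the technique, not the specific method of the article; the toy denoiser stands in for a trained model.

```python
import random

MASK = "[MASK]"

def forward_mask(tokens, t, num_steps, rng):
    """Forward (noising) process of absorbing-state discrete diffusion:
    each token is independently replaced by [MASK] with probability
    t / num_steps, so the sequence is fully masked at t = num_steps."""
    p = t / num_steps
    return [MASK if rng.random() < p else tok for tok in tokens]

def reverse_step(tokens, denoise_fn):
    """One reverse (denoising) step: the model predicts a token for
    every masked position in parallel; unmasked tokens are kept."""
    return [denoise_fn(i, tokens) if tok == MASK else tok
            for i, tok in enumerate(tokens)]

# Toy usage: mask half the steps, then denoise with a dummy predictor.
rng = random.Random(0)
src = "efficient text generation with diffusion models".split()
noised = forward_mask(src, t=5, num_steps=10, rng=rng)
restored = reverse_step(noised, lambda i, toks: "the")  # dummy model
```

In contrast to autoregressive decoding, which emits one token per forward pass, each reverse step here updates every masked position at once, which is the source of the inference-speed advantage the abstract refers to.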
| Main Author: | JIANG Hang, CAI Guoyong, LI Sihui |
|---|---|
| Format: | Article |
| Language: | Chinese (zho) |
| Published: | Journal of Computer Engineering and Applications Beijing Co., Ltd., Science Press, 2025-03-01 |
| Series: | Jisuanji kexue yu tansuo |
| Subjects: | |
| Online Access: | http://fcst.ceaj.org/fileup/1673-9418/PDF/2405063.pdf |
Similar Items
- SMCLM: Semantically Meaningful Causal Language Modeling for Autoregressive Paraphrase Generation
  by: Michal Perelkiewicz, et al.
  Published: (2025-01-01)
- VolumeDiffusion: Feed-forward text-to-3D generation with efficient volumetric encoder
  by: Zhicong Tang, et al.
  Published: (2025-08-01)
- TransFusion: Generating long, high fidelity time series using diffusion models with transformers
  by: Md Fahim Sikder, et al.
  Published: (2025-06-01)
- How Implicit Sequence Learning and Explicit Sequence Knowledge Are Expressed in a Serial Response Time Task
  by: Marius Barth, et al.
  Published: (2025-04-01)
- Multimodal diffusion framework for collaborative text image audio generation and applications
  by: Junhua Wang, et al.
  Published: (2025-07-01)