Continuously Augmented Discrete Diffusion model for Categorical Generative Modeling
Authors: Huangjie Zheng, Shansan Gong, Ruixiang Zhang, Tianrong Chen, Jiatao Gu, Mingyuan Zhou, Navdeep Jaitly, Yizhe Zhang
Standard discrete diffusion models treat all unobserved states identically by mapping them to an absorbing [MASK] token. This creates an 'information void' in which semantic information that could be inferred from unmasked tokens is lost between denoising steps. We introduce Continuously Augmented Discrete Diffusion (CADD), a framework that augments the discrete state space with a paired diffusion in a continuous latent space. This yields graded, gradually corrupted states in which masked tokens are represented by noisy yet informative latent vectors rather than collapsed 'information voids'. At each reverse step, CADD may leverage the continuous latent as a semantic hint to guide discrete denoising. The design is clean and compatible with existing discrete diffusion training. At sampling time, the strength and choice of estimator for the continuous latent vector enable a controlled trade-off between mode-covering (generating diverse outputs) and mode-seeking (generating contextually precise outputs) behaviors. Empirically, we demonstrate that CADD improves generative quality over mask-based diffusion across text generation, image synthesis, and code modeling, with consistent gains on both qualitative and quantitative metrics against strong discrete baselines.
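The forward corruption described above can be sketched as follows. This is a minimal toy illustration, not the paper's implementation: the names (`MASK_ID`, `cadd_forward`, the embedding table) and the specific noise schedule are assumptions chosen for clarity. The key idea it demonstrates is that a masked position keeps a Gaussian-diffused latent of its original embedding instead of collapsing to an uninformative absorbing state.

```python
import numpy as np

MASK_ID = 0          # hypothetical absorbing-state token id
VOCAB, DIM = 8, 4    # toy vocabulary size and embedding dimension

rng = np.random.default_rng(0)
embed = rng.normal(size=(VOCAB, DIM))  # toy embedding table e(x)

def cadd_forward(tokens, mask_prob, alpha_bar):
    """Sketch of CADD's paired forward corruption at one noise level.

    Discrete chain: each token is independently absorbed into MASK_ID
    with probability `mask_prob`.
    Continuous chain: every position also carries a diffused latent
    z_t = sqrt(alpha_bar) * e(x0) + sqrt(1 - alpha_bar) * eps,
    so masked positions stay informative rather than becoming a void.
    """
    tokens = np.asarray(tokens)
    masked = rng.random(tokens.shape) < mask_prob
    x_t = np.where(masked, MASK_ID, tokens)          # corrupted discrete state
    eps = rng.normal(size=(*tokens.shape, DIM))      # Gaussian noise
    z_t = np.sqrt(alpha_bar) * embed[tokens] + np.sqrt(1.0 - alpha_bar) * eps
    return x_t, z_t, masked

# Corrupt a short sequence: masked slots become MASK_ID in x_t,
# but z_t still carries a noisy version of their embeddings.
x_t, z_t, masked = cadd_forward([3, 5, 2, 7], mask_prob=0.5, alpha_bar=0.6)
```

A reverse-step denoiser would then condition on both `x_t` and `z_t`, using the continuous latent as the "semantic hint" at masked positions.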
LaDiR: Latent Diffusion Enhances LLMs for Text Reasoning
April 28, 2026 · Research area: Speech and Natural Language Processing · Conference: ICLR
Large Language Models (LLMs) demonstrate their reasoning ability through chain-of-thought (CoT) generation. However, an LLM's autoregressive decoding may limit its ability to revisit and refine earlier tokens holistically, which can also lead to inefficient exploration of diverse solutions. In this paper, we propose LaDiR (Latent Diffusion Reasoner), a novel reasoning framework that unifies the expressiveness of continuous latent…
Target Concrete Score Matching: A Holistic Framework for Discrete Diffusion
July 11, 2025 · Research areas: Methods and Algorithms; Speech and Natural Language Processing · Conference: ICML
Discrete diffusion is a promising framework for modeling and generating discrete data. In this work, we present Target Concrete Score Matching (TCSM), a novel and versatile objective for training and fine-tuning discrete diffusion models. TCSM provides a general framework with broad applicability: it supports pre-training discrete diffusion models directly from data samples, and many existing discrete diffusion approaches naturally emerge as…