
⚡Reconstruction vs. Generation:

Taming Optimization Dilemma in Latent Diffusion Models

FID=1.35 on ImageNet-256 & 21.8x faster training than DiT!

Jingfeng Yao, Xinggang Wang*

Huazhong University of Science and Technology (HUST)

*Corresponding author: [email protected]


✨ Highlights

  • A latent diffusion system with 0.28 rFID and 1.35 FID on ImageNet-256 generation, surpassing all published state-of-the-art results!

  • More than 21.8× faster convergence than the original DiT with VA-VAE and LightningDiT!

  • Surpasses DiT (FID=2.11) with only 8 GPUs in about 10 hours. Let's make diffusion transformer research more affordable!

📰 News

  • [2025.01.02] We have released the pre-trained weights.

  • [2025.01.01] We release the code and paper for VA-VAE and LightningDiT! The weights and pre-extracted latents will be released soon.

📄 Introduction

Latent diffusion models (LDMs) with Transformer architectures excel at generating high-fidelity images. However, recent studies reveal an optimization dilemma in this two-stage design: while increasing the per-token feature dimension in visual tokenizers improves reconstruction quality, it requires substantially larger diffusion models and more training iterations to achieve comparable generation performance. Consequently, existing systems often settle for sub-optimal solutions, either producing visual artifacts due to information loss within tokenizers or failing to converge fully due to expensive computation costs.

We argue that this dilemma stems from the inherent difficulty in learning unconstrained high-dimensional latent spaces. To address this, we propose aligning the latent space with pre-trained vision foundation models when training the visual tokenizers. Our proposed VA-VAE (Vision foundation model Aligned Variational AutoEncoder) significantly expands the reconstruction-generation frontier of latent diffusion models, enabling faster convergence of Diffusion Transformers (DiT) in high-dimensional latent spaces. To exploit the full potential of VA-VAE, we build an enhanced DiT baseline with improved training strategies and architecture designs, termed LightningDiT. The integrated system demonstrates remarkable training efficiency by reaching FID=2.11 in just 64 epochs -- an over 21× convergence speedup over the original DiT implementations, while achieving state-of-the-art performance on ImageNet-256 image generation with FID=1.35.
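
To make the alignment idea concrete, below is a minimal, illustrative sketch of a loss that ties the tokenizer's latent tokens to features from a frozen vision foundation model such as DINOv2. It is a simplification under stated assumptions: the module names, shapes, and the plain cosine-similarity formulation here are ours for illustration; the paper's actual VF loss is more elaborate.

import torch
import torch.nn.functional as F
from torch import nn

class LatentAlignmentHead(nn.Module):
    """Projects VAE latent tokens into the foundation model's feature space."""
    def __init__(self, latent_dim: int, feature_dim: int):
        super().__init__()
        self.proj = nn.Linear(latent_dim, feature_dim)

    def forward(self, latents: torch.Tensor) -> torch.Tensor:
        # latents: [B, N, latent_dim], one token per spatial position
        return self.proj(latents)

def alignment_loss(latents, vfm_features, head):
    # Pull each latent token toward the direction of the frozen
    # foundation-model feature at the same spatial position.
    pred = F.normalize(head(latents), dim=-1)            # [B, N, D]
    target = F.normalize(vfm_features.detach(), dim=-1)  # frozen targets
    return (1.0 - (pred * target).sum(dim=-1)).mean()    # 1 - cosine similarity

During tokenizer training, a term like this would be added to the usual reconstruction and KL objectives, so the high-dimensional latent space inherits the structure of the foundation model's features rather than being learned unconstrained.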

📝 Results

  • State-of-the-art performance on ImageNet 256×256 with FID=1.35.
  • Surpasses DiT within only 64 training epochs, achieving a 21.8× speedup.

🎯 How to Use

Installation

conda create -n lightningdit python=3.10.12
conda activate lightningdit
pip install -r requirements.txt
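
Optionally, sanity-check the environment before downloading weights. This one-liner only assumes the PyTorch build installed by requirements.txt:

python -c "import torch; print(torch.__version__, torch.cuda.is_available())"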

Inference with Pre-trained Models

  • Download the pre-trained weights and data information files:

  • Quickly sample demo images:

    Run:

    bash run_fast_inference.sh ${config_path}
    

    Images will be saved to demo_images/demo_samples.png.

  • Sample for FID-50k evaluation:

    Run:

    bash run_inference.sh ${config_path}
    

    NOTE: The FID result reported by the script serves as a reference value. The final FID-50k reported in the paper is evaluated with the ADM evaluation suite:

    git clone https://github.com/openai/guided-diffusion.git
    
    # save your npz file with tools/save_npz.py
    bash run_fid_eval.sh /path/to/your.npz
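
    For reference, ADM's evaluator reads a single uint8 array of shape [N, H, W, 3] stored under the npz default key "arr_0". Below is a minimal sketch of what tools/save_npz.py needs to produce; the sample directory layout, file extension, and paths are assumptions, so treat the repo script as the source of truth:

    import os
    import numpy as np
    from PIL import Image

    def pack_samples(samples_dir: str, out_path: str) -> None:
        # Collect generated images in a deterministic order.
        files = sorted(f for f in os.listdir(samples_dir) if f.endswith(".png"))
        imgs = [np.asarray(Image.open(os.path.join(samples_dir, f)).convert("RGB"))
                for f in files]
        arr = np.stack(imgs).astype(np.uint8)  # [N, 256, 256, 3]
        np.savez(out_path, arr)                # unnamed array is stored as "arr_0"

    pack_samples("samples", "your.npz")  # hypothetical paths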
    

🎮 Train Your Own Models

  • We provide a detailed tutorial for training your own models to an FID of 2.1 within only 64 epochs; it takes only about 10 hours with 8 × H800 GPUs. A typical launch is sketched below. Let's make diffusion transformers research more affordable!
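
    For orientation, a distributed run on a single 8-GPU node typically looks like the command below. The entry-point script and config path are hypothetical placeholders; follow the tutorial for the actual commands:

    # hypothetical launch: substitute the real entry point and config from the tutorial
    torchrun --nproc_per_node=8 train.py --config ${config_path}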

❤️ Acknowledgements

This repo is mainly built on DiT, FastDiT, and SiT. Our VA-VAE code is mainly built on LDM and MAR. Thanks to all these great works.

📝 Citation

If you find our work useful, please consider citing our related papers:

# arXiv preprint
@article{vavae,
  title={Reconstruction vs. Generation: Taming Optimization Dilemma in Latent Diffusion Models},
  author={Yao, Jingfeng and Wang, Xinggang},
  journal={arXiv preprint arXiv:2501.01423},
  year={2025}
}

# NeurIPS 2024
@article{fasterdit,
  title={FasterDiT: Towards Faster Diffusion Transformers Training without Architecture Modification},
  author={Yao, Jingfeng and Wang, Cheng and Liu, Wenyu and Wang, Xinggang},
  journal={arXiv preprint arXiv:2410.10356},
  year={2024}
}
