FreeInit: Bridging Initialization Gap
in Video Diffusion Models

S-Lab, Nanyang Technological University

We propose FreeInit, a concise yet effective method to improve the temporal consistency of videos generated by diffusion models. FreeInit requires no additional training, introduces no learnable parameters, and can be easily incorporated into arbitrary video diffusion models at inference time.



Though diffusion-based video generation has witnessed rapid progress, the inference results of existing models still exhibit unsatisfactory temporal consistency and unnatural dynamics. In this paper, we delve deep into the noise initialization of video diffusion models and discover an implicit training-inference gap that contributes to the drop in inference quality. Our key findings are: 1) the spatial-temporal frequency distribution of the initial latent's signal-to-noise ratio (SNR) at inference is intrinsically different from that at training, and 2) the denoising process is significantly influenced by the low-frequency component of the initial noise. Motivated by these observations, we propose a concise yet effective inference sampling strategy, FreeInit, which significantly improves the temporal consistency of videos generated by diffusion models. By iteratively refining the spatial-temporal low-frequency component of the initial latent during inference, FreeInit compensates for the initialization gap between training and inference, thus effectively improving the subject appearance and temporal consistency of generation results. Extensive experiments demonstrate that FreeInit consistently enhances the generation results of various text-to-video generation models without additional training.

FreeInit Framework

We propose FreeInit to bridge the initialization gap between training and inference of video diffusion models. FreeInit refines the initial noise at inference in an iterative manner. Through DDIM Sampling, DDPM Forward and Noise Reinitialization, the low-frequency components of the initial noise are gradually refined, consistently enhancing temporal consistency and subject appearance.
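The Noise Reinitialization step above can be sketched in code: the latent obtained from DDPM Forward is decomposed in the spatial-temporal frequency domain, its low-frequency band is kept, and the high-frequency band is replaced with fresh Gaussian noise. The following is a minimal NumPy sketch under stated assumptions: the function names, the Gaussian low-pass filter shape, and the cutoff `d0` are illustrative choices, not the exact configuration used by FreeInit.

```python
import numpy as np

def gaussian_low_pass_filter(shape, d0=0.25):
    """Gaussian low-pass filter over the (frame, height, width) frequency
    domain, centered at DC.  `d0` is a hypothetical normalized cutoff."""
    axes = [np.linspace(-1.0, 1.0, n) for n in shape]
    grid = np.stack(np.meshgrid(*axes, indexing="ij"))
    d2 = (grid ** 2).sum(axis=0)          # squared distance from DC
    return np.exp(-d2 / (2.0 * d0 ** 2))  # 1 at DC, decays at high freq.

def reinitialize_noise(z, rng, d0=0.25):
    """One FreeInit-style Noise Reinitialization step (sketch):
    keep the low-frequency band of the diffused latent `z` and mix in the
    high-frequency band of freshly sampled Gaussian noise.

    `z` has shape (..., frames, height, width); the FFT runs over the last
    three axes, i.e. jointly over the temporal and spatial dimensions.
    """
    ax = (-3, -2, -1)
    filt = gaussian_low_pass_filter(z.shape[-3:], d0)
    eta = rng.standard_normal(z.shape)    # fresh Gaussian noise

    # Centered 3D FFTs of the latent and the fresh noise.
    z_freq = np.fft.fftshift(np.fft.fftn(z, axes=ax), axes=ax)
    eta_freq = np.fft.fftshift(np.fft.fftn(eta, axes=ax), axes=ax)

    # Low frequencies from z, high frequencies from the fresh noise.
    mixed = z_freq * filt + eta_freq * (1.0 - filt)

    # Back to the spatial-temporal domain; imaginary residue is numerical.
    out = np.fft.ifftn(np.fft.ifftshift(mixed, axes=ax), axes=ax)
    return out.real
```

In the full FreeInit loop, this reinitialized noise replaces the initial latent for the next round of DDIM sampling, and the cycle repeats for a few iterations so that the low-frequency structure is progressively refined.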


Videos generated by AnimateDiff, before and after applying FreeInit.


Videos generated by ModelScope, before and after applying FreeInit.


Videos generated by VideoCrafter, before and after applying FreeInit.

Related Links

Check out FreeU and FreeNoise for more tuning-free add-ons to image and video diffusion models! :)


If you find our work useful, please consider citing our paper:

    @article{wu2023freeinit,
      title={FreeInit: Bridging Initialization Gap in Video Diffusion Models},
      author={Wu, Tianxing and Si, Chenyang and Jiang, Yuming and Huang, Ziqi and Liu, Ziwei},
      journal={arXiv preprint arXiv:2312.07537},
      year={2023}
    }