
AnimateLCM-I2V for Fast Image-conditioned Video Generation in 4 steps.

AnimateLCM-I2V is a latent image-to-video consistency model finetuned following the strategy proposed in the AnimateLCM paper, without requiring teacher models.

AnimateLCM: Computation-Efficient Personalized Style Video Generation without Personalized Video Data by Fu-Yun Wang et al.

Example video

For more details, please refer to our [paper] | [code] | [proj-page] | [civitai].
