arxiv:2303.13439

Text2Video-Zero: Text-to-Image Diffusion Models are Zero-Shot Video Generators

Published on Mar 23, 2023

Abstract

Recent text-to-video generation approaches rely on computationally heavy training and require large-scale video datasets. In this paper, we introduce a new task of zero-shot text-to-video generation and propose a low-cost approach (without any training or optimization) by leveraging the power of existing text-to-image synthesis methods (e.g., Stable Diffusion), making them suitable for the video domain. Our key modifications include (i) enriching the latent codes of the generated frames with motion dynamics to keep the global scene and the background time-consistent; and (ii) reprogramming frame-level self-attention using a new cross-frame attention of each frame on the first frame, to preserve the context, appearance, and identity of the foreground object. Experiments show that this leads to low-overhead yet high-quality and remarkably consistent video generation. Moreover, our approach is not limited to text-to-video synthesis but is also applicable to other tasks such as conditional and content-specialized video generation, and Video Instruct-Pix2Pix, i.e., instruction-guided video editing. As experiments show, our method performs comparably to, or sometimes better than, recent approaches, despite not being trained on additional video data. Our code will be open-sourced at: https://github.com/Picsart-AI-Research/Text2Video-Zero
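To make the two modifications concrete, here is a minimal PyTorch sketch of the ideas as the abstract describes them. The function names (warp_latent, cross_frame_attention), the assumed tensor layouts, the step sizes dx/dy, and the use of torch.roll as the warping operation are illustrative assumptions, not the authors' implementation; the repository linked above is the reference.

```python
import torch
import torch.nn.functional as F


def warp_latent(latent0, frame_idx, dx=0.04, dy=0.0):
    # Sketch of modification (i): derive frame k's latent by applying a small
    # global translation, scaled with the frame index, to the first frame's
    # latent so the scene and background move coherently over time.
    # latent0: (batch, channels, H, W); dx/dy are illustrative step sizes.
    b, c, h, w = latent0.shape
    shift_x = int(round(dx * w * frame_idx))
    shift_y = int(round(dy * h * frame_idx))
    # torch.roll stands in for the warping operation in this sketch.
    return torch.roll(latent0, shifts=(shift_y, shift_x), dims=(-2, -1))


def cross_frame_attention(q, k, v, num_frames):
    # Sketch of modification (ii): every frame's queries attend to the keys and
    # values of the FIRST frame instead of their own, preserving the appearance
    # and identity of the foreground object across frames.
    # q, k, v: (batch * num_frames, seq_len, dim) projections from a
    # text-to-image UNet self-attention layer (assumed layout).
    bf, seq_len, dim = k.shape
    batch = bf // num_frames

    # Make the frame axis explicit, then broadcast the first frame's
    # keys/values to all frames.
    k0 = k.reshape(batch, num_frames, seq_len, dim)[:, :1]
    v0 = v.reshape(batch, num_frames, seq_len, dim)[:, :1]
    k0 = k0.expand(-1, num_frames, -1, -1).reshape(bf, seq_len, dim)
    v0 = v0.expand(-1, num_frames, -1, -1).reshape(bf, seq_len, dim)

    # Standard scaled dot-product attention against the first frame's context.
    return F.scaled_dot_product_attention(q, k0, v0)
```

In a full pipeline the cross-frame attention would replace the self-attention computation inside a pretrained text-to-image UNet (the "reprogramming" the abstract mentions), while the motion-enriched latents are fed to the usual denoising loop; those integration details are omitted from this sketch.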

Community

a sunset over a field of crops with the sun shining through the clouds and the sun shining through the leaves

a painting of a man with a staff and sheep

A rabbit looking in a mirror beside Alice in wonderland

Models citing this paper 5

Datasets citing this paper 0

No datasets link to this paper yet.

Cite arxiv.org/abs/2303.13439 in a dataset README.md to link it from this page.

Spaces citing this paper 25

Collections including this paper 3