
[CVPR 2024] Video-P2P: Video Editing with Cross-attention Control

The official implementation of Video-P2P.

Shaoteng Liu, Yuechen Zhang, Wenbo Li, Zhe Lin, Jiaya Jia

Project Website | arXiv | Hugging Face Demo

Teaser

Changelog

  • 2023.03.20 Release Demo.
  • 2023.03.19 Release Code.
  • 2023.03.09 Paper preprint on arXiv.

Todo

  • Release the code with 6 examples.
  • Update a faster version.
  • Release data.
  • Release the Gradio Demo.
  • Add local Gradio Demo.
  • Release more configs and new applications.

Setup

pip install -r requirements.txt

The code was tested on both Tesla V100 32GB and RTX3090 24GB. At least 20GB VRAM is required.

The environment is similar to Tune-A-Video and prompt-to-prompt.

xformers on the RTX 3090 may run into this issue.
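
Before tuning, you may want to confirm your GPU has the roughly 20GB of VRAM mentioned above. A minimal sketch, assuming PyTorch with CUDA is installed in your environment:

# Minimal sketch: print the detected GPU and its total VRAM (assumes PyTorch with CUDA).
import torch

if torch.cuda.is_available():
    props = torch.cuda.get_device_properties(0)
    print(f"{props.name}: {props.total_memory / 1024 ** 3:.1f} GB VRAM")
else:
    print("No CUDA device detected.")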

Quickstart

Please replace pretrained_model_path with the path to your Stable Diffusion checkpoint.

To download the pre-trained model, please refer to diffusers.
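
For example, one way to fetch a checkpoint locally with diffusers; this is a sketch only, and the model id and output directory below are illustrative rather than prescribed by this repository:

# Sketch: download a Stable Diffusion checkpoint via diffusers and save it locally.
# "runwayml/stable-diffusion-v1-5" and the output directory are illustrative choices;
# use the saved directory as pretrained_model_path.
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5")
pipe.save_pretrained("./checkpoints/stable-diffusion-v1-5")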

# Stage 1: Tuning to do model initialization.
# You can minimize the tuning epochs to speed up.
python run_tuning.py --config="configs/rabbit-jump-tune.yaml"
# Stage 2: Attention Control
# We develop a faster mode (about 1 min on a V100):
python run_videop2p.py --config="configs/rabbit-jump-p2p.yaml" --fast

# The official mode (about 10 mins on a V100, more stable):
python run_videop2p.py --config="configs/rabbit-jump-p2p.yaml"

Find your results in Video-P2P/outputs/xxx/results.

Dataset

We release our dataset here.

Download it to ./data and explore your creativity!
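
To check that the download landed where you expect, a small sketch that simply lists whatever ends up under ./data (the actual folder layout depends on the released archives):

# Sketch: list the files downloaded under ./data.
from pathlib import Path

for path in sorted(Path("data").rglob("*")):
    if path.is_file():
        print(path)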

Results

Result videos in the repository correspond to the following configs:

configs/rabbit-jump-p2p.yaml
configs/penguin-run-p2p.yaml
configs/man-motor-p2p.yaml
configs/car-drive-p2p.yaml
configs/tiger-forest-p2p.yaml
configs/bird-forest-p2p.yaml

Gradio demo

Run the following command to launch the local demo built with Gradio:

python app_gradio.py

Find the demo on Hugging Face here. The demo code borrows heavily from Tune-A-Video.

Citation

@misc{liu2023videop2p,
      author={Liu, Shaoteng and Zhang, Yuechen and Li, Wenbo and Lin, Zhe and Jia, Jiaya},
      title={Video-P2P: Video Editing with Cross-attention Control}, 
      journal={arXiv:2303.04761},
      year={2023},
}
