# One-to-All Animation: Alignment-Free Character Animation and Image Pose Transfer
This repository contains the sample training data and benchmarks associated with the paper One-to-All Animation: Alignment-Free Character Animation and Image Pose Transfer.
The paper presents a unified framework for high-fidelity character animation and image pose transfer from references with arbitrary layouts, addressing spatial misalignment and partially visible references.
- Project Page
- GitHub Repository
## Highlights
We provide a complete and reproducible training and evaluation pipeline:
- Full Training Code: Three-stage progressive training from scratch
- Complete Benchmarks: Reproduction code and pre-trained checkpoints
- Flexible Training Codebase: Multi-resolution, multi-aspect-ratio, and multi-frame training
- Datasets: Pre-processed open-source datasets + self-collected cartoon data
## Quick Inference (Sample Usage)
To perform quick inference with the models, follow these steps from the GitHub repository:
### Dependencies and Installation
**Clone Repo**

```bash
git clone https://github.com/ssj9596/One-to-All-Animation.git
cd One-to-All-Animation
```

**Create Conda Environment and Install Dependencies**
```bash
# create new conda env
conda create -n one-to-all python=3.12
conda activate one-to-all

# install pytorch
pip install torch==2.5.1 torchvision==0.20.1 torchaudio==2.5.1 --index-url https://download.pytorch.org/whl/cu124
# or
pip install torch==2.5.1 torchvision==0.20.1 torchaudio==2.5.1 -i https://mirrors.aliyun.com/pypi/simple/

# install python dependencies
pip install -r requirements.txt -i https://mirrors.aliyun.com/pypi/simple/

# (Recommended) install flash attention 3 (or 2) from source:
# https://github.com/Dao-AILab/flash-attention
```
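As an optional sanity check (our suggestion, not one of the official steps), you can confirm that the CUDA build of PyTorch is active before moving on:

```bash
# Optional: verify the installed PyTorch version and GPU visibility
python -c "import torch; print(torch.__version__, torch.cuda.is_available())"
```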
## Training from scratch
**Data Collection Required:** We find that current open-source datasets are insufficient for training from scratch; we strongly recommend collecting at least 3,000 additional high-quality video samples for better results.
We divide the training process into several steps to help you reproduce our results from scratch (using the 1.3B model as an example).
### Download Pretrained Models
Download the base model from HuggingFace: Wan-AI/Wan2.1-T2V-1.3B-Diffusers
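One way to fetch the model is with the `huggingface_hub` CLI; this is a generic sketch rather than the repository's prescribed method, and the `--local-dir` target name is an assumption:

```bash
# Download the base model weights (target directory name is illustrative)
pip install -U "huggingface_hub[cli]"
huggingface-cli download Wan-AI/Wan2.1-T2V-1.3B-Diffusers --local-dir ./Wan2.1-T2V-1.3B-Diffusers
```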
### Download Training Datasets and Pose Pool
```bash
cd datasets
bash setup_datasets.sh
```

This will download and prepare:
- Training datasets (open-source + cartoon): `datasets/opensource_dataset/`
- Pose pool for face enhancement: `datasets/opensource_pose_pool/`
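As a quick check (our suggestion, not an official step), you can list the prepared directories from the repository root:

```bash
# Confirm the setup script produced the directories listed above
ls datasets/opensource_dataset/
ls datasets/opensource_pose_pool/
```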
Manual Download Links
- Training datasets (open-source + cartoon):
### Training
We provide three-stage training scripts:
- Stage 1: Reference Extractor

```bash
cd video-generation
bash training_scripts/train1.3b_only_refextractor_2d.sh

# Convert checkpoint to FP32
cd outputs_wanx1.3b/train1.3b_only_refextractor_2d/checkpoint-xxx
mkdir fp32_model_xxx
python zero_to_fp32.py . fp32_model_xxx --safe_serialization

# Run inference (update model path in inference_refextractor.py first)
cd ../../../
# Edit inference_refextractor.py and change ckpt_path to:
# ./outputs_wanx1.3b/train1.3b_only_refextractor_2d/checkpoint-xxx/fp32_model_xxx
python inference_refextractor.py
```

- Stage 2: Pose Control
```bash
bash training_scripts/train1.3b_posecontrol_prefix_2d.sh
```

- Stage 3: Token Replace for Long Video Generation
```bash
bash training_scripts/train1.3b_posecontrol_prefix_2d_tokenreplace.sh
```
**Training Notes:**
- Each stage uses a different training resolution; check the scripts for the specific resolution settings.
- Fine-tuning from our checkpoints: To continue training from our pre-trained models, use the Stage 3 script directly and modify the checkpoint path (see the sketch below).
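A minimal sketch of the fine-tuning workflow just described; where the checkpoint path is set inside the Stage 3 script is an assumption, so inspect the script and edit it before launching:

```bash
# Locate the checkpoint path setting (exact variable name is an assumption),
# point it at the downloaded pre-trained weights, then launch Stage 3
grep -n "checkpoint" training_scripts/train1.3b_posecontrol_prefix_2d_tokenreplace.sh
bash training_scripts/train1.3b_posecontrol_prefix_2d_tokenreplace.sh
```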
## Reproduce Paper Results
We provide scripts to reproduce the quantitative results reported in our paper.
### Download Benchmark

```bash
cd benchmark
bash setup_datasets.sh
```

### Prepare Model Input
```bash
cd ../video-generation
python reproduce/infer_preprocess.py
```

### Run Inference

We provide inference scripts for different model sizes and datasets:

```bash
# TikTok dataset
python reproduce/inference_tiktok1.3b.py   # 1.3B model
python reproduce/inference_tiktok14b.py    # 14B model

# Cartoon dataset
python reproduce/inference_cartoon1.3b.py  # 1.3B model
python reproduce/inference_cartoon14b.py   # 14B model
```

### Prepare GT/Pred Pairs for the Judge
```bash
cd ../benchmark

# TikTok dataset
python prepare_eval_frames_tiktok.py

# Cartoon dataset
python prepare_eval_frames_cartoon.py
```

### Run Judge
```bash
# prepare the DisCo environment and the LPIPS/FVD checkpoints for the judge
cd DisCo

# TikTok dataset
bash eval_tiktok.sh
python summary.py
```
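For intuition on what the judge computes, here is a minimal LPIPS sketch using the `lpips` pip package on a single gt/pred pair; the DisCo scripts above handle this (plus FVD and other metrics) at scale, and the random tensors below are placeholders for real frames:

```bash
pip install lpips
python - <<'PY'
import lpips, torch
loss_fn = lpips.LPIPS(net='alex')           # downloads AlexNet weights on first use
gt   = torch.rand(1, 3, 256, 256) * 2 - 1   # placeholder frames, scaled to [-1, 1]
pred = torch.rand(1, 3, 256, 256) * 2 - 1
print('LPIPS:', loss_fn(gt, pred).item())   # lower = more perceptually similar
PY
```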