InstantID: Zero-shot Identity-Preserving Generation in Seconds

Official 🤗 Gradio demo for InstantID: Zero-shot Identity-Preserving Generation in Seconds.
We are organizing a Spring Festival event with HuggingFace from February 7 to February 25, and you can now generate pictures in Spring Festival costumes. Happy Year of the Dragon 🐲! Share the joy with your family.
How to use:

  1. Upload an image with a face. For images with multiple faces, we will only detect the largest face. Ensure the face is not too small and is clearly visible without significant obstructions or blurring.
  2. (Optional) You can upload another image as a reference for the face pose. If you don't, we will extract facial landmarks from the first detected face image. If you used a cropped face in step 1, it is recommended to upload a separate reference image to define a new face pose.
  3. (Optional) You can select multiple ControlNet models to guide the generation process. The default uses IdentityNet only. The available ControlNet models are pose skeleton, canny edge, and depth, and you can adjust the strength of each.
  4. Enter a text prompt, as done in normal text-to-image models.
  5. Click the Submit button to begin customization.
  6. Share your customized photo with your friends and enjoy! 😊
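
If you prefer to run the same workflow programmatically, the sketch below follows the example usage published in the InstantID repository. The checkpoint paths, example image, and prompts are placeholders to adapt to your own setup, and `pipeline_stable_diffusion_xl_instantid` is the pipeline module shipped with the repository.

```python
# Minimal sketch of the demo's workflow, based on the InstantID repository's
# example usage. Checkpoint paths, the input image, and prompts are placeholders.
import cv2
import numpy as np
import torch
from diffusers.models import ControlNetModel
from diffusers.utils import load_image
from insightface.app import FaceAnalysis
from pipeline_stable_diffusion_xl_instantid import StableDiffusionXLInstantIDPipeline, draw_kps

# Face analysis model: detects faces and extracts identity embeddings.
app = FaceAnalysis(name="antelopev2", root="./",
                   providers=["CUDAExecutionProvider", "CPUExecutionProvider"])
app.prepare(ctx_id=0, det_size=(640, 640))

# IdentityNet (a ControlNet conditioned on facial landmarks) plus the SDXL base model.
controlnet = ControlNetModel.from_pretrained("./checkpoints/ControlNetModel", torch_dtype=torch.float16)
pipe = StableDiffusionXLInstantIDPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", controlnet=controlnet, torch_dtype=torch.float16
)
pipe.cuda()
pipe.load_ip_adapter_instantid("./checkpoints/ip-adapter.bin")

# Steps 1-2: extract the identity embedding and facial keypoints from the uploaded image.
face_image = load_image("./examples/face.jpg")  # placeholder path
face_info = app.get(cv2.cvtColor(np.array(face_image), cv2.COLOR_RGB2BGR))
face_info = sorted(face_info,
                   key=lambda f: (f["bbox"][2] - f["bbox"][0]) * (f["bbox"][3] - f["bbox"][1]))[-1]  # largest face
face_emb = face_info["embedding"]
face_kps = draw_kps(face_image, face_info["kps"])  # use another image's keypoints for a different pose

# Steps 4-5: generate with a text prompt, as in a normal text-to-image model.
image = pipe(
    prompt="analog film photo of a person, faded film, vignette, highly detailed",
    negative_prompt="lowres, bad anatomy, worst quality, blurry",
    image_embeds=face_emb,
    image=face_kps,
    controlnet_conditioning_scale=0.8,
    ip_adapter_scale=0.8,
).images[0]
image.save("result.jpg")
```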

LCM speeds up inference, but the trade-off is the quality of the generated image. It performs better on portrait face images than on distant faces.
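
When LCM is enabled, the idea is to swap in a latent-consistency setup so that only a handful of denoising steps are needed. A rough sketch with the public LCM-LoRA weights for SDXL and diffusers' `LCMScheduler` is shown below; the specific step count and guidance values are assumptions, not the demo's exact settings.

```python
# Hypothetical LCM-LoRA acceleration on top of the pipeline from the sketch above.
# Fewer denoising steps trade some image quality for speed.
from diffusers import LCMScheduler

pipe.load_lora_weights("latent-consistency/lcm-lora-sdxl")  # assumed LCM-LoRA checkpoint
pipe.scheduler = LCMScheduler.from_config(pipe.scheduler.config)

image = pipe(
    prompt="analog film photo of a person, faded film, vignette",
    image_embeds=face_emb,
    image=face_kps,
    num_inference_steps=5,  # LCM typically needs only a few steps
    guidance_scale=1.5,     # low guidance tends to work better with LCM
).images[0]
```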

ControlNet: use pose for skeleton inference, canny for edge detection, and depth for depth-map estimation. You can try all three together to control the generation process. The demo also exposes advanced settings for the style template, ControlNet strengths, number of inference steps, guidance scale, random seed, and scheduler.
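
As a rough illustration of stacking an additional ControlNet next to IdentityNet, the diffusers ControlNet API accepts lists of models, conditioning images, and scales. The depth model name, the precomputed depth map, and the scales below are assumptions for illustration rather than this demo's exact configuration.

```python
# Hypothetical sketch: combining IdentityNet with a depth ControlNet.
# face_emb and face_kps come from the face-preprocessing sketch above;
# model names, the depth map, and scales are illustrative assumptions.
import torch
from diffusers.models import ControlNetModel
from diffusers.utils import load_image

identitynet = ControlNetModel.from_pretrained("./checkpoints/ControlNetModel", torch_dtype=torch.float16)
depth_net = ControlNetModel.from_pretrained("diffusers/controlnet-depth-sdxl-1.0", torch_dtype=torch.float16)

pipe = StableDiffusionXLInstantIDPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0",
    controlnet=[identitynet, depth_net],  # diffusers accepts a list of ControlNets
    torch_dtype=torch.float16,
)
pipe.cuda()
pipe.load_ip_adapter_instantid("./checkpoints/ip-adapter.bin")

depth_map = load_image("./examples/depth.png")  # assumed precomputed depth conditioning image

image = pipe(
    prompt="a person in festive costume, photorealistic",
    image_embeds=face_emb,
    image=[face_kps, depth_map],               # one conditioning image per ControlNet
    controlnet_conditioning_scale=[0.8, 0.5],  # per-ControlNet strength
).images[0]
```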

📝 Citation
If our work is helpful for your research or applications, please cite us via:

@article{wang2024instantid,
  title={InstantID: Zero-shot Identity-Preserving Generation in Seconds},
  author={Wang, Qixun and Bai, Xu and Wang, Haofan and Qin, Zekui and Chen, Anthony},
  journal={arXiv preprint arXiv:2401.07519},
  year={2024}
}

📧 Contact
If you have any questions, please feel free to open an issue or reach out to us directly at [email protected].