Use from the Diffusers library
pip install -U diffusers transformers accelerate

import torch
from diffusers import DiffusionPipeline

# Switch "cuda" to "mps" on Apple devices
pipe = DiffusionPipeline.from_pretrained("levihsu/OOTDiffusion", torch_dtype=torch.bfloat16)
pipe = pipe.to("cuda")

prompt = "Astronaut in a jungle, cold color palette, muted colors, detailed, 8k"
image = pipe(prompt).images[0]
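The comment above suggests switching the device string by hand on Apple devices. A minimal sketch of picking the device automatically instead (the `pick_device` helper is my own naming, not part of Diffusers or this repository):

```python
import torch

def pick_device() -> str:
    # Prefer CUDA, then Apple's Metal backend (MPS), then fall back to CPU.
    if torch.cuda.is_available():
        return "cuda"
    if torch.backends.mps.is_available():
        return "mps"
    return "cpu"

device = pick_device()
```

You could then call `pipe.to(device)` instead of hard-coding the device string.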

OOTDiffusion

Our OOTDiffusion GitHub repository

🤗 Try out OOTDiffusion

(Thanks to ZeroGPU for providing A100 GPUs)

OOTDiffusion: Outfitting Fusion based Latent Diffusion for Controllable Virtual Try-on [arXiv paper]
Yuhao Xu, Tao Gu, Weifeng Chen, Chengcai Chen
Xiao-i Research

Our model checkpoints trained on VITON-HD (half-body) and Dress Code (full-body) have been released.

  • 📢📢 We now support ONNX for human parsing. Most environment issues should have been resolved.
  • Please also download clip-vit-large-patch14 into the checkpoints folder.
  • We have only tested our code and models on Linux (Ubuntu 22.04).
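Since the code expects certain model folders under checkpoints, a quick pre-flight check can catch missing downloads before inference. A minimal sketch, assuming a layout with clip-vit-large-patch14 and a humanparsing folder (the `missing_checkpoints` helper and the exact directory names are my assumptions; the authoritative layout is in the GitHub repository):

```python
from pathlib import Path

# Assumed layout; check the OOTDiffusion GitHub repo for the actual list.
REQUIRED = [
    Path("checkpoints") / "clip-vit-large-patch14",
    Path("checkpoints") / "humanparsing",
]

def missing_checkpoints(required=REQUIRED):
    """Return the required checkpoint paths that do not exist yet."""
    return [p for p in required if not p.exists()]

for path in missing_checkpoints():
    print(f"missing: {path} -- download it before running inference")
```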

[Demo and workflow images]

Citation

@article{xu2024ootdiffusion,
  title={OOTDiffusion: Outfitting Fusion based Latent Diffusion for Controllable Virtual Try-on},
  author={Xu, Yuhao and Gu, Tao and Chen, Weifeng and Chen, Chengcai},
  journal={arXiv preprint arXiv:2403.01779},
  year={2024}
}