infer_kandinsky_2_inpaint
Ikomia HUB

About

1.0.0
Apache-2.0

Kandinsky 2.2 inpainting diffusion model.

Task: Inpainting
Tags: Latent Diffusion, Hugging Face, Kandinsky, Inpaint, Generative

Kandinsky 2.2 inpaint is a text-conditional diffusion model based on unCLIP and latent diffusion, composed of a transformer-based image prior model, a U-Net diffusion model, and a decoder.

[Illustration: inpainted cat]

Note: This algorithm requires 10 GB of GPU RAM.
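You can check beforehand whether your GPU has enough memory; a minimal sketch using PyTorch (assuming PyTorch is installed, which this algorithm's environment typically includes):

```python
import torch

# Report the total memory of the first CUDA device, if any
if torch.cuda.is_available():
    props = torch.cuda.get_device_properties(0)
    print(f"{props.name}: {props.total_memory / 1e9:.1f} GB")
else:
    print("No CUDA GPU detected")
```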

🚀 Use with Ikomia API

1. Install Ikomia API

We strongly recommend using a virtual environment. If you're not sure where to start, we offer a tutorial here.

pip install ikomia
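If you are new to virtual environments, a minimal setup with Python's standard `venv` module looks like this (Linux/macOS shell; the environment name `ikomia_env` is an arbitrary choice):

```shell
# Create an isolated environment (the name "ikomia_env" is arbitrary)
python3 -m venv ikomia_env

# Activate it so the "pip install ikomia" above lands in the environment
. ikomia_env/bin/activate
```

On Windows, the activation script is `ikomia_env\Scripts\activate` instead.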

2. Create your workflow

from ikomia.dataprocess.workflow import Workflow
from ikomia.utils.displayIO import display


# Init your workflow
wf = Workflow()

# Detect the object to replace with Grounding DINO
grounding_dino = wf.add_task(name="infer_grounding_dino", auto_connect=True)
grounding_dino.set_parameters({'prompt': 'cat'})

# Segment the detected object with Mobile SAM
mobile_sam = wf.add_task(name="infer_mobile_segment_anything", auto_connect=True)

# Inpaint the masked region with Kandinsky 2.2
kandinsky = wf.add_task(name="infer_kandinsky_2_inpaint", auto_connect=True)

# Run on your image
wf.run_on(url="https://raw.githubusercontent.com/Ikomia-dev/notebooks/main/examples/img/img_cat.jpg")

# Display the inpainted image
display(kandinsky.get_output(0).get_image())
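Beyond displaying it, you can also save the result to disk. Assuming `get_image()` returns an RGB uint8 NumPy array (as Ikomia image outputs typically do), a sketch using Pillow with a stand-in array:

```python
import numpy as np
from PIL import Image

# Stand-in for kandinsky.get_output(0).get_image(), which we assume
# returns an RGB uint8 NumPy array (a hypothetical gray 64x64 image here)
result = np.full((64, 64, 3), 128, dtype=np.uint8)

# Save the array to disk with Pillow
Image.fromarray(result).save("inpainted_cat.png")
```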

☀️ Use with Ikomia Studio

Ikomia Studio offers a friendly UI with the same features as the API.

  • If you haven't started using Ikomia Studio yet, download and install it from this page.

  • For additional guidance on getting started with Ikomia Studio, check out this blog post.

📝 Set algorithm parameters

  • prompt (str) - default 'portrait of a young women, blue eyes, cinematic': Text prompt to guide the image generation.
  • negative_prompt (str, optional) - default 'low quality, bad quality': Text prompt describing what the model should avoid generating. Ignored when guidance is not used (i.e., when guidance_scale is less than 1).
  • prior_num_inference_steps (int) - default '25': Number of denoising steps of the prior model (CLIP).
  • prior_guidance_scale (float) - default '4.0': A higher guidance scale encourages the prior to follow the text prompt more closely, usually at the expense of image quality (minimum: 1; maximum: 20).
  • num_inference_steps (int) - default '150': Number of denoising steps. More steps usually yield a higher-quality image at the cost of slower inference.
  • guidance_scale (float) - default '1.0': A higher guidance scale encourages the model to follow the text prompt more closely, usually at the expense of image quality (minimum: 1; maximum: 20).
  • seed (int) - default '-1': Seed value. '-1' generates a random number between 0 and 191965535.

Note: The "prior model" interprets and encodes the input text to capture the desired image content, while the "decoder model" translates this encoded representation into the actual image.
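The '-1 means random' seed convention above can be sketched as follows (a hypothetical illustration of the convention, not the actual Ikomia implementation):

```python
import random

def resolve_seed(seed: int) -> int:
    """Sketch of the '-1 means random' seed convention."""
    if seed == -1:
        # -1 draws a fresh random seed in the documented range
        return random.randint(0, 191965535)
    # Any other value is passed through unchanged, for reproducible runs
    return seed

print(resolve_seed(42))   # a fixed seed is passed through unchanged
print(resolve_seed(-1))   # -1 draws a fresh random seed each call
```

Passing a fixed seed makes repeated runs reproduce the same image; -1 gives a different result each run.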

from ikomia.dataprocess.workflow import Workflow
from ikomia.utils.displayIO import display


# Init your workflow
wf = Workflow()

# Detect the object to replace with Grounding DINO
grounding_dino = wf.add_task(name="infer_grounding_dino", auto_connect=True)
grounding_dino.set_parameters({'prompt': 'cat'})

# Segment the detected object with Mobile SAM
mobile_sam = wf.add_task(name="infer_mobile_segment_anything", auto_connect=True)

# Inpaint the masked region with Kandinsky 2.2
kandinsky = wf.add_task(name="infer_kandinsky_2_inpaint", auto_connect=True)

kandinsky.set_parameters({
    'prompt': 'a dog, high resolution',
    'negative_prompt': 'lowres, text, error, cropped, worst quality, low quality, ugly',
    'prior_num_inference_steps': '25',
    'prior_guidance_scale': '4.0',
    'num_inference_steps': '150',
    'guidance_scale': '4.0',
    'seed': '-1',
    })


# Run on your image
wf.run_on(url="https://raw.githubusercontent.com/Ikomia-dev/notebooks/main/examples/img/img_cat.jpg")

# Display the inpainted image
display(kandinsky.get_output(0).get_image())

🔍 Explore algorithm outputs

Every algorithm produces specific outputs, yet they can all be explored in the same way using the Ikomia API. For a more in-depth understanding of managing algorithm outputs, please refer to the documentation.

from ikomia.dataprocess.workflow import Workflow

# Init your workflow
wf = Workflow()

# Detect the object to replace with Grounding DINO
grounding_dino = wf.add_task(name="infer_grounding_dino", auto_connect=True)
grounding_dino.set_parameters({'prompt': 'cat'})

# Segment the detected object with Mobile SAM
mobile_sam = wf.add_task(name="infer_mobile_segment_anything", auto_connect=True)

# Inpaint the masked region with Kandinsky 2.2
kandinsky = wf.add_task(name="infer_kandinsky_2_inpaint", auto_connect=True)

# Run on your image
wf.run_on(url="https://raw.githubusercontent.com/Ikomia-dev/notebooks/main/examples/img/img_cat.jpg")

# Iterate over outputs
for output in kandinsky.get_outputs():
    # Print information
    print(output)
    # Export it to JSON (to_json() returns a string)
    print(output.to_json())
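Assuming `to_json()` returns a JSON string, the export can be parsed with the standard library (the field names below are illustrative stand-ins, not the actual Ikomia schema):

```python
import json

# Stand-in for output.to_json(); real field names depend on the output type
payload = '{"type": "image", "width": 512, "height": 512}'

data = json.loads(payload)
print(data["width"], data["height"])  # -> 512 512
```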

Developer

  • Ikomia

License

Apache License 2.0
Read license full text

A permissive license whose main conditions require preservation of copyright and license notices. Contributors provide an express grant of patent rights. Licensed works, modifications, and larger works may be distributed under different terms and without source code.

Permissions

  • Commercial use
  • Modification
  • Distribution
  • Patent use
  • Private use

Conditions

  • License and copyright notice
  • State changes

Limitations

  • Trademark use
  • Liability
  • Warranty

This is not legal advice: this description is for informational purposes only and does not constitute the license itself. Provided by choosealicense.com.