infer_mmlab_segmentation

About

2.0.4
Apache-2.0

Inference for MMLAB segmentation models


Run semantic segmentation algorithms from the MMLAB framework.

Models come from MMLAB's model zoo when custom training is disabled. Otherwise, you can load a model trained with the algorithm train_mmlab_segmentation from Ikomia HUB. In this case, make sure to set the parameters for the config file (.py) and the model file (.pth). Both files are produced by the training algorithm.

Example image

🚀 Use with Ikomia API

1. Install Ikomia API

We strongly recommend using a virtual environment. If you're not sure where to start, we offer a tutorial here.

pip install ikomia

2. Create your workflow

from ikomia.dataprocess.workflow import Workflow
from ikomia.utils.displayIO import display

# Init your workflow
wf = Workflow()

# Add segmentation algorithm
segmentor = wf.add_task(name="infer_mmlab_segmentation", auto_connect=True)

# Run the workflow on image
wf.run_on(url="https://production-media.paperswithcode.com/datasets/Foggy_Cityscapes-0000003414-fb7dc023.jpg")

# Get and display results
image_output = segmentor.get_output(0)
segmentation_output = segmentor.get_output(1)

display(image_output.get_image_with_mask(segmentation_output), title="MMLAB segmentation")

☀️ Use with Ikomia Studio

Ikomia Studio offers a friendly UI with the same features as the API.

  • If you haven't started using Ikomia Studio yet, download and install it from this page.

  • For additional guidance on getting started with Ikomia Studio, check out this blog post.

📝 Set algorithm parameters

from ikomia.dataprocess.workflow import Workflow

# Init your workflow
wf = Workflow()

# Add segmentation algorithm
segmentor = wf.add_task(name="infer_mmlab_segmentation", auto_connect=True)

segmentor.set_parameters({
"model_name": "pidnet",
"model_config": "pidnet-m_2xb6-120k_1024x1024-cityscapes",
"config_file": "",
"model_weight_file": "",
"cuda": "True",
})

# Run the workflow on image
wf.run_on(url="https://production-media.paperswithcode.com/datasets/Foggy_Cityscapes-0000003414-fb7dc023.jpg")

  • model_name (str, default="maskformer"): model name.
  • model_config (str, default="maskformer_r50-d32_8xb2-160k_ade20k-512x512"): name of the model configuration file.
  • config_file (str, default=""): path to the model config file (only if use_custom_model=True). This file is generated at the end of a custom training. Use the algorithm train_mmlab_segmentation from Ikomia HUB to train a custom model.
  • model_weight_file (str, default=""): path to the model weights file (.pth) (only if use_custom_model=True). This file is also generated at the end of a custom training.
  • cuda (bool, default=True): CUDA acceleration if True, run on CPU otherwise.
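A minimal sketch of switching to a custom-trained model: the dictionary below uses placeholder paths (hypothetical, to be replaced by the files your own training run produced) and sanity-checks the expected extensions before the dictionary would be handed to set_parameters on the task:

```python
# Placeholder paths (hypothetical): replace them with the config and
# checkpoint produced by your own training run.
custom_params = {
    "config_file": "path/to/your/config.py",
    "model_weight_file": "path/to/your/model.pth",
    "cuda": "True",
}

# Quick sanity check on the expected file extensions before calling
# segmentor.set_parameters(custom_params) on the task.
assert custom_params["config_file"].endswith(".py")
assert custom_params["model_weight_file"].endswith(".pth")
```

Passing both files together is what switches the algorithm from the model zoo to your custom weights.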

The MMLAB segmentation framework offers a wide range of models. To ease the choice of a (model_name, model_config) pair, you can call the function get_model_zoo() to get a list of possible values.

from ikomia.dataprocess.workflow import Workflow

# Init your workflow
wf = Workflow()

# Add segmentation algorithm
segmentor = wf.add_task(name="infer_mmlab_segmentation", auto_connect=True)

# Get list of possible models (model_name, model_config)
print(segmentor.get_model_zoo())
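Since the zoo can contain many entries, a small helper can narrow the list to one architecture. This is a sketch that assumes each entry is a dict with model_name and model_config keys, and it runs on a hand-written sample rather than real get_model_zoo() output:

```python
def filter_zoo(entries, model_name):
    """Keep only the configs belonging to the given architecture."""
    return [e for e in entries if e["model_name"] == model_name]

# Hand-written sample entries for illustration (not actual zoo output)
sample_zoo = [
    {"model_name": "pidnet", "model_config": "pidnet-m_2xb6-120k_1024x1024-cityscapes"},
    {"model_name": "pidnet", "model_config": "pidnet-s_2xb6-120k_1024x1024-cityscapes"},
    {"model_name": "maskformer", "model_config": "maskformer_r50-d32_8xb2-160k_ade20k-512x512"},
]

print(filter_zoo(sample_zoo, "pidnet"))  # the two pidnet configs
```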

🔍 Explore algorithm outputs

Every algorithm produces specific outputs, yet they can all be explored in the same way using the Ikomia API. For a more in-depth understanding of managing algorithm outputs, please refer to the documentation.

from ikomia.dataprocess.workflow import Workflow

# Init your workflow
wf = Workflow()

# Add algorithm
algo = wf.add_task(name="infer_mmlab_segmentation", auto_connect=True)

# Run on your image
wf.run_on(url="https://production-media.paperswithcode.com/datasets/Foggy_Cityscapes-0000003414-fb7dc023.jpg")

# Iterate over outputs
for output in algo.get_outputs():
# Print information
print(output)
# Export it to JSON
output.to_json()
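Since to_json() returns a string, persisting an output is just a matter of writing that string to disk. The sketch below uses a hand-made sample string standing in for the real to_json() result, and round-trips it through the json module to pretty-print before saving:

```python
import json
from pathlib import Path

# Hypothetical sample standing in for output.to_json()
sample_json = '{"label": "road", "confidence": 0.97}'

# Parse, then re-serialize with indentation for readability
data = json.loads(sample_json)
out_path = Path("output_0.json")
out_path.write_text(json.dumps(data, indent=2))

print(out_path.read_text())
```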

Developer

  • Ikomia

License

Apache License 2.0
Read license full text

A permissive license whose main conditions require preservation of copyright and license notices. Contributors provide an express grant of patent rights. Licensed works, modifications, and larger works may be distributed under different terms and without source code.

Permissions
  • Commercial use
  • Modification
  • Distribution
  • Patent use
  • Private use

Conditions
  • License and copyright notice
  • State changes

Limitations
  • Trademark use
  • Liability
  • Warranty

This is not legal advice: this description is for informational purposes only and does not constitute the license itself. Provided by choosealicense.com.