infer_depth_anything_v2
About
Depth Anything V2 is a highly practical solution for robust monocular depth estimation and significantly outperforms V1.
🚀 Use with Ikomia API
1. Install Ikomia API
We strongly recommend using a virtual environment. If you're not sure where to start, we offer a tutorial here.
pip install ikomia
2. Create your workflow
```python
from ikomia.dataprocess.workflow import Workflow
from ikomia.utils.displayIO import display

# Init your workflow
wf = Workflow()

# Add algorithm
algo = wf.add_task(name="infer_depth_anything_v2", auto_connect=True)

# Run directly on your image
wf.run_on(url="https://github.com/Ikomia-dev/notebooks/blob/main/examples/img/img_dog.png?raw=true")

# Display the results
display(algo.get_output(0).get_image())  # Colormap
display(algo.get_output(1).get_image())  # Grayscale
```
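If you want to save the depth maps to disk instead of displaying them, here is a minimal sketch, assuming `get_image()` returns NumPy image arrays; the output file names are hypothetical:

```python
import cv2
from ikomia.dataprocess.workflow import Workflow

# Init your workflow and add the algorithm
wf = Workflow()
algo = wf.add_task(name="infer_depth_anything_v2", auto_connect=True)

# Run on your image
wf.run_on(url="https://github.com/Ikomia-dev/notebooks/blob/main/examples/img/img_dog.png?raw=true")

# Save both outputs, assuming get_image() returns NumPy arrays (hypothetical file names).
# Depending on the channel order of the colormap output, an RGB->BGR conversion may be needed.
cv2.imwrite("depth_colormap.png", algo.get_output(0).get_image())
cv2.imwrite("depth_grayscale.png", algo.get_output(1).get_image())
```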
☀️ Use with Ikomia Studio
Ikomia Studio offers a friendly UI with the same features as the API.
- If you haven't started using Ikomia Studio yet, download and install it from this page.
- For additional guidance on getting started with Ikomia Studio, check out this blog post.
📝 Set algorithm parameters
- model_name (str) - default 'vits': Name of the ViT pre-trained model.
    - 'vits': 24.8M parameters
    - 'vitb': 97.5M parameters
    - 'vitl': 335.3M parameters
- input_size (str) - default '640': Size of the input image.
- cuda (bool): If True, CUDA-based inference (GPU). If False, run on CPU.
```python
from ikomia.dataprocess.workflow import Workflow
from ikomia.utils.displayIO import display

# Init your workflow
wf = Workflow()

# Add algorithm
algo = wf.add_task(name="infer_depth_anything_v2", auto_connect=True)

# Set parameters
algo.set_parameters({
    'model_name': 'vits',
    'input_size': '640',
    'cuda': 'True'
})

# Run directly on your image
wf.run_on(url="https://github.com/Ikomia-dev/notebooks/blob/main/examples/img/img_dog.png?raw=true")

# Display the results
display(algo.get_output(0).get_image())
display(algo.get_output(1).get_image())
```
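The same parametrized workflow can be reused across several images. Below is a minimal sketch that runs the algorithm on every PNG in a local folder; the folder path is hypothetical, and it assumes `run_on()` also accepts a local file path:

```python
from pathlib import Path
from ikomia.dataprocess.workflow import Workflow

# Init workflow and configure the algorithm once
wf = Workflow()
algo = wf.add_task(name="infer_depth_anything_v2", auto_connect=True)
algo.set_parameters({'model_name': 'vits', 'input_size': '640', 'cuda': 'True'})

# Hypothetical input folder; run_on(path=...) is assumed to accept a local image path
for image_path in Path("my_images").glob("*.png"):
    wf.run_on(path=str(image_path))
    depth = algo.get_output(1).get_image()  # grayscale depth map as a NumPy array
    print(image_path.name, depth.shape)
```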
🔍 Explore algorithm outputs
Every algorithm produces specific outputs, yet they can all be explored in the same way using the Ikomia API. For a more in-depth understanding of managing algorithm outputs, please refer to the documentation.
```python
from ikomia.dataprocess.workflow import Workflow

# Init your workflow
wf = Workflow()

# Add algorithm
algo = wf.add_task(name="infer_depth_anything_v2", auto_connect=True)

# Run on your image
wf.run_on(url="https://github.com/Ikomia-dev/notebooks/blob/main/examples/img/img_dog.png?raw=true")

# Iterate over outputs
for output in algo.get_outputs():
    # Print information
    print(output)
    # Export it to JSON
    output.to_json()
```
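To keep the exported data, you can write each output to its own file. This is a minimal sketch assuming `to_json()` returns a JSON string; the file names are hypothetical:

```python
from pathlib import Path
from ikomia.dataprocess.workflow import Workflow

wf = Workflow()
algo = wf.add_task(name="infer_depth_anything_v2", auto_connect=True)
wf.run_on(url="https://github.com/Ikomia-dev/notebooks/blob/main/examples/img/img_dog.png?raw=true")

# Write each output to a separate JSON file (hypothetical names),
# assuming to_json() returns a JSON string
for i, output in enumerate(algo.get_outputs()):
    Path(f"output_{i}.json").write_text(output.to_json())
```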
Developer
Ikomia
License
Apache License 2.0
A permissive license whose main conditions require preservation of copyright and license notices. Contributors provide an express grant of patent rights. Licensed works, modifications, and larger works may be distributed under different terms and without source code.
| Permissions | Conditions | Limitations |
|---|---|---|
| Commercial use | License and copyright notice | Trademark use |
| Modification | State changes | Liability |
| Distribution | | Warranty |
| Patent use | | |
| Private use | | |
This is not legal advice: this description is for informational purposes only and does not constitute the license itself. Provided by choosealicense.com.