From 4653249392f0bcc7513830c1321d123c450bed6d Mon Sep 17 00:00:00 2001
From: Madhav Kumar
Date: Wed, 30 Apr 2025 22:46:08 +0530
Subject: [PATCH 1/3] Edited zoedepth model card according to specifications.

---
 docs/source/en/model_doc/zoedepth.md | 147 +++++++++++++--------------
 1 file changed, 69 insertions(+), 78 deletions(-)

diff --git a/docs/source/en/model_doc/zoedepth.md b/docs/source/en/model_doc/zoedepth.md
index fefadfba6aa4..464b3326c217 100644
--- a/docs/source/en/model_doc/zoedepth.md
+++ b/docs/source/en/model_doc/zoedepth.md
@@ -14,100 +14,91 @@ rendered properly in your Markdown viewer.
 -->
 
-# ZoeDepth
-
-<div class="flex flex-wrap space-x-1">
-<img alt="PyTorch" src="https://img.shields.io/badge/PyTorch-DE3412?style=flat&logo=pytorch&logoColor=white">
+<div style="float: right;">
+    <div class="flex flex-wrap space-x-1">
+        <img alt="PyTorch" src="https://img.shields.io/badge/PyTorch-DE3412?style=flat&logo=pytorch&logoColor=white">
+    </div>
 </div>
 
-## Overview
+# Zoedepth
 
-The ZoeDepth model was proposed in [ZoeDepth: Zero-shot Transfer by Combining Relative and Metric Depth](https://arxiv.org/abs/2302.12288) by Shariq Farooq Bhat, Reiner Birkl, Diana Wofk, Peter Wonka, Matthias Müller. ZoeDepth extends the [DPT](dpt) framework for metric (also called absolute) depth estimation. ZoeDepth is pre-trained on 12 datasets using relative depth and fine-tuned on two domains (NYU and KITTI) using metric depth. A lightweight head is used with a novel bin adjustment design called metric bins module for each domain. During inference, each input image is automatically routed to the appropriate head using a latent classifier.
+[Zoedepth](https://huggingface.co/papers/2302.12288) is a metric (also called absolute) depth estimation model to generate depth maps directly from images. It gives absolute metric depth in real-world metres, instead of relative depth. ZoeDepth is pre-trained on 12 datasets using relative depth and fine-tuned on two domains (NYU and KITTI) using metric depth. A lightweight head is used with a novel bin adjustment design called metric bins module for each domain. During inference, each input image is automatically routed to the appropriate head using a latent classifier.
 
-The abstract from the paper is the following:
+You can find all the original Zoedepth checkpoints under the [Zoedepth](https://huggingface.co/Intel?search=zoedepth) collection.
 
-*This paper tackles the problem of depth estimation from a single image. Existing work either focuses on generalization performance disregarding metric scale, i.e. relative depth estimation, or state-of-the-art results on specific datasets, i.e. metric depth estimation. We propose the first approach that combines both worlds, leading to a model with excellent generalization performance while maintaining metric scale. Our flagship model, ZoeD-M12-NK, is pre-trained on 12 datasets using relative depth and fine-tuned on two datasets using metric depth. We use a lightweight head with a novel bin adjustment design called metric bins module for each domain. During inference, each input image is automatically routed to the appropriate head using a latent classifier. Our framework admits multiple configurations depending on the datasets used for relative depth pre-training and metric fine-tuning. Without pre-training, we can already significantly improve the state of the art (SOTA) on the NYU Depth v2 indoor dataset. Pre-training on twelve datasets and fine-tuning on the NYU Depth v2 indoor dataset, we can further improve SOTA for a total of 21% in terms of relative absolute error (REL). Finally, ZoeD-M12-NK is the first model that can jointly train on multiple datasets (NYU Depth v2 and KITTI) without a significant drop in performance and achieve unprecedented zero-shot generalization performance to eight unseen datasets from both indoor and outdoor domains.*
+The example below demonstrates how to generate text based on an image with [`Pipeline`] or the [`AutoModel`] class.
 
-<img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/model_doc/zoedepth_architecture_bis.png" alt="drawing" width="600"/>
+<hfoptions id="usage">
+<hfoption id="Pipeline">
 
-<small> ZoeDepth architecture. Taken from the <a href="https://arxiv.org/abs/2302.12288">original paper.</a> </small>
+```py
+import torch
+from transformers import pipeline
 
-This model was contributed by [nielsr](https://huggingface.co/nielsr).
-The original code can be found [here](https://github.com/isl-org/ZoeDepth).
-
-## Usage tips
-
-- ZoeDepth is an absolute (also called metric) depth estimation model, unlike DPT which is a relative depth estimation model. This means that ZoeDepth is able to estimate depth in metric units like meters.
-
-The easiest to perform inference with ZoeDepth is by leveraging the [pipeline API](../main_classes/pipelines.md):
-
-```python
->>> from transformers import pipeline
->>> from PIL import Image
->>> import requests
-
->>> url = "http://images.cocodataset.org/val2017/000000039769.jpg"
->>> image = Image.open(requests.get(url, stream=True).raw)
-
->>> pipe = pipeline(task="depth-estimation", model="Intel/zoedepth-nyu-kitti")
->>> result = pipe(image)
->>> depth = result["depth"]
+pipeline = pipeline(
+    task="depth-estimation",
+    model="Intel/zoedepth-nyu-kitti",
+    device=0
+)
+pipeline(images="http://images.cocodataset.org/val2017/000000039769.jpg")
 ```
 
-Alternatively, one can also perform inference using the classes:
+</hfoption>
+<hfoption id="AutoModel">
 
-```python
->>> from transformers import AutoImageProcessor, ZoeDepthForDepthEstimation
->>> import torch
->>> import numpy as np
->>> from PIL import Image
->>> import requests
-
->>> url = "http://images.cocodataset.org/val2017/000000039769.jpg"
->>> image = Image.open(requests.get(url, stream=True).raw)
-
->>> image_processor = AutoImageProcessor.from_pretrained("Intel/zoedepth-nyu-kitti")
->>> model = ZoeDepthForDepthEstimation.from_pretrained("Intel/zoedepth-nyu-kitti")
-
->>> # prepare image for the model
->>> inputs = image_processor(images=image, return_tensors="pt")
-
->>> with torch.no_grad():
-...     outputs = model(inputs)
-
->>> # interpolate to original size and visualize the prediction
->>> ## ZoeDepth dynamically pads the input image. Thus we pass the original image size as argument
->>> ## to `post_process_depth_estimation` to remove the padding and resize to original dimensions.
->>> post_processed_output = image_processor.post_process_depth_estimation(
-...     outputs,
-...     source_sizes=[(image.height, image.width)],
-... )
-
->>> predicted_depth = post_processed_output[0]["predicted_depth"]
->>> depth = (predicted_depth - predicted_depth.min()) / (predicted_depth.max() - predicted_depth.min())
->>> depth = depth.detach().cpu().numpy() * 255
->>> depth = Image.fromarray(depth.astype("uint8"))
+```py
+import torch
+import requests
+from PIL import Image
+from transformers import AutoModelForDepthEstimation, AutoImageProcessor
+
+image_processor = AutoImageProcessor.from_pretrained(
+    "Intel/zoedepth-nyu-kitti"
+)
+model = AutoModelForDepthEstimation.from_pretrained(
+    "Intel/zoedepth-nyu-kitti",
+    device_map=0
+)
+url = "http://images.cocodataset.org/val2017/000000039769.jpg"
+image = Image.open(requests.get(url, stream=True).raw)
+inputs = image_processor(image, return_tensors="pt").to("cuda")
+
+with torch.no_grad():
+    outputs = model(**inputs)
+
+# interpolate to original size and visualize the prediction
+## ZoeDepth dynamically pads the input image. Thus we pass the original image size as argument
+## to `post_process_depth_estimation` to remove the padding and resize to original dimensions.
+post_processed_output = image_processor.post_process_depth_estimation(
+    outputs,
+    source_sizes=[(image.height, image.width)],
+)
+
+predicted_depth = post_processed_output[0]["predicted_depth"]
+depth = (predicted_depth - predicted_depth.min()) / (predicted_depth.max() - predicted_depth.min())
+depth = depth.detach().cpu().numpy() * 255
+depth = Image.fromarray(depth.astype("uint8"))
 ```
 
-<Tip>
-
-In the original implementation ZoeDepth model performs inference on both the original and flipped images and averages out the results. The `post_process_depth_estimation` function can handle this for us by passing the flipped outputs to the optional `outputs_flipped` argument:
-
-```python
->>> with torch.no_grad():   
-...     outputs = model(pixel_values)
-...     outputs_flipped = model(pixel_values=torch.flip(inputs.pixel_values, dims=[3]))
->>> post_processed_output = image_processor.post_process_depth_estimation(
-...     outputs,
-...     source_sizes=[(image.height, image.width)],
-...     outputs_flipped=outputs_flipped,
-... )
-```
-
-</Tip>
+</hfoption>
+</hfoptions>
 
-## Resources
+## Notes
 
-A list of official Hugging Face and community (indicated by 🌎) resources to help you get started with ZoeDepth.
+- In the [original implementation](https://github.com/isl-org/ZoeDepth/blob/edb6daf45458569e24f50250ef1ed08c015f17a7/zoedepth/models/depth_model.py#L131) ZoeDepth model performs inference on both the original and flipped images and averages out the results. The ```post_process_depth_estimation``` function can handle this for us by passing the flipped outputs to the optional ```outputs_flipped``` argument:
+  ```py
+  with torch.no_grad():
+      outputs = model(inputs.pixel_values)
+      outputs_flipped = model(pixel_values=torch.flip(inputs.pixel_values, dims=[3]))
+  post_processed_output = image_processor.post_process_depth_estimation(
+      outputs,
+      source_sizes=[(image.height, image.width)],
+      outputs_flipped=outputs_flipped,
+  )
+  ```
+- A demo notebook regarding inference with ZoeDepth models can be found [here](https://github.com/NielsRogge/Transformers-Tutorials/tree/master/ZoeDepth).
 
-- A demo notebook regarding inference with ZoeDepth models can be found [here](https://github.com/NielsRogge/Transformers-Tutorials/tree/master/ZoeDepth). 🌎
 
 ## ZoeDepthConfig

From 5222cd9a6730946fb7fe0c248cb9cec0c8678fbf Mon Sep 17 00:00:00 2001
From: Madhav Kumar
Date: Wed, 30 Apr 2025 22:46:08 +0530
Subject: [PATCH 2/3] Edited Zoedepth model file

---
 docs/source/en/model_doc/zoedepth.md | 8 ++++++--
 1 file changed, 6 insertions(+), 2 deletions(-)

diff --git a/docs/source/en/model_doc/zoedepth.md b/docs/source/en/model_doc/zoedepth.md
index 464b3326c217..361690b05b04 100644
--- a/docs/source/en/model_doc/zoedepth.md
+++ b/docs/source/en/model_doc/zoedepth.md
@@ -35,13 +35,17 @@ The example below demonstrates how to generate text based on an image with [`Pip
 ```py
 import torch
 from transformers import pipeline
+from PIL import Image
+import requests
 
-pipeline = pipeline(
+image = Image.open(requests.get(url, stream=True).raw)
+pipe = pipeline(
     task="depth-estimation",
     model="Intel/zoedepth-nyu-kitti",
     device=0
 )
-pipeline(images="http://images.cocodataset.org/val2017/000000039769.jpg")
+results = pipe(image)
+depth = result['depth']
 ```

From 72e4b2d6ae894f78e13cc6721a226e260032c24b Mon Sep 17 00:00:00 2001
From: Madhav Kumar
Date: Mon, 26 May 2025 21:21:53 +0530
Subject: [PATCH 3/3] made suggested changes.

---
 docs/source/en/model_doc/zoedepth.md | 36 ++++++++++++++++------------
 1 file changed, 21 insertions(+), 15 deletions(-)

diff --git a/docs/source/en/model_doc/zoedepth.md b/docs/source/en/model_doc/zoedepth.md
index 361690b05b04..59bc483d8cf8 100644
--- a/docs/source/en/model_doc/zoedepth.md
+++ b/docs/source/en/model_doc/zoedepth.md
@@ -21,31 +21,36 @@ rendered properly in your Markdown viewer.
 -->
 
 <div style="float: right;">
     <div class="flex flex-wrap space-x-1">
         <img alt="PyTorch" src="https://img.shields.io/badge/PyTorch-DE3412?style=flat&logo=pytorch&logoColor=white">
     </div>
 </div>
 
-# Zoedepth
+# ZoeDepth
 
-[Zoedepth](https://huggingface.co/papers/2302.12288) is a metric (also called absolute) depth estimation model to generate depth maps directly from images. It gives absolute metric depth in real-world metres, instead of relative depth. ZoeDepth is pre-trained on 12 datasets using relative depth and fine-tuned on two domains (NYU and KITTI) using metric depth. A lightweight head is used with a novel bin adjustment design called metric bins module for each domain. During inference, each input image is automatically routed to the appropriate head using a latent classifier.
+[ZoeDepth](https://huggingface.co/papers/2302.12288) is a depth estimation model that combines the generalization performance of relative depth estimation (how far objects are from each other) and metric depth estimation (precise depth measurement on metric scale) from a single image. It is pre-trained on 12 datasets using relative depth and 2 datasets (NYU Depth v2 and KITTI) for metric accuracy. A lightweight head with a metric bin module for each domain is used, and during inference, it automatically selects the appropriate head for each input image with a latent classifier.
 
-You can find all the original Zoedepth checkpoints under the [Zoedepth](https://huggingface.co/Intel?search=zoedepth) collection.
+You can find all the original ZoeDepth checkpoints under the [Intel](https://huggingface.co/Intel?search=zoedepth) organization.
 
-The example below demonstrates how to generate text based on an image with [`Pipeline`] or the [`AutoModel`] class.
+The example below demonstrates how to estimate depth with [`Pipeline`] or the [`AutoModel`] class.
 
 <hfoptions id="usage">
 <hfoption id="Pipeline">
 
 ```py
+import requests
 import torch
 from transformers import pipeline
 from PIL import Image
-import requests
 
+url = "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/pipeline-cat-chonk.jpeg"
 image = Image.open(requests.get(url, stream=True).raw)
-pipe = pipeline(
+pipeline = pipeline(
     task="depth-estimation",
     model="Intel/zoedepth-nyu-kitti",
+    torch_dtype=torch.float16,
     device=0
 )
-results = pipe(image)
-depth = result['depth']
+results = pipeline(image)
+results["depth"]
 ```
@@ -62,9 +67,9 @@ image_processor = AutoImageProcessor.from_pretrained(
 )
 model = AutoModelForDepthEstimation.from_pretrained(
     "Intel/zoedepth-nyu-kitti",
-    device_map=0
+    device_map="auto"
 )
-url = "http://images.cocodataset.org/val2017/000000039769.jpg"
+url = "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/pipeline-cat-chonk.jpeg"
 image = Image.open(requests.get(url, stream=True).raw)
 inputs = image_processor(image, return_tensors="pt").to("cuda")
@@ -72,7 +77,7 @@ inputs = image_processor(image, return_tensors="pt").to("cuda")
 
 with torch.no_grad():
     outputs = model(**inputs)
 
 # interpolate to original size and visualize the prediction
-## ZoeDepth dynamically pads the input image. Thus we pass the original image size as argument
+## ZoeDepth dynamically pads the input image, so pass the original image size as argument
 ## to `post_process_depth_estimation` to remove the padding and resize to original dimensions.
 post_processed_output = image_processor.post_process_depth_estimation(
     outputs,
@@ -82,7 +87,7 @@ post_processed_output = image_processor.post_process_depth_estimation(
 predicted_depth = post_processed_output[0]["predicted_depth"]
 depth = (predicted_depth - predicted_depth.min()) / (predicted_depth.max() - predicted_depth.min())
 depth = depth.detach().cpu().numpy() * 255
-depth = Image.fromarray(depth.astype("uint8"))
+Image.fromarray(depth.astype("uint8"))
 ```
 
 </hfoption>
@@ -90,7 +95,7 @@
 </hfoptions>
 
 ## Notes
 
-- In the [original implementation](https://github.com/isl-org/ZoeDepth/blob/edb6daf45458569e24f50250ef1ed08c015f17a7/zoedepth/models/depth_model.py#L131) ZoeDepth model performs inference on both the original and flipped images and averages out the results. The ```post_process_depth_estimation``` function can handle this for us by passing the flipped outputs to the optional ```outputs_flipped``` argument:
+- In the [original implementation](https://github.com/isl-org/ZoeDepth/blob/edb6daf45458569e24f50250ef1ed08c015f17a7/zoedepth/models/depth_model.py#L131) ZoeDepth performs inference on both the original and flipped images and averages the results. The `post_process_depth_estimation` function handles this by passing the flipped outputs to the optional `outputs_flipped` argument as shown below.
   ```py
   with torch.no_grad():
       outputs = model(inputs.pixel_values)
       outputs_flipped = model(pixel_values=torch.flip(inputs.pixel_values, dims=[3]))
@@ -101,8 +106,9 @@
   post_processed_output = image_processor.post_process_depth_estimation(
       outputs,
       source_sizes=[(image.height, image.width)],
       outputs_flipped=outputs_flipped,
   )
   ```
-- A demo notebook regarding inference with ZoeDepth models can be found [here](https://github.com/NielsRogge/Transformers-Tutorials/tree/master/ZoeDepth).
-
+
+## Resources
+- Refer to this [notebook](https://github.com/NielsRogge/Transformers-Tutorials/tree/master/ZoeDepth) for an inference example.
 
 ## ZoeDepthConfig
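A quick way to review the final version of the card is to check that the AutoModel example really returns metric depth, since the grayscale conversion at the end normalizes the scale away. The sketch below is not part of the patch series: it assumes `matplotlib` is installed and reuses the `predicted_depth` tensor produced by the example above.

```py
import matplotlib.pyplot as plt

# `predicted_depth` comes from `post_process_depth_estimation` in the
# AutoModel example above; ZoeDepth is a metric model, so the raw values
# are depths in meters rather than relative disparities.
depth_m = predicted_depth.detach().cpu().numpy()
print(f"closest point: {depth_m.min():.2f} m, farthest point: {depth_m.max():.2f} m")

# a colormap keeps more detail visible than the 8-bit grayscale conversion
plt.imshow(depth_m, cmap="magma")
plt.colorbar(label="depth (m)")
plt.axis("off")
plt.savefig("depth_magma.png", bbox_inches="tight")
```

Printing the min/max range is a cheap check that the output is in real-world units (a few meters for an indoor scene) rather than a normalized 0-1 map.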