From 5e18010101386817be217ea8c28b1abfeea8e8ac Mon Sep 17 00:00:00 2001 From: steven Date: Thu, 19 Jun 2025 00:19:06 +0200 Subject: [PATCH 1/6] docs: first draft to more standard SuperPoint documentation --- docs/source/en/model_doc/superpoint.md | 115 +++++++++++++------------ 1 file changed, 59 insertions(+), 56 deletions(-) diff --git a/docs/source/en/model_doc/superpoint.md b/docs/source/en/model_doc/superpoint.md index aa22d30961ad..536f695aa947 100644 --- a/docs/source/en/model_doc/superpoint.md +++ b/docs/source/en/model_doc/superpoint.md @@ -10,48 +10,33 @@ specific language governing permissions and limitations under the License. ⚠️ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. - --> -# SuperPoint - -
-<img alt="PyTorch" src="https://img.shields.io/badge/PyTorch-DE3412?style=flat&logo=pytorch&logoColor=white">
+<div style="float: right;">
+    <div class="flex flex-wrap space-x-1">
+        <img alt="PyTorch" src="https://img.shields.io/badge/PyTorch-DE3412?style=flat&logo=pytorch&logoColor=white">
+    </div>
 </div>
-## Overview - -The SuperPoint model was proposed -in [SuperPoint: Self-Supervised Interest Point Detection and Description](https://huggingface.co/papers/1712.07629) by Daniel -DeTone, Tomasz Malisiewicz and Andrew Rabinovich. +# SuperPoint -This model is the result of a self-supervised training of a fully-convolutional network for interest point detection and -description. The model is able to detect interest points that are repeatable under homographic transformations and -provide a descriptor for each point. The use of the model in its own is limited, but it can be used as a feature -extractor for other tasks such as homography estimation, image matching, etc. +[SuperPoint](https://huggingface.co/papers/1712.07629) is the result of a self-supervised training of a fully-convolutional network for interest point detection and description. The model is able to detect interest points that are repeatable under homographic transformations and provide a descriptor for each point. The use of the model in its own is limited, but it can be used as a feature extractor for other tasks such as homography estimation, image matching, etc. -The abstract from the paper is the following: +You can find all the original SuperPoint checkpoints under the [SuperPoint](https://huggingface.co/magic-leap-community/superpoint) repository. -*This paper presents a self-supervised framework for training interest point detectors and descriptors suitable for a -large number of multiple-view geometry problems in computer vision. As opposed to patch-based neural networks, our -fully-convolutional model operates on full-sized images and jointly computes pixel-level interest point locations and -associated descriptors in one forward pass. We introduce Homographic Adaptation, a multi-scale, multi-homography -approach for boosting interest point detection repeatability and performing cross-domain adaptation (e.g., -synthetic-to-real). Our model, when trained on the MS-COCO generic image dataset using Homographic Adaptation, is able -to repeatedly detect a much richer set of interest points than the initial pre-adapted deep model and any other -traditional corner detector. The final system gives rise to state-of-the-art homography estimation results on HPatches -when compared to LIFT, SIFT and ORB.* +> [!TIP] +> Click on the SuperPoint models in the right sidebar for more examples of how to apply SuperPoint to different computer vision tasks. drawing SuperPoint overview. Taken from the original paper. -## Usage tips - -Here is a quick example of using the model to detect interest points in an image: +The example below demonstrates how to detect interest points in an image with the [`AutoModel`] class. + + -```python +```py from transformers import AutoImageProcessor, SuperPointForKeypointDetection import torch from PIL import Image @@ -64,46 +49,59 @@ processor = AutoImageProcessor.from_pretrained("magic-leap-community/superpoint" model = SuperPointForKeypointDetection.from_pretrained("magic-leap-community/superpoint") inputs = processor(image, return_tensors="pt") -outputs = model(**inputs) +with torch.no_grad(): + outputs = model(**inputs) + +# Post-process to get keypoints, scores, and descriptors +image_size = (image.height, image.width) +processed_outputs = processor.post_process_keypoint_detection(outputs, [image_size]) ``` -The outputs contain the list of keypoint coordinates with their respective score and description (a 256-long vector). + + + + -You can also feed multiple images to the model. 
Due to the nature of SuperPoint, to output a dynamic number of keypoints, -you will need to use the mask attribute to retrieve the respective information : +## Notes -```python +- SuperPoint outputs a dynamic number of keypoints per image, which makes it suitable for tasks requiring variable-length feature representations. + +```py from transformers import AutoImageProcessor, SuperPointForKeypointDetection import torch from PIL import Image import requests - +processor = AutoImageProcessor.from_pretrained("magic-leap-community/superpoint") +model = SuperPointForKeypointDetection.from_pretrained("magic-leap-community/superpoint") url_image_1 = "http://images.cocodataset.org/val2017/000000039769.jpg" image_1 = Image.open(requests.get(url_image_1, stream=True).raw) url_image_2 = "http://images.cocodataset.org/test-stuff2017/000000000568.jpg" image_2 = Image.open(requests.get(url_image_2, stream=True).raw) - images = [image_1, image_2] +inputs = processor(images, return_tensors="pt") +# Example of handling dynamic keypoint output +outputs = model(**inputs) +keypoints = outputs.keypoints # Shape varies per image +scores = outputs.scores # Confidence scores for each keypoint +descriptors = outputs.descriptors # 256-dimensional descriptors +mask = outputs.mask # Value of 1 corresponds to a keypoint detection +``` -processor = AutoImageProcessor.from_pretrained("magic-leap-community/superpoint") -model = SuperPointForKeypointDetection.from_pretrained("magic-leap-community/superpoint") +- The model provides both keypoint coordinates and their corresponding descriptors (256-dimensional vectors) in a single forward pass. +- For batch processing with multiple images, you need to use the mask attribute to retrieve the respective information for each image. You can use the `post_process_keypoint_detection` from the `SuperPointImageProcessor` to retrieve the each image information. +```py +# Batch processing example +images = [image1, image2, image3] inputs = processor(images, return_tensors="pt") outputs = model(**inputs) -image_sizes = [(image.height, image.width) for image in images] -outputs = processor.post_process_keypoint_detection(outputs, image_sizes) - -for output in outputs: - for keypoints, scores, descriptors in zip(output["keypoints"], output["scores"], output["descriptors"]): - print(f"Keypoints: {keypoints}") - print(f"Scores: {scores}") - print(f"Descriptors: {descriptors}") +image_sizes = [(img.height, img.width) for img in images] +processed_outputs = processor.post_process_keypoint_detection(outputs, image_sizes) ``` -You can then print the keypoints on the image of your choice to visualize the result: -```python +- You can then print the keypoints on the image of your choice to visualize the result: +```pyq import matplotlib.pyplot as plt - plt.axis("off") plt.imshow(image_1) plt.scatter( @@ -117,12 +115,18 @@ plt.savefig(f"output_image.png") ``` ![image/png](https://cdn-uploads.huggingface.co/production/uploads/632885ba1558dac67c440aa8/ZtFmphEhx8tcbEQqOolyE.png) +## Resources + +The resources provided in the following sections consist of a list of official Hugging Face and community (indicated by 🌎) resources to help you get started with SuperPoint. If you're interested in submitting a resource to be included here, please feel free to open a Pull Request and we'll review it! The resource should ideally demonstrate something new instead of duplicating an existing resource. This model was contributed by [stevenbucaille](https://huggingface.co/stevenbucaille). 
The original code can be found [here](https://github.com/magicleap/SuperPointPretrainedNetwork). -## Resources + + +- [`SuperPointForKeypointDetection`] is supported by this [example script](https://github.com/huggingface/transformers/tree/main/examples/pytorch/vision) and [notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/keypoint_detection.ipynb). +- Check the [Keypoint detection task guide](../tasks/keypoint_detection) on how to use the model. -A list of official Hugging Face and community (indicated by 🌎) resources to help you get started with SuperPoint. If you're interested in submitting a resource to be included here, please feel free to open a Pull Request and we'll review it! The resource should ideally demonstrate something new instead of duplicating an existing resource. +**Visualization and inference** - A notebook showcasing inference and visualization with SuperPoint can be found [here](https://github.com/NielsRogge/Transformers-Tutorials/blob/master/SuperPoint/Inference_with_SuperPoint_to_detect_interest_points_in_an_image.ipynb). 🌎 @@ -132,13 +136,12 @@ A list of official Hugging Face and community (indicated by 🌎) resources to h ## SuperPointImageProcessor -[[autodoc]] SuperPointImageProcessor - -- preprocess -- post_process_keypoint_detection +[[autodoc]] SuperPointImageProcessor - preprocess - post_process_keypoint_detection + + ## SuperPointForKeypointDetection -[[autodoc]] SuperPointForKeypointDetection +[[autodoc]] SuperPointForKeypointDetection - forward -- forward + \ No newline at end of file From 83ca16e44540de387488a39adcacbb22b40d0337 Mon Sep 17 00:00:00 2001 From: StevenBucaille Date: Thu, 19 Jun 2025 11:28:28 +0200 Subject: [PATCH 2/6] Apply suggestions from code review Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com> --- docs/source/en/model_doc/superpoint.md | 10 +++++----- 1 file changed, 5 insertions(+), 5 deletions(-) diff --git a/docs/source/en/model_doc/superpoint.md b/docs/source/en/model_doc/superpoint.md index 536f695aa947..afc1290406d6 100644 --- a/docs/source/en/model_doc/superpoint.md +++ b/docs/source/en/model_doc/superpoint.md @@ -20,17 +20,18 @@ rendered properly in your Markdown viewer. # SuperPoint -[SuperPoint](https://huggingface.co/papers/1712.07629) is the result of a self-supervised training of a fully-convolutional network for interest point detection and description. The model is able to detect interest points that are repeatable under homographic transformations and provide a descriptor for each point. The use of the model in its own is limited, but it can be used as a feature extractor for other tasks such as homography estimation, image matching, etc. +[SuperPoint](https://huggingface.co/papers/1712.07629) is the result of self-supervised training of a fully-convolutional network for interest point detection and description. The model is able to detect interest points that are repeatable under homographic transformations and provide a descriptor for each point. Usage on it's own is limited, but it can be used as a feature extractor for other tasks such as homography estimation and image matching. -You can find all the original SuperPoint checkpoints under the [SuperPoint](https://huggingface.co/magic-leap-community/superpoint) repository. +You can find all the original SuperPoint checkpoints under the [Magic Leap Community](https://huggingface.co/magic-leap-community) organization. 
> [!TIP] +> This model was contributed by [stevenbucaille](https://huggingface.co/stevenbucaille). +> > Click on the SuperPoint models in the right sidebar for more examples of how to apply SuperPoint to different computer vision tasks. drawing - SuperPoint overview. Taken from the original paper. The example below demonstrates how to detect interest points in an image with the [`AutoModel`] class. @@ -100,7 +101,7 @@ processed_outputs = processor.post_process_keypoint_detection(outputs, image_siz ``` - You can then print the keypoints on the image of your choice to visualize the result: -```pyq +```py import matplotlib.pyplot as plt plt.axis("off") plt.imshow(image_1) @@ -126,7 +127,6 @@ The original code can be found [here](https://github.com/magicleap/SuperPointPre - [`SuperPointForKeypointDetection`] is supported by this [example script](https://github.com/huggingface/transformers/tree/main/examples/pytorch/vision) and [notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/keypoint_detection.ipynb). - Check the [Keypoint detection task guide](../tasks/keypoint_detection) on how to use the model. -**Visualization and inference** - A notebook showcasing inference and visualization with SuperPoint can be found [here](https://github.com/NielsRogge/Transformers-Tutorials/blob/master/SuperPoint/Inference_with_SuperPoint_to_detect_interest_points_in_an_image.ipynb). 🌎 From 24deae6dc4028f6add7006850f075849b5c23c57 Mon Sep 17 00:00:00 2001 From: steven Date: Thu, 19 Jun 2025 11:30:34 +0200 Subject: [PATCH 3/6] docs: reverted changes on Auto classes --- docs/source/en/model_doc/superpoint.md | 9 +++++++-- 1 file changed, 7 insertions(+), 2 deletions(-) diff --git a/docs/source/en/model_doc/superpoint.md b/docs/source/en/model_doc/superpoint.md index afc1290406d6..1bd09f21c743 100644 --- a/docs/source/en/model_doc/superpoint.md +++ b/docs/source/en/model_doc/superpoint.md @@ -136,12 +136,17 @@ The original code can be found [here](https://github.com/magicleap/SuperPointPre ## SuperPointImageProcessor -[[autodoc]] SuperPointImageProcessor - preprocess - post_process_keypoint_detection +[[autodoc]] SuperPointImageProcessor + +- preprocess +- post_process_keypoint_detection ## SuperPointForKeypointDetection -[[autodoc]] SuperPointForKeypointDetection - forward +[[autodoc]] SuperPointForKeypointDetection + +- forward \ No newline at end of file From 2bd9bb093c0a75a449e878c537826934aae7bcfe Mon Sep 17 00:00:00 2001 From: steven Date: Thu, 19 Jun 2025 11:55:04 +0200 Subject: [PATCH 4/6] docs: addressed the rest of the comments --- docs/source/en/model_doc/superpoint.md | 100 +++++++++++-------------- 1 file changed, 45 insertions(+), 55 deletions(-) diff --git a/docs/source/en/model_doc/superpoint.md b/docs/source/en/model_doc/superpoint.md index 1bd09f21c743..1c9da8b23eec 100644 --- a/docs/source/en/model_doc/superpoint.md +++ b/docs/source/en/model_doc/superpoint.md @@ -22,6 +22,9 @@ rendered properly in your Markdown viewer. [SuperPoint](https://huggingface.co/papers/1712.07629) is the result of self-supervised training of a fully-convolutional network for interest point detection and description. The model is able to detect interest points that are repeatable under homographic transformations and provide a descriptor for each point. Usage on it's own is limited, but it can be used as a feature extractor for other tasks such as homography estimation and image matching. 
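The paragraph above keeps the claim that SuperPoint is mainly useful as a feature extractor for downstream tasks such as homography estimation and image matching. A minimal sketch of the image-matching case is shown below; the mutual nearest-neighbour matching of the 256-dimensional descriptors is an illustrative assumption, not part of the documented SuperPoint API.

```py
# Illustrative sketch: match SuperPoint descriptors between two images with
# mutual nearest-neighbour search (the matching strategy is an assumption,
# not part of the SuperPoint API).
import requests
import torch
from PIL import Image
from transformers import AutoImageProcessor, SuperPointForKeypointDetection

processor = AutoImageProcessor.from_pretrained("magic-leap-community/superpoint")
model = SuperPointForKeypointDetection.from_pretrained("magic-leap-community/superpoint")

urls = [
    "http://images.cocodataset.org/val2017/000000039769.jpg",
    "http://images.cocodataset.org/test-stuff2017/000000000568.jpg",
]
images = [Image.open(requests.get(url, stream=True).raw) for url in urls]

inputs = processor(images, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

image_sizes = [(image.height, image.width) for image in images]
first, second = processor.post_process_keypoint_detection(outputs, image_sizes)

# Cosine similarity between the two sets of 256-dimensional descriptors
desc_first = torch.nn.functional.normalize(first["descriptors"], dim=-1)
desc_second = torch.nn.functional.normalize(second["descriptors"], dim=-1)
similarity = desc_first @ desc_second.T

# Keep only pairs that are each other's nearest neighbour
best_in_second = similarity.argmax(dim=1)
best_in_first = similarity.argmax(dim=0)
mutual = best_in_first[best_in_second] == torch.arange(len(best_in_second))
matches = [(int(i), int(j)) for i, (j, keep) in enumerate(zip(best_in_second, mutual)) if keep]
print(f"{len(matches)} tentative matches between the two images")
```

Matched keypoint pairs like these are what a downstream homography estimator (for example, RANSAC over the matched coordinates) would consume.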
+ + You can find all the original SuperPoint checkpoints under the [Magic Leap Community](https://huggingface.co/magic-leap-community) organization. > [!TIP] @@ -29,8 +32,6 @@ You can find all the original SuperPoint checkpoints under the [Magic Leap Commu > > Click on the SuperPoint models in the right sidebar for more examples of how to apply SuperPoint to different computer vision tasks. - The example below demonstrates how to detect interest points in an image with the [`AutoModel`] class. @@ -61,75 +62,64 @@ processed_outputs = processor.post_process_keypoint_detection(outputs, [image_si - - ## Notes - SuperPoint outputs a dynamic number of keypoints per image, which makes it suitable for tasks requiring variable-length feature representations. -```py -from transformers import AutoImageProcessor, SuperPointForKeypointDetection -import torch -from PIL import Image -import requests -processor = AutoImageProcessor.from_pretrained("magic-leap-community/superpoint") -model = SuperPointForKeypointDetection.from_pretrained("magic-leap-community/superpoint") -url_image_1 = "http://images.cocodataset.org/val2017/000000039769.jpg" -image_1 = Image.open(requests.get(url_image_1, stream=True).raw) -url_image_2 = "http://images.cocodataset.org/test-stuff2017/000000000568.jpg" -image_2 = Image.open(requests.get(url_image_2, stream=True).raw) -images = [image_1, image_2] -inputs = processor(images, return_tensors="pt") -# Example of handling dynamic keypoint output -outputs = model(**inputs) -keypoints = outputs.keypoints # Shape varies per image -scores = outputs.scores # Confidence scores for each keypoint -descriptors = outputs.descriptors # 256-dimensional descriptors -mask = outputs.mask # Value of 1 corresponds to a keypoint detection -``` + ```py + from transformers import AutoImageProcessor, SuperPointForKeypointDetection + import torch + from PIL import Image + import requests + processor = AutoImageProcessor.from_pretrained("magic-leap-community/superpoint") + model = SuperPointForKeypointDetection.from_pretrained("magic-leap-community/superpoint") + url_image_1 = "http://images.cocodataset.org/val2017/000000039769.jpg" + image_1 = Image.open(requests.get(url_image_1, stream=True).raw) + url_image_2 = "http://images.cocodataset.org/test-stuff2017/000000000568.jpg" + image_2 = Image.open(requests.get(url_image_2, stream=True).raw) + images = [image_1, image_2] + inputs = processor(images, return_tensors="pt") + # Example of handling dynamic keypoint output + outputs = model(**inputs) + keypoints = outputs.keypoints # Shape varies per image + scores = outputs.scores # Confidence scores for each keypoint + descriptors = outputs.descriptors # 256-dimensional descriptors + mask = outputs.mask # Value of 1 corresponds to a keypoint detection + ``` - The model provides both keypoint coordinates and their corresponding descriptors (256-dimensional vectors) in a single forward pass. - For batch processing with multiple images, you need to use the mask attribute to retrieve the respective information for each image. You can use the `post_process_keypoint_detection` from the `SuperPointImageProcessor` to retrieve the each image information. 
-```py -# Batch processing example -images = [image1, image2, image3] -inputs = processor(images, return_tensors="pt") -outputs = model(**inputs) -image_sizes = [(img.height, img.width) for img in images] -processed_outputs = processor.post_process_keypoint_detection(outputs, image_sizes) -``` + ```py + # Batch processing example + images = [image1, image2, image3] + inputs = processor(images, return_tensors="pt") + outputs = model(**inputs) + image_sizes = [(img.height, img.width) for img in images] + processed_outputs = processor.post_process_keypoint_detection(outputs, image_sizes) + ``` - You can then print the keypoints on the image of your choice to visualize the result: -```py -import matplotlib.pyplot as plt -plt.axis("off") -plt.imshow(image_1) -plt.scatter( - outputs[0]["keypoints"][:, 0], - outputs[0]["keypoints"][:, 1], - c=outputs[0]["scores"] * 100, - s=outputs[0]["scores"] * 50, - alpha=0.8 -) -plt.savefig(f"output_image.png") -``` + ```py + import matplotlib.pyplot as plt + plt.axis("off") + plt.imshow(image_1) + plt.scatter( + outputs[0]["keypoints"][:, 0], + outputs[0]["keypoints"][:, 1], + c=outputs[0]["scores"] * 100, + s=outputs[0]["scores"] * 50, + alpha=0.8 + ) + plt.savefig(f"output_image.png") + ``` ![image/png](https://cdn-uploads.huggingface.co/production/uploads/632885ba1558dac67c440aa8/ZtFmphEhx8tcbEQqOolyE.png) ## Resources -The resources provided in the following sections consist of a list of official Hugging Face and community (indicated by 🌎) resources to help you get started with SuperPoint. If you're interested in submitting a resource to be included here, please feel free to open a Pull Request and we'll review it! The resource should ideally demonstrate something new instead of duplicating an existing resource. -This model was contributed by [stevenbucaille](https://huggingface.co/stevenbucaille). -The original code can be found [here](https://github.com/magicleap/SuperPointPretrainedNetwork). - - - -- [`SuperPointForKeypointDetection`] is supported by this [example script](https://github.com/huggingface/transformers/tree/main/examples/pytorch/vision) and [notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/keypoint_detection.ipynb). +- Refer to this [noteboook](https://github.com/NielsRogge/Transformers-Tutorials/blob/master/SuperPoint/Inference_with_SuperPoint_to_detect_interest_points_in_an_image.ipynb) for an inference and visualization example. - Check the [Keypoint detection task guide](../tasks/keypoint_detection) on how to use the model. - -- A notebook showcasing inference and visualization with SuperPoint can be found [here](https://github.com/NielsRogge/Transformers-Tutorials/blob/master/SuperPoint/Inference_with_SuperPoint_to_detect_interest_points_in_an_image.ipynb). 
🌎 - ## SuperPointConfig [[autodoc]] SuperPointConfig From 7b04650667c395e5da91b67beb9c86db84d337d2 Mon Sep 17 00:00:00 2001 From: steven Date: Mon, 23 Jun 2025 16:13:25 +0200 Subject: [PATCH 5/6] docs: remove outdated reference to keypoint detection task guide in SuperPoint documentation --- docs/source/en/model_doc/superpoint.md | 1 - 1 file changed, 1 deletion(-) diff --git a/docs/source/en/model_doc/superpoint.md b/docs/source/en/model_doc/superpoint.md index 1c9da8b23eec..e23e2157e2cf 100644 --- a/docs/source/en/model_doc/superpoint.md +++ b/docs/source/en/model_doc/superpoint.md @@ -118,7 +118,6 @@ processed_outputs = processor.post_process_keypoint_detection(outputs, [image_si ## Resources - Refer to this [noteboook](https://github.com/NielsRogge/Transformers-Tutorials/blob/master/SuperPoint/Inference_with_SuperPoint_to_detect_interest_points_in_an_image.ipynb) for an inference and visualization example. -- Check the [Keypoint detection task guide](../tasks/keypoint_detection) on how to use the model. ## SuperPointConfig From 527b20fb37fbebbda3ed272e7af689cae2f4f321 Mon Sep 17 00:00:00 2001 From: Steven Liu <59462357+stevhliu@users.noreply.github.com> Date: Thu, 26 Jun 2025 09:57:41 -0700 Subject: [PATCH 6/6] Update superpoint.md --- docs/source/en/model_doc/superpoint.md | 7 +++++-- 1 file changed, 5 insertions(+), 2 deletions(-) diff --git a/docs/source/en/model_doc/superpoint.md b/docs/source/en/model_doc/superpoint.md index e23e2157e2cf..31f40e5a374e 100644 --- a/docs/source/en/model_doc/superpoint.md +++ b/docs/source/en/model_doc/superpoint.md @@ -113,7 +113,10 @@ processed_outputs = processor.post_process_keypoint_detection(outputs, [image_si ) plt.savefig(f"output_image.png") ``` -![image/png](https://cdn-uploads.huggingface.co/production/uploads/632885ba1558dac67c440aa8/ZtFmphEhx8tcbEQqOolyE.png) + +
+<div class="flex justify-center">
+    <img src="https://cdn-uploads.huggingface.co/production/uploads/632885ba1558dac67c440aa8/ZtFmphEhx8tcbEQqOolyE.png">
+</div>
## Resources @@ -138,4 +141,4 @@ processed_outputs = processor.post_process_keypoint_detection(outputs, [image_si - forward - \ No newline at end of file +
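The Notes section introduced in this series explains that the raw batched output is padded and that `outputs.mask` marks which entries are real detections. As a rough sketch of what `post_process_keypoint_detection` does with that mask, the per-image results can also be unpacked by hand; the assumption that `outputs.keypoints` holds relative (x, y) coordinates scaled back to pixels by the image size is inferred from the post-processing step, not stated explicitly in the patch.

```py
# Rough sketch: unpack the padded batch output by hand using `outputs.mask`.
# `post_process_keypoint_detection` already does this for you; the relative
# (x, y) coordinate convention assumed here is an inference, not documented above.
import requests
import torch
from PIL import Image
from transformers import AutoImageProcessor, SuperPointForKeypointDetection

processor = AutoImageProcessor.from_pretrained("magic-leap-community/superpoint")
model = SuperPointForKeypointDetection.from_pretrained("magic-leap-community/superpoint")

urls = [
    "http://images.cocodataset.org/val2017/000000039769.jpg",
    "http://images.cocodataset.org/test-stuff2017/000000000568.jpg",
]
images = [Image.open(requests.get(url, stream=True).raw) for url in urls]

inputs = processor(images, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

for i, image in enumerate(images):
    detected = outputs.mask[i].bool()                        # 1 marks a real keypoint, 0 is padding
    keypoints = outputs.keypoints[i][detected]                # assumed relative (x, y) in [0, 1]
    keypoints = keypoints * torch.tensor([image.width, image.height])  # scale to pixel coordinates
    scores = outputs.scores[i][detected]                      # one confidence score per keypoint
    descriptors = outputs.descriptors[i][detected]            # one 256-dimensional vector per keypoint
    print(f"image {i}: {keypoints.shape[0]} keypoints, descriptors {tuple(descriptors.shape)}")
```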