From 7fa55c5bf38dddb246c6864ef2a4c44804f0101f Mon Sep 17 00:00:00 2001
From: Jiwook Han <33192762+mreraser@users.noreply.github.com>
Date: Fri, 27 Jun 2025 15:23:00 +0900
Subject: [PATCH 1/4] model_doc_videomae.md update

---
 docs/source/en/model_doc/videomae.md | 25 +++++++++++--------------
 1 file changed, 11 insertions(+), 14 deletions(-)

diff --git a/docs/source/en/model_doc/videomae.md b/docs/source/en/model_doc/videomae.md
index ac3d6c044e64..4451ae4f64b7 100644
--- a/docs/source/en/model_doc/videomae.md
+++ b/docs/source/en/model_doc/videomae.md
@@ -14,30 +14,27 @@ rendered properly in your Markdown viewer.
 -->
 
-# VideoMAE
-
-PyTorch
-FlashAttention
-SDPA
+
+    PyTorch
+    FlashAttention
+    SDPA
+
 
-## Overview
-
-The VideoMAE model was proposed in [VideoMAE: Masked Autoencoders are Data-Efficient Learners for Self-Supervised Video Pre-Training](https://huggingface.co/papers/2203.12602) by Zhan Tong, Yibing Song, Jue Wang, Limin Wang.
-VideoMAE extends masked autoencoders ([MAE](vit_mae)) to video, claiming state-of-the-art performance on several video classification benchmarks.
+# VideoMAE
 
-The abstract from the paper is the following:
+## Overview
 
-*Pre-training video transformers on extra large-scale datasets is generally required to achieve premier performance on relatively small datasets. In this paper, we show that video masked autoencoders (VideoMAE) are data-efficient learners for self-supervised video pre-training (SSVP). We are inspired by the recent ImageMAE and propose customized video tube masking and reconstruction. These simple designs turn out to be effective for overcoming information leakage caused by the temporal correlation during video reconstruction. We obtain three important findings on SSVP: (1) An extremely high proportion of masking ratio (i.e., 90% to 95%) still yields favorable performance of VideoMAE. The temporally redundant video content enables higher masking ratio than that of images. (2) VideoMAE achieves impressive results on very small datasets (i.e., around 3k-4k videos) without using any extra data. This is partially ascribed to the challenging task of video reconstruction to enforce high-level structure learning. (3) VideoMAE shows that data quality is more important than data quantity for SSVP. Domain shift between pre-training and target datasets are important issues in SSVP. Notably, our VideoMAE with the vanilla ViT backbone can achieve 83.9% on Kinetics-400, 75.3% on Something-Something V2, 90.8% on UCF101, and 61.1% on HMDB51 without using any extra data.*
+[VideoMAE](https://huggingface.co/papers/2203.12602) is a self-supervised video representation learning model that extends [Masked Autoencoders (MAE)](vit_mae) to video inputs. It learns by randomly masking a large portion of video patches (typically 90%–95%) and reconstructing the missing parts, making it highly data-efficient. Without using any external data, VideoMAE achieves competitive performance across popular video classification benchmarks. Its simple design, strong results, and ability to work with limited labeled data make it a practical choice for video understanding tasks.
 
 drawing
 
-VideoMAE pre-training. Taken from the original paper.
+You can find all the original VideoMAE checkpoints under the [MCG-NJU](https://huggingface.co/MCG-NJU/models) organization.
 
-This model was contributed by [nielsr](https://huggingface.co/nielsr).
-The original code can be found [here](https://github.com/MCG-NJU/VideoMAE).
+> [!TIP]
+> Click on the VideoMAE models in the right sidebar for more examples of how to apply VideoMAE to vision tasks.
 
 ## Using Scaled Dot Product Attention (SDPA)

From 3ee65ac9ddbf2b8d1627c020a572da78dd62867a Mon Sep 17 00:00:00 2001
From: Jiwook Han <33192762+mreraser@users.noreply.github.com>
Date: Sun, 31 Aug 2025 18:59:59 +0900
Subject: [PATCH 2/4] Finalize and clean up videomae.md model card

---
 docs/source/en/model_doc/videomae.md | 97 +++++++++++++++++++++++++++-
 1 file changed, 95 insertions(+), 2 deletions(-)

diff --git a/docs/source/en/model_doc/videomae.md b/docs/source/en/model_doc/videomae.md
index 019750a7d1af..082422339fb6 100644
--- a/docs/source/en/model_doc/videomae.md
+++ b/docs/source/en/model_doc/videomae.md
@@ -25,8 +25,6 @@
 # VideoMAE
 
-## Overview
-
 [VideoMAE](https://huggingface.co/papers/2203.12602) is a self-supervised video representation learning model that extends [Masked Autoencoders (MAE)](vit_mae) to video inputs. It learns by randomly masking a large portion of video patches (typically 90%–95%) and reconstructing the missing parts, making it highly data-efficient. Without using any external data, VideoMAE achieves competitive performance across popular video classification benchmarks. Its simple design, strong results, and ability to work with limited labeled data make it a practical choice for video understanding tasks.
 
 drawing
 
 You can find all the original VideoMAE checkpoints under the [MCG-NJU](https://huggingface.co/MCG-NJU/models) organization.
 
 > [!TIP]
+> This model was contributed by [nielsr](https://huggingface.co/nielsr).
+>
 > Click on the VideoMAE models in the right sidebar for more examples of how to apply VideoMAE to vision tasks.
 
+The example below demonstrates how to perform video classification with [`pipeline`] or the [`AutoModel`] class.
+
+
+
+```python
+from transformers import pipeline
+from huggingface_hub import list_repo_files, hf_hub_download
+
+video_cls = pipeline(
+    task="video-classification",
+    model="MCG-NJU/videomae-base-finetuned-kinetics"
+)
+
+files = list_repo_files("nateraw/kinetics-mini", repo_type="dataset")
+videos = [f for f in files if f.endswith(".mp4")]
+video_path = hf_hub_download("nateraw/kinetics-mini", repo_type="dataset", filename=videos[0])
+
+preds = video_cls(video_path)
+print(preds)
+```
+
+
+
+```python
+import torch
+from huggingface_hub import list_repo_files, hf_hub_download
+from torchvision.io import read_video
+from torchvision.transforms.functional import to_pil_image
+from transformers import AutoProcessor, AutoModelForVideoClassification
+
+files = list_repo_files("nateraw/kinetics-mini", repo_type="dataset")
+videos = [f for f in files if f.endswith(".mp4")]
+video_path = hf_hub_download("nateraw/kinetics-mini", repo_type="dataset", filename=videos[0])
+
+video, _, _ = read_video(video_path, pts_unit="sec")
+T = video.shape[0]
+indices = torch.linspace(0, T - 1, steps=16).long().tolist()
+frames = [to_pil_image(video[i].permute(2, 0, 1)) for i in indices]
+
+processor = AutoProcessor.from_pretrained("MCG-NJU/videomae-base-finetuned-kinetics")
+model = AutoModelForVideoClassification.from_pretrained("MCG-NJU/videomae-base-finetuned-kinetics").eval()
+
+device = "cuda" if torch.cuda.is_available() else "cpu"
+inputs = processor(frames, return_tensors="pt").to(device)
+model.to(device)
+
+with torch.no_grad():
+    logits = model(**inputs).logits
+
+probs = logits.softmax(-1)[0]
+topk = probs.topk(5)
+
+id2label = model.config.id2label
+print([{"label": id2label[i.item()], "score": probs[i].item()} for i in topk.indices])
+```
+
+
+
+Quantization reduces the memory burden of large models by representing the weights in a lower precision. Refer to the [Quantization](../quantization/overview) overview for more available quantization backends.
+
+The example below uses [BitsAndBytes](https://huggingface.co/docs/transformers/main/en/quantization/bitsandbytes) to quantize the weights to 8-bit precision:
+
+```python
+from transformers import AutoModelForVideoClassification, BitsAndBytesConfig
+
+model = AutoModelForVideoClassification.from_pretrained(
+    "MCG-NJU/videomae-base-finetuned-kinetics",
+    quantization_config=BitsAndBytesConfig(load_in_8bit=True),
+    device_map="auto"
+).eval()
+```
+
 ## Using Scaled Dot Product Attention (SDPA)
 
 PyTorch includes a native scaled dot-product attention (SDPA) operator as part of `torch.nn.functional`. This function
@@ -65,6 +140,24 @@ On a local benchmark (A100-40GB, PyTorch 2.3.0, OS Ubuntu 22.04) with `float32`
 | 4 | 43 | 32 | 1.34 |
 | 8 | 84 | 60 | 1.4 |
 
+## Notes
+
+- Pre-training uses heavy masking (90–95%) on video patches.
+- Fine-tuning checkpoints are available for datasets like Kinetics-400, UCF101, and Something-Something-V2.
+- For custom datasets, you can follow the [Video classification](../tasks/video_classification) task guide.
+
+    ```py
+    # Example: image processing + forward pass
+    from transformers import VideoMAEImageProcessor, VideoMAEModel
+
+    image_processor = VideoMAEImageProcessor.from_pretrained("MCG-NJU/videomae-base")
+    model = VideoMAEModel.from_pretrained("MCG-NJU/videomae-base")
+
+    video = ...  # load frames
+    inputs = image_processor(list(video), return_tensors="pt")
+    outputs = model(**inputs)
+    ```
+
 ## Resources
 
 A list of official Hugging Face and community (indicated by 🌎) resources to help you get started with VideoMAE.
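The `AutoModel` example in this patch samples 16 frames by taking uniformly spaced indices with `torch.linspace`. As a quick sanity check of that index arithmetic on its own (`sample_frame_indices` is an illustrative helper name, not part of the patch):

```python
import torch

def sample_frame_indices(num_frames: int, num_samples: int = 16) -> list[int]:
    # Uniformly spaced indices from the first to the last frame, mirroring the
    # `torch.linspace(0, T - 1, steps=16).long().tolist()` line in the example.
    return torch.linspace(0, num_frames - 1, steps=num_samples).long().tolist()

# A 300-frame clip yields 16 indices covering the whole clip, endpoints included.
indices = sample_frame_indices(300)
print(len(indices), indices[0], indices[-1])  # 16 0 299
```

Because the endpoints of `torch.linspace` are exact, the first and last frames are always selected, which avoids systematically dropping the end of a clip.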
From 211c6c397959886e2718740bd752ab71b6711d13 Mon Sep 17 00:00:00 2001
From: Jiwook Han <33192762+mreraser@users.noreply.github.com>
Date: Sat, 13 Sep 2025 18:52:56 +0900
Subject: [PATCH 3/4] badges position fix

---
 docs/source/en/model_doc/videomae.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/docs/source/en/model_doc/videomae.md b/docs/source/en/model_doc/videomae.md
index 082422339fb6..e9f0b4107bb7 100644
--- a/docs/source/en/model_doc/videomae.md
+++ b/docs/source/en/model_doc/videomae.md
@@ -15,7 +15,7 @@ rendered properly in your Markdown viewer.
 -->
 *This model was released on 2022-03-23 and added to Hugging Face Transformers on 2022-08-04.*
 
-
+
 PyTorch
 FlashAttention

From f6035fced66c01b5371055098c2f0474c1754161 Mon Sep 17 00:00:00 2001
From: Jiwook Han <33192762+mreraser@users.noreply.github.com>
Date: Sat, 13 Sep 2025 20:50:07 +0900
Subject: [PATCH 4/4] pipeline test code fix

---
 docs/source/en/model_doc/videomae.md | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/docs/source/en/model_doc/videomae.md b/docs/source/en/model_doc/videomae.md
index e9f0b4107bb7..7455a9a3db39 100644
--- a/docs/source/en/model_doc/videomae.md
+++ b/docs/source/en/model_doc/videomae.md
@@ -46,7 +46,7 @@ The example below demonstrates how to perform video classification with [`pipeli
 from transformers import pipeline
 from huggingface_hub import list_repo_files, hf_hub_download
 
-video_cls = pipeline(
+pipeline = pipeline(
     task="video-classification",
     model="MCG-NJU/videomae-base-finetuned-kinetics"
 )
@@ -55,7 +55,7 @@ files = list_repo_files("nateraw/kinetics-mini", repo_type="dataset")
 videos = [f for f in files if f.endswith(".mp4")]
 video_path = hf_hub_download("nateraw/kinetics-mini", repo_type="dataset", filename=videos[0])
 
-preds = video_cls(video_path)
+preds = pipeline(video_path)
 print(preds)
 ```
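The logits post-processing that the `AutoModel` example applies (softmax over the last dimension, then `topk`, then mapping indices through `id2label`) can also be exercised in isolation. The tensor values and label names below are made up for illustration; in the real example the logits come from `model(**inputs).logits` and the map from `model.config.id2label`:

```python
import torch

# Dummy logits for a 5-class toy head, shape (batch, num_labels).
logits = torch.tensor([[2.0, 0.5, 1.0, -1.0, 0.0]])
# Illustrative label map; the real one is model.config.id2label.
id2label = {0: "archery", 1: "bowling", 2: "juggling", 3: "knitting", 4: "surfing"}

probs = logits.softmax(-1)[0]   # probabilities for the single clip in the batch
topk = probs.topk(3)            # indices sorted by descending probability

results = [{"label": id2label[i.item()], "score": probs[i].item()} for i in topk.indices]
print(results[0]["label"])  # archery (softmax preserves the order of the logits)
```

Because softmax is monotonic, the top-k labels are determined by the raw logits alone; the softmax only turns the scores into probabilities for display.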