Quick fix: Add user conf to yolo reference implementation #2446
base: master
Conversation
MLCommons CLA bot: All contributors have signed the MLCommons CLA ✍️ ✅
@manpreetssokhi @hanyunfan, are you guys aware of what the value should be for the YOLO benchmark in
Force-pushed from bf77e4f to 97d502c
import json
import array
import argparse
parser.add_argument(
wrong formatting?
This looks like a new addition and should be towards the bottom with the other parser arguments.
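To illustrate the reviewer's point, here is a minimal sketch of how the new option could sit with the other parser arguments near the bottom of the file. The flag name `--user-conf`, the other flags, and the defaults are assumptions for illustration; the PR's actual names may differ.

```python
import argparse

parser = argparse.ArgumentParser(description="YOLO reference benchmark (sketch)")
# existing arguments (hypothetical examples)
parser.add_argument("--scenario", default="Offline", help="loadgen scenario")
parser.add_argument("--mlperf-conf", default="mlperf.conf",
                    help="path to the shared mlperf.conf")
# new addition grouped with the other parser arguments, as suggested above
parser.add_argument("--user-conf", default="user.conf",
                    help="path to a user.conf overriding loadgen defaults")

args = parser.parse_args(["--user-conf", "my_user.conf"])
print(args.user_conf)
```

Keeping all `parser.add_argument` calls in one block makes it easy to see every flag the harness accepts at a glance.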
@anandhu-eng I am not entirely sure what that number means. Is there any guidance for how to get that number?
Hi @manpreetssokhi, looking at the submission checker here, it seems to be the minimum required queries for the Offline scenario. I think it is generally determined by the corresponding task force members based on a reasonable benchmark run time. Since
We should keep the number at 1525. There's no advantage in making the runs go longer as we anyway have the 10-minute minimum duration requirement.
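For context, a performance run must satisfy both the minimum duration and the minimum query count before loadgen stops, which is why raising min_query_count beyond 1525 only lengthens runs on slow systems. A hypothetical user.conf fragment illustrating the two knobs (the values are examples, not the PR's final settings):

```ini
# loadgen keeps issuing queries until BOTH constraints are met
yolo.Offline.min_duration = 600000     ; 10 minutes, in milliseconds
yolo.Offline.min_query_count = 1525    ; minimum queries for Offline
```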
I have done some config updates in the above PR.
* Updates for YOLO
* fix formatting
* Remove duplicate log tracing argument: removed duplicate argument for enabling log tracing.
* Add performance_sample_count_override for yolo
* Update version check and filter scenarios for 6.0
* Remove min_query_count - interfering with runs: remove minimum query count requirement for performance mode.
deepseek-r1-interactive.*.performance_sample_count_override = 4388
whisper.*.performance_sample_count_override = 1633
qwen3-vl-235b-a22b.*.performance_sample_count_override = 48289
yolo.*.performance_sample_count_override = 5000
Why 5000 when the dataset size is 1513?
I think it was already present in the submission checker; I just copied it down here. @manpreetssokhi, would it be fine if we bring it down to 1513?
500 is used here: https://github.com/mlcommons/inference/blob/master/vision/classification_and_detection/yolo/yolo_loadgen.py#L150
The count should be chosen so that the memory needed to load that many samples is ideally above a few MB (> L3 cache size) but still small enough to run on edge systems (not above, say, 256 MB).
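The sizing rule above can be sanity-checked with a little arithmetic. This sketch assumes 640×640 float32 RGB inputs purely for illustration; the actual YOLO preprocessing (resolution, dtype) may differ, so plug in the real dimensions before drawing conclusions.

```python
def perf_sample_bytes(count, height, width, channels=3, dtype_bytes=4):
    """Rough resident-memory estimate for `count` preprocessed samples
    held in RAM by the performance-sample pool."""
    return count * height * width * channels * dtype_bytes

# Illustrative numbers only: hypothetical 640x640 float32 RGB inputs.
one_sample_mib = perf_sample_bytes(1, 640, 640) / 2**20
pool_64_mib = perf_sample_bytes(64, 640, 640) / 2**20
print(f"1 sample ~ {one_sample_mib:.1f} MiB, 64 samples ~ {pool_64_mib:.0f} MiB")
```

Under these assumed dimensions, one sample is a few MiB, so even a modest count clears the "above L3 size" floor quickly; the 256 MB ceiling is what actually constrains the count on edge systems.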
I think in yolo_loadgen.py I had tried both 1525 and 500. I believe that the yolo.*.performance_sample_count_override = 5000 comes from retinanet being 5000. I am open to adjusting it to meet the MLC rules. I think it might be best to keep 1525 across:
https://github.com/mlcommons/inference/blob/master/vision/classification_and_detection/yolo/yolo_loadgen.py#L150
and
https://github.com/mlcommons/inference/pull/2446/files/97d502c84c1e574a6ca4f5a1d6bf5f877ffe1ad6..09537da212a780471f563fc63ff9d1f6edea4910#diff-60781b468b10ba9bf59f52a09114c63209a92f299bf957299a055a99900a35c8
what do you both think?
retinanet's performance_sample_count is actually 64. If the image size is the same as in retinanet, we can use 64 for yolo as well.
Sure, then in that case we can go with 64 for yolo here. Does this overwrite the min of 500 performance_sample_count in the base implementation I have? I am trying to understand the difference and, more broadly, the different components in the repo. My intention behind the higher numbers of 500 and 1525 was to make sure we meet the minimum 10-minute run.
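A toy sketch of the precedence question in this thread, under the assumption (worth confirming against the submission checker) that a `*.performance_sample_count_override` conf entry wins over the count hard-coded in the harness. Note this knob is separate from min_query_count: it controls how many samples the QSL keeps resident in memory, not how many queries loadgen issues.

```python
# Hypothetical model of the override precedence; all names/values
# are illustrative, not the repo's actual resolution logic.
CODE_DEFAULT = 500  # e.g. the value set in yolo_loadgen.py

def effective_sample_count(conf_override=None):
    # assumption: an explicit conf override wins over the code default
    return conf_override if conf_override is not None else CODE_DEFAULT

print(effective_sample_count())    # no override -> 500
print(effective_sample_count(64))  # conf override -> 64
```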
Removed 'stable-diffusion-xl' and 'dlrm-v3' from scenarios.
* Generate final report: update filter scenarios for version 6.0
* Updates
No description provided.