feat: Update Theoretical TFLOPS #1236

Merged
terrykong merged 2 commits into NVIDIA-NeMo:main from youngeunkwon0405:a100-flops
Oct 1, 2025
Conversation

@youngeunkwon0405
Contributor

@youngeunkwon0405 youngeunkwon0405 commented Sep 30, 2025

What does this PR do ?

Updates the theoretical TFLOPS table with entries for additional GPUs.

Issues

List issues that this PR closes (syntax):

Usage

  • You can potentially add a usage example below
# Add a code snippet demonstrating how to use this

Before your PR is "Ready for review"

Pre checks:

  • Make sure you read and followed Contributor guidelines
  • Did you write any new necessary tests?
  • Did you run the unit tests and functional tests locally? Visit our Testing Guide for how to run tests
  • Did you add or update any necessary documentation? Visit our Document Development Guide for how to write, build and test the docs.

Additional Information

  • ...

Summary by CodeRabbit

  • New Features
    • Expanded FLOPS tracking to recognize newer NVIDIA GPUs: B200, B300, GB200, and GB300.
    • Automatically reports theoretical performance for both float32 and bfloat16 on these devices, consistent with existing A100/H100 support.
    • Improves accuracy and completeness of performance metrics when running on supported hardware.
    • No user action required—FLOPS estimates will be applied automatically based on detected device.

Signed-off-by: Youngeun Kwon <youngeunk@nvidia.com>
@youngeunkwon0405 youngeunkwon0405 requested a review from a team as a code owner September 30, 2025 18:00
@youngeunkwon0405 youngeunkwon0405 changed the title Update Theoretical TFLOPS feat: Update Theoretical TFLOPS Sep 30, 2025
Comment thread nemo_rl/utils/flops_tracker.py
@coderabbitai
Contributor

coderabbitai Bot commented Sep 30, 2025

📝 Walkthrough

Walkthrough

Extends THEORETICAL_TFLOPS in nemo_rl/utils/flops_tracker.py with entries for NVIDIA B200/B300/GB200/GB300 for torch.float32 and torch.bfloat16. No other logic or files changed.

Changes

Cohort: TFLOPS mapping updates
File(s): nemo_rl/utils/flops_tracker.py
Summary of changes: Added new device–dtype keys to THEORETICAL_TFLOPS for NVIDIA B200/B300/GB200/GB300 across float32 and bfloat16; no functional logic changes.
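The shape of such a device–dtype lookup table can be sketched as below. This is an illustrative sketch only: the real nemo_rl table keys on torch dtypes (string dtype names are used here so the sketch runs without torch), and the numeric TFLOPS values shown are placeholders, not the figures added by this PR.

```python
# Illustrative sketch of a theoretical-TFLOPS table keyed by (device, dtype).
# NOTE: numeric values below are placeholders, not the specs from the PR;
# the real nemo_rl code keys on torch dtypes rather than strings.
THEORETICAL_TFLOPS = {
    ("NVIDIA A100", "float32"): 19.5,
    ("NVIDIA A100", "bfloat16"): 312.0,
    ("NVIDIA B200", "float32"): 80.0,     # placeholder value
    ("NVIDIA B200", "bfloat16"): 2250.0,  # placeholder value
}

def theoretical_tflops(device_name: str, dtype: str) -> float:
    """Look up peak theoretical TFLOPS for a detected device and dtype."""
    return THEORETICAL_TFLOPS[(device_name, dtype)]

print(theoretical_tflops("NVIDIA B200", "bfloat16"))  # → 2250.0
```

With this layout, adding support for a new GPU is purely additive: new keys are appended and the lookup logic is untouched, which matches the walkthrough's note that no other logic changed.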

Estimated code review effort

🎯 2 (Simple) | ⏱️ ~10 minutes

Pre-merge checks and finishing touches

❌ Failed checks (1 warning)

Test Results For Major Changes: ⚠️ Warning
Explanation: The pull request extends the FLOPS tracker by adding new theoretical TFLOPS entries for several NVIDIA GPUs, which directly affects how performance is estimated, yet the PR description contains no test results, benchmark comparisons, or verification that these new values produce correct or non-regressed outputs. Since the changes impact performance reporting, before-and-after performance metrics or unit tests validating the new entries are required.
Resolution: Please add test cases or benchmark results demonstrating that the new GPU TFLOPS values are correct and that performance estimates remain accurate, including clear before-and-after comparisons and the testing environment used.
✅ Passed checks (3 passed)

Description Check: ✅ Passed. Check skipped because CodeRabbit's high-level summary is enabled.
Title Check: ✅ Passed. The title succinctly and accurately reflects the core change in this pull request, which is updating the theoretical TFLOPS mapping; it is concise, clear, and directly tied to the implemented feature without unnecessary detail.
Docstring Coverage: ✅ Passed. No functions found in the changes; docstring coverage check skipped.


Contributor

@coderabbitai coderabbitai Bot left a comment


Actionable comments posted: 1

📜 Review details

Configuration used: Path: .coderabbit.yaml

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 8003918 and 0774384.

📒 Files selected for processing (1)
  • nemo_rl/utils/flops_tracker.py (1 hunks)
🧰 Additional context used
📓 Path-based instructions (2)
**/*.py

📄 CodeRabbit inference engine (CODING_GUIDELINES.md)

**/*.py: Follow the Google Python Style Guide for all Python code
Target Python 3.12+ for all Python code in NeMo-RL
Indent Python code with 4 spaces; do not use tabs
Python filenames should be snake_case (e.g., some_file.py)
Class names should be PascalCase
Function and method names should be snake_case
Local variable names should be snake_case; if starting with a number, prefix with k (e.g., k_99th_percentile)
Global variables should be UPPER_SNAKE_CASE and prefixed with G_ (e.g., G_MY_GLOBAL)
Constants should be UPPER_SNAKE_CASE
Avoid shadowing variables declared in an outer scope
Initialize all externally visible members of a class in the constructor
For public interfaces used outside a file, prefer docstrings over comments
Use comments mainly for code within a function or interfaces local to a file
Commented-out code must include a nearby comment explaining usage and why it is commented out; otherwise remove before merging
Use Google-style docstrings for classes and functions (Sphinx-parseable)
Avoid using reflection when functionality can be easily achieved without it
Limit except clauses to the smallest specific set of exceptions possible
For duck-typing via try/except, keep the try body minimal and use else for main logic
Add the NVIDIA copyright header (with current year) at the top of all Python files, excluding tests/ and test-only scripts

Files:

  • nemo_rl/utils/flops_tracker.py
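The naming and docstring rules above can be illustrated with a short sketch (the class, members, and values here are hypothetical, chosen only to demonstrate the conventions):

```python
# Sketch illustrating the coding-guideline naming rules; names are
# hypothetical examples, not actual NeMo-RL code.
G_DEVICE_COUNT = 8  # global: UPPER_SNAKE_CASE with G_ prefix
MAX_RETRIES = 3     # constant: UPPER_SNAKE_CASE


class FlopsTracker:  # class name: PascalCase
    """Track accumulated FLOPS per step (Google-style docstring)."""

    def __init__(self) -> None:
        # All externally visible members initialized in the constructor.
        self.total_flops = 0.0

    def record_step(self, step_flops: float) -> None:  # method: snake_case
        # Local variable starting with a number gets a k prefix.
        k_99th_percentile = 0.99
        self.total_flops += step_flops * k_99th_percentile


tracker = FlopsTracker()
tracker.record_step(100.0)
print(tracker.total_flops)
```

This file itself would be named in snake_case (e.g. flops_tracker.py) and, outside tests/, would carry the NVIDIA copyright header.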
nemo_rl/**/*.py

📄 CodeRabbit inference engine (CODING_GUIDELINES.md)

nemo_rl/**/*.py: Do not set non-None configuration defaults in code; YAML is the single source of truth for defaults
Access required config attributes directly (e.g., policy_cfg["precision"]) and assume presence; do not introduce hidden defaults
Express configuration optionality via TypedDict using typing.NotRequired
When adding a new config key to a TypedDict subclass, document the key’s purpose, valid values/types, and recommended default in code
For any class or function decorated with @ray.remote, add '# pragma: no cover' on the class/def line (and on remote functions)

Files:

  • nemo_rl/utils/flops_tracker.py
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (3)
  • GitHub Check: Lint check
  • GitHub Check: Post automodel integration comment / Comment on PR
  • GitHub Check: Post submodule check comment / Comment on PR

Comment thread nemo_rl/utils/flops_tracker.py Outdated
Signed-off-by: Youngeun Kwon <youngeunk@nvidia.com>
Comment thread nemo_rl/utils/flops_tracker.py
@youngeunkwon0405 youngeunkwon0405 self-assigned this Sep 30, 2025
@youngeunkwon0405 youngeunkwon0405 added the CI:L0 Run doctests and unit tests label Sep 30, 2025
@youngeunkwon0405 youngeunkwon0405 added CI:L1 Run doctests, unit tests, and functional tests and removed CI:L0 Run doctests and unit tests labels Sep 30, 2025
@youngeunkwon0405
Contributor Author

For the record.

```
root@ptyche0126:/lustre/fsw/coreai_dlalgo_llm/youngeunk/mount/sandbox# python
Python 3.12.3 (main, Jun 18 2025, 17:59:45) [GCC 13.3.0] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import torch
>>> torch.backends.cuda.matmul.allow_tf32
/usr/local/lib/python3.12/dist-packages/torch/backends/cuda/__init__.py:131: UserWarning: Please use the new API settings to control TF32 behavior, such as torch.backends.cudnn.conv.fp32_precision = 'tf32' or torch.backends.cuda.matmul.fp32_precision = 'ieee'. Old settings, e.g, torch.backends.cuda.matmul.allow_tf32 = True, torch.backends.cudnn.allow_tf32 = True, allowTF32CuDNN() and allowTF32CuBLAS() will be deprecated after Pytorch 2.9. Please see https://pytorch.org/docs/main/notes/cuda.html#tensorfloat-32-tf32-on-ampere-and-later-devices (Triggered internally at /opt/pytorch/pytorch/aten/src/ATen/Context.cpp:80.)
```

@youngeunkwon0405
Contributor Author

Hi @terrykong, can I ask for your help on merging this PR?

@terrykong terrykong added CI:L1 Run doctests, unit tests, and functional tests and removed CI:L1 Run doctests, unit tests, and functional tests labels Oct 1, 2025
@terrykong terrykong enabled auto-merge (squash) October 1, 2025 01:42
@terrykong terrykong merged commit f7645f3 into NVIDIA-NeMo:main Oct 1, 2025
115 of 122 checks passed
@coderabbitai coderabbitai Bot mentioned this pull request Nov 19, 2025
4 tasks
PrinsYin pushed a commit to PrinsYin/RL that referenced this pull request Nov 30, 2025
Signed-off-by: Youngeun Kwon <youngeunk@nvidia.com>
yuanhangsu1986 pushed a commit to yuanhangsu1986/RL-Nemontron-Edge-Omni that referenced this pull request Feb 21, 2026
Signed-off-by: Youngeun Kwon <youngeunk@nvidia.com>
Signed-off-by: yuanhangs <yuanhangs@nvidia.com>

Labels

CI:L1 Run doctests, unit tests, and functional tests

Projects

None yet

Development

Successfully merging this pull request may close these issues.

3 participants