
gptq + nvfp4 smoke test on LLaMa 3.2 1B + wikitext perplexity #4303

Merged

vkuzo merged 10 commits into main from gh/vkuzo/249/head on Apr 22, 2026
Conversation

vkuzo (Contributor) commented Apr 20, 2026

Summary:

Adds a very short end-to-end test for nonsequential GPTQ (no modeling changes needed for dense models) on LLaMa 3.2 1B, evaluated with wikitext perplexity. This shows that nonsequential GPTQ is directionally working; the priority here is speed of iteration.

GPTQ currently takes around 3 minutes to complete for this model on a B200. This is before any performance optimizations.

Test Plan:

```
time torchao/prototype/gptq/gptq_nvfp4_llama3_2_1b_nonsequential_wikitext.sh 2>&1 | tee ~/tmp/20260421_gptq_run.txt
```

// full logs: https://gist.github.com/vkuzo/ca83b024b8268259e21f80522971f240

// baseline

| Tasks    | Version | Filter | n-shot | Metric          |   Value |   | Stderr |
|----------|--------:|--------|-------:|-----------------|--------:|---|--------|
| wikitext |       2 | none   |      0 | bits_per_byte   |  0.6699 | ± | N/A    |
|          |         | none   |      0 | byte_perplexity |  1.5910 | ± | N/A    |
|          |         | none   |      0 | word_perplexity | 11.9782 | ± | N/A    |

// nvfp4-rtn

| Tasks    | Version | Filter | n-shot | Metric          |   Value |   | Stderr |
|----------|--------:|--------|-------:|-----------------|--------:|---|--------|
| wikitext |       2 | none   |      0 | bits_per_byte   |  0.7169 | ± | N/A    |
|          |         | none   |      0 | byte_perplexity |  1.6436 | ± | N/A    |
|          |         | none   |      0 | word_perplexity | 14.2549 | ± | N/A    |

// nvfp4-gptq-nonsequential

| Tasks    | Version | Filter | n-shot | Metric          |   Value |   | Stderr |
|----------|--------:|--------|-------:|-----------------|--------:|---|--------|
| wikitext |       2 | none   |      0 | bits_per_byte   |  0.7078 | ± | N/A    |
|          |         | none   |      0 | byte_perplexity |  1.6333 | ± | N/A    |
|          |         | none   |      0 | word_perplexity | 13.7856 | ± | N/A    |
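For a quick read of these numbers, a minimal sketch (values copied from the tables above; not part of the test plan) computing how much of the RTN-induced word-perplexity regression nonsequential GPTQ recovers:

```python
# Word perplexity values copied from the tables above.
baseline = 11.9782  # bf16
rtn = 14.2549       # nvfp4 round-to-nearest
gptq = 13.7856      # nvfp4 + nonsequential GPTQ

# Fraction of the RTN-induced regression recovered by GPTQ.
recovered = (rtn - gptq) / (rtn - baseline)
print(f"GPTQ recovers {recovered:.1%} of the RTN gap")  # ~20.6%
```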

vkuzo added 2 commits April 20, 2026 20:52
vkuzo requested a review from jerryzh168 as a code owner April 20, 2026 20:52
pytorch-bot commented Apr 20, 2026

🔗 Helpful Links

🧪 See artifacts and rendered test results at hud.pytorch.org/pr/pytorch/ao/4303

Note: Links to docs will display an error until the docs builds have been completed.

❗ 1 Active SEV

There is 1 currently active SEV. If your PR is affected, please view it below:

This comment was automatically generated by Dr. CI and updates every 15 minutes.

meta-cla bot added the CLA Signed label Apr 20, 2026
vkuzo added a commit that referenced this pull request Apr 21, 2026
Summary:

very wip, don't look yet

Test Plan:

```
time python torchao/prototype/gptq/gptq_example.py --quantization nvfp4-gptq-nonsequential --dataset-id hellaswag --model-id unsloth/Llama-3.2-1B
```
ghstack-source-id: 2b1ed27
ghstack-comment-id: 4284207546
Pull-Request: #4303
vkuzo added the module: not user facing label Apr 21, 2026
vkuzo changed the title from "[wip] gptq + nvfp4 e2e test" to "gptq + nvfp4 smoke test on LLaMa 3.2 1B + wikitext perplexity" Apr 21, 2026
Comment on lines +332 to +333:

```
and "lm_head" not in fqn
and "o_proj" in fqn
```
jerryzh168 (Contributor):
nit: do we need both checks? or just checking for o_proj will achieve the same thing?

```
    return module
elif config.step == "convert":
    global gptq_convert_layer_counter
    print(f"gptq convert {gptq_convert_layer_counter}")
```
jerryzh168 (Contributor):
nit: convert to log?

vkuzo (Contributor, Author):
yes, in future PR

jerryzh168 (Contributor) commented:
@claude review

jerryzh168 (Contributor) left a review:
LGTM

claude bot commented Apr 21, 2026

Claude finished @jerryzh168's task in 1m 41s


Code Review


Overall this PR looks good — it adds NVFP4 support to GPTQ and a useful smoke test script. The perplexity results show the expected improvement of GPTQ over RTN (13.79 vs 14.25 word perplexity). A few items to consider:

Issues

  1. Global mutable state for progress counter (api.py:104)
    The gptq_convert_layer_counter module-level global won't reset between calls. If someone calls quantize_ with the GPTQ convert step multiple times (e.g., in a test suite or an interactive session), the counter will keep incrementing from where it left off. As noted by the TODO, this needs cleanup — consider passing a counter through or using logging with the module FQN instead (see the sketch after this list).

  2. print() instead of logging (api.py:129)
    Echoing Jerry's review nit — print(f"gptq convert {gptq_convert_layer_counter}") should use logging.info() or similar. print() statements in library code are harder for users to suppress; the sketch after this list covers this as well.

  3. TODO(before land) still present (gptq_example.py:452)

    # transformers 5.0.0 have a lot of errors with nvfp4 subclasses
    # TODO(before land): debug this further
    assert Version(transformers.__version__) < Version("5.0.0"), (

    This TODO explicitly says "before land" — make sure this is resolved or converted to a tracked issue before merging. Also, the hard assert will crash ungracefully for users who happen to have transformers >= 5.0.0; a clearer error message with guidance (e.g., "please downgrade transformers") would be more helpful.

  4. Hardcoded path in shell script (gptq_nvfp4_llama3_2_1b_nonsequential_wikitext.sh:7)

    COMMON_ARGS="--output-dir-prefix /home/dev/tmp/20260421 ..."

    The output directory prefix /home/dev/tmp/20260421 is hardcoded to a specific machine path. Consider parameterizing this (e.g., accepting it as a script argument or using $(mktemp -d)) so others can run the script without modification.

  5. skip_lm_head_o_proj redundancy (gptq_example.py:329-334) — Jerry's nit

    def skip_lm_head_o_proj(module, fqn):
        return (
            isinstance(module, torch.nn.Linear)
            and "lm_head" not in fqn
            and "o_proj" in fqn
        )

    The "lm_head" not in fqn check is unnecessary since lm_head won't match "o_proj" in fqn. Only checking "o_proj" in fqn would be sufficient. That said, keeping both is harmless for safety.

Minor Nits

  • Unused name variable (gptq_example.py:424): In for name, param in model.named_parameters(), name is unused. Convention is for _, param in ....

  • group_size parameter not relevant for NVFP4 (gptq_example.py:308): When quantization is "nvfp4-rtn" or "nvfp4-gptq-*", the output directory still appends _gs{args.group_size} (default 128), but NVFP4 uses a fixed group size of 16. This could be confusing in directory names (see the sketch after this list).
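For the group-size nit, a hypothetical sketch of directory naming that appends the suffix only when the group size is actually configurable; the argument names are assumed to mirror the example script:

```python
def output_dir_suffix(quantization: str, group_size: int) -> str:
    # NVFP4 uses a fixed group size of 16, so appending the configurable
    # group_size (default 128) to nvfp4 output dirs would be misleading.
    if quantization.startswith("nvfp4"):
        return f"_{quantization}"
    return f"_{quantization}_gs{group_size}"
```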

CI Status

The H100 test failure (test_cast_to_float8_e4m3fn_saturation_behavior) is unrelated to this PR — it's in the MX formats test suite and appears to be a pre-existing issue.


vkuzo requested a review from danielvegamyhre as a code owner April 22, 2026 11:24
vkuzo changed the base branch from gh/vkuzo/248/head to main April 22, 2026 11:24
vkuzo merged commit 1b04df3 into main Apr 22, 2026
44 of 66 checks passed