support gptqmodel 7.0.0 and fix bug in CI #1772

Open
xin3he wants to merge 2 commits into main from xinhe/4-30

Conversation

@xin3he (Contributor)

xin3he commented Apr 30, 2026

Description

GPTQModel 7.0.0 removes the `Quant` string from its Linear class names.
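The rename can be absorbed with a small compatibility shim that tries the legacy class name first and falls back to the shortened one. This is a minimal sketch, not AutoRound's actual code; the module and class names are illustrative:

```python
import importlib


def resolve_linear_class(module_name: str, legacy_name: str):
    """Resolve a quant-linear class whose name dropped the "Quant" string
    in GPTQModel 7.0.0 (e.g. "MarlinQuantLinear" -> "MarlinLinear")."""
    module = importlib.import_module(module_name)
    if hasattr(module, legacy_name):  # gptqmodel < 7.0.0 keeps the old name
        return getattr(module, legacy_name)
    # gptqmodel >= 7.0.0: the same class lives under the shortened name
    return getattr(module, legacy_name.replace("Quant", ""))
```

Resolving by string at call time means a single code path works against both major versions without pinning.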

Type of Change

Bug fix

Related Issues

Fixes or relates to #1771

Checklist Before Submitting

  • My code has been tested locally.
  • Documentation has been updated as needed.
  • New or updated tests are included where applicable.
  • The CUDA CI has passed. You can trigger it by commenting /azp run Unit-Test-CUDA-AutoRound.

xin3he added 2 commits April 30, 2026 15:21
Signed-off-by: Xin He <xin3.he@intel.com>
Signed-off-by: Xin He <xin3.he@intel.com>
Copilot AI review requested due to automatic review settings April 30, 2026 08:34
@xin3he (Contributor, Author)

xin3he commented Apr 30, 2026

/azp run Unit-Test-CUDA-AutoRound

@azure-pipelines

Azure Pipelines successfully started running 1 pipeline(s).

Copilot AI (Contributor) left a comment

Pull request overview

Updates AutoRound’s GPTQModel integration to accommodate GPTQModel 7.0.0 API/class renames and adjusts some test helper logic used in CI/model loading.

Changes:

  • Add GPTQModel 7.0.0 compatibility for Marlin kernels and ExllamaV2 class naming changes.
  • Refactor GPTQModel backend linear selection to unify GPTQ/AWQ handling and dynamically resolve renamed classes.
  • Simplify get_tiny_model(..., from_config=True) config-only model construction path in tests.

Reviewed changes

Copilot reviewed 4 out of 4 changed files in this pull request and generated 3 comments.

File Description
test/helpers.py Adjusts model-path selection and refactors config-only tiny model instantiation logic.
auto_round_extension/cuda/gptqmodel_marlin.py Adds GPTQModel 7.0.0 Marlin import/argument handling.
auto_round/inference/convert_model.py Switches ExllamaV2 class import based on GPTQModel version.
auto_round/inference/backend.py Unifies GPTQModel GPTQ/AWQ backend routing and resolves renamed QuantLinear classes via dynamic import.

Comment thread test/helpers.py
Comment on lines +307 to +314
architectures = getattr(config, "architectures", [None])[0]
if (
    is_mllm
    and architectures.endswith("Model")
    and hasattr(base_lib, n := architectures.replace("Model", "ForConditionalGeneration"))
):
    model_cls = getattr(base_lib, n)
elif hasattr(base_lib, architectures):
Copilot AI Apr 30, 2026
architectures can be None when the config doesn’t define an architectures list (you default to [None]). In that case architectures.endswith("Model") will raise an AttributeError. Add a guard (e.g., ensure architectures is a str before calling string methods) and fall back to AutoModelForCausalLM when it’s missing/empty.

Suggested change

config_architectures = getattr(config, "architectures", None)
architectures = (
    config_architectures[0]
    if isinstance(config_architectures, (list, tuple)) and len(config_architectures) > 0
    else None
)
if (
    is_mllm
    and isinstance(architectures, str)
    and architectures.endswith("Model")
    and hasattr(base_lib, n := architectures.replace("Model", "ForConditionalGeneration"))
):
    model_cls = getattr(base_lib, n)
elif isinstance(architectures, str) and hasattr(base_lib, architectures):

Comment on lines +58 to +64
if NEW_VERSION_7_0:
    import gptqmodel.utils.marlin as gptqmodel_marlin_kernels
else:
    try:
        import gptqmodel_marlin_kernels  # pylint: disable=E0401
    except ImportError as e:
        marlin_import_exception = e
Copilot AI Apr 30, 2026

For gptqmodel>=7.0.0 you import gptqmodel.utils.marlin without handling ImportError, but older versions store the exception and raise a clear ValueError when the backend is used. Wrap the v7 import in the same try/except and set marlin_import_exception so failures produce a consistent, actionable error message.
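One way to keep the error handling uniform across both versions is to funnel both import paths through a single helper that stores the exception instead of letting it propagate. A hedged sketch under that assumption; the helper name is hypothetical, not the PR's actual code:

```python
import importlib


def try_import_marlin_kernels(new_version_7_0: bool):
    """Return (module, import_error); exactly one of the two is None.

    The caller can later raise a clear ValueError from the stored error
    when the Marlin backend is actually selected, matching the pre-7.0
    behavior of deferring the failure until the kernels are needed.
    """
    name = "gptqmodel.utils.marlin" if new_version_7_0 else "gptqmodel_marlin_kernels"
    try:
        return importlib.import_module(name), None
    except ImportError as err:
        return None, err
```

Deferring the raise keeps module import cheap and produces the same actionable message whichever GPTQModel version is installed.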

Comment on lines 893 to 896
if "awq" in backend:
    if "gptqmodel" in backend:
        return get_gptqmodel_infer_linear(backend, is_awq=True)  # was get_gptqmodel_awq_infer_linear(backend)
else:
Copilot AI Apr 30, 2026

This if "gptqmodel" in backend branch inside the generic "awq" in backend block is now unreachable because the earlier if "gptqmodel" in backend: returns first for all gptqmodel backends. Consider removing this dead branch to avoid confusion and keep backend routing maintainable.
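Removing the dead branch leaves a routing order in which gptqmodel backends are dispatched before the generic AWQ check, so the generic branch never needs a gptqmodel special case. A simplified sketch of that ordering; the function name and string return values are illustrative stand-ins for the real linear-class lookups:

```python
def route_backend(backend: str) -> str:
    """Dispatch a backend string to a handler label.

    gptqmodel backends are handled first, so the generic branches
    below never see a "gptqmodel" string and stay free of dead code.
    """
    if "gptqmodel" in backend:
        return "gptqmodel-awq" if "awq" in backend else "gptqmodel-gptq"
    if "awq" in backend:
        return "awq"
    return "gptq"
```

Routing on the most specific token first is what makes the nested `"gptqmodel"` check inside the AWQ branch unreachable in the PR's code.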
