Support new model Qwen/Qwen3.6-35B-A3B #1705
Conversation
Signed-off-by: lvliang-intel <liang1.lv@intel.com>
Pull request overview
Adds compatibility for the new HuggingFace model Qwen/Qwen3.6-35B-A3B by improving AutoRound’s HF conversion logic when a composite checkpoint is loaded as a text-only submodel and module path prefixes differ.
Changes:
- Added _remap_paths_for_text_model() to remap quantization block paths using Transformers checkpoint conversion mappings.
- Updated get_layer_config() to retry layer discovery with remapped paths when the initial block-path match finds zero layers.
Shall we add a version check if this is related to a Transformers bug, so we can handle it cleanly if it's fixed in future releases?
This issue does not occur when using
/azp run Unit-Test-CUDA-AutoRound
Azure Pipelines successfully started running 1 pipeline(s).
Description
Support new model Qwen/Qwen3.6-35B-A3B.
Qwen/Qwen3.6-35B-A3B still uses the same model type as Qwen/Qwen3.5-35B-A3B.
This PR adds compatibility for Qwen/Qwen3.6-35B-A3B in the AutoRound HF conversion path.
Type of Change
Related Issues
Fixes or relates to #
Checklist Before Submitting