fix compile #1774
Open
wenhuach21 wants to merge 2 commits into main from
Conversation
Contributor
Pull request overview
This PR appears intended to address a compilation-related issue by disabling torch.compile usage for block_forward in both the legacy and new compressor paths, as well as in the quantization algorithm base.
Changes:
- Commented out `block_forward` compile/selection logic in the legacy compressor base.
- Commented out `block_forward` compile/selection logic in the new compressor base hardware setup.
- Removed the `enable_torch_compile` branch that compiled/cached `block_forward` in the quantization base resolution (a sketch of the assumed compile helper follows this list).
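For context, `compile_func` appears throughout this diff as the project's wrapper for compiling `block_forward`, but its definition is not shown here. The sketch below is only an assumption of how such a helper is commonly written around `torch.compile`, with a fallback to the eager function; it is not AutoRound's actual implementation.

```python
import torch

def compile_func(func, device):
    """Hypothetical sketch of a compile helper like the one referenced in
    this PR. `device` is accepted for API parity but unused here; the real
    auto_round helper may select a backend or mode based on it."""
    try:
        # torch.compile with default options; the real helper may pass
        # backend/mode arguments derived from `device`.
        return torch.compile(func)
    except Exception:
        # If compilation is unavailable, fall back to the eager function,
        # which is effectively what this PR now does unconditionally.
        return func
```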
Reviewed changes
Copilot reviewed 3 out of 3 changed files in this pull request and generated 3 comments.
| File | Description |
|---|---|
| auto_round/compressors_new/base.py | Disables (comments out) block-forward compile/selection during hardware setup. |
| auto_round/compressors/base.py | Disables (comments out) legacy `block_forward` assignment/compile logic in `__init__`. |
| auto_round/algorithms/quantization/base.py | Removes the compiled/cached `block_forward` resolution branch, making resolution always return the plain `block_forward`. |
Comment on lines +536 to +548
```python
# if (
#     (self.act_bits < 16 and (not self.act_dynamic or self.data_type == "nvfp"))  # have hooks
#     or self.enable_alg_ext  # Use imatrix
#     or not self.disable_opt_rtn  # Use imatrix
# ):
#     self.block_forward = block_forward
# else:
#     # TODO FIXME
#     # This function could not be compiled, causing a large accuracy drop when `enable_alg_ext` is used.
#     # To avoid issues, remove it in all scenarios except WOQ.
#     self.block_forward = (
#         compile_func(block_forward, self.device) if self.enable_torch_compile else block_forward
#     )
```
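The TODO in this block attributes a large accuracy drop to compiling `block_forward` when `enable_alg_ext` is used. One way to confirm such a divergence is to compare compiled and eager outputs on identical inputs. The sketch below is a generic parity check; the `(block, inputs)` call shape is an assumption for illustration, not AutoRound's actual `block_forward` signature.

```python
import torch

def check_compile_parity(block, inputs, atol=1e-4):
    """Hypothetical parity check: run the same forward in eager mode and
    under torch.compile, then report the largest elementwise difference."""
    def forward(b, x):
        return b(x)

    with torch.no_grad():
        eager_out = forward(block, inputs)
        compiled_out = torch.compile(forward)(block, inputs)
    max_diff = (eager_out - compiled_out).abs().max().item()
    if max_diff > atol:
        print(f"compile/eager mismatch: max abs diff {max_diff:.3e}")
    return max_diff
```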
Comment on lines +974 to +977
```python
# if self.enable_torch_compile and not _needs_plain_forward and self.need_calib:
#     self.block_forward = compile_func(block_forward, self.compress_context.device)
# else:
#     self.block_forward = block_forward
```
Comment on lines 377 to 381
```python
if (
    self.config.is_act_quantize and (not self.config.act_dynamic or self.config.is_act_nv_fp)
) or self.enable_alg_ext:
    self._resolved_block_forward = block_forward
elif self.compress_context.enable_torch_compile:
    compiled = self.__dict__.get("_compiled_block_forward")
    if compiled is None:
        compiled = compile_func(block_forward, self.compress_context.device)
        self._compiled_block_forward = compiled
    self._resolved_block_forward = compiled
else:
    self._resolved_block_forward = block_forward
```
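With this branch removed, resolution no longer consults `enable_torch_compile` or the `_compiled_block_forward` cache. Based on the review description ("making resolution always return plain block_forward"), the simplified logic presumably reduces to something like the sketch below; the method name and shape are assumptions, not the file's actual code.

```python
def _resolve_block_forward(self):
    # Sketch only: every path now resolves to the plain (eager)
    # block_forward; no compiled variant is created or cached.
    self._resolved_block_forward = block_forward
```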
Description
Please briefly describe your main changes and the motivation.
Type of Change
Bug fix
Related Issues
Fixes or relates to #
Checklist Before Submitting
/azp run Unit-Test-CUDA-AutoRound.