[wip][meta-llama][torch.compile] Fix issues with torch.compile #32102
anijain2305 wants to merge 1 commit into huggingface:main
Conversation
- torch._dynamo.mark_static_address(new_layer_key_cache)
- torch._dynamo.mark_static_address(new_layer_value_cache)
+ # torch._dynamo.mark_static_address(new_layer_key_cache)
+ # torch._dynamo.mark_static_address(new_layer_value_cache)
This is in the __init__ function. A proper fix would be to hoist the construction of the StaticCache so that it happens outside the torch.compile scope.
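A minimal sketch of that hoisting idea, assuming an already-built model, config, and input_ids; the StaticCache constructor arguments shown here are an assumption, not the final API:

```python
import torch
from transformers import StaticCache

# Build the cache eagerly, before compilation, so its tensors already
# have stable storage when torch.compile traces the model.
cache = StaticCache(config, max_batch_size=1, max_cache_len=2048)  # hypothetical arguments

compiled_forward = torch.compile(model.forward, mode="reduce-overhead")

# The compiled region only consumes the pre-built cache; it never
# constructs it, so no mark_static_address call is needed in __init__.
logits = compiled_forward(input_ids, past_key_values=cache)
```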
We could maybe compile _generate instead: generate would initialize the cache and then call the compiled _generate.
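Sketching that split, with generate owning the cache setup and only the decode loop compiled; the class, the greedy loop, and the StaticCache arguments are all illustrative assumptions:

```python
import torch
from transformers import StaticCache

class Generator(torch.nn.Module):
    def __init__(self, model, config):
        super().__init__()
        self.model = model
        self.config = config
        # Compile only the decode loop; cache construction stays eager.
        self._compiled_generate = torch.compile(self._generate)

    def generate(self, input_ids, max_new_tokens):
        cache = StaticCache(  # hypothetical constructor arguments
            self.config,
            max_batch_size=input_ids.shape[0],
            max_cache_len=input_ids.shape[1] + max_new_tokens,
        )
        return self._compiled_generate(input_ids, cache, max_new_tokens)

    def _generate(self, input_ids, cache, max_new_tokens):
        # Greedy decode; assumes the model returns logits and updates `cache` in place.
        for _ in range(max_new_tokens):
            logits = self.model(input_ids, past_key_values=cache)
            next_token = logits[:, -1].argmax(dim=-1, keepdim=True)
            input_ids = torch.cat([input_ids, next_token], dim=-1)
        return input_ids
```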
Ok, I looked more into why this was not a problem for gpt-fast.
In gpt-fast, the KV cache is an instance of nn.Module, with the k and v caches registered as buffers. Since buffers are considered static by torch.compile, we don't need to annotate them with mark_static_address. Here is the pointer: https://github.com/pytorch-labs/gpt-fast/blob/main/model.py#L73
@ArthurZucker @gante Would you be willing to carry out a refactor similar to gpt-fast by making these caches instances of nn.Module? This would improve compatibility with torch.compile.
Thanks to @mlazos and @yanboliang for pointing me to gpt-fast. cc @Chillee
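For reference, here is a condensed sketch of the gpt-fast pattern mentioned above; shapes and dtype are illustrative. Because k_cache and v_cache are registered buffers on an nn.Module, torch.compile treats them as static with no mark_static_address annotation:

```python
import torch
from torch import nn

class KVCache(nn.Module):
    def __init__(self, max_batch_size, n_heads, max_seq_len, head_dim, dtype=torch.bfloat16):
        super().__init__()
        cache_shape = (max_batch_size, n_heads, max_seq_len, head_dim)
        # Buffers are considered static by torch.compile, unlike plain tensors.
        self.register_buffer("k_cache", torch.zeros(cache_shape, dtype=dtype))
        self.register_buffer("v_cache", torch.zeros(cache_shape, dtype=dtype))

    def update(self, input_pos, k_val, v_val):
        # Scatter the new keys/values into the static buffers at `input_pos`
        # and return the full caches for attention.
        self.k_cache[:, :, input_pos] = k_val
        self.v_cache[:, :, input_pos] = v_val
        return self.k_cache, self.v_cache
```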
Yeah! TBH, as long as we don't see a performance regression this should be alright (AKA just inheriting from nn.Module + marking the caches as buffers).
You can also potentially add it here!
@anijain2305 sounds great!
Will you open a PR for it? :D
#32159: we actually needed the deep copy, so I went ahead and added that for now; testing is still required on my end.
This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed, please comment on this thread. Please note that issues that do not follow the contributing guidelines are likely to be ignored.
What does this PR do?
Fixes # (issue)
Before submitting
Pull Request section?
to it if that's the case.
documentation guidelines, and
here are tips on formatting docstrings.
Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR.