
[Llama ROPE] Fix torch export but also slow downs in forward#29198

Merged
ArthurZucker merged 25 commits intomainfrom
llama-sincos-bc
Feb 28, 2024
Conversation

@ArthurZucker
Collaborator

@ArthurZucker ArthurZucker commented Feb 22, 2024

What does this PR do?

Reverts some of the breaking changes introduced in #29109.
The release notes mention a breaking change; this PR makes access to the sin/cos cache truly backward compatible, without memory, tracing, or forward-pass issues.
Fixes #29173
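The backward-compatible cache access described above could be kept readable while steering callers away from it via a deprecation shim. The sketch below is illustrative only; the class and attribute names are hypothetical and do not reproduce the actual transformers implementation:

```python
import math
import warnings

class RotaryEmbeddingSketch:
    """Toy rotary-embedding cache illustrating a BC deprecation shim.

    `sin_cached` stays readable for old code but emits a warning,
    nudging callers toward computing sin/cos in the forward pass.
    """

    def __init__(self, dim=4, max_positions=8, base=10000.0):
        inv_freq = [base ** (-2 * i / dim) for i in range(dim // 2)]
        # Precompute the sin table once at init time.
        self._sin_cached = [
            [math.sin(pos * f) for f in inv_freq] for pos in range(max_positions)
        ]

    @property
    def sin_cached(self):
        # Old attribute kept for backward compatibility.
        warnings.warn(
            "sin_cached is deprecated; sin/cos are now computed in forward",
            DeprecationWarning,
        )
        return self._sin_cached

emb = RotaryEmbeddingSketch()
with warnings.catch_warnings(record=True) as caught:
    warnings.simplefilter("always")
    table = emb.sin_cached
print(len(table), len(caught))  # 8 cached rows, 1 deprecation warning
```

Old code that read the attribute keeps working unchanged; the warning gives downstream users a release cycle to migrate.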

@HuggingFaceDocBuilderDev

The docs for this PR live here. All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.

Contributor

@younesbelkada younesbelkada left a comment


Seems good once the CI passes!

@ArthurZucker ArthurZucker marked this pull request as ready for review February 22, 2024 04:38
@ArthurZucker ArthurZucker requested a review from gante February 22, 2024 09:14
Contributor

@gante gante left a comment


Missing: a test for torch.compile!

@fxmarty
Contributor

fxmarty commented Feb 22, 2024

+1 @gante, there should be tests for torch.compile and torch.compile with fullgraph=True
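A minimal regression test along the lines gante and fxmarty suggest might look like this. The function and test names are hypothetical, not the ones added to the repository; it assumes torch >= 2.0 for the compile API:

```python
import torch

def rotary_sin_cos(position_ids, inv_freq):
    # Branch-free sin/cos: nothing here depends on tensor *values*,
    # so dynamo can capture the whole function as one graph.
    freqs = position_ids[:, None].float() * inv_freq[None, :]
    return freqs.sin(), freqs.cos()

def test_rotary_compiles_fullgraph():
    inv_freq = 1.0 / (10000.0 ** (torch.arange(0, 4, 2).float() / 4))
    position_ids = torch.arange(6)
    eager_sin, eager_cos = rotary_sin_cos(position_ids, inv_freq)
    # fullgraph=True raises on any graph break instead of silently
    # falling back to eager; backend="eager" skips codegen so this
    # only checks traceability, which keeps the test fast.
    compiled = torch.compile(rotary_sin_cos, fullgraph=True, backend="eager")
    sin, cos = compiled(position_ids, inv_freq)
    assert torch.allclose(sin, eager_sin)
    assert torch.allclose(cos, eager_cos)

test_rotary_compiles_fullgraph()
```

The fullgraph=True flag is the important part: without it, a graph break falls back to eager execution and the test would pass while silently losing the compile speedup.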

@fxmarty fxmarty mentioned this pull request Feb 22, 2024
4 tasks
@ArthurZucker
Collaborator Author

Yes yes!

@ArthurZucker ArthurZucker changed the title [Llama ROPE] Export but also slow down in forward. [Llama ROPE] Fix torch export but also slow downs in forward Feb 28, 2024
@ArthurZucker ArthurZucker merged commit 8a8a0a4 into main Feb 28, 2024
@ArthurZucker ArthurZucker deleted the llama-sincos-bc branch February 28, 2024 09:45
ArthurZucker added a commit that referenced this pull request Feb 28, 2024
* remove control flow

* update gptneox

* update ....

* nits

* Actually let's just break. Otherwise we are silently failing which imo is not optimal

* version BC

* fix tests

* fix eager causal

* nit

* add a test

* style

* nits

* nits

* more nits for the test

* update and fix

* make sure cuda graphs are not skipped

* read token is needed for meta llama

* update!

* fiixup

* compile test should be slow

* fix thet fix copies

* stle 🫠
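The "remove control flow" commit above refers to the kind of data-dependent cache-growing branch that breaks tracing. A simplified before/after sketch (class names are illustrative, not the actual transformers code):

```python
import torch

class BranchyRope(torch.nn.Module):
    """Old-style: grows a cache behind a data-dependent `if`.

    The branch on seq_len causes graph breaks under torch.compile,
    and torch.export only ever records one side of the condition.
    """
    def __init__(self, dim=4, base=10000.0):
        super().__init__()
        self.inv_freq = 1.0 / (base ** (torch.arange(0, dim, 2).float() / dim))
        self.max_seq_len_cached = 0
        self.sin_cached = torch.empty(0, dim // 2)

    def forward(self, seq_len):
        if seq_len > self.max_seq_len_cached:  # the problematic branch
            t = torch.arange(seq_len).float()
            self.sin_cached = (t[:, None] * self.inv_freq[None, :]).sin()
            self.max_seq_len_cached = seq_len
        return self.sin_cached[:seq_len]

class BranchlessRope(torch.nn.Module):
    """New-style: recompute sin in forward; no cache, no branch."""
    def __init__(self, dim=4, base=10000.0):
        super().__init__()
        self.inv_freq = 1.0 / (base ** (torch.arange(0, dim, 2).float() / dim))

    def forward(self, position_ids):
        return (position_ids[:, None].float() * self.inv_freq[None, :]).sin()

old, new = BranchyRope(), BranchlessRope()
assert torch.allclose(old(5), new(torch.arange(5)))  # same values either way
```

The branchless version is also what makes CUDA graph capture safe ("make sure cuda graphs are not skipped" above), since capture requires a fixed sequence of kernels with no Python-side decisions.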
ArthurZucker added a commit that referenced this pull request Mar 1, 2024

Development

Successfully merging this pull request may close these issues.

llama model: causal_mask does not exist

5 participants