Generation / FIX: Fix multi-device generation #30746

Merged
younesbelkada merged 11 commits into huggingface:main from younesbelkada:fix-multi-gpu-bnb
May 13, 2024

Conversation

@younesbelkada
Contributor

@younesbelkada younesbelkada commented May 10, 2024

What does this PR do?

Fixes failing tests for multi-device generation (e.g. multi-GPU, GPU + CPU, etc.). The fix is simply to make sure pad_token_id and all other special tokens are initialized on the correct device (e.g. for models offloaded to CPU, self.device returns "meta", which breaks generation afterwards 😢).
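The failure mode can be sketched with plain torch tensors. This is a minimal illustration, not the transformers implementation (the variable names are made up): a tensor created on the "meta" device carries shape and dtype but no data, so using it in a real computation typically fails, while deriving the device from the actual input tensor works.

```python
import torch

input_ids = torch.tensor([[1, 2, 0, 0]])  # real tensor on CPU

# Broken path: a pad token created on the meta device (what `self.device`
# can report for an offloaded model) has no data to compare against.
pad_meta = torch.tensor(0, device="meta")
try:
    input_ids.ne(pad_meta)
except Exception as e:  # typically raises at this point
    print("meta-device comparison failed:", type(e).__name__)

# Fixed path: derive the device from the model input, not the model.
pad_ok = torch.tensor(0, device=input_ids.device)
mask = input_ids.ne(pad_ok).long()  # attention mask inferred from padding
print(mask.tolist())
```

This mirrors the `inputs.ne(pad_token_id)` pattern used when inferring an attention mask from padding.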

cc @gante @ArthurZucker

@younesbelkada younesbelkada marked this pull request as ready for review May 10, 2024 16:41
@younesbelkada
Contributor Author

The fix is to always initialize the special tokens on the correct device; I updated the PR description accordingly.

@HuggingFaceDocBuilderDev

The docs for this PR live here. All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.

@ArthurZucker
Collaborator

cc @gante !

Contributor

@gante gante left a comment


Added a suggestion to enable this change on all modalities!

Comment thread src/transformers/generation/utils.py Outdated
Comment on lines +1522 to +1526
device = None
if "input_ids" in model_kwargs and isinstance(model_kwargs["input_ids"], torch.Tensor):
device = model_kwargs["input_ids"].device

self._prepare_special_tokens(generation_config, kwargs_has_attention_mask, device=device)
Contributor


If I get it right: the device comes from the main model input, and not from the model itself.

Assuming what I wrote above is correct, we should get the device variable after the _prepare_model_inputs call, which extracts the main model input from the different keywords we might see (for instance, Whisper does not use input_ids). In that case, I would move these lines to after L1532 (currently batch_size = inputs_tensor.shape[0]), and use device=inputs_tensor.device :D
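The suggested ordering can be sketched with a hypothetical helper (the helper name and key list are illustrative; transformers' internal `_prepare_model_inputs` does the real extraction): resolve the main model input first, then take the device from that tensor, so modalities that don't use `input_ids` are covered too.

```python
import torch

def resolve_input_device(model_kwargs):
    """Hypothetical stand-in for the suggested flow: find the main model
    input among common keys (e.g. Whisper feeds `input_features`, not
    `input_ids`) and return its device."""
    for key in ("input_ids", "input_features", "inputs_embeds"):
        value = model_kwargs.get(key)
        if isinstance(value, torch.Tensor):
            return value.device
    return None  # let the caller fall back to a default

# Whisper-style kwargs: no `input_ids`, device is still resolved correctly.
device = resolve_input_device({"input_features": torch.zeros(1, 80, 3000)})
special_token = torch.tensor(0, device=device)  # lands on the input's device
```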

Contributor Author


Makes total sense! Done!

@younesbelkada younesbelkada requested a review from gante May 13, 2024 10:02
@younesbelkada younesbelkada changed the title from "Generation / FIX: Attempt to fix multi-device generation" to "Generation / FIX: Fix multi-device generation" May 13, 2024
Contributor

@gante gante left a comment


perfect, thank you for iterating 👌

@younesbelkada
Contributor Author

Thanks! cc @ArthurZucker for the final review

Collaborator

@ArthurZucker ArthurZucker left a comment


Thanks for fixing. A small test is welcome (instead of the slow one!) to make sure we catch this earlier!

)
can_infer_attention_mask = is_pad_token_in_inputs * is_pad_token_not_equal_to_eos_token_id
attention_mask_from_padding = inputs.ne(pad_token_id).long()

Collaborator


weird that this is changed 😄

@younesbelkada
Contributor Author

Thanks! I don't think we can add tests here, as they would require a GPU; this is implicitly tested through our models + quantization slow tests, which is how I caught the bug.

@younesbelkada younesbelkada merged commit f823fec into huggingface:main May 13, 2024
@younesbelkada younesbelkada deleted the fix-multi-gpu-bnb branch May 13, 2024 12:35
@ArthurZucker
Collaborator

OK if there is no way to repro with a minimal trick, e.g. voluntarily putting the weights on the meta device!

eginhard added a commit to idiap/coqui-ai-TTS that referenced this pull request Jun 16, 2024
….41.1

Fixes #31. The handling of special tokens in `transformers` was changed in
huggingface/transformers#30624 and
huggingface/transformers#30746. This updates the XTTS
streaming code accordingly.