
Fix: Conditionally import torch.distributed.algorithms.join in accelerator.py #3962

Merged
SunMarc merged 1 commit into huggingface:main from 0xDELUXA:fix-torch-distributed-import on Mar 11, 2026

Conversation

@0xDELUXA
Contributor

0xDELUXA commented on Mar 6, 2026

What does this PR do?

torch.distributed is not available in all PyTorch builds (for example, Windows ROCm). Importing torch.distributed.algorithms.join.Join unconditionally at the top level of accelerator.py causes an immediate crash with:

ModuleNotFoundError: No module named 'torch._C._distributed_c10d'; 'torch._C' is not a package

This makes import accelerate fail entirely on those builds, even for users who have no intention of using distributed training.
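
For context, the top-level torch.distributed module is designed to import cleanly even on such builds and exposes torch.distributed.is_available() for exactly this check; a minimal sketch of the asymmetry involved (behavior on distributed-less builds as described above):

```python
import torch

# torch.distributed itself imports on every build; on builds compiled
# without distributed support (e.g. Windows ROCm) it simply reports
# that the feature is unavailable:
print(torch.distributed.is_available())  # False on such builds

# Deeper distributed submodules are not guarded the same way, so the
# following line is what raises the ModuleNotFoundError quoted above
# on those builds (kept commented so this sketch runs everywhere):
# from torch.distributed.algorithms.join import Join
```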

Changes

  • Wrapped the from torch.distributed.algorithms.join import Join statement in an if torch.distributed.is_available(): block (see the sketch below)
  • Join is only used inside join_uneven_inputs(), which is documented as multi-GPU DDP only, so this guard is safe and consistent with how the rest of the file handles optional distributed functionality
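
A minimal sketch of the guarded import this adds to accelerator.py; the surrounding module code is omitted here, so treat it as illustrative rather than the verbatim diff:

```python
import torch

# Import the distributed Join context manager only on PyTorch builds
# that actually ship distributed support (e.g. not Windows ROCm):
if torch.distributed.is_available():
    from torch.distributed.algorithms.join import Join

# Join is referenced only inside Accelerator.join_uneven_inputs(), a
# multi-GPU DDP feature, so a plain `import accelerate` never needs
# the name to be defined on distributed-less builds.
```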

Prior art

The same fix was applied to huggingface/diffusers in:

A previous fix for the same class of issue was made in huggingface/transformers#44507 (referenced again in the follow-up comment below).

Before submitting

  • This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
  • Did you read the contributor guideline,
    Pull Request section?
  • Was this discussed/approved via a GitHub issue or the forum? Please add a link
    to it if that's the case.
  • Did you make sure to update the documentation with your changes? Here are the
    documentation guidelines, and
    here are tips on formatting docstrings.
  • Did you write any new necessary tests?

Who can review?

@SunMarc

0xDELUXA changed the title from "fix: conditionally import torch.distributed.algorithms.join in accelerator" to "Fix: conditionally import torch.distributed.algorithms.join in accelerator.py" on Mar 6, 2026
0xDELUXA changed the title from "Fix: conditionally import torch.distributed.algorithms.join in accelerator.py" to "Fix: Conditionally import torch.distributed.algorithms.join in accelerator.py" on Mar 6, 2026
Member

SunMarc left a comment


Thx !

@HuggingFaceDocBuilderDev

The docs for this PR live here. All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.

@0xDELUXA
Contributor Author

@SunMarc huggingface/transformers#44507 was recently merged. This one should be good to go as well. Thanks!

SunMarc merged commit 9bbee85 into huggingface:main on Mar 11, 2026
25 checks passed
0xDELUXA deleted the fix-torch-distributed-import branch on March 13, 2026 at 10:17