[docs] distributed training #44420
Open
stevhliu wants to merge 5 commits into huggingface:main from
Conversation
SunMarc (Member) reviewed Apr 23, 2026
SunMarc left a comment
Thanks! Left a comment, but overall we will move on to FSDPv2 very soon. I'm thinking about changing the default to that soon, so we should update the docs to reflect the FSDPv2 arguments.
Comment on lines +51 to +64
## Sharding strategies

Always start by running the [accelerate config](https://hf.co/docs/accelerate/package_reference/cli#accelerate-config) command to help Accelerate set up the correct distributed training environment.

```bash
accelerate config
```

Pass one of the sharding strategies below to [fsdp](https://huggingface.co/docs/transformers/main_classes/trainer#transformers.TrainingArguments.fsdp).

| strategy | description |
|---|---|
| `full_shard` | shard parameters, gradients, and optimizer states |
| `shard_grad_op` | shard gradients and optimizer states |
| `no_shard` | DDP |
| `hybrid_shard` | full shard within a node, replicate across nodes |
| `hybrid_shard_zero2` | shard gradients and optimizer states within a node, replicate across nodes |
| `offload` | CPU offload (combine with `full_shard` or `shard_grad_op`) |

The section below discusses some of the more important FSDP configuration options. Learn more about other available options in the [fsdp_config](https://hf.co/docs/transformers/main_classes/trainer#transformers.TrainingArguments.fsdp_config) parameter. Always combine a sharding strategy with `auto_wrap` to enable the auto-wrapping policy, like `fsdp="full_shard auto_wrap"`. Without `auto_wrap`, the entire model is one FSDP unit and you lose the memory benefit of sharding.
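For illustration, a minimal sketch of how a sharding strategy and `auto_wrap` come together on the `Trainer` side. The `output_dir`, batch size, and layer class name below are placeholders (not from this PR), and the exact `fsdp_config` key spelling can differ between transformers versions:

```python
# Sketch only: values below are assumptions, not taken from the PR.
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="out",
    per_device_train_batch_size=1,
    fsdp="full_shard auto_wrap",  # sharding strategy + auto-wrapping policy
    fsdp_config={
        # wrap each transformer block as its own FSDP unit (class name is model-specific)
        "transformer_layer_cls_to_wrap": ["LlamaDecoderLayer"],
    },
)
```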
Member
Let's update the docs for the FSDPv2 args only. I feel like this would be a nicer transition instead of keeping the old arguments. We don't have this arg anymore; we only have `reshard_after_forward` now.
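Not from the PR itself, but as a rough sketch of the direction this comment points at, an FSDPv2-style config might look something like the following. The key names `fsdp_version` and `fsdp_reshard_after_forward` are assumptions based on Accelerate's FSDP2 support and should be checked against the final argument names:

```yaml
# Rough sketch only: key names are assumptions, not taken from this PR.
distributed_type: FSDP
fsdp_config:
  fsdp_version: 2                    # assumed switch to FSDPv2 (fully_shard)
  fsdp_reshard_after_forward: true   # assumed replacement for fsdp_sharding_strategy
  fsdp_auto_wrap_policy: TRANSFORMER_BASED_WRAP
  fsdp_cpu_ram_efficient_loading: true
  fsdp_state_dict_type: SHARDED_STATE_DICT
```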
Comment on lines 30 to 41
compute_environment: LOCAL_MACHINE
debug: false
distributed_type: FSDP
downcast_bf16: 'no'
fsdp_config:
  fsdp_auto_wrap_policy: TRANSFORMER_BASED_WRAP
  fsdp_backward_prefetch_policy: BACKWARD_PRE
  fsdp_forward_prefetch: false
  fsdp_cpu_ram_efficient_loading: true
  fsdp_offload_params: false
  fsdp_sharding_strategy: FULL_SHARD
  fsdp_state_dict_type: SHARDED_STATE_DICT
  fsdp_sync_module_states: true
  fsdp_transformer_layer_cls_to_wrap: BertLayer
  fsdp_transformer_layer_cls_to_wrap: LlamaDecoderLayer
  fsdp_use_orig_params: true
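A config like the one above is consumed at launch time. A minimal example (the file and script names `fsdp_config.yaml` and `train.py` are assumptions, not from the PR):

```bash
# Launch a Trainer-based script with the saved Accelerate config.
accelerate launch --config_file fsdp_config.yaml train.py
```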
Force-pushed from fa428ef to 90c1b9c (Compare)
The docs for this PR live here. All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.
This was referenced Apr 29, 2026
- `Trainer` now instead of a raw PyTorch loop
- this guide links out to more detailed FSDP/DDP guides for full config-specific settings
- also documents some `AcceleratorConfig` settings
- `auto_wrap` explanation and detailed explanation of what the config settings are
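To ground the first point, a minimal sketch of the `Trainer`-based flow the rewritten guide centers on. The model id, dataset, and arguments below are placeholders (not from the PR), and the script is meant to be started with `accelerate launch` or `torchrun` so the FSDP setting actually applies:

```python
# Sketch only: model, dataset, and arguments are placeholders, not from the PR.
from datasets import load_dataset
from transformers import (
    AutoModelForCausalLM,
    AutoTokenizer,
    DataCollatorForLanguageModeling,
    Trainer,
    TrainingArguments,
)

model_id = "gpt2"  # placeholder model
tokenizer = AutoTokenizer.from_pretrained(model_id)
tokenizer.pad_token = tokenizer.eos_token  # gpt2 has no pad token by default
model = AutoModelForCausalLM.from_pretrained(model_id)

# Small text dataset, tokenized for causal language modeling.
dataset = load_dataset("wikitext", "wikitext-2-raw-v1", split="train[:1%]")
dataset = dataset.filter(lambda x: len(x["text"]) > 0)
dataset = dataset.map(lambda batch: tokenizer(batch["text"], truncation=True), batched=True)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="out", fsdp="full_shard auto_wrap"),
    train_dataset=dataset,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
```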