docs: fix 5 docstring errors in Gemma3nTextConfig (typos, grammar, formatting) #45370
Merged
Rocketknight1 merged 1 commit into huggingface:main, Apr 13, 2026
Conversation
Fix five documentation errors in Gemma3nTextConfig docstring:

- Typo: "emebeddings" → "embeddings"
- Incomplete sentence for altup_active_idx (truncated at "or correct")
- Grammar: "should be make" → "should make" in altup_num_inputs
- Grammar: "number of layer" → "number of layers" in num_kv_shared_layers
- Formatting: add missing backticks around type annotations for laurel_rank and activation_sparsity_pattern to match HF docstring conventions

Both modular_gemma3n.py (source of truth) and the generated configuration_gemma3n.py are updated in sync.

Built by Rudrendu Paul, developed with Claude Code
Contributor
[For maintainers] Suggested jobs to run (before merge): run-slow: gemma3n

The docs for this PR live here. All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.
sirzechs66 pushed a commit to sirzechs66/transformers that referenced this pull request on Apr 18, 2026
…rmatting) (huggingface#45370)
Co-authored-by: Rudrendu <RudrenduPaul@users.noreply.github.com>
What does this PR do?
Fixes five documentation errors in the Gemma3nTextConfig docstring in modular_gemma3n.py (and the generated configuration_gemma3n.py):

- "emebeddings" → "embeddings" in hidden_size_per_layer_input
- The altup_active_idx description was cut off at "or correct"; completed to "or correct the active prediction."
- "should be make" → "should make" in altup_num_inputs
- "number of layer" → "number of layers" in num_kv_shared_layers
- Added missing backticks around the type annotations for laurel_rank and activation_sparsity_pattern to match the HF docstring conventions used consistently throughout the file

Both the modular source file (modular_gemma3n.py) and the generated configuration file (configuration_gemma3n.py) are updated in sync.

Before submitting
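The five corrections can be sketched as simple search-and-replace pairs. The strings below are abridged from the PR description and are illustrative, not the verbatim docstring lines of configuration_gemma3n.py:

```python
# Sketch of the five docstring corrections as (old, new) pairs.
# Illustrative reconstruction; the real diff lives in modular_gemma3n.py.
DOCSTRING_FIXES = [
    ("emebeddings", "embeddings"),        # typo in hidden_size_per_layer_input
    ("should be make", "should make"),    # grammar in altup_num_inputs
    ("number of layer ", "number of layers "),  # grammar in num_kv_shared_layers
    ("laurel_rank (int", "laurel_rank (`int`"),  # backticks per HF docstring style
]

def apply_docstring_fixes(text: str) -> str:
    """Apply each textual correction in order."""
    for old, new in DOCSTRING_FIXES:
        text = text.replace(old, new)
    return text
```

(The altup_active_idx fix is omitted from the table above because it completes a truncated sentence rather than swapping a fixed substring.)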
Note to maintainers: These are purely documentation corrections; no logic changes. The modular file is the source of truth; the generated file was updated by hand to match (identical diff in both files). Running make fix-repo will confirm the generated file is in sync.

AI disclosure
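The "in sync" claim can be spot-checked the same way the CI does: regenerate and diff. The snippet below simulates that check with two placeholder files, since the real paths and make target depend on a transformers checkout:

```shell
# Illustrative only: the PR asserts modular_gemma3n.py and the generated
# configuration_gemma3n.py carry identical docstring text. Simulate the
# sync check with placeholder excerpts (real files live under
# src/transformers/models/gemma3n/ in a transformers checkout).
printf 'per-layer embeddings\n' > modular_excerpt.txt
printf 'per-layer embeddings\n' > generated_excerpt.txt
diff modular_excerpt.txt generated_excerpt.txt && echo "in sync"
```

In a real checkout, `make fix-repo` (mentioned above) performs the regeneration-and-compare step across all modular models.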
Who can review?
@stevhliu @zucchini-nlp