
feat: add GLM-5 model support to Z.ai provider #11440

Merged

daniel-lxs merged 1 commit into main from feat/add-glm-5-zai-provider on Feb 12, 2026

Conversation

roomote bot (Contributor) commented Feb 12, 2026

Related GitHub Issue

N/A - Model addition

Description

Adds GLM-5 model support to the Z.ai provider with a 202,752-token context window.

Changes:

  • packages/types/src/providers/zai.ts: Added glm-5 to both internationalZAiModels (international pricing) and mainlandZAiModels (mainland China pricing) with a 202,752-token context window, 16,384 max output tokens, prompt caching support, and reasoning-effort support (["disable", "medium"]); see the sketch after this list
  • src/api/providers/zai.ts: Generalized the thinking-model detection from a hardcoded modelId === "glm-4.7" check to Array.isArray(info.supportsReasoningEffort), so any model declaring reasoning support (including GLM-5) automatically gets thinking mode via providerOptions; see the second sketch after this list
  • src/api/providers/__tests__/zai.spec.ts: Added test cases for GLM-5 in both the international and China model sections, plus a dedicated GLM-5 Thinking Mode test section verifying default-enabled thinking and explicit disable
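
For illustration, here is a minimal sketch of the new model entry, assuming a plain object keyed by model ID (field names are inferred from the bullets above, not copied from the repository's actual ModelInfo type; pricing fields are omitted):

```typescript
// Hypothetical sketch of the new entry in packages/types/src/providers/zai.ts.
const internationalZAiModels = {
  "glm-5": {
    contextWindow: 202_752,
    maxTokens: 16_384,
    supportsPromptCache: true,
    // Declaring the reasoning levels as an array is what opts the model
    // into thinking mode under the generalized check shown next.
    supportsReasoningEffort: ["disable", "medium"],
  },
  // ...existing entries such as "glm-4.7"
}
```

The generalized detection then reduces to a single type check, with no per-model special case:

```typescript
// Previously: const isThinkingModel = modelId === "glm-4.7"
const info = internationalZAiModels["glm-5"]
const isThinkingModel = Array.isArray(info.supportsReasoningEffort) // true
```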

Test Procedure

  • Ran cd src && npx vitest run api/providers/__tests__/zai.spec.ts - all 31 tests pass
  • All lint checks pass via turbo
  • All type checks pass across all 14 packages

Pre-Submission Checklist

  • Scope: Changes are focused on adding GLM-5 model support
  • Self-Review: Performed thorough self-review
  • Testing: Added tests for GLM-5 model in international, China, and thinking mode scenarios
  • Contribution Guidelines: Read and agree to the Contributor Guidelines

Documentation Updates

  • No documentation updates are required.

dosubot bot added the size:L (This PR changes 100-499 lines, ignoring generated files) and Enhancement (New feature or request) labels on Feb 12, 2026
roomote bot (Contributor, Author) commented Feb 12, 2026

Rooviewer

No issues found. The model addition follows existing patterns, the thinking-mode detection generalization is correct, and test coverage is adequate. All 31 tests pass.


dosubot bot added the lgtm (This PR has been approved by a maintainer) label on Feb 12, 2026
daniel-lxs merged commit cdf481c into main on Feb 12, 2026
18 checks passed
github-project-automation bot moved this from New to Done in Roo Code Roadmap Feb 12, 2026
daniel-lxs deleted the feat/add-glm-5-zai-provider branch on February 12, 2026 at 14:14
hannesrudolph added a commit that referenced this pull request Feb 14, 2026
* fix: correct Bedrock model ID for Claude Opus 4.6 (#11232)

Remove the :0 suffix from the Claude Opus 4.6 model ID to match
the correct AWS Bedrock model identifier.

The model ID was "anthropic.claude-opus-4-6-v1:0" but should be
"anthropic.claude-opus-4-6-v1" per AWS Bedrock documentation.

Fixes #11231

Co-authored-by: Roo Code <roomote@roocode.com>

* fix: guard against empty-string baseURL in provider constructors (#11233)

When the 'custom base URL' checkbox is unchecked in the UI, the setting
is set to '' (empty string). Providers that passed this directly to their
SDK constructors caused 'Failed to parse URL' errors because the SDK
treated '' as a valid but broken base URL override.

- gemini.ts: use || undefined (was passing raw option)
- openai-native.ts: use || undefined (was passing raw option)
- openai.ts: change ?? to || for fallback default
- deepseek.ts: change ?? to || for fallback default
- moonshot.ts: change ?? to || for fallback default

Adds test coverage for Gemini and OpenAI Native constructors verifying
empty-string baseURL is coerced to undefined.
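
The operator distinction is the whole fix: ?? only falls back on null or undefined, so the empty string written by the unchecked checkbox passed straight through to the SDK, while || also replaces falsy values. A standalone sketch (the URL is a placeholder, not a real provider endpoint):

```typescript
// What the setting holds when "custom base URL" is unchecked in the UI:
const baseURL: string | undefined = ""

// Nullish coalescing keeps the empty string, producing a broken override:
const withNullish = baseURL ?? "https://api.example.com/v1" // ""

// Logical OR treats "" as falsy and falls back to the default:
const withOr = baseURL || "https://api.example.com/v1" // the default URL

// Providers with no default coerce to undefined so the SDK uses its own endpoint:
const resolved = baseURL || undefined // undefined
```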

* fix: make defaultTemperature required in getModelParams to prevent silent temperature overrides (#11218)

* fix: DeepSeek temperature defaulting to 0 instead of 0.3

Pass defaultTemperature: DEEP_SEEK_DEFAULT_TEMPERATURE to getModelParams() in
DeepSeekHandler.getModel() to ensure the correct default temperature (0.3)
is used when no user configuration is provided.

Closes #11194

* refactor: make defaultTemperature required in getModelParams

Make the defaultTemperature parameter required in getModelParams() instead
of defaulting to 0. This prevents providers with their own non-zero default
temperature (like DeepSeek's 0.3) from being silently overridden by the
implicit 0 default.

Every provider now explicitly declares its temperature default, making the
temperature resolution chain clear:
  user setting → model default → provider default
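
As a sketch of that chain (the option shape here is an assumption; only the parameter name defaultTemperature and DeepSeek's 0.3 default come from the commits above):

```typescript
// Sketch only: the option shape is an assumption, not the real getModelParams type.
interface TemperatureOptions {
  userTemperature?: number // explicit user setting, if any
  modelDefaultTemperature?: number // per-model default, if any
  defaultTemperature: number // provider default; now required, no implicit 0
}

// user setting → model default → provider default
function resolveTemperature(opts: TemperatureOptions): number {
  return opts.userTemperature ?? opts.modelDefaultTemperature ?? opts.defaultTemperature
}

// e.g. DeepSeek now passes its own 0.3 default instead of inheriting 0:
// resolveTemperature({ defaultTemperature: 0.3 }) // 0.3
```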

---------

Co-authored-by: Roo Code <roomote@roocode.com>
Co-authored-by: daniel-lxs <ricciodaniel98@gmail.com>

* feat: batch consecutive tool calls in chat UI with shared utility (#11245)

* feat: group consecutive list_files tool calls into single UI block

Consolidate consecutive listFilesTopLevel/listFilesRecursive ask messages
into a single 'Roo wants to view multiple directories' block, matching the
existing read_file batching pattern.

* chore: add missing translation keys for all locales

* refactor: consolidate duplicate listFiles batch-handling blocks in ChatRow

Merge the separate listFilesTopLevel and listFilesRecursive case blocks
into a single combined case with shared batch-detection logic, selecting
the icon and translation key based on the tool type. This removes the
duplicated isBatchDirRequest check and BatchListFilesPermission render.

* feat: batch consecutive file-edit tool calls into single UI block

Add edit-file batching in ChatView groupedMessages that consolidates
consecutive editedExistingFile, appliedDiff, newFileCreated,
insertContent, and searchAndReplace asks into a single BatchDiffApproval
block. Move batchDiffs detection in ChatRow above the switch statement
so it applies to any file-edit tool type.

* refactor: extract batchConsecutive utility, fix batch UI issues

- Extract generic batchConsecutive() utility from 3 identical while-loops (sketched after this list)
- Fix React key collisions in BatchListFilesPermission, BatchFilePermission, BatchDiffApproval
- Normalize language prop to "shellsession" (was "shell-session" for top-level)
- Remove unused _batchedMessages property from synthetic messages
- Remove dead didViewMultipleDirectories i18n key from all 18 locale files
- Add batch button text for listFilesTopLevel/listFilesRecursive
- Add batchConsecutive utility tests (6 cases)
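
The log doesn't show the extracted helper's signature; a plausible generic sketch of consecutive-run batching (names and shape are assumptions):

```typescript
// Hypothetical sketch: the real utility's signature may differ. Groups
// maximal runs of consecutive items satisfying `match`; non-matching items
// become singleton groups, preserving order and leaving the input untouched.
function batchConsecutive<T>(items: readonly T[], match: (item: T) => boolean): T[][] {
  const groups: T[][] = []
  for (const item of items) {
    const last = groups[groups.length - 1]
    if (match(item) && last !== undefined && match(last[0])) {
      last.push(item) // extend the current run of matching items
    } else {
      groups.push([item]) // start a new group
    }
  }
  return groups
}

// Example: batching consecutive list_files asks while leaving others alone.
// batchConsecutive(["ls", "ls", "edit", "ls"], (m) => m === "ls")
// → [["ls", "ls"], ["edit"], ["ls"]]
```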

* fix: audit improvements for batch tool-call UI

- Make batchConsecutive() generic instead of ClineMessage-specific
- Add batch-aware button text for edit-file batches ("Save All"/"Deny All")
- Add dedicated list-batch/edit-batch i18n keys (stop reusing read-batch)
- Add JSON.parse defense-in-depth in all three synthesizers
- Fix mixed list_files batch icon to default to FolderTree
- Add 6 missing test cases (all-match, immutability, spy, single-dir)

* chore: minor type cleanup (out-of-scope housekeeping)

- Trim unused recursive/isOutsideWorkspace from DirPermissionItem interface
- Remove 4 pre-existing `as any` casts in ChatView.tsx:
  - window cast → precise inline type
  - checkpoint bracket access → removed unnecessary casts
  - condensing message → `as ClineMessage`
  - debounce cancel → `.clear()` (correct API)
- Update BatchListFilesPermission test data to match trimmed interface

* i18n: add list-batch and edit-batch translations for all locales

* feat: add IPC query handlers for commands, modes, and models (#11279)

Add GetCommands, GetModes, and GetModels to the IPC protocol so external
clients can fetch slash commands, available modes, and Roo provider models
without going through the internal webview message channel.
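
A minimal sketch of how such query messages could be modeled (only the three names GetCommands, GetModes, and GetModels come from the commit; the union shape and stub data are illustrative assumptions):

```typescript
// Hypothetical sketch: only the three query names are from the commit message.
type IpcQuery = { type: "GetCommands" } | { type: "GetModes" } | { type: "GetModels" }

// Stub registries standing in for the extension's real state:
const slashCommands = ["/newtask", "/help"]
const modes = ["code", "architect", "ask"]
const models = ["glm-5", "glm-4.7"]

function handleIpcQuery(query: IpcQuery): string[] {
  switch (query.type) {
    case "GetCommands":
      return slashCommands // slash commands for external clients
    case "GetModes":
      return modes // available modes
    case "GetModels":
      return models // Roo provider models
  }
}
```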

Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>

* feat: add lock toggle to pin API config across all modes in workspace (#11295)

* feat: add lock toggle to pin API config across all modes in workspace

Add a lock/unlock toggle inside the API config selector popover (next to
the settings gear) that, when enabled, applies the selected API
configuration to all modes in the current workspace.

- Add lockApiConfigAcrossModes to ExtensionState and WebviewMessage types
- Store setting in workspaceState (per-workspace, not global)
- When locked, activateProviderProfile sets config for all modes
- Lock icon in ApiConfigSelector popover bottom bar next to gear
- Full i18n: English + 17 locale translations (all mention workspace scope)
- 9 new tests: 2 ClineProvider, 2 handler, 5 UI (77 total pass)

* refactor: replace write-fan-out with read-time override for lock API config

The original lock implementation used setModeConfig() fan-out to write the
locked config to ALL modes globally. Since the lock flag lives in workspace-
scoped workspaceState but modeApiConfigs are in global secrets, this caused
cross-workspace data destruction.

Replaced with read-time guards (see the sketch after this list):
- handleModeSwitch: early return when lock is on (skip per-mode config load)
- createTaskWithHistoryItem: skip mode-based config restoration under lock
- activateProviderProfile: removed fan-out block
- lockApiConfigAcrossModes handler: simplified to flag + state post only
- Fixed pre-existing workspaceState mock gap in ClineProvider.spec.ts and
  ClineProvider.sticky-profile.spec.ts
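
A minimal sketch of the read-time guard under these assumptions (the function and setting names follow the commit message; the dependencies are declared as stubs rather than the real ClineProvider APIs):

```typescript
// Hypothetical stubs mirroring the real dependencies on ClineProvider:
declare const workspaceState: { get<T>(key: string): T | undefined }
declare function loadApiConfigForMode(mode: string): Promise<void>

async function handleModeSwitch(mode: string): Promise<void> {
  // Read-time override: when the workspace-scoped lock is on, keep the
  // currently active API config and skip per-mode restoration. Nothing is
  // written, so modeApiConfigs in other workspaces are never touched.
  if (workspaceState.get<boolean>("lockApiConfigAcrossModes")) {
    return
  }
  await loadApiConfigForMode(mode) // normal per-mode config load
}
```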

* fix: validate Gemini thinkingLevel against model capabilities and handle empty streams (#11303)

* fix: validate Gemini thinkingLevel against model capabilities and handle empty streams

getGeminiReasoning() now validates the selected effort against the model's
supportsReasoningEffort array before sending it as thinkingLevel. When a
stale settings value (e.g. 'medium' from a different model) is not in the
supported set, it falls back to the model's default reasoningEffort.

GeminiHandler.createMessage() now tracks whether any text content was
yielded during streaming and handles NoOutputGeneratedError gracefully
instead of surfacing the cryptic 'No output generated' error.

* fix: guard thinkingLevel fallback against 'none' effort and add i18n TODO

The array validation fallback in getGeminiReasoning() now only triggers
when the selected effort IS a valid Gemini thinking level but not in
the model's supported set. Values like 'none' (explicit no-reasoning
signal) are no longer overridden by the model default.

Also adds a TODO for moving the empty-stream message to i18n.

* fix: track tool_call_start in hasContent to avoid false empty-stream warning

Tool-only responses (no text) are valid content. Without this,
agentic tool-call responses would incorrectly trigger the empty
response warning message.
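
Taken together, the validation might look roughly like the following sketch (the concrete Gemini level names and the function shape are assumptions; only the fallback behavior described above comes from the commits):

```typescript
// Hypothetical sketch: the concrete thinking-level names are assumptions.
const GEMINI_THINKING_LEVELS = ["low", "medium", "high"]

function resolveThinkingLevel(
  selected: string | undefined,
  supported: string[], // the model's supportsReasoningEffort array
  modelDefault: string,
): string | undefined {
  // "none" is an explicit no-reasoning signal and must never be overridden.
  if (selected === undefined || selected === "none") {
    return selected
  }
  // Only fall back when the value IS a valid thinking level but the current
  // model doesn't support it (e.g. a stale "medium" from a different model).
  if (GEMINI_THINKING_LEVELS.includes(selected) && !supported.includes(selected)) {
    return modelDefault
  }
  return selected
}
```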

* chore(cli): prepare release v0.0.53 (#11425)

* feat: add GLM-5 model support to Z.ai provider (#11440)

* chore: regenerate pnpm-lock.yaml

* fix: resolve type errors and remove AI SDK test contamination

* docs: update progress.txt with rebuilt Batch 2 status

---------

Co-authored-by: roomote[bot] <219738659+roomote[bot]@users.noreply.github.com>
Co-authored-by: Roo Code <roomote@roocode.com>
Co-authored-by: daniel-lxs <ricciodaniel98@gmail.com>
Co-authored-by: Chris Estreich <cestreich@gmail.com>
Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>