feat(models): add Opus 4.7, GPT-5.5, and 1M context variants to engine dropdowns #1199
Merged
zbigniewsobiecki merged 1 commit into dev on Apr 25, 2026
feat(models): add Opus 4.7, GPT-5.5, and 1M context variants to engine dropdowns

Add the latest Anthropic and OpenAI models to the dashboard engine pickers:

- Claude Code: Opus 4.7, plus the 1M-context variants for Opus 4.7, Opus 4.6, and Sonnet 4.6 (using the canonical `[1m]` suffix syntax documented by Anthropic).
- Codex: GPT-5.5.

Also wire pricing metadata for the new model IDs (`claude-opus-4-7` = $5/$25 per Anthropic's current pricing — Opus pricing dropped vs the 4.1/4.5 generation; `gpt-5.5` = $5/$30 per OpenAI's published rates), and a conservative rate-limit profile for Opus 4.7 mirroring the existing Opus tier-1 throttle (50 RPM, 10K TPM).

Defaults stay unchanged (Sonnet 4.5 for Claude Code, GPT-5.4 for Codex) to keep this PR additive.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Codecov Report: ✅ All modified and coverable lines are covered by tests.
Summary
Bump the dashboard engine model pickers to expose Anthropic's and OpenAI's latest models that were missing:
- `claude-opus-4-7` — current flagship, released 2026-04-16
- `claude-opus-4-7[1m]`
- `claude-opus-4-6[1m]`
- `claude-sonnet-4-6[1m]`
- `gpt-5.5` — released 2026-04-23

The `[1m]` suffix is canonical Anthropic API syntax — the Claude Code docs confirm "append `[1m]` to a full model name" for the 1M-context variants. Only Opus 4.7, Opus 4.6, and Sonnet 4.6 support 1M context; Sonnet 4.5 and Haiku 4.5 do not, so we deliberately don't add `[1m]` variants for them.

Pricing is wired for cost-reporting accuracy: Opus 4.7 = $5 / $25 per 1M tokens (Anthropic dropped Opus pricing this generation — do not assume the old $15/$75 from the existing 4.5 entry), Sonnet 4.6 1M = $3 / $15, GPT-5.5 = $5 / $30. The 1M variants share the same standard pricing — Anthropic does not charge a premium for tokens beyond 200K.
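As a sketch of how this pricing metadata might be wired, the snippet below models a per-1M-token price table and a cost estimator. The figures come from this PR description; the interface and names (`ModelPricing`, `MODEL_PRICING`, `estimateCostUSD`) are illustrative assumptions, not the actual schema in `src/config/customModels.ts`.

```typescript
// Hypothetical pricing table: keys are model IDs, values are USD per 1M
// input/output tokens. Figures taken from the PR description.
interface ModelPricing {
  inputPerMTok: number;  // USD per 1M input tokens
  outputPerMTok: number; // USD per 1M output tokens
}

const MODEL_PRICING: Record<string, ModelPricing> = {
  // Opus 4.7: pricing dropped vs the prior generation's $15/$75
  "claude-opus-4-7": { inputPerMTok: 5, outputPerMTok: 25 },
  // 1M-context variants share standard pricing (no premium past 200K)
  "claude-opus-4-7[1m]": { inputPerMTok: 5, outputPerMTok: 25 },
  "claude-sonnet-4-6[1m]": { inputPerMTok: 3, outputPerMTok: 15 },
  "gpt-5.5": { inputPerMTok: 5, outputPerMTok: 30 },
};

// Cost for a run: tokens / 1e6 * per-1M rate, summed over input and output.
function estimateCostUSD(model: string, inTok: number, outTok: number): number {
  const p = MODEL_PRICING[model];
  if (!p) throw new Error(`no pricing entry for ${model}`);
  return (inTok / 1e6) * p.inputPerMTok + (outTok / 1e6) * p.outputPerMTok;
}
```

Keeping the 1M variants as separate keys (rather than stripping the suffix) means cost reports stay correct even if a long-context surcharge is introduced later.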
A conservative rate-limit profile is added for Opus 4.7 (50 RPM / 10K TPM, mirroring the existing Opus tier-1 throttle). The single prefix-match entry covers both `claude-opus-4-7` and `claude-opus-4-7[1m]`.

Defaults stay unchanged (Sonnet 4.5 for Claude Code, GPT-5.4 for Codex) to keep this PR strictly additive.
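The prefix-match behavior described above could look like the following sketch, where one entry covers both the base ID and its `[1m]` variant. All names here (`RateLimitProfile`, `RATE_LIMITS`, `findRateLimit`) are assumed for illustration and are not the project's actual identifiers.

```typescript
// Hypothetical rate-limit lookup: entries match by model-ID prefix, so
// "claude-opus-4-7" also covers "claude-opus-4-7[1m]".
interface RateLimitProfile {
  rpm: number; // requests per minute
  tpm: number; // tokens per minute
}

const RATE_LIMITS: Array<[prefix: string, profile: RateLimitProfile]> = [
  // Mirrors the existing Opus tier-1 throttle from the PR description.
  ["claude-opus-4-7", { rpm: 50, tpm: 10_000 }],
];

// Longest matching prefix wins, so a more specific entry (e.g. a dedicated
// "[1m]" profile) could later override the broad one without removing it.
function findRateLimit(modelId: string): RateLimitProfile | undefined {
  let best: [string, RateLimitProfile] | undefined;
  for (const entry of RATE_LIMITS) {
    if (modelId.startsWith(entry[0]) && (!best || entry[0].length > best[0].length)) {
      best = entry;
    }
  }
  return best?.[1];
}
```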
Gemini 3.1 Pro Preview audit: already fully wired (`src/config/customModels.ts` + pricing + rate-limits) and selectable via the OpenRouter combobox in free-text engines. No change needed.

Test plan
- `npx vitest run --project unit-backends tests/unit/backends/claude-code.test.ts tests/unit/backends/codex.test.ts` — green (152 tests).
- `npx vitest run --project unit-core tests/unit/config/rateLimits.test.ts tests/unit/utils/llmMetrics.test.ts` — green.
- `npm run typecheck` — clean.

🤖 Generated with Claude Code