
feat(models): add Opus 4.7, GPT-5.5, and 1M context variants to engine dropdowns#1199

Merged

zbigniewsobiecki merged 1 commit into dev from feat/add-opus-4-7-and-gpt-5-5 on Apr 25, 2026
Conversation

@zbigniewsobiecki
Member

Summary

Update the dashboard engine model pickers to expose the latest Anthropic and OpenAI models that were missing:

  • Claude Code dropdown gains:
    • Claude Opus 4.7 (claude-opus-4-7) — current flagship, released 2026-04-16
    • Claude Opus 4.7 (1M context) — claude-opus-4-7[1m]
    • Claude Opus 4.6 (1M context) — claude-opus-4-6[1m]
    • Claude Sonnet 4.6 (1M context) — claude-sonnet-4-6[1m]
  • Codex dropdown gains:
    • GPT-5.5 (gpt-5.5) — released 2026-04-23

The [1m] suffix is canonical Anthropic API syntax — the Claude Code docs confirm "append [1m] to a full model name" for the 1M-context variants. Only Opus 4.7, Opus 4.6, and Sonnet 4.6 support 1M context; Sonnet 4.5 and Haiku 4.5 do not, so we deliberately don't add [1m] variants for them.
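The new entries can be sketched as plain option lists; the `ModelOption` shape and variable names below are illustrative assumptions, not the repo's actual types:

```typescript
// Hypothetical option shape for the engine dropdowns (assumption, not
// the repo's real interface).
interface ModelOption {
  label: string;
  id: string;
}

// New Claude Code entries, including the [1m] 1M-context variants.
const claudeCodeAdditions: ModelOption[] = [
  { label: "Claude Opus 4.7", id: "claude-opus-4-7" },
  { label: "Claude Opus 4.7 (1M context)", id: "claude-opus-4-7[1m]" },
  { label: "Claude Opus 4.6 (1M context)", id: "claude-opus-4-6[1m]" },
  { label: "Claude Sonnet 4.6 (1M context)", id: "claude-sonnet-4-6[1m]" },
];

// New Codex entry.
const codexAdditions: ModelOption[] = [
  { label: "GPT-5.5", id: "gpt-5.5" },
];
```

Note that the `[1m]` variants are distinct IDs appended verbatim, so a saved selection round-trips through the DB without any special-casing.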

Pricing is wired for cost-reporting accuracy: Opus 4.7 = $5 / $25 per 1M tokens (Anthropic dropped Opus pricing this generation — do not assume the old $15/$75 from the existing 4.5 entry), Sonnet 4.6 1M = $3 / $15, GPT-5.5 = $5 / $30. The 1M variants share the same standard pricing — Anthropic does not charge a premium for tokens beyond 200K.
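As a sketch of how that pricing feeds cost reporting (map shape, key names, and the `costUsd` helper are hypothetical, under the assumption that rates are stored per 1M tokens):

```typescript
// Hypothetical pricing table, USD per 1M tokens. The [1m] variants reuse
// standard pricing since Anthropic charges no premium beyond 200K.
const pricing: Record<string, { inputPer1M: number; outputPer1M: number }> = {
  "claude-opus-4-7": { inputPer1M: 5, outputPer1M: 25 },
  "claude-opus-4-7[1m]": { inputPer1M: 5, outputPer1M: 25 },
  "claude-opus-4-6[1m]": { inputPer1M: 5, outputPer1M: 25 },
  "claude-sonnet-4-6[1m]": { inputPer1M: 3, outputPer1M: 15 },
  "gpt-5.5": { inputPer1M: 5, outputPer1M: 30 },
};

// Cost of a run: token counts scaled by the per-1M rates.
function costUsd(model: string, inputTokens: number, outputTokens: number): number {
  const p = pricing[model];
  return (inputTokens * p.inputPer1M + outputTokens * p.outputPer1M) / 1_000_000;
}
```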

A conservative rate-limit profile is added for Opus 4.7 (50 RPM / 10K TPM, mirroring the existing Opus tier-1 throttle). The single prefix-match entry covers both claude-opus-4-7 and claude-opus-4-7[1m].
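The prefix-match behavior can be illustrated as follows; the table layout and `limitFor` helper are assumptions for illustration, not the repo's actual rate-limit code:

```typescript
interface RateLimit {
  rpm: number; // requests per minute
  tpm: number; // tokens per minute
}

// One prefix entry covers both the base ID and its [1m] variant.
const rateLimitPrefixes: Array<[string, RateLimit]> = [
  ["claude-opus-4-7", { rpm: 50, tpm: 10_000 }],
];

// First prefix match wins; models without an entry are unthrottled here.
function limitFor(model: string): RateLimit | undefined {
  const hit = rateLimitPrefixes.find(([prefix]) => model.startsWith(prefix));
  return hit?.[1];
}
```

Because `"claude-opus-4-7[1m]".startsWith("claude-opus-4-7")` is true, the single entry throttles both IDs without duplication.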

Defaults stay unchanged (Sonnet 4.5 for Claude Code, GPT-5.4 for Codex) to keep this PR strictly additive.

Gemini 3.1 Pro Preview audit: already fully wired (src/config/customModels.ts + pricing + rate-limits) and selectable via the OpenRouter combobox in free-text engines. No change needed.

Test plan

  • Unit: npx vitest run --project unit-backends tests/unit/backends/claude-code.test.ts tests/unit/backends/codex.test.ts — green (152 tests).
  • Unit: npx vitest run --project unit-core tests/unit/config/rateLimits.test.ts tests/unit/utils/llmMetrics.test.ts — green.
  • npm run typecheck — clean.
  • Pre-commit + pre-push hooks (lint, auth-header-provenance, integration tests) — green.
  • UI smoke (post-merge to dev): open a project's Engine page; new entries appear at the top of the Claude Code and Codex dropdowns; a saved selection round-trips through the DB.

🤖 Generated with Claude Code

…e dropdowns

Add the latest Anthropic and OpenAI models to the dashboard engine pickers:

- Claude Code: Opus 4.7, plus the 1M-context variants for Opus 4.7,
  Opus 4.6, and Sonnet 4.6 (using the canonical `[1m]` suffix syntax
  documented by Anthropic).
- Codex: GPT-5.5.

Also wire pricing metadata for the new model IDs (`claude-opus-4-7` =
$5/$25 per Anthropic's current pricing — Opus pricing dropped vs the
4.1/4.5 generation; `gpt-5.5` = $5/$30 per OpenAI's published rates),
and a conservative rate-limit profile for Opus 4.7 mirroring the
existing Opus tier-1 throttle (50 RPM, 10K TPM).

Defaults stay unchanged (Sonnet 4.5 for Claude Code, GPT-5.4 for Codex)
to keep this PR additive.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
@zbigniewsobiecki zbigniewsobiecki merged commit 3540d38 into dev Apr 25, 2026
8 checks passed
@zbigniewsobiecki zbigniewsobiecki deleted the feat/add-opus-4-7-and-gpt-5-5 branch April 25, 2026 16:34
@codecov

codecov Bot commented Apr 25, 2026

Codecov Report

✅ All modified and coverable lines are covered by tests.

