
URGENT: Claude Haiku 3 retires 2026-04-19 (2 days) + GPT-4o already retired 2026-04-03 #44

@stackbilt-admin

Description


Source

ecosystem-pulse routine fire (AEGIS, 2026-04-17 09:04 UTC) flagged two model retirements with active code references in this repo.

Haiku 3 — retires 2026-04-19 (T-2 days)

41 occurrences across 7 files, including the public API surface:

  • `src/index.ts:428` — `CLAUDE_3_HAIKU: 'claude-3-haiku-20240307'` (exported constant)
  • `src/providers/anthropic.ts:133,355` — supported-models list + pricing table
  • `src/tests/` — 5 test files reference the retired model in mocks/fixtures

Consumers resolving `CLAUDE_3_HAIKU` through this constant will silently break at Anthropic's cutoff.
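The silent-failure window could be closed at the resolution point. A minimal sketch (hypothetical helper, not existing repo code), using the retirement dates from this issue, that fails loudly instead of letting calls 400 at the provider cutoff:

```typescript
// Hypothetical guard: map retired model IDs to their retirement dates so
// resolution through the exported constants throws after the cutoff.
const RETIRED_MODELS: Record<string, string> = {
  'claude-3-haiku-20240307': '2026-04-19', // Haiku 3
  'gpt-4o': '2026-04-03',                  // GPT-4o proper
};

function assertModelAvailable(modelId: string, now: Date = new Date()): void {
  const retiredOn = RETIRED_MODELS[modelId];
  if (retiredOn && now >= new Date(retiredOn)) {
    throw new Error(
      `Model '${modelId}' was retired on ${retiredOn}; switch to a supported successor.`,
    );
  }
}
```

Wiring this into the provider resolution path would turn the silent breakage into an actionable error with the retirement date attached.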

GPT-4o — retired 2026-04-03 (already past; ~14 days of silent breakage)

  • `src/index.ts:411` — `GPT_4O: 'gpt-4o'` (exported constant)
  • `src/providers/openai.ts:84,179` — supported-models list + pricing table
  • `src/factory.ts:1107` — DEFAULT FALLBACK: `if (this.providers.has('openai')) return 'gpt-4o-mini';`
  • `gpt-4o-mini` is the repo's default chat completion model (`src/providers/openai.ts:154,319,497`)

Note: `gpt-4o-mini` status vs. `gpt-4o` should be confirmed — the routine fire flagged 4o proper, not mini, but both should be audited since mini is the fallback default.
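One pitfall for that audit: a naive substring grep for `gpt-4o` also matches every `gpt-4o-mini` occurrence, so hit counts can conflate the two IDs. A sketch of an exact-ID matcher (hypothetical helper, not existing repo code) that treats a trailing `-mini` as a different model:

```typescript
// Match a full model ID only when it is delimited by non-ID characters,
// so 'gpt-4o' does not match inside 'gpt-4o-mini'.
function referencesModel(line: string, modelId: string): boolean {
  // Model IDs are assumed to use only [a-z0-9.-]; anything else ends the ID.
  const escaped = modelId.replace(/[.*+?^${}()|[\]\\]/g, '\\$&');
  const pattern = new RegExp(`(^|[^a-z0-9.-])${escaped}($|[^a-z0-9.-])`);
  return pattern.test(line);
}
```

With this, `GPT_4O: 'gpt-4o',` is flagged while `default: 'gpt-4o-mini',` is not.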

Recommended action

  1. Replace `claude-3-haiku-20240307` with `claude-haiku-4-5-20251001` (or appropriate successor) across providers + tests
  2. Replace `gpt-4o` references with a current OpenAI model; verify `gpt-4o-mini` status
  3. Keep retired constants exported as `@deprecated` aliases for one minor version to avoid breaking downstream imports
  4. Update schema-drift tests to catch this class of issue in the future
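Step 3 could look like the following sketch, assuming the constant names from `src/index.ts` (the `MODELS` grouping and the `CLAUDE_HAIKU_4_5` name are illustrative, not existing code):

```typescript
// Point canonical constants at successors; keep the old names exported as
// @deprecated aliases for one minor version so downstream imports keep compiling.
export const MODELS = {
  CLAUDE_HAIKU_4_5: 'claude-haiku-4-5-20251001',
  /** @deprecated `claude-3-haiku-20240307` retired 2026-04-19; aliased to the successor. */
  CLAUDE_3_HAIKU: 'claude-haiku-4-5-20251001',
  /** @deprecated `gpt-4o` retired 2026-04-03; repoint once a successor is confirmed. */
  GPT_4O: 'gpt-4o', // TODO: replace with the chosen current OpenAI model
} as const;
```

The alias resolves to a live model, so existing consumers keep working, while `@deprecated` surfaces in editors and lint rules before the alias is dropped in the next minor.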

Priority

P0 for Haiku 3 (2 days), P0 for GPT-4o (already past, unknown consumer impact — check observability hooks for 400-class errors on model-not-found).

AEGIS exposure

aegis-daemon itself has zero code references to either model (verified via grep). Bolted-in LLM logic is already prohibited (`feedback_no_bolted_llm_logic`), so consumers route through `@stackbilt/llm-providers` — this repo is the chokepoint.


Filed by AEGIS from ecosystem-pulse routine finding. Cross-ref: aegis#460 (constant-time compare) showed the same ecosystem-wide audit pattern works.
