
feat: add Codex AI provider + profile-based config system#106

Merged
luokerenx4 merged 15 commits into master from dev on Apr 7, 2026

Conversation

@luokerenx4
Contributor

Summary

  • Codex AI provider — calls OpenAI Codex models (gpt-5.4, etc.) via ChatGPT subscription OAuth or standard API key
  • Profile-based config system — replaces the old flat ai-provider-manager.json where model/loginMethod/baseUrl were shared across all backends. Each profile is a named, typed configuration (discriminated by backend). Main channel and subchannels reference profiles by slug
  • Frontend rewrite — AI Provider page becomes a profile manager (create/edit/delete/switch), channel config modal simplified to a single profile dropdown

Codex provider details

  • Auth from ~/.codex/auth.json (user runs codex login), auto token refresh via auth.openai.com
  • Dual auth: codex-oauth (ChatGPT subscription) or api-key (standard OpenAI billing)
  • Tools injected via Responses API tools field, manual tool loop
  • Structured multi-turn input (not text blob) via toResponsesInput()
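
The manual tool loop mentioned above can be sketched roughly as follows. This is an illustrative reconstruction, not the PR's actual implementation: `toolLoop`, the `Item` shapes, and the injected `callModel` function are all assumptions standing in for the real `openai` SDK calls and internal types.

```typescript
// Simplified Responses-API-style output items (assumed shapes).
type Item =
  | { type: "message"; role: "assistant"; text: string }
  | { type: "function_call"; call_id: string; name: string; arguments: string }
  | { type: "function_call_output"; call_id: string; output: string };

type FunctionCall = Extract<Item, { type: "function_call" }>;

// Manual tool loop: call the model, execute any requested tools, feed the
// results back, and repeat until the model answers with a plain message.
async function toolLoop(
  callModel: (input: Item[]) => Promise<Item[]>,
  tools: Record<string, (args: unknown) => Promise<string>>,
  input: Item[],
): Promise<string> {
  for (;;) {
    const output = await callModel(input);
    const calls = output.filter((i): i is FunctionCall => i.type === "function_call");
    if (calls.length === 0) {
      const msg = output.find((i) => i.type === "message");
      return msg && msg.type === "message" ? msg.text : "";
    }
    // Echo the model's output back, then append one result item per call,
    // matched by call_id.
    input = [...input, ...output];
    for (const call of calls) {
      const result = await tools[call.name](JSON.parse(call.arguments));
      input.push({ type: "function_call_output", call_id: call.call_id, output: result });
    }
  }
}
```

Injecting `callModel` keeps the loop testable without a live endpoint.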

Profile system details

  • Config: { apiKeys, profiles: { "slug": { backend, label, model, ... } }, activeProfile }
  • Auto-migration from old flat format on first load
  • resolveProfile(slug?) replaces scattered readAIProviderConfig() calls in providers
  • Profile CRUD API: GET/POST/PUT/DELETE /config/profiles, PUT /config/active-profile
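
The config shape and `resolveProfile()` described above can be sketched like this. The profile field names follow the PR text; the specific backend variants, slug, and error behavior are illustrative assumptions.

```typescript
// Profiles form a discriminated union keyed on `backend` (assumed variants).
type ClaudeProfile = { backend: "claude"; label: string; model: string };
type CodexProfile = {
  backend: "codex";
  label: string;
  model: string;
  loginMethod: "codex-oauth" | "api-key";
  baseUrl?: string;
  apiKey?: string; // per-profile override of the global key
};
type Profile = ClaudeProfile | CodexProfile;

interface AIProviderConfig {
  apiKeys: Record<string, string>;
  profiles: Record<string, Profile>;
  activeProfile: string;
}

const config: AIProviderConfig = {
  apiKeys: { openai: "sk-..." },
  profiles: {
    "codex-sub": {
      backend: "codex",
      label: "Codex (subscription)",
      model: "gpt-5.4",
      loginMethod: "codex-oauth",
    },
  },
  activeProfile: "codex-sub",
};

// Falls back to the active profile when no slug is given.
function resolveProfile(slug?: string): Profile {
  const key = slug ?? config.activeProfile;
  const profile = config.profiles[key];
  if (!profile) throw new Error(`Unknown profile: ${key}`);
  return profile;
}
```

Because the union is discriminated by `backend`, a caller that narrows on it gets only the fields that backend actually supports.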

Test plan

  • pnpm test — 959 tests passing
  • pnpm build — clean
  • Live Codex tool call round-trip verified (streaming, call_id handling)
  • UI: create/edit/delete profiles, switch active, channel profile selector
  • Migration: start with old flat config → auto-migrates to profiles

🤖 Generated with Claude Code

Ame and others added 15 commits April 6, 2026 23:11
Update website link to openalice.ai, add docs badge, and rewrite
Project Structure section to match current monorepo layout (packages/,
ui/, server/, removed plugins/ and skills/).

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
- Add heartbeat auto-copy at startup (same as persona)
- Heartbeat GET route falls back to default/ when data file missing
- Add persona GET/PUT backend routes with default fallback
- Add persona editor to Settings page in frontend

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Instructions are now rebuilt on each request instead of being frozen at
startup. Persona is re-read from disk and brain state (frontal lobe,
emotion) is pulled live from the Brain instance. Both providers accept
a () => Promise<string> getter; VercelAIProvider cache invalidates when
instructions content changes.
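
A minimal sketch of the getter-plus-invalidation pattern this commit describes; the class and method names are assumptions, not the actual provider API.

```typescript
// Per-request instructions resolution with change detection, so a provider
// can invalidate a downstream cache only when the content actually differs.
class InstructionsCache {
  private last = "";
  constructor(private getInstructions: () => Promise<string>) {}

  // Re-resolve on every request rather than freezing at startup.
  async resolve(): Promise<{ text: string; changed: boolean }> {
    const text = await this.getInstructions();
    const changed = text !== this.last;
    this.last = text;
    return { text, changed };
  }
}
```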

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
ToolLoopAgent was a thin wrapper around generateText with no meaningful
extras. Using generateText directly removes the agent instance cache
and simplifies the code. System prompt is now passed per-request.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Type-assert executor.execute() return as unknown[] in bbProvider
integration tests. Remove invalid reduceOnly from MockBroker
closePosition (not part of TpSlParams interface).

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
…OAuth

Adds a new AI provider backend that calls OpenAI Codex models (gpt-5.x,
codex-mini-latest) through the ChatGPT subscription endpoint, using OAuth
tokens from ~/.codex/auth.json (created by `codex login`).

- New provider at src/ai-providers/codex/ with auth, tool-bridge, and
  manual tool loop over the Responses API
- Direct HTTP via `openai` SDK — no CLI subprocess, no Vercel wrapper
- Context managed by us (toTextHistory), tools injected per-request
- Token auto-refresh against auth.openai.com/oauth/token
- Replaces @openai/codex-sdk dependency with openai SDK

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Frontend:
- Replace legacy "OpenAI" (Vercel wrapper) card with native "OpenAI / Codex"
- Add CodexAuthForm with ChatGPT Subscription / API Key toggle
- Add Codex config section in ChannelConfigModal (model, baseUrl, apiKey, loginMethod)
- Add CodexOverride type + update provider unions across frontend

Backend:
- Codex provider supports dual auth: codex-oauth (default) reads ~/.codex/auth.json,
  api-key mode uses apiKeys.openai from config
- Add loginMethod to codexOverrideSchema and GenerateOpts
- Fix /ai-provider PUT validation to accept 'codex' backend
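
The dual-auth selection described above might look roughly like this. The `tokens.access_token` shape for `~/.codex/auth.json` is an assumption, and the file read is injected so the branch logic stays testable.

```typescript
// Assumed shape of the Codex CLI auth file.
type CodexAuthFile = { tokens: { access_token: string } };

function resolveAuth(
  loginMethod: "codex-oauth" | "api-key",
  apiKeys: Record<string, string>,
  readAuthFile: () => CodexAuthFile,
): { bearer: string } {
  if (loginMethod === "api-key") {
    // Standard OpenAI billing: use the global apiKeys.openai entry.
    return { bearer: apiKeys.openai };
  }
  // codex-oauth (default): token written by `codex login`; refresh against
  // auth.openai.com happens elsewhere when the token expires.
  return { bearer: readAuthFile().tokens.access_token };
}
```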

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
POST and PUT /channels were rejecting 'codex' as provider and not
passing the codex override object through to config.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
The old ai-provider-manager.json was a flat object where model/loginMethod/baseUrl
were shared across all backends — switching from Claude to Codex would send
"claude-sonnet-4-6" to OpenAI (400 error).

New structure separates selection from configuration:

  { apiKeys: {...}, profiles: { "slug": { backend, label, model, ... } }, activeProfile: "slug" }

- Each profile carries only the fields its backend needs (discriminated union)
- Profiles are identified by slug, referenced by main channel and subchannels
- Global apiKeys shared, per-profile apiKey override supported
- Auto-migration from old flat format on first load
- Subchannels simplified: `profile: "slug"` replaces inline provider/vercelAiSdk/agentSdk/codex
- Providers receive ResolvedProfile instead of reading flat config directly
- GenerateRouter resolves profile → picks provider by backend
- Profile CRUD API: GET/POST/PUT/DELETE /config/profiles, PUT /config/active-profile
- Telegram settings menu shows profiles instead of backends
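
The auto-migration can be sketched as a pure function over the old flat shape. Field names follow the commit message; the `default` slug, the label, and the helper name are illustrative assumptions.

```typescript
// Old flat shape: model/loginMethod/baseUrl shared across all backends.
type FlatConfig = {
  backend: string;
  model: string;
  loginMethod?: string;
  baseUrl?: string;
  apiKeys?: Record<string, string>;
};

type ProfileConfig = {
  apiKeys: Record<string, string>;
  profiles: Record<
    string,
    { backend: string; label: string; model: string; loginMethod?: string; baseUrl?: string }
  >;
  activeProfile: string;
};

function migrateFlatConfig(flat: FlatConfig): ProfileConfig {
  // The shared fields become one profile scoped to the old backend, so a
  // Claude model name can no longer be sent to OpenAI after switching.
  return {
    apiKeys: flat.apiKeys ?? {},
    profiles: {
      default: {
        backend: flat.backend,
        label: `Migrated ${flat.backend}`,
        model: flat.model,
        loginMethod: flat.loginMethod,
        baseUrl: flat.baseUrl,
      },
    },
    activeProfile: "default",
  };
}
```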

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
- AIProviderPage: profile manager with create/edit/delete, active profile
  selection, per-backend form fields, global API keys section
- ChannelConfigModal: replace 3 inline override forms (~150 lines) with
  a single profile dropdown selector
- types.ts: Profile + AIBackend types, simplified WebChannel (profile slug
  instead of inline overrides), new AIProviderConfig structure
- config.ts: profile CRUD API methods (get/create/update/delete/setActive),
  API keys management, remove old setBackend()
- channels.ts: simplified ChannelListItem with profile reference

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
ChatGPT's /backend-api/codex/responses requires:
- store: false (server-side storage not supported on subscription endpoint)
- input as array (not string)

Without these, requests get 400 with "Store must be set to false" or
"Input must be a list".

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Replace toTextHistory() + buildChatHistoryPrompt() with new
toResponsesInput() that converts SessionEntry[] to Responses API
input items (messages, function_call, function_call_output).

The model now sees proper multi-turn conversation structure instead of
a single user message with XML-wrapped history text. Handles orphaned
tool calls from compaction truncation.
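
A rough sketch of what `toResponsesInput()` does; `SessionEntry` is an assumed shape, and keeping a call/result pair only when both halves survived compaction is one plausible reading of the orphan handling, not necessarily the PR's exact rule.

```typescript
type SessionEntry =
  | { kind: "user"; text: string }
  | { kind: "assistant"; text: string }
  | { kind: "tool_call"; callId: string; name: string; args: string }
  | { kind: "tool_result"; callId: string; output: string };

type InputItem =
  | { role: "user" | "assistant"; content: string }
  | { type: "function_call"; call_id: string; name: string; arguments: string }
  | { type: "function_call_output"; call_id: string; output: string };

function toResponsesInput(entries: SessionEntry[]): InputItem[] {
  const callIds = new Set(
    entries
      .filter((e): e is Extract<SessionEntry, { kind: "tool_call" }> => e.kind === "tool_call")
      .map((e) => e.callId),
  );
  const resultIds = new Set(
    entries
      .filter((e): e is Extract<SessionEntry, { kind: "tool_result" }> => e.kind === "tool_result")
      .map((e) => e.callId),
  );
  const items: InputItem[] = [];
  for (const e of entries) {
    if (e.kind === "user" || e.kind === "assistant") {
      items.push({ role: e.kind, content: e.text });
    } else if (e.kind === "tool_call" && resultIds.has(e.callId)) {
      // Drop calls whose result was lost to compaction truncation.
      items.push({ type: "function_call", call_id: e.callId, name: e.name, arguments: e.args });
    } else if (e.kind === "tool_result" && callIds.has(e.callId)) {
      // And results whose originating call was truncated away.
      items.push({ type: "function_call_output", call_id: e.callId, output: e.output });
    }
  }
  return items;
}
```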

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
The function_call_arguments.done event lacks call_id and name fields.
The complete function call object (with call_id, name, arguments) is
only available on the response.output_item.done event.

Verified with streaming round-trip test against live Codex endpoint.
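
The fix can be illustrated with a small event collector; the event payload shapes here are simplified stand-ins, not the SDK's exact types.

```typescript
// Simplified streaming events (assumed shapes).
type StreamEvent =
  | { type: "response.function_call_arguments.delta"; delta: string }
  | { type: "response.function_call_arguments.done"; arguments: string }
  | {
      type: "response.output_item.done";
      item: { type: "function_call"; call_id: string; name: string; arguments: string };
    };

function collectFunctionCalls(events: StreamEvent[]) {
  const calls: { call_id: string; name: string; arguments: string }[] = [];
  for (const ev of events) {
    // function_call_arguments.done is intentionally ignored: without
    // call_id and name it cannot be matched back to a tool invocation.
    if (ev.type === "response.output_item.done" && ev.item.type === "function_call") {
      calls.push(ev.item); // complete object: call_id + name + arguments
    }
  }
  return calls;
}
```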

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
luokerenx4 merged commit 3da936d into master on Apr 7, 2026
2 checks passed