feat: add Codex AI provider + profile-based config system #106
Merged
luokerenx4 merged 15 commits into master on Apr 7, 2026
Conversation
Update website link to openalice.ai, add docs badge, and rewrite Project Structure section to match current monorepo layout (packages/, ui/, server/, removed plugins/ and skills/). Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
- Add heartbeat auto-copy at startup (same as persona)
- Heartbeat GET route falls back to default/ when the data file is missing
- Add persona GET/PUT backend routes with default fallback
- Add persona editor to the Settings page in the frontend

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Instructions are now rebuilt on each request instead of being frozen at startup. Persona is re-read from disk and brain state (frontal lobe, emotion) is pulled live from the Brain instance. Both providers accept a () => Promise<string> getter; VercelAIProvider cache invalidates when instructions content changes. Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
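The getter-based pattern described above can be sketched as follows. This is a minimal illustration, not the PR's actual code: the class name, method names, and the `rebuildCount` counter are assumptions; the real providers rebuild instructions from persona and brain state per request and invalidate a cache when the resulting text differs.

```typescript
// A provider receives an async getter instead of a frozen string, so
// instructions are re-read on every request.
type InstructionsGetter = () => Promise<string>;

class CachedInstructions {
  private last: string | null = null;
  private rebuilds = 0;

  constructor(private readonly getter: InstructionsGetter) {}

  // Re-read instructions and report whether the cached copy was stale;
  // a provider would rebuild its downstream state only when `changed`.
  async refresh(): Promise<{ text: string; changed: boolean }> {
    const text = await this.getter();
    const changed = text !== this.last;
    if (changed) {
      this.last = text;
      this.rebuilds++;
    }
    return { text, changed };
  }

  get rebuildCount(): number {
    return this.rebuilds;
  }
}
```

The key property is that identical rebuilt instructions do not invalidate the cache, while any content change does.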
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
ToolLoopAgent was a thin wrapper around generateText with no meaningful extras. Using generateText directly removes the agent instance cache and simplifies the code. System prompt is now passed per-request. Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Type-assert executor.execute() return as unknown[] in bbProvider integration tests. Remove invalid reduceOnly from MockBroker closePosition (not part of TpSlParams interface). Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
…OAuth

Adds a new AI provider backend that calls OpenAI Codex models (gpt-5.x, codex-mini-latest) through the ChatGPT subscription endpoint, using OAuth tokens from ~/.codex/auth.json (created by `codex login`).

- New provider at src/ai-providers/codex/ with auth, tool-bridge, and manual tool loop over the Responses API
- Direct HTTP via the `openai` SDK: no CLI subprocess, no Vercel wrapper
- Context managed by us (toTextHistory), tools injected per-request
- Token auto-refresh against auth.openai.com/oauth/token
- Replaces the @openai/codex-sdk dependency with the openai SDK

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
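The auto-refresh flow above can be sketched as below. This is a hedged illustration: the field names in ~/.codex/auth.json and the refresh margin are assumptions, and the refresh call itself is abstracted behind a callback rather than a real request to the token endpoint; the actual provider would also persist rotated tokens back to disk.

```typescript
// Assumed shape of the tokens read from ~/.codex/auth.json.
interface CodexAuth {
  accessToken: string;
  refreshToken: string;
  expiresAt: number; // unix epoch, milliseconds (assumption)
}

// Refresh slightly early so an in-flight request never carries an
// expired token (margin is an illustrative choice).
const REFRESH_MARGIN_MS = 60_000;

function needsRefresh(auth: CodexAuth, now: number = Date.now()): boolean {
  return now >= auth.expiresAt - REFRESH_MARGIN_MS;
}

// Before each request: exchange the refresh token at the OAuth token
// endpoint only when the access token is expired or about to expire.
async function ensureFreshToken(
  auth: CodexAuth,
  refresh: (refreshToken: string) => Promise<CodexAuth>,
): Promise<CodexAuth> {
  return needsRefresh(auth) ? refresh(auth.refreshToken) : auth;
}
```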
Frontend:
- Replace the legacy "OpenAI" (Vercel wrapper) card with a native "OpenAI / Codex" card
- Add CodexAuthForm with a ChatGPT Subscription / API Key toggle
- Add a Codex config section in ChannelConfigModal (model, baseUrl, apiKey, loginMethod)
- Add the CodexOverride type and update provider unions across the frontend

Backend:
- Codex provider supports dual auth: codex-oauth (default) reads ~/.codex/auth.json; api-key mode uses apiKeys.openai from config
- Add loginMethod to codexOverrideSchema and GenerateOpts
- Fix /ai-provider PUT validation to accept the 'codex' backend

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
POST and PUT /channels were rejecting 'codex' as a provider and not passing the codex override object through to the config. Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
The old ai-provider-manager.json was a flat object where model/loginMethod/baseUrl were shared across all backends: switching from Claude to Codex would send "claude-sonnet-4-6" to OpenAI (400 error).
New structure separates selection from configuration:
{ apiKeys: {...}, profiles: { "slug": { backend, label, model, ... } }, activeProfile: "slug" }
- Each profile carries only the fields its backend needs (discriminated union)
- Profiles are identified by slug, referenced by main channel and subchannels
- Global apiKeys shared, per-profile apiKey override supported
- Auto-migration from old flat format on first load
- Subchannels simplified: `profile: "slug"` replaces inline provider/vercelAiSdk/agentSdk/codex
- Providers receive ResolvedProfile instead of reading flat config directly
- GenerateRouter resolves profile → picks provider by backend
- Profile CRUD API: GET/POST/PUT/DELETE /config/profiles, PUT /config/active-profile
- Telegram settings menu shows profiles instead of backends
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
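The profile structure described in the commit above can be sketched as a discriminated union. This is an illustrative reconstruction, not the PR's actual schema: the backend names, the exact per-backend fields, and the codex-to-openai API-key mapping inside `resolveProfile` are assumptions grounded in the surrounding commit messages.

```typescript
type AIBackend = "claude" | "vercel" | "codex";

interface BaseProfile {
  label: string;
  model: string;
  apiKey?: string; // per-profile override of the global key
}

// Each profile carries only the fields its backend needs.
type Profile =
  | (BaseProfile & { backend: "claude" })
  | (BaseProfile & { backend: "vercel"; baseUrl?: string })
  | (BaseProfile & { backend: "codex"; loginMethod: "codex-oauth" | "api-key" });

interface AIProviderConfig {
  apiKeys: Record<string, string>; // global, shared across profiles
  profiles: Record<string, Profile>; // keyed by slug
  activeProfile: string;
}

// Resolve a profile by slug (defaulting to the active one) and merge in
// the global API key when no per-profile override is set. The mapping of
// the codex backend to apiKeys.openai follows the commit above.
function resolveProfile(config: AIProviderConfig, slug?: string): Profile {
  const key = slug ?? config.activeProfile;
  const profile = config.profiles[key];
  if (!profile) throw new Error(`unknown profile: ${key}`);
  const globalKeyName = profile.backend === "codex" ? "openai" : profile.backend;
  return { ...profile, apiKey: profile.apiKey ?? config.apiKeys[globalKeyName] };
}
```

A subchannel then stores only `profile: "slug"`, and the router picks a provider from the resolved profile's `backend` field.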
- AIProviderPage: profile manager with create/edit/delete, active-profile selection, per-backend form fields, and a global API keys section
- ChannelConfigModal: replace 3 inline override forms (~150 lines) with a single profile dropdown selector
- types.ts: Profile + AIBackend types, simplified WebChannel (profile slug instead of inline overrides), new AIProviderConfig structure
- config.ts: profile CRUD API methods (get/create/update/delete/setActive), API keys management, remove the old setBackend()
- channels.ts: simplified ChannelListItem with a profile reference

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
ChatGPT's /backend-api/codex/responses requires:
- store: false (server-side storage is not supported on the subscription endpoint)
- input as an array (not a string)

Without these, requests fail with 400 "Store must be set to false" or "Input must be a list".

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
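The two constraints above can be captured in a small request builder. This is a minimal sketch, not the provider's real code: the request type is narrowed to the fields relevant here, and the wrapping of a bare string into a one-item message list is an assumed convenience.

```typescript
// Only the fields relevant to the subscription-endpoint constraints.
interface ResponsesRequest {
  model: string;
  store: false; // the endpoint rejects store: true
  input: unknown[]; // must be a list, never a string
  instructions?: string;
}

function buildRequest(model: string, input: string | unknown[]): ResponsesRequest {
  return {
    model,
    store: false,
    // Wrap bare strings into a one-item input list so the endpoint never
    // sees string input ("Input must be a list").
    input: typeof input === "string" ? [{ role: "user", content: input }] : input,
  };
}
```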
Replace toTextHistory() + buildChatHistoryPrompt() with new toResponsesInput() that converts SessionEntry[] to Responses API input items (messages, function_call, function_call_output). The model now sees proper multi-turn conversation structure instead of a single user message with XML-wrapped history text. Handles orphaned tool calls from compaction truncation. Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
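The conversion described above can be sketched as follows. The `SessionEntry` shape here is an assumption (the real type lives in the repo); what the sketch shows is the structural idea: messages, function_call, and function_call_output items in order, with orphaned tool calls (whose output was lost to compaction truncation) dropped so the API never sees a call without its result.

```typescript
// Assumed, simplified session entry shape.
type SessionEntry =
  | { kind: "message"; role: "user" | "assistant"; text: string }
  | { kind: "tool_call"; callId: string; name: string; args: string }
  | { kind: "tool_result"; callId: string; output: string };

// Responses API input items (field names per the Responses API).
type ResponsesItem =
  | { type: "message"; role: string; content: string }
  | { type: "function_call"; call_id: string; name: string; arguments: string }
  | { type: "function_call_output"; call_id: string; output: string };

function toResponsesInput(entries: SessionEntry[]): ResponsesItem[] {
  // First pass: which tool calls still have their output in the history?
  const answered = new Set<string>();
  for (const e of entries) {
    if (e.kind === "tool_result") answered.add(e.callId);
  }
  const items: ResponsesItem[] = [];
  for (const e of entries) {
    if (e.kind === "message") {
      items.push({ type: "message", role: e.role, content: e.text });
    } else if (e.kind === "tool_call") {
      // Skip orphaned calls left behind by compaction truncation.
      if (answered.has(e.callId)) {
        items.push({ type: "function_call", call_id: e.callId, name: e.name, arguments: e.args });
      }
    } else {
      items.push({ type: "function_call_output", call_id: e.callId, output: e.output });
    }
  }
  return items;
}
```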
The function_call_arguments.done event lacks call_id and name fields. The complete function call object (with call_id, name, arguments) is only available on the response.output_item.done event. Verified with streaming round-trip test against live Codex endpoint. Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
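The fix described above amounts to treating response.output_item.done as the source of truth for tool calls. The sketch below uses simplified, assumed event shapes (the real SDK types are richer): argument deltas may be streamed for display, but the complete call is only collected from the output_item.done event, since function_call_arguments.done lacks call_id and name.

```typescript
// Simplified stand-ins for the streaming events of interest.
type StreamEvent =
  | { type: "response.function_call_arguments.delta"; delta: string }
  | { type: "response.function_call_arguments.done"; arguments: string } // no call_id/name
  | {
      type: "response.output_item.done";
      item:
        | { type: "function_call"; call_id: string; name: string; arguments: string }
        | { type: "message"; content: string };
    };

interface ToolCall {
  callId: string;
  name: string;
  args: string;
}

// Collect complete tool calls from a finished stream. Only
// output_item.done carries call_id + name + arguments together.
function collectToolCalls(events: StreamEvent[]): ToolCall[] {
  const calls: ToolCall[] = [];
  for (const ev of events) {
    if (ev.type === "response.output_item.done" && ev.item.type === "function_call") {
      calls.push({ callId: ev.item.call_id, name: ev.item.name, args: ev.item.arguments });
    }
  }
  return calls;
}
```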
Summary
Replaces the old flat ai-provider-manager.json where model/loginMethod/baseUrl were shared across all backends. Each profile is a named, typed configuration (discriminated by backend). Main channel and subchannels reference profiles by slug.

Codex provider details
- OAuth tokens from ~/.codex/auth.json (user runs `codex login`), auto token refresh via auth.openai.com
- Dual auth: codex-oauth (ChatGPT subscription) or api-key (standard OpenAI billing)
- Tools injected per-request via the `tools` field, manual tool loop
- History converted with toResponsesInput()

Profile system details
- Config shape: { apiKeys, profiles: { "slug": { backend, label, model, ... } }, activeProfile }
- resolveProfile(slug?) replaces scattered readAIProviderConfig() calls in providers
- Profile CRUD API: GET/POST/PUT/DELETE /config/profiles, PUT /config/active-profile

Test plan
- pnpm test: 959 tests passing
- pnpm build: clean

🤖 Generated with Claude Code