Merged

33 commits
c963bd5
Merge pull request #73 from open-hax/dev
riatzukiza Nov 23, 2025
e74622e
welp
riatzukiza Dec 5, 2025
ada7abd
Potential fix for pull request finding 'Unused variable, import, func…
riatzukiza Dec 5, 2025
253c991
ESLint issues fixed: complexity, warnings, file length
opencode-agent[bot] Dec 5, 2025
a58aa43
chore: update 1 file(s) [auto]
riatzukiza Dec 6, 2025
d8a1b17
Merge branch 'device/stealth' of github.com:open-hax/codex into devic…
riatzukiza Dec 13, 2025
4e25c4c
feat: update 2 file(s) [auto]
riatzukiza Dec 13, 2025
39366bf
feat: update 2 file(s) [auto]
riatzukiza Dec 13, 2025
62ca435
feat: update 3 file(s) [auto]
riatzukiza Dec 13, 2025
948e174
chore: update 3 file(s) [auto]
riatzukiza Dec 13, 2025
2c78e16
feat: update 3 file(s) [auto]
riatzukiza Dec 13, 2025
c3cdaa7
feat: update 3 file(s) [auto]
riatzukiza Dec 13, 2025
995ddd0
feat: update 2 file(s) [auto]
riatzukiza Dec 13, 2025
4d8837a
feat: update 2 file(s) [auto]
riatzukiza Dec 13, 2025
f473297
feat: update 2 file(s) [auto]
riatzukiza Dec 13, 2025
cf5be49
feat: update 1 file(s) [auto]
riatzukiza Dec 13, 2025
f0ace80
feat: update 1 file(s) [auto]
riatzukiza Dec 13, 2025
3e9dd60
feat: update 1 file(s) [auto]
riatzukiza Dec 13, 2025
2380515
feat: update 2 file(s) [auto]
riatzukiza Dec 13, 2025
0312bf2
feat: update 3 file(s) [auto]
riatzukiza Dec 13, 2025
ba315e4
feat: update 5 file(s) [auto]
riatzukiza Dec 13, 2025
83fc02b
feat: update 2 file(s) [auto]
riatzukiza Dec 13, 2025
c373e94
feat: update 2 file(s) [auto]
riatzukiza Dec 13, 2025
e981943
feat: update 1 file(s) [auto]
riatzukiza Dec 13, 2025
7d4f3b8
feat: update 1 file(s) [auto]
riatzukiza Dec 13, 2025
e3eb468
feat: update 2 file(s) [auto]
riatzukiza Dec 13, 2025
aae4dc0
feat: update 3 file(s) [auto]
riatzukiza Dec 13, 2025
a435669
feat: update 1 file(s) [auto]
riatzukiza Dec 13, 2025
f962b7e
feat: update 3 file(s) [auto]
riatzukiza Dec 13, 2025
9a02105
feat: update 4 file(s) [auto]
riatzukiza Dec 13, 2025
52dce35
Potential fix for pull request finding 'Useless assignment to local v…
riatzukiza Dec 13, 2025
96953f9
feat: update 1 file(s) [auto]
riatzukiza Dec 13, 2025
c48ec78
feat: update 1 file(s) [auto]
riatzukiza Dec 13, 2025
36 changes: 26 additions & 10 deletions AGENTS.md
@@ -4,7 +4,7 @@ This file provides coding guidance for AI agents (including Claude Code, Codex,

## Overview

This is an **opencode plugin** that enables OAuth authentication with OpenAI's ChatGPT Plus/Pro Codex backend. It now mirrors the Codex CLI lineup, making `gpt-5.1-codex-max` (with optional `xhigh` reasoning) the default while also exposing the new `gpt-5.2` frontier preset, the existing `gpt-5.1-codex` / `gpt-5.1-codex-mini`, and legacy `gpt-5` models—all available through a ChatGPT subscription instead of OpenAI Platform API credits.

**Key architecture principle**: 7-step fetch flow that intercepts opencode's OpenAI SDK requests, transforms them for the ChatGPT backend API, and handles OAuth token management.

@@ -58,25 +58,30 @@ The main entry point orchestrates a **7-step fetch flow**:
### Module Organization

**Core Plugin** (`index.ts`)

- Plugin definition and main fetch orchestration
- OAuth loader (extracts ChatGPT account ID from JWT)
- Configuration loading and CODEX_MODE determination

**Authentication** (`lib/auth/`)

- `auth.ts`: OAuth flow (PKCE, token exchange, JWT decoding, refresh)
- `server.ts`: Local HTTP server for OAuth callback (port 1455)
- `browser.ts`: Platform-specific browser opening

**Request Handling** (`lib/request/`)

- `fetch-helpers.ts`: 10 focused helper functions for main fetch flow
- `request-transformer.ts`: Body transformations (model normalization, reasoning config, input filtering)
- `response-handler.ts`: SSE to JSON conversion

**Prompts** (`lib/prompts/`)

- `codex.ts`: Fetches Codex instructions from GitHub (ETag-cached), tool remap message
- `codex-opencode-bridge.ts`: CODEX_MODE bridge prompt for CLI parity

**Configuration** (`lib/`)

- `config.ts`: Plugin config loading, CODEX_MODE determination
- `constants.ts`: All magic values, URLs, error messages
- `types.ts`: TypeScript type definitions
@@ -85,28 +90,33 @@ The main entry point orchestrates a **7-step fetch flow**:
### Key Design Patterns

**1. Stateless Operation**: Uses `store: false` + `include: ["reasoning.encrypted_content"]`

- Allows multi-turn conversations without server-side storage
- Encrypted reasoning content persists context across turns
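The two fields above can be sketched as a tiny helper (an illustrative approximation; the surrounding payload shape and helper name are assumptions, not the plugin's actual API):

```typescript
// Illustrative sketch of the stateless-operation fields the plugin sets on
// each request to the ChatGPT backend.
const statelessFields = {
  store: false, // no server-side conversation storage
  include: ["reasoning.encrypted_content"], // carries reasoning across turns
};

// Merge the stateless fields into an outgoing request body.
function applyStatelessFields<T extends object>(body: T) {
  return { ...body, ...statelessFields };
}
```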

**2. CODEX_MODE** (enabled by default):

- **Priority**: `CODEX_MODE` env var > `~/.opencode/openhax-codex-config.json` > default (true)
- When enabled: Filters out OpenCode system prompts, adds Codex-OpenCode bridge prompt with Task tool & MCP awareness
- When disabled: Uses legacy tool remap message
- Bridge prompt (~550 tokens): Tool mappings, available tools, working style, **Task tool/sub-agent awareness**, **MCP tool awareness**
- **Prompt verification**: Caches OpenCode's codex.txt from GitHub (ETag-based) to verify exact prompt removal, with fallback to text signature matching
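The priority chain (env var, then config file, then default) might look like this — function and field names here are illustrative, not the plugin's actual identifiers:

```typescript
// Hypothetical sketch of CODEX_MODE resolution:
// CODEX_MODE env var > ~/.opencode/openhax-codex-config.json > default (true).
function resolveCodexMode(
  env: Record<string, string | undefined>,
  fileConfig: { codexMode?: boolean } | null,
): boolean {
  const fromEnv = env["CODEX_MODE"];
  // Env var wins when present; treat "0"/"false" as disabled.
  if (fromEnv !== undefined) return fromEnv !== "0" && fromEnv !== "false";
  // Then the on-disk config file, when it sets a boolean.
  if (fileConfig && typeof fileConfig.codexMode === "boolean") {
    return fileConfig.codexMode;
  }
  return true; // enabled by default
}
```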

**3. Configuration Merging**:

- Global options (`provider.openai.options`) + per-model options (`provider.openai.models[name].options`)
- Model-specific options override global
- Plugin defaults: `reasoningEffort: "medium"`, `reasoningSummary: "auto"`, `textVerbosity: "medium"`
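The merge order (plugin defaults, then global options, then per-model options, later layers winning) can be sketched with object spreads — the option names come from the defaults listed above; the helper itself is hypothetical:

```typescript
// Hypothetical sketch of configuration merging; later spreads override earlier ones.
interface ModelOptions {
  reasoningEffort?: string;
  reasoningSummary?: string;
  textVerbosity?: string;
}

const PLUGIN_DEFAULTS: ModelOptions = {
  reasoningEffort: "medium",
  reasoningSummary: "auto",
  textVerbosity: "medium",
};

// provider.openai.options (global) < provider.openai.models[name].options (per-model)
function mergeOptions(global: ModelOptions, perModel: ModelOptions): ModelOptions {
  return { ...PLUGIN_DEFAULTS, ...global, ...perModel };
}
```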

**4. Model Normalization**:

- All `gpt-5-codex` variants → `gpt-5-codex`
- All `gpt-5-codex-mini*` or `codex-mini-latest` variants → `codex-mini-latest`
- All `gpt-5` variants → `gpt-5`
- `minimal` effort auto-normalized to `low` for gpt-5-codex (API limitation) and clamped to `medium` (or `high` when requested) for Codex Mini
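The normalization rules above can be approximated with prefix matching — a hedged sketch; the real helper in `lib/request/request-transformer.ts` may differ in detail:

```typescript
// Hypothetical sketch of model-name normalization for the legacy gpt-5 family.
function normalizeModel(model: string): string {
  const m = model.toLowerCase();
  // GPT-5.1 / GPT-5.2 presets are not part of these rules; pass them through.
  if (m.startsWith("gpt-5.")) return model;
  // All gpt-5-codex-mini* or codex-mini-latest variants -> codex-mini-latest
  if (m.startsWith("gpt-5-codex-mini") || m.startsWith("codex-mini-latest")) {
    return "codex-mini-latest";
  }
  // All gpt-5-codex variants -> gpt-5-codex
  if (m.startsWith("gpt-5-codex")) return "gpt-5-codex";
  // All remaining gpt-5 variants -> gpt-5
  if (m.startsWith("gpt-5")) return "gpt-5";
  return model; // unknown models pass through untouched
}
```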

**5. Codex Instructions Caching**:

- Fetches from latest release tag (not main branch)
- ETag-based HTTP conditional requests
- Cache invalidation when release tag changes
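The interplay of the release tag and ETag can be sketched as follows (cache shape and helper name are illustrative assumptions):

```typescript
// Hypothetical sketch of conditional-request headers for the instructions cache.
interface InstructionsCache {
  releaseTag: string;
  etag: string;
  body: string;
}

function conditionalHeaders(
  cache: InstructionsCache | null,
  latestTag: string,
): Record<string, string> {
  // A changed release tag invalidates the cache entirely, so only send
  // If-None-Match (hoping for a 304) when the cached entry matches the tag.
  if (cache && cache.releaseTag === latestTag) {
    return { "If-None-Match": cache.etag };
  }
  return {}; // full fetch; repopulate cache from the response
}
```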
@@ -124,6 +134,7 @@ The main entry point orchestrates a **7-step fetch flow**:
### Modifying Request Transformation

All request transformations go through `transformRequestBody()`:

- Input filtering: `filterInput()`, `filterOpenCodeSystemPrompts()`
- Message injection: `addCodexBridgeMessage()` or `addToolRemapMessage()`
- Reasoning config: `getReasoningConfig()` (follows Codex CLI defaults, not opencode defaults)
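Composed in order, the pipeline might look like the sketch below — the helper names match the list above, but their bodies are stubs and the orchestration shown is an assumption, not the plugin's exact code:

```typescript
// Hypothetical sketch of the transformRequestBody() pipeline; real helpers
// live in lib/request/ and do actual filtering/injection work.
type Body = { input: unknown[]; reasoning?: object };

const filterInput = (b: Body): Body => b;                        // stub
const filterOpenCodeSystemPrompts = (b: Body): Body => b;        // stub
const addCodexBridgeMessage = (b: Body): Body => b;              // stub (CODEX_MODE on)
const addToolRemapMessage = (b: Body): Body => b;                // stub (CODEX_MODE off)
const getReasoningConfig = (): object => ({ effort: "medium" }); // stub

function transformRequestBody(body: Body, codexMode: boolean): Body {
  let out = filterInput(body);
  if (codexMode) out = filterOpenCodeSystemPrompts(out);
  out = codexMode ? addCodexBridgeMessage(out) : addToolRemapMessage(out);
  return { ...out, reasoning: getReasoningConfig() };
}
```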
@@ -132,6 +143,7 @@ All request transformations go through `transformRequestBody()`:
### OAuth Flow Modifications

OAuth implementation follows OpenAI Codex CLI patterns:

- Client ID: `app_EMoamEEZ73f0CkXaXp7hrann`
- PKCE with S256 challenge
- Special params: `codex_cli_simplified_flow=true`, `originator=codex_cli_rs`
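The S256 challenge derivation can be sketched with Node's `crypto` module — a minimal illustration of PKCE, not the plugin's actual implementation:

```typescript
// Minimal PKCE S256 sketch: random verifier, SHA-256 challenge, base64url.
import { createHash, randomBytes } from "crypto";

function pkcePair(): { verifier: string; challenge: string } {
  // 32 random bytes -> 43-char base64url verifier
  const verifier = randomBytes(32).toString("base64url");
  // challenge = BASE64URL(SHA256(verifier)), per RFC 7636 S256
  const challenge = createHash("sha256").update(verifier).digest("base64url");
  return { verifier, challenge };
}
```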
@@ -148,16 +160,16 @@ OAuth implementation follows OpenAI Codex CLI patterns:

This plugin **intentionally differs from opencode defaults** because it accesses ChatGPT backend API (not OpenAI Platform API):

| Setting            | opencode Default | This Plugin Default               | Reason                                         |
| ------------------ | ---------------- | --------------------------------- | ---------------------------------------------- |
| `reasoningEffort`  | "high" (gpt-5)   | "medium"                          | Matches Codex CLI default                      |
| `textVerbosity`    | "low" (gpt-5)    | "medium"                          | Matches Codex CLI default                      |
| `reasoningSummary` | "detailed"       | "auto"                            | Matches Codex CLI default                      |
| gpt-5-codex config | (excluded)       | Full support                      | opencode excludes gpt-5-codex from auto-config |
| `store`            | true             | false                             | Required for ChatGPT backend                   |
| `include`          | (not set)        | `["reasoning.encrypted_content"]` | Required for stateless operation               |

> **Extra High reasoning**: `reasoningEffort: "xhigh"` is honored for `gpt-5.1-codex-max` and `gpt-5.2`. Other models automatically downgrade it to `high` so their API calls remain valid.

## File Paths & Locations

@@ -188,9 +200,11 @@ This plugin **intentionally differs from opencode defaults** because it accesses
## Dependencies

**Production**:

- `@openauthjs/openauth` (OAuth PKCE implementation)

**Development**:

- `@opencode-ai/plugin` (peer dependency)
- `vitest` (testing framework)
- TypeScript
@@ -200,11 +214,13 @@ This plugin **intentionally differs from opencode defaults** because it accesses
## 🔗 Cross-Repository Integration

### Comprehensive Cross-References

- **[CROSS_REFERENCES.md](./CROSS_REFERENCES.md)** - Complete cross-references to all related repositories
- **[Workspace AGENTS.md](../AGENTS.md)** - Main workspace documentation
- **[Repository Index](../REPOSITORY_INDEX.md)** - Complete repository overview

### Related Repositories

- **[promethean](../promethean/)**: Agent orchestration and automated testing
- **[agent-shell](../agent-shell/)**: Authentication patterns for Agent Shell
- **[moofone/codex-ts-sdk](../moofone/codex-ts-sdk/)**: TypeScript SDK compatibility
14 changes: 14 additions & 0 deletions CHANGELOG.md
@@ -2,6 +2,20 @@

All notable changes to this project are documented here. Dates use the ISO format (YYYY-MM-DD).

## [3.4.0] - 2025-12-12

### Added

- GPT-5.2 support mirroring the latest Codex CLI release: model normalization, reasoning heuristics (including native `xhigh`), text-verbosity defaults, sample config/test coverage, and docs describing the new frontier preset.

### Changed

- README, AGENTS.md, configuration docs, and the diagnostic script now call out GPT-5.2 alongside Codex Max wherever reasoning tiers or available presets are listed.

### Fixed

- Requests targeting `gpt-5.2` now clamp unsupported `none`/`minimal` reasoning values to `low`, preventing invalid API calls while keeping `xhigh` available without Codex Max.

## [3.3.0] - 2025-11-19

### Added
34 changes: 17 additions & 17 deletions README.md
@@ -42,8 +42,12 @@ This plugin enables opencode to use OpenAI's Codex backend via ChatGPT Plus/Pro
2. Restart OpenCode (it installs plugins automatically). If prompted, run `opencode auth login` and finish the OAuth flow with your ChatGPT account.
3. In the TUI, choose `GPT 5.1 Codex Max (OAuth)` and start chatting.

Need a full walkthrough or update/cleanup steps? See [docs/getting-started.md](./docs/getting-started.md) and [docs/index.md](./docs/index.md#installation).

Prefer every preset? Copy [`config/full-opencode.json`](./config/full-opencode.json) instead; it registers all GPT-5.1/GPT-5 Codex variants with recommended settings.

Need live stats? A local dashboard now starts automatically (binds to 127.0.0.1 on a random port) and shows cache/request metrics plus the last few transformed requests; check logs for the URL.

Want to customize? Jump to [Configuration reference](#configuration-reference).

## Plugin-Level Settings
@@ -108,17 +112,7 @@ Example:
- **Reduces token consumption** by reusing cached prompts
- **Lowers costs** significantly for multi-turn conversations

### Reducing Cache Churn (keep `prompt_cache_key` stable)

- Why caches reset: OpenCode rebuilds the system/developer prompt every turn; the env block includes today’s date and a ripgrep tree of your workspace, so daily rollovers or file tree changes alter the prefix and trigger a new cache key.
- Keep the tree stable: ensure noisy/ephemeral dirs are ignored (e.g. `dist/`, `build/`, `.next/`, `coverage/`, `.cache/`, `logs/`, `tmp/`, `.turbo/`, `.vite/`, `.stryker-tmp/`, `artifacts/`, and similar). Put transient outputs under an ignored directory or `/tmp`.
- Don’t thrash the workspace mid-session: large checkouts, mass file generation, or moving directories will change the ripgrep listing and force a cache miss.
- Model/provider switches also change the system prompt (different base prompt), so avoid swapping models in the middle of a session if you want to reuse cache.
- Optional: set `CODEX_APPEND_ENV_CONTEXT=1` to reattach env/files at the end of the prompt instead of stripping them. This keeps the shared prefix stable (better cache reuse) while still sending env/files as a trailing developer message. Default is off (env/files stripped to maximize stability).

## Recommended: Full Configuration (Codex CLI Experience)

For the complete experience with all reasoning variants matching the official Codex CLI:

@@ -441,14 +435,20 @@ For the complete experience with all reasoning variants matching the official Co
**Global config**: `~/.config/opencode/opencode.json`
**Project config**: `<project>/.opencode.json`

This now gives you 22 model variants: the refreshed GPT-5.2 frontier preset, the GPT-5.1 lineup (with Codex Max as the default), plus every legacy gpt-5 preset for backwards compatibility.

All appear in the opencode model selector as "GPT 5.1 Codex Low (OAuth)", "GPT 5 High (OAuth)", etc.

### Available Model Variants (Full Config)

When using [`config/full-opencode.json`](./config/full-opencode.json), you get these GPT-5.1 presets plus the original gpt-5 variants:

#### GPT-5.2 frontier preset

| CLI Model ID | TUI Display Name | Reasoning Effort | Best For |
| ------------ | ---------------- | ------------------------------ | ---------------------------------------------------------------------- |
| `gpt-5.2` | GPT 5.2 (OAuth) | Low/Medium/High/**Extra High** | Latest frontier model with improved reasoning + general-purpose coding |

#### GPT-5.1 lineup (recommended)

| CLI Model ID | TUI Display Name | Reasoning Effort | Best For |
@@ -464,7 +464,7 @@ When using [`config/full-opencode.json`](./config/full-opencode.json), you get t
| `gpt-5.1-medium` | GPT 5.1 Medium (OAuth) | Medium | Default adaptive reasoning for everyday work |
| `gpt-5.1-high` | GPT 5.1 High (OAuth) | High | Deep analysis when reliability matters most |

> **Extra High reasoning:** `reasoningEffort: "xhigh"` provides maximum computational effort for complex, multi-step problems and is honored on `gpt-5.1-codex-max` and `gpt-5.2`. Other models automatically map that option to `high` so their API calls remain valid.

#### Legacy GPT-5 lineup (still supported)

@@ -520,7 +520,7 @@ When no configuration is specified, the plugin uses these defaults for all GPT-5
- **`reasoningSummary: "auto"`** - Automatically adapts summary verbosity
- **`textVerbosity: "medium"`** - Balanced output length

These defaults match the official Codex CLI behavior and can be customized (see Configuration below). GPT-5.1 requests automatically start at `reasoningEffort: "none"`, while Codex/Codex Mini presets continue to clamp to their supported levels, and GPT-5.2 keeps `reasoningEffort: "medium"` but accepts `xhigh` while mapping `none`/`minimal` to `low`.
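Taken together, the per-model clamping rules in this document can be sketched as a single function (a hedged approximation; names are illustrative and the plugin's real logic may differ):

```typescript
// Hypothetical sketch combining the documented reasoning-effort clamps.
type Effort = "none" | "minimal" | "low" | "medium" | "high" | "xhigh";

function normalizeReasoningEffort(effort: Effort, model: string): Effort {
  // xhigh is only honored on gpt-5.1-codex-max and gpt-5.2.
  if (effort === "xhigh" && model !== "gpt-5.1-codex-max" && model !== "gpt-5.2") {
    return "high";
  }
  // gpt-5.2 bumps unsupported none/minimal up to low.
  if (model === "gpt-5.2" && (effort === "none" || effort === "minimal")) {
    return "low";
  }
  // gpt-5-codex does not accept minimal (API limitation).
  if (model === "gpt-5-codex" && effort === "minimal") return "low";
  // none only exists on GPT-5.1 general models; legacy gpt-5 gets minimal.
  if (model === "gpt-5" && effort === "none") return "minimal";
  return effort;
}
```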

## Configuration Reference

@@ -560,7 +560,7 @@ Use the smallest working provider config if you only need one flagship model:

The easiest way to get all presets is to use [`config/full-opencode.json`](./config/full-opencode.json), which provides:

- 22 pre-configured model variants matching the latest Codex CLI presets (GPT-5.2 + GPT-5.1 Codex lineup + GPT-5)
- Optimal settings for each reasoning level
- All variants visible in the opencode model selector

@@ -581,9 +581,9 @@ If you want to customize settings yourself, you can configure options at provide
| `textVerbosity` | `low`, `medium`, `high` | `medium` only | `medium` |
| `include` | Array of strings | Array of strings | `["reasoning.encrypted_content"]` |

> **Note**: `minimal` effort is auto-normalized to `low` for gpt-5-codex (not supported by the API). `none` is only supported on GPT-5.1 general models; when used with legacy gpt-5 it is normalized to `minimal`, and `gpt-5.2` automatically bumps `none`/`minimal` to `low`. `xhigh` is honored on `gpt-5.1-codex-max` and `gpt-5.2`—other presets automatically map it to `high`.
>
> † **Extra High reasoning**: `reasoningEffort: "xhigh"` provides maximum computational effort for complex, multi-step problems and is only available on `gpt-5.1-codex-max` and `gpt-5.2`.

#### Global Configuration Example
