From 6152bfcbd63304bbf07a792f9a771429148de380 Mon Sep 17 00:00:00 2001 From: Error Date: Fri, 21 Nov 2025 15:08:05 -0600 Subject: [PATCH] docs: clarify install and plugin settings --- README.md | 313 ++++++++++++++++++++++------------------- config/README.md | 6 +- docs/README.md | 2 +- spec/readme-cleanup.md | 47 +++++++ 4 files changed, 225 insertions(+), 143 deletions(-) create mode 100644 spec/readme-cleanup.md diff --git a/README.md b/README.md index 68e87da..59947de 100644 --- a/README.md +++ b/README.md @@ -8,24 +8,65 @@ This plugin enables opencode to use OpenAI's Codex backend via ChatGPT Plus/Pro > **Maintained by Open Hax.** Follow project updates at [github.com/open-hax/codex](https://github.com/open-hax/codex) and report issues or ideas there. -## ⚠️ Terms of Service & Usage Notice +## Installation -**Important:** This plugin is designed for **personal development use only** with your own ChatGPT Plus/Pro subscription. By using this tool, you agree to: +- **Prerequisites:** ChatGPT Plus or Pro subscription; OpenCode installed ([opencode.ai](https://opencode.ai)); Node.js 18+. -- ✅ Use only for individual productivity and coding assistance -- ✅ Respect OpenAI's rate limits and usage policies -- ✅ Not use to power commercial services or resell access -- ✅ Comply with [OpenAI's Terms of Use](https://openai.com/policies/terms-of-use/) and [Usage Policies](https://openai.com/policies/usage-policies/) +**Quick start (minimal provider config — one model):** -**This tool uses OpenAI's official OAuth authentication** (the same method as OpenAI's official Codex CLI). However, users are responsible for ensuring their usage complies with OpenAI's terms. 
+```json
+{
+  "$schema": "https://opencode.ai/config.json",
+  "plugin": ["@openhax/codex"],
+  "model": "openai/gpt-5.1-codex-max",
+  "provider": {
+    "openai": {
+      "options": {
+        "reasoningEffort": "medium",
+        "reasoningSummary": "auto",
+        "textVerbosity": "medium",
+        "include": ["reasoning.encrypted_content"],
+        "store": false
+      },
+      "models": {
+        "gpt-5.1-codex-max": {
+          "name": "GPT 5.1 Codex Max (OAuth)"
+        }
+      }
+    }
+  }
+}
+```
-### ⚠️ Not Suitable For:
-- Commercial API resale or white-labeling
-- High-volume automated extraction beyond personal use
-- Applications serving multiple users with one subscription
-- Any use that violates OpenAI's acceptable use policies
+1. Save that to `~/.config/opencode/opencode.json` (or project-specific `.opencode.json`).
+2. Restart OpenCode (it installs plugins automatically). If prompted, run `opencode auth login` and finish the OAuth flow with your ChatGPT account.
+3. In the TUI, choose `GPT 5.1 Codex Max (OAuth)` and start chatting.
-**For production applications or commercial use, use the [OpenAI Platform API](https://platform.openai.com/) with proper API keys.**
+
+Prefer every preset? Copy [`config/full-opencode.json`](./config/full-opencode.json) instead; it registers all GPT-5.1/GPT-5 Codex variants with recommended settings.
+
+Want to customize? Jump to [Configuration reference](#configuration-reference).
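For copy-paste convenience, the three steps above can be sketched as one shell snippet. This is a hedged sketch, not part of the plugin: it assumes a POSIX shell and a writable `$HOME`, and the heredoc simply repeats the minimal config shown above.

```shell
# Write the minimal config to OpenCode's global config path
mkdir -p ~/.config/opencode
cat > ~/.config/opencode/opencode.json <<'EOF'
{
  "$schema": "https://opencode.ai/config.json",
  "plugin": ["@openhax/codex"],
  "model": "openai/gpt-5.1-codex-max",
  "provider": {
    "openai": {
      "options": {
        "reasoningEffort": "medium",
        "reasoningSummary": "auto",
        "textVerbosity": "medium",
        "include": ["reasoning.encrypted_content"],
        "store": false
      },
      "models": {
        "gpt-5.1-codex-max": { "name": "GPT 5.1 Codex Max (OAuth)" }
      }
    }
  }
}
EOF

# Authenticate now if opencode is already installed; otherwise install it first
if command -v opencode >/dev/null 2>&1; then
  opencode auth login
else
  echo "opencode not found; install it from opencode.ai first"
fi
```

After authentication, restarting OpenCode picks up the plugin and the model appears in the TUI selector.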
+
+## Plugin-Level Settings
+
+Set these in `~/.opencode/openhax-codex-config.json` (applies to all models):
+
+- `codexMode` (default `true`): enable the Codex ↔ OpenCode bridge prompt and tool remapping
+- `enablePromptCaching` (default `true`): keep a stable `prompt_cache_key` so Codex can reuse cached prompts
+- `enableCodexCompaction` (default `true`): allow `/codex-compact` behavior once upstream support lands
+- `autoCompactTokenLimit` (optional): trigger Codex compaction once the approximate token count exceeds this value
+- `autoCompactMinMessages` (default `8`): minimum conversation turns before auto-compaction is considered
+
+Example:
+
+```json
+{
+  "codexMode": true,
+  "enablePromptCaching": true,
+  "enableCodexCompaction": true,
+  "autoCompactTokenLimit": 120000,
+  "autoCompactMinMessages": 8
+}
+```
 
 ---
 
@@ -44,25 +85,12 @@ This plugin enables opencode to use OpenAI's Codex backend via ChatGPT Plus/Pro
 - ✅ **Usage-aware errors** - Shows clear guidance when ChatGPT subscription limits are reached
 - ✅ **Type-safe & tested** - Strict TypeScript with 160+ unit tests + 14 integration tests
 - ✅ **Modular architecture** - Easy to maintain and extend
-**Prompt caching is enabled by default** to optimize your token usage and reduce costs.
-
-### Built-in Codex Commands
-
-These commands are typed as normal chat messages (no slash required). `codex-metrics`/`codex-inspect` run entirely inside the plugin. `codex-compact` issues a Codex summarization request, stores the summary, and trims future turns to keep prompts short.
- -| Command | Aliases | Description | -|---------|---------|-------------| -| `codex-metrics` | `?codex-metrics`, `codexmetrics`, `/codex-metrics`* | Shows cache stats, recent prompt-cache sessions, and cache-warm status | -| `codex-inspect` | `?codex-inspect`, `codexinspect`, `/codex-inspect`* | Dumps the pending request configuration (model, prompt cache key, tools, reasoning/text settings) | -| `codex-compact` | `/codex-compact`, `compact`, `codexcompact` | Runs the Codex CLI compaction flow: summarizes the current conversation, replies with the summary, and resets Codex-side context to that summary | -> \*Slash-prefixed variants only work in environments that allow arbitrary `/` commands. In the opencode TUI, stick to `codex-metrics` / `codex-inspect` / `codex-compact` so the message is treated as normal chat text. - -**Auto compaction:** Configure `autoCompactTokenLimit`/`autoCompactMinMessages` in `~/.opencode/openhax-codex-config.json` to run compaction automatically when conversations grow long. When triggered, the plugin replies with the Codex summary and a note reminding you to resend the paused instruction; subsequent turns start from that summary instead of the entire backlog. +**Prompt caching is enabled by default** to optimize your token usage and reduce costs. ### How Caching Works -- **Enabled by default**: `enablePromptCaching: true` +- **Enabled by default**: `enablePromptCaching: true` - **GPT-5.1 models** leverage OpenAI's extended 24-hour prompt cache retention window for cheaper follow-ups - **Maintains conversation context** across multiple turns - **Reduces token consumption** by reusing cached prompts @@ -75,21 +103,18 @@ These commands are typed as normal chat messages (no slash required). `codex-met For the complete experience with all reasoning variants matching the official Codex CLI: 1. 
**Copy the full configuration** from [`config/full-opencode.json`](./config/full-opencode.json) to your opencode config file: + ```json { "$schema": "https://opencode.ai/config.json", - "plugin": [ - "@openhax/codex" - ], + "plugin": ["@openhax/codex"], "provider": { "openai": { "options": { "reasoningEffort": "medium", "reasoningSummary": "auto", "textVerbosity": "medium", - "include": [ - "reasoning.encrypted_content" - ], + "include": ["reasoning.encrypted_content"], "store": false }, "models": { @@ -103,9 +128,7 @@ For the complete experience with all reasoning variants matching the official Co "reasoningEffort": "medium", "reasoningSummary": "auto", "textVerbosity": "medium", - "include": [ - "reasoning.encrypted_content" - ], + "include": ["reasoning.encrypted_content"], "store": false } }, @@ -119,9 +142,7 @@ For the complete experience with all reasoning variants matching the official Co "reasoningEffort": "low", "reasoningSummary": "auto", "textVerbosity": "medium", - "include": [ - "reasoning.encrypted_content" - ], + "include": ["reasoning.encrypted_content"], "store": false } }, @@ -135,9 +156,7 @@ For the complete experience with all reasoning variants matching the official Co "reasoningEffort": "medium", "reasoningSummary": "auto", "textVerbosity": "medium", - "include": [ - "reasoning.encrypted_content" - ], + "include": ["reasoning.encrypted_content"], "store": false } }, @@ -151,9 +170,7 @@ For the complete experience with all reasoning variants matching the official Co "reasoningEffort": "high", "reasoningSummary": "detailed", "textVerbosity": "medium", - "include": [ - "reasoning.encrypted_content" - ], + "include": ["reasoning.encrypted_content"], "store": false } }, @@ -167,9 +184,7 @@ For the complete experience with all reasoning variants matching the official Co "reasoningEffort": "medium", "reasoningSummary": "auto", "textVerbosity": "medium", - "include": [ - "reasoning.encrypted_content" - ], + "include": ["reasoning.encrypted_content"], 
"store": false } }, @@ -183,9 +198,7 @@ For the complete experience with all reasoning variants matching the official Co "reasoningEffort": "high", "reasoningSummary": "detailed", "textVerbosity": "medium", - "include": [ - "reasoning.encrypted_content" - ], + "include": ["reasoning.encrypted_content"], "store": false } }, @@ -199,9 +212,7 @@ For the complete experience with all reasoning variants matching the official Co "reasoningEffort": "none", "reasoningSummary": "auto", "textVerbosity": "medium", - "include": [ - "reasoning.encrypted_content" - ], + "include": ["reasoning.encrypted_content"], "store": false } }, @@ -215,9 +226,7 @@ For the complete experience with all reasoning variants matching the official Co "reasoningEffort": "low", "reasoningSummary": "auto", "textVerbosity": "low", - "include": [ - "reasoning.encrypted_content" - ], + "include": ["reasoning.encrypted_content"], "store": false } }, @@ -231,9 +240,7 @@ For the complete experience with all reasoning variants matching the official Co "reasoningEffort": "medium", "reasoningSummary": "auto", "textVerbosity": "medium", - "include": [ - "reasoning.encrypted_content" - ], + "include": ["reasoning.encrypted_content"], "store": false } }, @@ -247,9 +254,7 @@ For the complete experience with all reasoning variants matching the official Co "reasoningEffort": "high", "reasoningSummary": "detailed", "textVerbosity": "high", - "include": [ - "reasoning.encrypted_content" - ], + "include": ["reasoning.encrypted_content"], "store": false } }, @@ -263,9 +268,7 @@ For the complete experience with all reasoning variants matching the official Co "reasoningEffort": "low", "reasoningSummary": "auto", "textVerbosity": "medium", - "include": [ - "reasoning.encrypted_content" - ], + "include": ["reasoning.encrypted_content"], "store": false } }, @@ -279,9 +282,7 @@ For the complete experience with all reasoning variants matching the official Co "reasoningEffort": "medium", "reasoningSummary": "auto", 
"textVerbosity": "medium", - "include": [ - "reasoning.encrypted_content" - ], + "include": ["reasoning.encrypted_content"], "store": false } }, @@ -295,9 +296,7 @@ For the complete experience with all reasoning variants matching the official Co "reasoningEffort": "high", "reasoningSummary": "detailed", "textVerbosity": "medium", - "include": [ - "reasoning.encrypted_content" - ], + "include": ["reasoning.encrypted_content"], "store": false } }, @@ -311,9 +310,7 @@ For the complete experience with all reasoning variants matching the official Co "reasoningEffort": "medium", "reasoningSummary": "auto", "textVerbosity": "medium", - "include": [ - "reasoning.encrypted_content" - ], + "include": ["reasoning.encrypted_content"], "store": false } }, @@ -327,9 +324,7 @@ For the complete experience with all reasoning variants matching the official Co "reasoningEffort": "high", "reasoningSummary": "detailed", "textVerbosity": "medium", - "include": [ - "reasoning.encrypted_content" - ], + "include": ["reasoning.encrypted_content"], "store": false } }, @@ -343,9 +338,7 @@ For the complete experience with all reasoning variants matching the official Co "reasoningEffort": "minimal", "reasoningSummary": "auto", "textVerbosity": "low", - "include": [ - "reasoning.encrypted_content" - ], + "include": ["reasoning.encrypted_content"], "store": false } }, @@ -359,9 +352,7 @@ For the complete experience with all reasoning variants matching the official Co "reasoningEffort": "low", "reasoningSummary": "auto", "textVerbosity": "low", - "include": [ - "reasoning.encrypted_content" - ], + "include": ["reasoning.encrypted_content"], "store": false } }, @@ -375,9 +366,7 @@ For the complete experience with all reasoning variants matching the official Co "reasoningEffort": "medium", "reasoningSummary": "auto", "textVerbosity": "medium", - "include": [ - "reasoning.encrypted_content" - ], + "include": ["reasoning.encrypted_content"], "store": false } }, @@ -391,9 +380,7 @@ For the complete 
experience with all reasoning variants matching the official Co "reasoningEffort": "high", "reasoningSummary": "detailed", "textVerbosity": "high", - "include": [ - "reasoning.encrypted_content" - ], + "include": ["reasoning.encrypted_content"], "store": false } }, @@ -407,9 +394,7 @@ For the complete experience with all reasoning variants matching the official Co "reasoningEffort": "low", "reasoningSummary": "auto", "textVerbosity": "low", - "include": [ - "reasoning.encrypted_content" - ], + "include": ["reasoning.encrypted_content"], "store": false } }, @@ -423,9 +408,7 @@ For the complete experience with all reasoning variants matching the official Co "reasoningEffort": "minimal", "reasoningSummary": "auto", "textVerbosity": "low", - "include": [ - "reasoning.encrypted_content" - ], + "include": ["reasoning.encrypted_content"], "store": false } } @@ -435,12 +418,12 @@ For the complete experience with all reasoning variants matching the official Co } ``` - **Global config**: `~/.config/opencode/opencode.json` - **Project config**: `/.opencode.json` +**Global config**: `~/.config/opencode/opencode.json` +**Project config**: `/.opencode.json` - This now gives you 21 model variants: the refreshed GPT-5.1 lineup (with Codex Max as the default) plus every legacy gpt-5 preset for backwards compatibility. +This now gives you 21 model variants: the refreshed GPT-5.1 lineup (with Codex Max as the default) plus every legacy gpt-5 preset for backwards compatibility. - All appear in the opencode model selector as "GPT 5.1 Codex Low (OAuth)", "GPT 5 High (OAuth)", etc. +All appear in the opencode model selector as "GPT 5.1 Codex Low (OAuth)", "GPT 5 High (OAuth)", etc. 
### Available Model Variants (Full Config) @@ -448,25 +431,25 @@ When using [`config/full-opencode.json`](./config/full-opencode.json), you get t #### GPT-5.1 lineup (recommended) -| CLI Model ID | TUI Display Name | Reasoning Effort | Best For | -|--------------|------------------|-----------------|----------| -| `gpt-5.1-codex-max` | GPT 5.1 Codex Max (OAuth) | Low/Medium/High/**Extra High** | Default flagship tier with `xhigh` reasoning for complex, multi-step problems | -| `gpt-5.1-codex-low` | GPT 5.1 Codex Low (OAuth) | Low | Fast code generation on the newest Codex tier | -| `gpt-5.1-codex-medium` | GPT 5.1 Codex Medium (OAuth) | Medium | Balanced code + tooling workflows | -| `gpt-5.1-codex-high` | GPT 5.1 Codex High (OAuth) | High | Multi-step coding tasks with deep tool use | -| `gpt-5.1-codex-mini-medium` | GPT 5.1 Codex Mini Medium (OAuth) | Medium | Budget-friendly Codex runs (200k/100k tokens) | -| `gpt-5.1-codex-mini-high` | GPT 5.1 Codex Mini High (OAuth) | High | Cheaper Codex tier with maximum reasoning | -| `gpt-5.1-none` | GPT 5.1 None (OAuth) | **None** | Latency-sensitive chat/tasks using the "no reasoning" mode | -| `gpt-5.1-low` | GPT 5.1 Low (OAuth) | Low | Fast general-purpose chat with light reasoning | -| `gpt-5.1-medium` | GPT 5.1 Medium (OAuth) | Medium | Default adaptive reasoning for everyday work | -| `gpt-5.1-high` | GPT 5.1 High (OAuth) | High | Deep analysis when reliability matters most | +| CLI Model ID | TUI Display Name | Reasoning Effort | Best For | +| --------------------------- | --------------------------------- | ------------------------------ | ----------------------------------------------------------------------------- | +| `gpt-5.1-codex-max` | GPT 5.1 Codex Max (OAuth) | Low/Medium/High/**Extra High** | Default flagship tier with `xhigh` reasoning for complex, multi-step problems | +| `gpt-5.1-codex-low` | GPT 5.1 Codex Low (OAuth) | Low | Fast code generation on the newest Codex tier | +| `gpt-5.1-codex-medium` | 
GPT 5.1 Codex Medium (OAuth) | Medium | Balanced code + tooling workflows | +| `gpt-5.1-codex-high` | GPT 5.1 Codex High (OAuth) | High | Multi-step coding tasks with deep tool use | +| `gpt-5.1-codex-mini-medium` | GPT 5.1 Codex Mini Medium (OAuth) | Medium | Budget-friendly Codex runs (200k/100k tokens) | +| `gpt-5.1-codex-mini-high` | GPT 5.1 Codex Mini High (OAuth) | High | Cheaper Codex tier with maximum reasoning | +| `gpt-5.1-none` | GPT 5.1 None (OAuth) | **None** | Latency-sensitive chat/tasks using the "no reasoning" mode | +| `gpt-5.1-low` | GPT 5.1 Low (OAuth) | Low | Fast general-purpose chat with light reasoning | +| `gpt-5.1-medium` | GPT 5.1 Medium (OAuth) | Medium | Default adaptive reasoning for everyday work | +| `gpt-5.1-high` | GPT 5.1 High (OAuth) | High | Deep analysis when reliability matters most | > **Extra High reasoning:** `reasoningEffort: "xhigh"` provides maximum computational effort for complex, multi-step problems and is exclusive to `gpt-5.1-codex-max`. Other models automatically map that option to `high` so their API calls remain valid. #### Legacy GPT-5 lineup (still supported) | CLI Model ID | TUI Display Name | Reasoning Effort | Best For | -|--------------|------------------|-----------------|----------| +| ------------ | ---------------- | ---------------- | -------- | | `gpt-5-codex-low` | GPT 5 Codex Low (OAuth) | Low | Fast code generation | | `gpt-5-codex-medium` | GPT 5 Codex Medium (OAuth) | Medium | Balanced code tasks | @@ -519,11 +502,44 @@ When no configuration is specified, the plugin uses these defaults for all GPT-5 These defaults match the official Codex CLI behavior and can be customized (see Configuration below). GPT-5.1 requests automatically start at `reasoningEffort: "none"`, while Codex/Codex Mini presets continue to clamp to their supported levels. -## Configuration +## Configuration Reference + +Already set up from Installation? You're all set. 
Use this section when you want to tweak defaults or build custom presets. + +### Minimal configuration (single model) -### Recommended: Use Pre-Configured File +Use the smallest working provider config if you only need one flagship model: + +```json +{ + "$schema": "https://opencode.ai/config.json", + "plugin": ["@openhax/codex"], + "model": "openai/gpt-5.1-codex-max", + "provider": { + "openai": { + "options": { + "reasoningEffort": "medium", + "reasoningSummary": "auto", + "textVerbosity": "medium", + "include": ["reasoning.encrypted_content"], + "store": false + }, + "models": { + "gpt-5.1-codex-max": { + "name": "GPT 5.1 Codex Max (OAuth)" + } + } + } + } +} +``` + +`gpt-5.1-codex-max` is the recommended default for balanced reasoning + tool use. Switch the `model` value if you prefer another preset. + +### Full preset bundle + +The easiest way to get all presets is to use [`config/full-opencode.json`](./config/full-opencode.json), which provides: -The easiest way to get started is to use [`config/full-opencode.json`](./config/full-opencode.json), which provides: - 21 pre-configured model variants matching the latest Codex CLI presets (GPT-5.1 Codex Max + GPT-5.1 + GPT-5) - Optimal settings for each reasoning level - All variants visible in the opencode model selector @@ -538,26 +554,18 @@ If you want to customize settings yourself, you can configure options at provide ⚠️ **Important**: The two base models have different supported values. 
-| Setting | GPT-5 / GPT-5.1 Values | GPT-5-Codex / Codex Mini Values | Plugin Default | -|---------|-------------|-------------------|----------------| -| `reasoningEffort` | `none`, `minimal`, `low`, `medium`, `high` | `low`, `medium`, `high`, `xhigh`† | `medium` | -| `reasoningSummary` | `auto`, `detailed` | `auto`, `detailed` | `auto` | -| `textVerbosity` | `low`, `medium`, `high` | `medium` only | `medium` | -| `include` | Array of strings | Array of strings | `["reasoning.encrypted_content"]` | +| Setting | GPT-5 / GPT-5.1 Values | GPT-5-Codex / Codex Mini Values | Plugin Default | +| ------------------ | ------------------------------------------ | --------------------------------- | --------------------------------- | +| `reasoningEffort` | `none`, `minimal`, `low`, `medium`, `high` | `low`, `medium`, `high`, `xhigh`† | `medium` | +| `reasoningSummary` | `auto`, `detailed` | `auto`, `detailed` | `auto` | +| `textVerbosity` | `low`, `medium`, `high` | `medium` only | `medium` | +| `include` | Array of strings | Array of strings | `["reasoning.encrypted_content"]` | > **Note**: `minimal` effort is auto-normalized to `low` for gpt-5-codex (not supported by the API). `none` is only supported on GPT-5.1 general models; when used with legacy gpt-5 it is normalized to `minimal`. `xhigh` is exclusive to `gpt-5.1-codex-max`—other Codex presets automatically map it to `high`. -> +> > † **Extra High reasoning**: `reasoningEffort: "xhigh"` provides maximum computational effort for complex, multi-step problems and is only available on `gpt-5.1-codex-max`. 
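To illustrate the table above, here is a hedged per-model override sketch. The model IDs and option values are taken from this README's full config; it pairs one Codex preset pinned to its supported values with one GPT-5.1 general model using the `none` effort:

```json
{
  "provider": {
    "openai": {
      "models": {
        "gpt-5.1-codex-high": {
          "name": "GPT 5.1 Codex High (OAuth)",
          "options": {
            "reasoningEffort": "high",
            "reasoningSummary": "detailed",
            "textVerbosity": "medium"
          }
        },
        "gpt-5.1-none": {
          "name": "GPT 5.1 None (OAuth)",
          "options": {
            "reasoningEffort": "none",
            "textVerbosity": "medium"
          }
        }
      }
    }
  }
}
```

If a model receives a value outside its supported set (for example `xhigh` on a non-Max Codex preset), the plugin normalizes it as described in the note above.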
-#### Plugin-Level Settings - -Set these in `~/.opencode/openhax-codex-config.json`: - -- `codexMode` (default `true`): enable the Codex ↔ OpenCode bridge prompt -- `enablePromptCaching` (default `true`): keep a stable `prompt_cache_key` and preserved message IDs so Codex can reuse cached prompts, reducing token usage and costs -- `enableCodexCompaction` (default `true`): expose `/codex-compact` and allow the plugin to rewrite history based on Codex summaries -- `autoCompactTokenLimit` (default unset): when set, triggers Codex compaction once the approximate token count exceeds this value -- `autoCompactMinMessages` (default `8`): minimum number of conversation turns before auto-compaction is considered +See [Plugin-Level Settings](#plugin-level-settings) above for global toggles. Below are provider/model examples. #### Global Configuration Example @@ -636,6 +644,7 @@ This plugin respects the same rate limits enforced by OpenAI's official Codex CL - **The plugin does NOT and CANNOT bypass** OpenAI's rate limits ### Best Practices: + - ✅ Use for individual coding tasks, not bulk processing - ✅ Avoid rapid-fire automated requests - ✅ Monitor your usage to stay within subscription limits @@ -647,11 +656,6 @@ This plugin respects the same rate limits enforced by OpenAI's official Codex CL --- -## Requirements - -- **ChatGPT Plus or Pro subscription** (required) -- **OpenCode** installed ([opencode.ai](https://opencode.ai)) - ## Updating & Clearing Caches OpenCode caches plugins under `~/.cache/opencode` and stores Codex-specific assets (prompt-warm files, instruction caches, logs) under `~/.opencode`. When this plugin ships a new release, clear both locations so OpenCode reinstalls the latest bits and the warmed prompts align with the new version. 
@@ -680,7 +684,6 @@ OpenCode caches plugins under `~/.cache/opencode` and stores Codex-specific asse ## Debug Mode - Enable detailed logging: ```bash @@ -704,6 +707,7 @@ See [Troubleshooting Guide](https://open-hax.github.io/codex/troubleshooting) fo This plugin uses **OpenAI's official OAuth authentication** (the same method as their official Codex CLI). It's designed for personal coding assistance with your own ChatGPT subscription. However, **users are responsible for ensuring their usage complies with OpenAI's Terms of Use**. This means: + - Personal use for your own development - Respecting rate limits - Not reselling access or powering commercial services @@ -720,12 +724,14 @@ For commercial applications, production systems, or services serving multiple us Using OAuth authentication for personal coding assistance aligns with OpenAI's official Codex CLI use case. However, violating OpenAI's terms could result in account action: **Safe use:** + - Personal coding assistance - Individual productivity - Legitimate development work - Respecting rate limits **Risky use:** + - Commercial resale of access - Powering multi-user services - High-volume automated extraction @@ -734,6 +740,7 @@ Using OAuth authentication for personal coding assistance aligns with OpenAI's o ### What's the difference between this and scraping session tokens? **Critical distinction:** + - ✅ **This plugin:** Uses official OAuth authentication through OpenAI's authorization server - ❌ **Session scraping:** Extracts cookies/tokens from browsers (clearly violates TOS) @@ -758,10 +765,11 @@ ChatGPT, GPT-5, and Codex are trademarks of OpenAI. 
**Prompt caching is enabled by default** to save you money: - **Reduces token usage** by reusing conversation context across turns -- **Lowers costs** significantly for multi-turn conversations +- **Lowers costs** significantly for multi-turn conversations - **Maintains context** so the AI remembers previous parts of your conversation You can disable it by creating `~/.opencode/openhax-codex-config.json` with: + ```json { "enablePromptCaching": false @@ -775,12 +783,14 @@ You can disable it by creating `~/.opencode/openhax-codex-config.json` with: ## Credits & Attribution This plugin implements OAuth authentication for OpenAI's Codex backend, using the same authentication flow as: + - [OpenAI's official Codex CLI](https://github.com/openai/codex) - OpenAI's OAuth authorization server (https://chatgpt.com/oauth) ### Acknowledgments Based on research and working implementations from: + - [ben-vargas/ai-sdk-provider-chatgpt-oauth](https://github.com/ben-vargas/ai-sdk-provider-chatgpt-oauth) - [ben-vargas/ai-opencode-chatgpt-auth](https://github.com/ben-vargas/ai-opencode-chatgpt-auth) - [openai/codex](https://github.com/openai/codex) OAuth flow @@ -795,12 +805,33 @@ Based on research and working implementations from: ## Documentation **📖 Documentation:** + - [Installation](#installation) - Get started in 2 minutes -- [Configuration](#configuration) - Customize your setup +- [Configuration reference](#configuration-reference) - Customize your setup - [Troubleshooting](#troubleshooting) - Common issues - [GitHub Pages Docs](https://open-hax.github.io/codex/) - Extended guides - [Developer Docs](https://open-hax.github.io/codex/development/ARCHITECTURE) - Technical deep dive +## Terms of Service & Usage Notice + +**Important:** This plugin is designed for **personal development use only** with your own ChatGPT Plus/Pro subscription. 
By using this tool, you agree to: + +- ✅ Use only for individual productivity and coding assistance +- ✅ Respect OpenAI's rate limits and usage policies +- ✅ Not use to power commercial services or resell access +- ✅ Comply with [OpenAI's Terms of Use](https://openai.com/policies/terms-of-use/) and [Usage Policies](https://openai.com/policies/usage-policies/) + +**This tool uses OpenAI's official OAuth authentication** (the same method as OpenAI's official Codex CLI). However, users are responsible for ensuring their usage complies with OpenAI's terms. + +### ⚠️ Not Suitable For: + +- Commercial API resale or white-labeling +- High-volume automated extraction beyond personal use +- Applications serving multiple users with one subscription +- Any use that violates OpenAI's acceptable use policies + +**For production applications or commercial use, use the [OpenAI Platform API](https://platform.openai.com/) with proper API keys.** + ## License GPL-3.0 — see [LICENSE](./LICENSE) for details. diff --git a/config/README.md b/config/README.md index 7a4ea7c..4c297c2 100644 --- a/config/README.md +++ b/config/README.md @@ -5,6 +5,7 @@ This directory contains example opencode configuration files for the OpenAI Code ## Files ### minimal-opencode.json + The simplest possible configuration using plugin defaults. ```bash @@ -12,6 +13,7 @@ cp config/minimal-opencode.json ~/.config/opencode/opencode.json ``` This uses default settings: + - `reasoningEffort`: "medium" - `reasoningSummary`: "auto" - `textVerbosity`: "medium" @@ -19,6 +21,7 @@ This uses default settings: - `store`: false (required for AI SDK 2.0.50+ compatibility) ### full-opencode.json + Complete configuration example showing all model variants with custom settings. 
```bash @@ -26,6 +29,7 @@ cp config/full-opencode.json ~/.config/opencode/opencode.json ``` This demonstrates: + - Global options for all models - Per-model configuration overrides - All supported model variants (gpt-5-codex, gpt-5-codex-mini, gpt-5, gpt-5-mini, gpt-5-nano) @@ -41,4 +45,4 @@ This demonstrates: ## Configuration Options -See the main [README.md](../README.md#configuration) for detailed documentation of all configuration options. +See the main [README.md](../README.md#configuration-reference) for detailed documentation of all configuration options. diff --git a/docs/README.md b/docs/README.md index 4e91055..36f111d 100644 --- a/docs/README.md +++ b/docs/README.md @@ -5,7 +5,7 @@ Welcome to the OpenHax Codex Plugin documentation! ## For Users - **[Getting Started](../README.md)** - Installation, configuration, and quick start -- **[Configuration Guide](../README.md#configuration)** - Complete config reference +- **[Configuration Guide](../README.md#configuration-reference)** - Complete config reference - **[Troubleshooting](../README.md#troubleshooting)** - Common issues and debugging - **[Changelog](../CHANGELOG.md)** - Version history and release notes diff --git a/spec/readme-cleanup.md b/spec/readme-cleanup.md new file mode 100644 index 0000000..a95f7ce --- /dev/null +++ b/spec/readme-cleanup.md @@ -0,0 +1,47 @@ +# README cleanup and installation clarity + +**Date**: 2025-11-21 +**Owner**: Codex agent +**Goal**: Make README quieter, surface installation guidance, and move TOS to the bottom. + +## Code touchpoints (current line refs) + +- `README.md:11-62` — Installation with minimal provider config (plugin + single model + provider options/models) emphasized first. +- `README.md:64-96` — Plugin-Level Settings section surfaced right after installation. +- `README.md:451-522` — Plugin defaults and Configuration Reference intro include minimal provider config subsection (no duplicated plugin-level settings here; points back to top section). 
+- `README.md:51-68` — Removed non-functional Built-in Codex Commands section.
+- `README.md:737-745` — Documentation links reordered above TOS.
+- `README.md:747-764` — Terms of Service & Usage Notice relocated near bottom.
+- `README.md:770-783` — Auto-generated package doc matrix; must remain untouched.
+
+## Existing issues / PRs
+
+- None found for README noise/installation confusion (quick repository scan; no linked issue/PR identified).
+
+## Definition of done
+
+- TOS/usage notice sits near the end (after FAQs/Docs, before License or similar closing content).
+- Clear "Installation" section early that explains prerequisites and setup steps distinct from configuration.
+- Configuration content split into digestible pieces (recommended config vs custom/advanced) with concise intros.
+- Overall README flow reduces upfront noise while preserving key links and auto-generated matrix.
+
+## Requirements & constraints
+
+- Keep auto-generated `PACKAGE-DOC-MATRIX` block unchanged.
+- Preserve existing links and accuracy of configuration examples; adjust anchors if section names change.
+- Maintain guidance on ChatGPT subscription and opencode prerequisites.
+- Avoid removing critical warnings; relocation is acceptable.
+
+## Plan / phases
+
+1. Draft new top-level outline (Intro, Installation, Key features/commands, Configuration split, Caching, Troubleshooting, Docs, TOS/License).
+2. Rewrite README sections to match outline (minimal edits to examples, move TOS near end).
+3. Quick pass for clarity/heading consistency and anchor references.
+
+## Changelog
+
+- 2025-11-21: Added Installation section, renamed Configuration Reference, removed standalone requirements block, moved TOS near bottom, and updated related anchors in docs/config README files.
+- 2025-11-21: Promoted minimal provider config (plugin array + single `openai/gpt-5.1-codex-max` model with provider/openai options) to top of Installation and Configuration Reference.
+- 2025-11-21: Removed non-functional Built-in Codex Commands section pending upstream support.
+- 2025-11-21: Surfaced plugin-level settings (codexMode, caching, compaction) immediately after Installation with example JSON.
+- 2025-11-21: Removed duplicated plugin-level settings block from Configuration Reference; now it links back to the top settings section.