Generate a ready‑to‑run Codex CLI configuration for OpenAI‑compatible servers like LM Studio, Ollama, and others.
This small, dependency‑free Python script:
- Detects a local server (LM Studio on :1234 or Ollama on :11434) or uses a custom base URL.
- Fetches available models from /v1/models and lets you pick one.
- Emits a modern ~/.codex/config.toml (Codex “# Config” schema) and can optionally emit JSON and YAML siblings.
- Backs up any existing config (adds .bak).
- Stores a tiny linker state (~/.codex/linker_config.json or ./.codex-linker.json when using workspace override) to remember your last choices.
- Can preload defaults from a remote JSON via --config-url.
- Previews the would-be files with --dry-run (prints to stdout, no writes).
General Information
- Auto-detects local OpenAI-compatible servers (LM Studio, Ollama, vLLM, Text‑Gen‑WebUI OpenAI plugin, TGI shim, OpenRouter local) and normalizes base URLs.
- Interactive and non-interactive flows: --auto, --full-auto, or fully manual.
- Model discovery from /v1/models with a simple picker or direct --model/--model-index.
- Produces TOML by default and optional JSON/YAML mirrors to keep schema parity.
- Centralized schema shaping via a single build_config_dict() — no duplicated logic.
- Creates safe backups (*.bak) before overwriting existing configs.
- Remembers last choices in ~/.codex/linker_config.json (or ./.codex-linker.json when using workspace override) for faster repeat runs.
- First-class cross‑platform UX: clean colors, concise messages, no auto‑launch side effects.
- Diagnostic tooling: verbose logging, file logging, JSON logs, and remote HTTP log export.
- Tunable retry/timeout parameters for flaky networks; Azure-style api-version support.
- Security-aware: never writes API keys to disk; favors env vars (NULLKEY placeholder by default).
- Optional: --keychain can store your --api-key in the OS keychain (macOS Keychain, Windows DPAPI/Credential Manager, Linux Secret Service via secretstorage, GNU pass, or integrate with Bitwarden/1Password CLIs). The config still references the env var; secrets are never written to config files.
- Compatible with Codex CLI approvals, sandbox, and history controls without post-editing.
Works on macOS, Linux, and Windows. No third‑party Python packages required.
- Quick start
- Installation
- How it works
- Configuration files it writes
- Command-line usage
- Docker
- Config keys written
- Examples
- Environment variables
- Troubleshooting
- Windows Defender False Positive
- Changelog
- Development
- License
Pick one of the three options below.
git clone https://github.com/supermarsx/codex-cli-linker
cd codex-cli-linker
python3 codex-cli-linker.py              # interactive
Non‑interactive examples:
python3 codex-cli-linker.py --auto       # detect server, still prompts for model
python3 codex-cli-linker.py --full-auto  # detect server and first model (no prompts)
Download from Releases:
- Windows x64: codex-cli-linker-windows-x64.exe
- Windows arm64: codex-cli-linker-windows-arm64.exe
- macOS x64: codex-cli-linker-macos-x64
- macOS arm64: codex-cli-linker-macos-arm64
- Linux x64: codex-cli-linker-linux-x64
- Linux arm64: codex-cli-linker-linux-arm64
Or fetch via curl (latest):
# macOS
curl -L -o codex-cli-linker-macos-x64 \
  https://github.com/supermarsx/codex-cli-linker/releases/latest/download/codex-cli-linker-macos-x64 \
&& chmod +x codex-cli-linker-macos-x64 \
&& ./codex-cli-linker-macos-x64 --auto
# Linux
curl -L -o codex-cli-linker-linux-x64 \
  https://github.com/supermarsx/codex-cli-linker/releases/latest/download/codex-cli-linker-linux-x64 \
&& chmod +x codex-cli-linker-linux-x64 \
&& ./codex-cli-linker-linux-x64 --auto
# Windows (PowerShell or CMD with curl available)
curl -L -o codex-cli-linker-windows-x64.exe ^
  https://github.com/supermarsx/codex-cli-linker/releases/latest/download/codex-cli-linker-windows-x64.exe && ^
  .\codex-cli-linker-windows-x64.exe --auto
Then run it (example):
# macOS/Linux once after download
chmod +x ./codex-cli-linker-*-x64
./codex-cli-linker-*-x64 --auto
# Windows
./codex-cli-linker-windows-x64.exe --auto
- Run from source without installing:
  - macOS/Linux: scripts/run.sh --auto
  - Windows (PowerShell): scripts/run.ps1 --auto
- Install from PyPI into an isolated venv under $CODEX_HOME/venv (or ~/.codex/venv) and run:
  - macOS/Linux: scripts/pypi_venv_run.sh --auto
  - Windows (PowerShell): scripts/pypi_venv_run.ps1 --auto
These helpers avoid polluting your global Python and reuse the venv across runs.
# Recommended: isolated install
pipx install codex-cli-linker
codex-cli-linker.py --auto
# Or user install via pip
python3 -m pip install --user codex-cli-linker
codex-cli-linker.py --auto
brew tap supermarsx/codex-cli-linker https://github.com/supermarsx/codex-cli-linker
brew install supermarsx/codex-cli-linker/codex-cli-linker
codex-cli-linker --auto
scoop bucket add codex-cli-linker https://github.com/supermarsx/codex-cli-linker
scoop install codex-cli-linker
codex-cli-linker --auto
After generating files, launch Codex with the printed profile:
npx codex --profile lmstudio   # or: codex --profile lmstudio
More examples:
# Target a specific server/model
python3 codex-cli-linker.py \
--base-url http://localhost:1234/v1 \
--provider lmstudio \
--profile lmstudio \
--model llama-3.1-8b # or: a substring like --model llama-3.1
# Also write JSON and/or YAML alongside TOML
python3 codex-cli-linker.py --json --yaml
# Preview config without writing files
python3 codex-cli-linker.py --dry-run --auto
# Troubleshooting verbosity
python3 codex-cli-linker.py --verbose --auto
# Log to a file / emit JSON logs / send logs remotely
python3 codex-cli-linker.py --log-file linker.log
python3 codex-cli-linker.py --log-json
python3 codex-cli-linker.py --log-remote http://example.com/log
# Run preflight diagnostics (connectivity, permissions)
python3 codex-cli-linker.py --doctor --base-url http://localhost:1234/v1
# Doctor with feature probing
python3 codex-cli-linker.py --doctor --doctor-detect-features --base-url http://localhost:1234/v1
# Preload defaults from a remote JSON
python3 codex-cli-linker.py --config-url https://example.com/defaults.json --auto
Create a profile that targets the hosted OpenAI API (no local server required). You can either select it in the interactive base URL picker or use flags:
# Non-interactive example
python3 codex-cli-linker.py --provider openai --auto --yes \
  --model gpt-4o-mini --env-key-name OPENAI_API_KEY
Two OpenAI auth modes are supported; both set provider=openai:
# API-key mode (preferred_auth_method=apikey)
python3 codex-cli-linker.py --openai-api --auto --yes --model gpt-4o-mini
# ChatGPT mode (preferred_auth_method=chatgpt)
python3 codex-cli-linker.py --openai-gpt --auto --yes --model gpt-4o-mini
In interactive mode, choose "OpenAI API (https://api.openai.com/v1)" when asked for the base URL.
When provider is OpenAI, you'll be prompted to choose between API key and ChatGPT auth. If you choose API key mode, the tool offers to set/update your OPENAI_API_KEY in ~/.codex/auth.json during the flow.
If you just want to set an OPENAI_API_KEY in ~/.codex/auth.json and exit (no config writes):
# Interactive prompt (input hidden)
python3 codex-cli-linker.py --set-openai-key
# Non-interactive: pass the key
python3 codex-cli-linker.py --set-openai-key --api-key sk-...
Notes:
- Writes ~/.codex/auth.json and creates a timestamped backup if it exists.
- Never commits user configs; keep auth.json private (contains secrets).
- No other files are modified in this mode.
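A rough sketch of that set-key step, assuming auth.json is a JSON object whose OPENAI_API_KEY field holds the key (any other fields present are preserved; the real flow may differ):

```python
import json
import time
from pathlib import Path

def set_openai_key(api_key, codex_home=Path.home() / ".codex"):
    """Update OPENAI_API_KEY in auth.json, backing up any existing file first."""
    auth_path = codex_home / "auth.json"
    data = {}
    if auth_path.exists():
        data = json.loads(auth_path.read_text(encoding="utf-8"))
        stamp = time.strftime("%Y%m%d-%H%M%S")
        # Timestamped backup, e.g. auth.json.20250101-120000.bak
        auth_path.replace(auth_path.with_name(f"auth.json.{stamp}.bak"))
    data["OPENAI_API_KEY"] = api_key
    codex_home.mkdir(parents=True, exist_ok=True)
    auth_path.write_text(json.dumps(data, indent=2), encoding="utf-8")
```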
You can manage profiles before writing config:
# Interactive add/remove/edit of profiles during the run
python3 codex-cli-linker.py --manage-profiles
# Safety: prevent accidental overwrite of an existing [profiles.<name>]
# (the tool will prompt unless --yes is used)
python3 codex-cli-linker.py --profile myprofile # will prompt if exists
python3 codex-cli-linker.py --profile myprofile --overwrite-profile # allow overwrite
# Merge only the generated profiles into an existing config.toml (preserves other profiles)
python3 codex-cli-linker.py --merge-profiles --profile myprofile --model llama-3.1-8b
Configure external MCP servers under the top-level mcp_servers key (not mcpServers). You can manage them interactively or pass a JSON blob:
# Interactive management
python3 codex-cli-linker.py --manage-mcp
# Non-interactive via JSON (example entry named "search")
python3 codex-cli-linker.py --auto --yes --model gpt-4o-mini --provider openai \
--mcp-json '{
"search": {
"command": "npx",
"args": ["-y", "mcp-server"],
"env": {"API_KEY": "value"},
"startup_timeout_ms": 20000
}
}'
Config structure produced:
[mcp_servers.search]
command = "npx"
args = ["-y", "mcp-server"]
startup_timeout_ms = 20000 # optional; default is 10000 when omitted
[mcp_servers.search.env]
API_KEY = "value"This repository ships a single script plus a couple of optional helper scripts:
./codex-cli-linker.py # the tool
./scripts/set_env.sh # example: set NULLKEY env (macOS/Linux)
./scripts/set_env.bat # example: set NULLKEY env (Windows)
Requirements:
- Python 3.8+
- Codex CLI (codex on PATH, or npx codex). If missing, the script will attempt npm i -g @openai/codex-cli.
- A local OpenAI‑compatible server (LM Studio or Ollama) if you want auto‑detection.
The tool itself does not talk to OpenAI; it only queries your local server’s /v1/models to list model IDs.
If you prefer a package install (no cloning needed):
# Recommended: isolated install via pipx
pipx install codex-cli-linker
# Or user install via pip
python3 -m pip install --user codex-cli-linker
# Then run (the script is installed on your PATH)
codex-cli-linker.py --auto   # or just: codex-cli-linker.py
Homebrew installs the tool in a virtual environment and exposes the codex-cli-linker entry point:
brew tap supermarsx/codex-cli-linker https://github.com/supermarsx/codex-cli-linker
brew install supermarsx/codex-cli-linker/codex-cli-linker
codex-cli-linker --auto
Upgrade when a new release ships:
brew update
brew upgrade codex-cli-linker
See docs/homebrew.md for tap maintenance notes.
Scoop installs the single-file executable and shims it onto your PATH:
scoop bucket add codex-cli-linker https://github.com/supermarsx/codex-cli-linker
scoop install codex-cli-linker
codex-cli-linker --auto
Upgrade with:
scoop update codex-cli-linker
See docs/scoop.md for manifest maintenance notes.
Notes:
- PyPI installs the single script; there is no Python package import. Run the script by name.
- On Windows PowerShell, use py -m pip install codex-cli-linker then run codex-cli-linker.py.
- Homebrew keeps the package under $(brew --cellar)/codex-cli-linker/<version> and exposes the codex-cli-linker entry point on your PATH.
- Scoop keeps shims in ~/scoop/shims/ while the app lives under ~/scoop/apps/codex-cli-linker/current.
- Base URL — Auto‑detects a running server by probing common endpoints:
  - LM Studio: http://localhost:1234/v1
  - Ollama: http://localhost:11434/v1
  If detection fails, you can select a preset, enter a custom URL, or use your last saved URL.
- Model pick — Calls GET <base>/models and lists data[*].id for selection (a minimal sketch follows this list).
- Config synthesis — Builds a single in‑memory config object that mirrors the TOML schema (root keys + [model_providers.<id>] + [profiles.<name>]).
- Emission & backup — Writes ~/.codex/config.toml (always unless --dry-run) and, if requested, config.json/config.yaml. Any existing file is backed up to *.bak first.
- State — Saves ~/.codex/linker_config.json so next run can preload your last base URL, provider, profile, and model.
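For orientation, here is a minimal dependency-free sketch of the detection and model-listing steps. The function names (detect_base_url, list_models) and the candidate list are illustrative, not the script's actual internals:

```python
import json
import urllib.request

# Two of the ports the tool probes by default (the real candidate list is longer).
CANDIDATES = [
    "http://localhost:1234/v1",   # LM Studio
    "http://localhost:11434/v1",  # Ollama
]

def detect_base_url(candidates=CANDIDATES, timeout=2.0):
    """Return the first base URL whose /models endpoint answers, else None."""
    for base in candidates:
        try:
            with urllib.request.urlopen(f"{base}/models", timeout=timeout):
                return base
        except OSError:
            continue  # nothing listening on this port; try the next candidate
    return None

def list_models(base_url, timeout=5.0):
    """GET <base>/models and return the data[*].id values."""
    with urllib.request.urlopen(f"{base_url}/models", timeout=timeout) as resp:
        payload = json.load(resp)
    return [entry["id"] for entry in payload.get("data", [])]

if __name__ == "__main__":
    base = detect_base_url()
    print(base, list_models(base) if base else [])
```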
By default, files live under $CODEX_HOME (defaults to ~/.codex).
- config.toml ← always written unless --dry-run
- config.json ← when --json is passed
- config.yaml ← when --yaml is passed
- linker_config.json ← small helper file this tool uses to remember your last choices
Existing config.* are moved to config.*.bak before writing.
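A rough sketch of that backup-then-write step, using a hypothetical helper named write_with_backup (the real script's helper may differ):

```python
from pathlib import Path

def write_with_backup(path: Path, text: str):
    """Move any existing file to <name>.bak, then write the new content."""
    backup = None
    if path.exists():
        backup = path.with_name(path.name + ".bak")  # config.toml -> config.toml.bak
        path.replace(backup)
    path.write_text(text, encoding="utf-8")
    return backup
```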
python3 codex-cli-linker.py [options]
Tip: All options have short aliases (e.g., -a for --auto). Run -h to see the full list.
Additional handy short aliases:
- -mp/--merge-profiles, -mc/--merge-config, -mO/--merge-overwrite
- -mm/--manage-mcp, -mj/--mcp-json
- -Na/--network-access
- -Et/--exclude-tmpdir-env-var
- -Es/--exclude-slash-tmp, -ES/--no-exclude-slash-tmp
- -Wr/--writable-roots
- -wa/--wire-api
- -Hh/--http-header, -He/--env-http-header
- -Nt/--notify, -In/--instructions, -Tp/--trust-project
- -Tn/--tui-notifications, -Tt/--tui-notification-types
- -oa/--openai, -oA/--openai-api, -og/--openai-gpt
- -dr/--doctor, -cu/--check-updates, -nuc/--no-update-check
- -ll/--log-level, -rc/--remove-config, -rN/--remove-config-no-bak, -db/--delete-all-backups, -dc/--confirm-delete-backups
- -ws/--workspace-state, -oc/--open-config, -sK/--set-openai-key, -kc/--keychain, -op/--overwrite-profile, -mP/--manage-profiles
Connection & selection
- --auto — skip base‑URL prompt and auto‑detect a server
- --full-auto — imply --auto and pick the first model with no prompts
- --model-index <N> — with --auto/--full-auto, pick model by list index (default 0)
- --base-url <URL> — explicit OpenAI‑compatible base URL (e.g., http://localhost:1234/v1)
- --model <ID|substring> — exact model id or a case-insensitive substring; ties break deterministically (alphabetical)
- --provider <ID> — provider key for [model_providers.<id>] (e.g., lmstudio, ollama, custom)
- Presets:
  - -or, --openrouter → provider openrouter-remote (https://openrouter.ai/api/v1)
  - -an, --anthropic → provider anthropic (https://api.anthropic.com/v1)
  - -az, --azure → provider azure (https://<resource>.openai.azure.com/<path>)
    - Optional: --azure-resource <NAME> and --azure-path <PATH> to synthesize base_url
  - -gq, --groq → provider groq (https://api.groq.com/openai/v1)
  - -mi, --mistral → provider mistral (https://api.mistral.ai/v1)
  - -ds, --deepseek → provider deepseek (https://api.deepseek.com/v1)
  - -ch, --cohere → provider cohere (https://api.cohere.com/v2)
  - -bt, --baseten → provider baseten (https://inference.baseten.co/v1)
  - -kb, --koboldcpp → provider koboldcpp (http://localhost:5000/v1)
- --profile <NAME> — profile name for [profiles.<name>] (default deduced from provider)
- --api-key <VAL> — dummy key to place in an env var
- --env-key-name <NAME> — env var name that holds the API key (default NULLKEY)
- --config-url <URL> — preload flag defaults from a remote JSON before prompting
- -V, --version — print the tool version and exit
Behavior & UX
- --approval-policy {untrusted,on-failure,on-request,never} (default: on-failure)
- --sandbox-mode {read-only,workspace-write,danger-full-access} (default: workspace-write)
- --network-access — enable sandbox_workspace_write.network_access (omit to keep disabled)
- --exclude-tmpdir-env-var — exclude $TMPDIR from writable roots (workspace-write only; omit to include by default)
- --exclude-slash-tmp/--no-exclude-slash-tmp — exclude/include /tmp from writable roots (workspace-write only)
- --writable-roots <CSV> — extra writable roots for workspace-write (e.g., /workspace,/data)
- --file-opener {vscode,vscode-insiders,windsurf,cursor,none} (default: vscode)
- --open-config — after writing files, print the exact editor command to open config.toml (no auto-launch)
- --tui-notifications — enable desktop notifications in the TUI (omit to keep disabled)
- --tui-notification-types <CSV> — filter to specific types: agent-turn-complete, approval-requested
- --reasoning-effort {minimal,low,medium,high} (default: low)
- --reasoning-summary {auto,concise,detailed,none} (default: auto)
- --verbosity {low,medium,high} (default: medium)
- --hide-agent-reasoning / --show-raw-agent-reasoning
History persistence & storage
- --no-history - sets history.persistence=none (otherwise save-all)
- --history-max-bytes <N> - limit history size
- --disable-response-storage - do not store responses
- --state-file <PATH> - use a custom linker state JSON path (default $CODEX_HOME/linker_config.json)
- --workspace-state - prefer ./.codex-linker.json in the current directory for linker state (auto-created when missing)
- --doctor-detect-features - while running --doctor, probe advanced API features (tool_choice, response_format, reasoning)
Keychain (optional)
- --keychain {none,auto,macos,dpapi,secretstorage} — when --api-key is provided, store it in an OS keychain:
  - auto → macOS Keychain on macOS, DPAPI on Windows, Secret Service on Linux
  - macos → macOS security add-generic-password
  - dpapi → Windows Credential Manager (Generic Credential)
  - secretstorage → Linux Secret Service (via optional secretstorage package)
  - none (default) → do nothing

  Notes: This is best‑effort and never required; failures are logged and ignored. Config files still use env vars — secrets are not written to TOML/JSON/YAML.
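For the macOS path specifically, the store step can be approximated with the system security tool. This is a sketch under assumed helper and argument names, not the script's actual internals:

```python
import subprocess

def store_in_macos_keychain(service, account, secret):
    """Best-effort save of a secret in the macOS Keychain; failures are ignored."""
    try:
        subprocess.run(
            ["security", "add-generic-password",
             "-s", service,   # service label, e.g. "codex-cli-linker"
             "-a", account,   # account label, e.g. the env var name
             "-w", secret,    # the secret value itself
             "-U"],           # update the item if it already exists
            check=True,
            capture_output=True,
        )
        return True
    except (OSError, subprocess.CalledProcessError):
        return False  # never fatal: the config keeps referencing the env var
```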
Multiple providers & profiles
- --providers lmstudio,ollama - add predefined routes for both providers and create matching profiles.
- Also supports: vllm, tgwui, tgi, openrouter (common local ports are probed automatically).
Dry-run diffs
- --dry-run --diff - show colorized, symbol-rich diffs in TTY (falls back to unified diff when color is unavailable). Additions are green +, deletions red -, unchanged lines dim.
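The plain-text fallback can be pictured with the standard library's difflib; a minimal sketch, assuming the current and generated config texts are already in memory:

```python
import difflib

def unified_config_diff(old_text, new_text, path="config.toml"):
    """Return a plain unified diff between the current and generated config."""
    return "".join(
        difflib.unified_diff(
            old_text.splitlines(keepends=True),
            new_text.splitlines(keepends=True),
            fromfile=f"{path} (current)",
            tofile=f"{path} (generated)",
        )
    )
```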
Non-interactive
- --yes - suppress prompts when inputs are fully specified (implies --auto and defaults --model-index 0 when --model is not provided).
- Honors NO_COLOR and non‑TTY: disables ANSI; banners are omitted to keep logs clean.
- Guided pipeline: codex-cli-linker.py --guided (add --no-emojis to hide emojis)
- OpenAI (API key): codex-cli-linker.py --openai-api --auto --yes --model gpt-4o-mini
- OpenAI (ChatGPT): codex-cli-linker.py --openai-gpt --auto --yes --model gpt-4o-mini
- Azure: codex-cli-linker.py --azure --azure-resource <name> --azure-api-version 2025-04-01-preview --auto
- OpenRouter: codex-cli-linker.py --openrouter --auto (example preset; see Provider presets)
- Continuous logs (no auto‑clear): add --continuous to any interactive run
- ESC in menus: backs to the previous menu level
- Experimental: -U --experimental-resume, -I --experimental-instructions-file, -X --experimental-use-exec-command-tool, -O --responses-originator-header-internal-override, -M --preferred-auth-method, -W --tools-web-search, -z --azure-api-version, -K --request-max-retries, -S --stream-max-retries, -e --stream-idle-timeout-ms
Note: All single-letter shorts are already used; --log-level offers an alias --level for convenience. Use --level info (for example) or -v for a quick DEBUG/WARNING toggle.
Post‑run report includes: target file path, backup path (if created), profile, provider, model, context window, and max tokens.
Build the image locally (includes Codex CLI and this tool):
docker build -t codex-cli-linker:local .
Run with your ~/.codex mounted so configs persist on the host:
docker run --rm -it \
-e CODEX_HOME=/data/.codex \
-v "$HOME/.codex:/data/.codex" \
  codex-cli-linker:local --auto
Compose option (uses docker-compose.yml):
docker compose up --build codex-linker
Notes:
- To target a local server on the host, use --base-url http://host.docker.internal:1234/v1 on macOS/Windows. On Linux, consider network_mode: host in compose.
- The container never auto-launches external apps; it only prints suggested commands.
- --history-max-bytes <N> — sets history.max_bytes
- --disable-response-storage — sets disable_response_storage=true
Networking & compatibility
- --azure-api-version <VER> — adds query_params.api-version=<VER> to the selected provider
- --request-max-retries <N> (default: 4)
- --stream-max-retries <N> (default: 10)
- --stream-idle-timeout-ms <MS> (default: 300000)
- --chatgpt-base-url <URL> — optional alternate base for ChatGPT‑authored requests
- --preferred-auth-method {chatgpt,apikey} (default: apikey)
- --tools-web-search — sets tools.web_search=true
- --wire-api {chat,responses} — wire protocol for the provider
- --http-header KEY=VAL — static HTTP header (repeatable)
- --env-http-header KEY=ENV_VAR — env‑sourced header (repeatable)
- Auth headers set for the presets:
  - OpenRouter (remote), Groq, Mistral, DeepSeek, Cohere, Baseten → Authorization: Bearer <env value>
  - Anthropic → x-api-key: <env value>
  - Azure OpenAI → api-key: <env value>
- --notify '["program","arg1",...]' or program,arg1,... — top‑level notify
- --instructions <TEXT> — top‑level instructions
- --trust-project <PATH> — mark a project/worktree as trusted (repeatable)
Output formats
- --json — also write ~/.codex/config.json
- --yaml — also write ~/.codex/config.yaml
- --dry-run — print configs to stdout without writing files
Diagnostics
- --verbose — enable INFO/DEBUG logging
- --log-file <PATH> — append logs to a file
- --log-json — also emit logs as JSON to stdout
- --log-remote <URL> — POST log records to an HTTP endpoint
The --launch flag is intentionally disabled; the script prints the exact npx codex --profile <name> command (or codex --profile <name> if installed) instead of auto‑launching.
At a glance, the script writes:
- Root keys
  - model — chosen model id
  - model_provider — provider key (e.g., lmstudio, ollama)
  - approval_policy, sandbox_mode, file_opener
  - model_reasoning_effort, model_reasoning_summary, model_verbosity
  - model_context_window (best‑effort auto‑detected; 0 if unknown)
  - model_max_output_tokens
  - project_doc_max_bytes
  - hide_agent_reasoning, show_raw_agent_reasoning, model_supports_reasoning_summaries
  - preferred_auth_method
  - tools.web_search
  - tui.style — TUI style (e.g., table)
  - tui.notifications — boolean to enable all notifications, or array of types
  - disable_response_storage
  - history.persistence & history.max_bytes
- [model_providers.<id>] (only the active one is emitted)
  - name — human label (e.g., "LM Studio", "Ollama")
  - base_url — your selected base URL, normalized
  - env_key — environment variable holding the API key
  - wire_api — chat or responses
  - request_max_retries, stream_max_retries, stream_idle_timeout_ms
  - http_headers — static headers map
  - env_http_headers — env‑sourced headers map
  - query_params.api-version (when --azure-api-version is provided)
- mcp_servers.<id>
  - command, args, env, optional startup_timeout_ms (default: 10000)
- sandbox_workspace_write (applies when sandbox_mode = "workspace-write")
  - writable_roots — extra writable roots (cwd, $TMPDIR, and /tmp are writable by default unless excluded)
  - network_access — allow network in workspace-write (default: false)
  - exclude_tmpdir_env_var — exclude $TMPDIR (default: false)
  - exclude_slash_tmp — exclude /tmp (default: false)
- [profiles.<name>]
  - model, model_provider
  - model_context_window, model_max_output_tokens
  - approval_policy
The tool deliberately does not store API keys in the file.
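To see how those sections hang together in memory, here is a rough sketch of the kind of nested dict a function like build_config_dict() might return; the signature and defaults shown are assumptions, only the key names mirror the list above:

```python
def build_config_dict(model, provider_id, base_url, profile):
    """Illustrative shape only: root keys, one provider block, one profile."""
    return {
        "model": model,
        "model_provider": provider_id,
        "approval_policy": "on-failure",
        "sandbox_mode": "workspace-write",
        "sandbox_workspace_write": {
            "writable_roots": [],
            "network_access": False,
            "exclude_tmpdir_env_var": False,
            "exclude_slash_tmp": False,
        },
        "model_providers": {
            provider_id: {
                "name": "LM Studio",
                "base_url": base_url,
                "env_key": "NULLKEY",
                "wire_api": "chat",
            },
        },
        "profiles": {
            profile: {
                "model": model,
                "model_provider": provider_id,
                "approval_policy": "on-failure",
            },
        },
    }
```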
# ~/.codex/config.toml (excerpt)
model = "llama-3.1-8b"
model_provider = "lmstudio"
approval_policy = "on-failure"
sandbox_mode = "workspace-write"
file_opener = "vscode"
model_reasoning_effort = "low"
model_reasoning_summary = "auto"
model_verbosity = "medium"
model_context_window = 0
model_max_output_tokens = 0
project_doc_max_bytes = 1048576
[tools]
web_search = false
[history]
persistence = "save-all"
max_bytes = 0
[model_providers.lmstudio]
name = "LM Studio"
base_url = "http://localhost:1234/v1"
wire_api = "chat"
request_max_retries = 4
stream_max_retries = 10
stream_idle_timeout_ms = 300000
[profiles.lmstudio]
model = "llama-3.1-8b"
model_provider = "lmstudio"
model_context_window = 0
model_max_output_tokens = 0
approval_policy = "on-failure"[model_providers.ollama]
name = "Ollama"
base_url = "http://localhost:11434/v1"
wire_api = "chat"
request_max_retries = 4
stream_max_retries = 10
stream_idle_timeout_ms = 300000
[profiles.ollama]
model = "llama3"
model_provider = "ollama"
model_context_window = 0
model_max_output_tokens = 0
approval_policy = "on-failure"python3 codex-cli-linker.py --json --yaml
ls ~/.codex/
# config.toml  config.toml.bak  config.json  config.yaml  linker_config.json
- CODEX_HOME — overrides the config directory (default: ~/.codex).
- NULLKEY — default env var this tool initializes to "nullkey" so configs never need to include secrets; change with --env-key-name.
- Optional helper scripts:
  - macOS/Linux: source scripts/set_env.sh
  - Windows: scripts\set_env.bat
If your provider requires a key, prefer exporting it in your shell and letting Codex read it from the environment rather than writing it to disk.
- “No server auto‑detected.”
  Ensure LM Studio’s local server is running (check Developer → Local Server is enabled) or that Ollama is running. Otherwise pass --base-url.
- “Models list is empty.”
  Your server didn’t return anything from GET /v1/models. Verify the endpoint and that at least one model is downloaded/available.
- Network errors
  Use --request-max-retries, --stream-max-retries, or --stream-idle-timeout-ms to tune resilience for flaky setups.
- History & storage
  If you’re in a restricted environment, add --disable-response-storage and/or --no-history when generating the config.
- Azure/OpenAI compatibility
  When talking to Azure‑hosted compatible endpoints, pass --azure-api-version <YYYY-MM-DD> to set query_params.api-version.
The single‑file binaries are built with PyInstaller, which can occasionally trigger false‑positive warnings from Windows Defender or other AV engines.
Mitigations
- Prefer the source or PyPI install: pipx install codex-cli-linker (or pip install --user codex-cli-linker) and run codex-cli-linker.
- Build locally from source: python -m pip install build pyinstaller && pyinstaller -F -n codex-cli-linker codex-cli-linker.py.
- Verify checksum against the GitHub Release artifact SHA shown in the release details.
- Optionally upload the artifact to VirusTotal to confirm multi‑engine status.
Note: We do not include third‑party code in the binary; it is produced directly from this repository’s source. If warnings persist, prefer the PyPI or source‑based install method.
See changelog.md for a summary of notable changes by version.
See development.md for CI, release, and code map information.
You can use CMake to run common tasks for this Python tool (no extra Python deps required):
# Configure once
cmake -S . -B build
# Syntax check
cmake --build build --target check
# Run unit tests via CTest
cmake --build build --target test # or: ctest --test-dir build -V
# Run the tool with flags controlled by cache vars
cmake -S . -B build \
-DCODEX_AUTO=ON \
-DCODEX_JSON=ON \
-DCODEX_BASE_URL=http://localhost:1234/v1
cmake --build build --target run
# Handy aggregate target
cmake --build build --target ci   # runs check + test
Configurable cache variables for run:
- CODEX_AUTO, CODEX_FULL_AUTO, CODEX_JSON, CODEX_YAML, CODEX_DRY_RUN, CODEX_VERBOSE
- CODEX_BASE_URL, CODEX_PROVIDER, CODEX_PROFILE, CODEX_MODEL, CODEX_MODEL_INDEX
This keeps side effects explicit (no auto-launch) and matches the repo’s cross‑platform, no‑deps philosophy.
GNU Make wrapper around the CMake targets:
# Configure & run checks/tests
make configure
make check
make test
# Run the tool (pass flags as variables)
make run AUTO=1 JSON=1 BASE_URL=http://localhost:1234/v1
# Aggregate
make ci
# Clean build dir
make clean
Released under the MIT License. See license.md for details.