Summary
Add first-class OpenAI Codex support to the agent-retro skill without regressing the current Claude Code workflow.
Proposed approach
- keep the current root skill install surface
- preserve Claude behavior and tests as the regression baseline
- refactor scripts/extract.py into a provider-dispatch entrypoint over shared normalization code
- add provider adapters for Claude and Codex transcript discovery and parsing
- keep the retro output contract shared across providers
- write Codex retros to ~/.codex/worklog/retros
- add Codex fixtures and tests alongside existing Claude fixtures
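The dispatch shape described above could look roughly like the sketch below. The adapter class names, the `root` paths, and the tie-breaking rule in auto mode are all assumptions for illustration, not the skill's actual API.

```python
# Illustrative sketch of a provider-dispatch entrypoint for scripts/extract.py.
# Adapter classes, attributes, and paths are assumptions, not the final design.
from pathlib import Path


class ClaudeAdapter:
    """Discovers and parses Claude Code transcripts (path is an assumption)."""
    name = "claude"
    root = Path.home() / ".claude" / "projects"


class CodexAdapter:
    """Discovers and parses Codex transcripts under ~/.codex/sessions."""
    name = "codex"
    root = Path.home() / ".codex" / "sessions"


ADAPTERS = {a.name: a for a in (ClaudeAdapter(), CodexAdapter())}


def get_adapter(provider: str):
    """Single dispatch point, so shared normalization code stays provider-free."""
    if provider == "auto":
        # Prefer whichever provider has transcripts on disk; Claude wins ties
        # to preserve the existing workflow as the default.
        for adapter in (ADAPTERS["claude"], ADAPTERS["codex"]):
            if adapter.root.exists():
                return adapter
        raise SystemExit("no transcripts found for any provider")
    try:
        return ADAPTERS[provider]
    except KeyError:
        raise SystemExit(f"unknown provider: {provider!r}")
```

Keeping dispatch in one function means the shared core never imports provider-specific parsing directly.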
Why this shape
The retrospective methodology is agent-agnostic, but transcript storage and runtime metadata are provider-specific. A shared normalized core plus thin adapters keeps the skill portable without mixing Claude- and Codex-specific parsing logic in a single parser.
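One way to picture the shared normalized core: adapters emit a common record type and the retro generator consumes only that type. The field names below are assumptions for illustration.

```python
# Hypothetical normalized transcript record; field names are illustrative.
from dataclasses import dataclass


@dataclass
class RetroEvent:
    provider: str    # "claude" or "codex"
    role: str        # "user", "assistant", or "tool"
    text: str
    timestamp: str   # ISO 8601, normalized by the adapter

# Adapters produce lists of RetroEvent; everything downstream (the retro
# generator, the output contract) depends only on this type, so provider
# quirks never leak into the shared core.
```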
Scope for the first PR
- provider auto-detection plus a --provider flag with auto, claude, and codex modes
- Claude adapter extracted without behavior regressions
- Codex adapter for the current observed local transcript schema
- shared skill workflow docs plus provider-specific guidance
- README and install updates
- fixture-based tests for both providers
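The `--provider` flag from the scope list could be wired up with argparse along these lines; the program name and help text are placeholders.

```python
# Sketch of the --provider flag with auto, claude, and codex modes.
import argparse


def build_parser() -> argparse.ArgumentParser:
    parser = argparse.ArgumentParser(prog="extract.py")
    parser.add_argument(
        "--provider",
        choices=("auto", "claude", "codex"),
        default="auto",
        help="transcript provider; auto detects from transcripts on disk",
    )
    return parser
```

Defaulting to `auto` keeps the existing zero-flag Claude invocation working unchanged.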
Notes
I already validated locally that Codex session transcripts are available under ~/.codex/sessions and contain enough information to support the same retro flow.
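Discovery for the Codex adapter might look like the sketch below. The `*.jsonl` layout under ~/.codex/sessions is an assumption based on the local observation above, not a documented Codex contract.

```python
# Hedged sketch: list Codex session transcripts, newest first, so the retro
# defaults to the most recent session. The glob pattern is an assumption.
from pathlib import Path


def codex_sessions(root: Path = Path.home() / ".codex" / "sessions") -> list[Path]:
    if not root.exists():
        return []
    return sorted(
        root.glob("**/*.jsonl"),
        key=lambda p: p.stat().st_mtime,
        reverse=True,
    )
```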