wishcraft_openai: Codex ChatGPT planning conduit — status log and TODOs
This issue tracks the work to wire the Wishcraft planning conduit to Codex’s ChatGPT backend and what remains to get it production‑ready.
What’s done
- Exact Codex endpoint + headers
  - Requests go to `https://chatgpt.com/backend-api/codex/responses` (Responses API) using ChatGPT tokens from `~/.codex/auth.json`.
  - Sends the headers Codex uses: `Authorization: Bearer <access_token>`, `chatgpt-account-id`, `OpenAI-Beta: responses=experimental`, `originator`, `conversation_id`, `session_id`, `Codex-Task-Type: standard`, `Accept: text/event-stream`, `Referer: https://chatgpt.com/`.
  - Token refresh support via `https://auth.openai.com/oauth/token` (writes back to `~/.codex/auth.json`).
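For reference, the header set above can be sketched as a plain list of pairs. This is illustrative only: the `originator` value and the id arguments are assumptions, and the real client fills `conversation_id`/`session_id` from its session state.

```rust
// Sketch of the headers sent with each Responses call. The "codex_cli_rs"
// originator value is an assumed placeholder, not taken from the real client.
fn codex_headers(
    access_token: &str,
    account_id: &str,
    conv_id: &str,
    sess_id: &str,
) -> Vec<(String, String)> {
    vec![
        ("Authorization".to_string(), format!("Bearer {access_token}")),
        ("chatgpt-account-id".to_string(), account_id.to_string()),
        ("OpenAI-Beta".to_string(), "responses=experimental".to_string()),
        ("originator".to_string(), "codex_cli_rs".to_string()), // assumed value
        ("conversation_id".to_string(), conv_id.to_string()),
        ("session_id".to_string(), sess_id.to_string()),
        ("Codex-Task-Type".to_string(), "standard".to_string()),
        ("Accept".to_string(), "text/event-stream".to_string()),
        ("Referer".to_string(), "https://chatgpt.com/".to_string()),
    ]
}

fn main() {
    let h = codex_headers("tok", "acct", "c1", "s1");
    assert_eq!(h.len(), 9);
    assert_eq!(h[0].1, "Bearer tok");
    println!("ok");
}
```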
- Payload parity (Responses wire)
  - Mirrors codex-rs `ResponsesApiRequest`: fields include `model`, `instructions`, `input` (`ResponseItem` array with `type: message`, `content: [{type: input_text, text: ...}]`), `tool_choice: auto`, `parallel_tool_calls: false`, `store: false`, `stream: true`, `include: []`.
  - Embeds Codex base instructions verbatim by including vendored prompt files: `core/prompt.md` (default), `core/gpt_5_codex_prompt.md` (when the model starts with `gpt-5-codex` or `codex-`).
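The prompt-file selection rule can be sketched as a small helper (the function name is hypothetical; only the prefixes and file paths come from the notes above):

```rust
// Pick which vendored Codex prompt file to embed verbatim as the
// `instructions` field, keyed off the model name prefix.
fn base_instructions_path(model: &str) -> &'static str {
    if model.starts_with("gpt-5-codex") || model.starts_with("codex-") {
        "core/gpt_5_codex_prompt.md"
    } else {
        "core/prompt.md" // default prompt
    }
}

fn main() {
    assert_eq!(base_instructions_path("gpt-5"), "core/prompt.md");
    assert_eq!(base_instructions_path("gpt-5-codex"), "core/gpt_5_codex_prompt.md");
    assert_eq!(base_instructions_path("codex-mini-latest"), "core/gpt_5_codex_prompt.md");
    println!("ok");
}
```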
- SSE parsing (streaming)
  - Aggregates `response.delta` and `response.output_item.done` into a single `output_text`, and extracts `model` + `usage` from `response.completed`.
  - Returns a simple JSON that our conduit maps to `plan_steps` by line‑splitting.
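The aggregation and line-splitting logic can be sketched as follows. This is a simplified model, not the real parser: the actual client decodes SSE frames and JSON bodies, and the type and function names here are illustrative.

```rust
// Simplified model of the SSE aggregation: deltas accumulate into one
// output_text; the terminal response.completed frame carries model + usage.
enum SseEvent {
    Delta(String),                                  // response.delta fragment
    OutputItemDone(String),                         // response.output_item.done
    Completed { model: String, total_tokens: u64 }, // response.completed
}

struct Aggregated {
    output_text: String,
    model: String,
    tokens_used: u64,
}

fn aggregate(events: impl IntoIterator<Item = SseEvent>) -> Aggregated {
    let mut out = Aggregated {
        output_text: String::new(),
        model: String::new(),
        tokens_used: 0,
    };
    for ev in events {
        match ev {
            SseEvent::Delta(t) => out.output_text.push_str(&t),
            // Fall back to the completed item if no deltas arrived.
            SseEvent::OutputItemDone(t) => {
                if out.output_text.is_empty() {
                    out.output_text = t;
                }
            }
            SseEvent::Completed { model, total_tokens } => {
                out.model = model;
                out.tokens_used = total_tokens;
            }
        }
    }
    out
}

// The conduit maps output_text to plan_steps by line-splitting.
fn plan_steps(output_text: &str) -> Vec<String> {
    output_text
        .lines()
        .map(str::trim)
        .filter(|l| !l.is_empty())
        .map(String::from)
        .collect()
}

fn main() {
    let events = vec![
        SseEvent::Delta("1. Outline docs\n".into()),
        SseEvent::Delta("2. Draft sections\n".into()),
        SseEvent::OutputItemDone("1. Outline docs\n2. Draft sections\n".into()),
        SseEvent::Completed { model: "gpt-5".into(), total_tokens: 123 },
    ];
    let agg = aggregate(events);
    assert_eq!(plan_steps(&agg.output_text), vec!["1. Outline docs", "2. Draft sections"]);
    assert_eq!(agg.model, "gpt-5");
    assert_eq!(agg.tokens_used, 123);
    println!("ok");
}
```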
- Chat fallback
  - If the Responses call is rejected for shape reasons, falls back to `/codex/chat/completions` with `messages` and a minimal `tools` array. (Not normally needed now that the Responses payload is correct.)
- CLI integration
  - `cargo run -p xtask -- wish codex plan --file data/wishes/wishcraft-docs.yaml --live --out /tmp/wish-docs.plan.json`
  - Uses `~/.codex/auth.json`; no API key required.
- Registry + docs
  - Conduit id: `openai.codex.v2025.plan` in `data/conduits/registry.yaml`.
  - Docs overview added: `docs/wish.md`.
- Warnings fixed
  - Removed the stray `mut` from the SSE parser.
Where the code lives
- Client/headers/SSE: `crates/wishcraft_openai/src/client.rs`
- Conduit/planning prompt + payload: `crates/wishcraft_openai/src/conduit.rs`
- Config defaults (base URL, model, `CODEX_HOME`): `crates/wishcraft_openai/src/config.rs`
- CLI wiring: `xtask/src/main.rs`
- Vendored Codex sources (reference only): `third_party/openai-codex/codex-rs/`
Verified behavior
- Auth: loads `access_token` + `chatgpt-account-id` from `~/.codex/auth.json`.
- Endpoint: posts to `https://chatgpt.com/backend-api/codex/responses`.
- Result: produces a non‑empty planning artifact at `/tmp/wish-docs.plan.json` with `plan_steps`, `model`, and `tokens_used`.
TODO (production‑readiness)
- SSE tests: cover `response.delta`, `response.output_item.done`, and `response.completed` frames; assert `output_text`, `model`, `usage` are assembled correctly.
- Error handling: surface `cf-ray` and structured JSON error bodies consistently; add `Retry-After` handling to the Responses path (it exists in the client; expand coverage).
- Model tuning: set `text.verbosity` and/or an output schema for specific model families (codex‑rs does this for gpt‑5).
- Config overrides: support `~/.codex/config.toml` or `data/config/wishcraft.toml` to override model and tuning safely.
- Observability: add `tracing` spans (`wish_id`, `request_id`/`cf-ray`, token usage) and rate‑limit snapshots to logs.
- Docs: extend `docs/gdd/11-technical/` for Codex conduit specifics and troubleshooting.
Repro commands
```shell
# Lint wish file
cargo xtask wish lint data/wishes/wishcraft-docs.yaml

# Live plan (uses ~/.codex/auth.json)
cargo run -p xtask -- wish codex plan --file data/wishes/wishcraft-docs.yaml --live --out /tmp/wish-docs.plan.json
cat /tmp/wish-docs.plan.json
```
Notes
- We vendor codex-rs for reference only; no workspace linkage.
- Default model is `gpt-5` (env override `OPENAI_MODEL`).
- Base URL defaults to `https://chatgpt.com/backend-api/codex` (env override `CHATGPT_BASE_URL`).
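The default-plus-override behavior can be sketched as below. This is a simplified model of `crates/wishcraft_openai/src/config.rs`: the function names are illustrative, and the caller is assumed to pass the env lookup result (e.g. `std::env::var("OPENAI_MODEL").ok()`).

```rust
// Resolve config values: an env override wins, otherwise the documented default.
fn resolve_model(env_override: Option<String>) -> String {
    env_override.unwrap_or_else(|| "gpt-5".to_string())
}

fn resolve_base_url(env_override: Option<String>) -> String {
    env_override.unwrap_or_else(|| "https://chatgpt.com/backend-api/codex".to_string())
}

fn main() {
    assert_eq!(resolve_model(None), "gpt-5");
    assert_eq!(resolve_model(Some("gpt-5-codex".into())), "gpt-5-codex");
    assert_eq!(resolve_base_url(None), "https://chatgpt.com/backend-api/codex");
    println!("ok");
}
```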