
wishcraft_openai: Codex ChatGPT planning conduit — status log and TODOs #117

@AtlantisPleb

Description

This issue tracks the work to wire the Wishcraft planning conduit to Codex’s ChatGPT backend and what remains to get it production‑ready.

What’s done

  • Exact Codex endpoint + headers
    • Requests go to https://chatgpt.com/backend-api/codex/responses (Responses API) using ChatGPT tokens from ~/.codex/auth.json.
    • Sends headers Codex uses: Authorization: Bearer <access_token>, chatgpt-account-id, OpenAI-Beta: responses=experimental, originator, conversation_id, session_id, Codex-Task-Type: standard, Accept: text/event-stream, Referer: https://chatgpt.com/.
    • Token refresh support via https://auth.openai.com/oauth/token (writes back to ~/.codex/auth.json).
  • Payload parity (Responses wire)
    • Mirrors codex-rs ResponsesApiRequest: fields include model, instructions, input (ResponseItem array with type: message, content: [{type: input_text, text: ...}]), tool_choice: auto, parallel_tool_calls: false, store: false, stream: true, include: [].
    • Embeds Codex base instructions verbatim by including vendored prompt files:
      • core/prompt.md (default)
      • core/gpt_5_codex_prompt.md (when model starts with gpt-5-codex or codex-).
  • SSE parsing (streaming)
    • Aggregates response.delta and response.output_item.done into a single output_text and extracts model + usage from response.completed.
    • Returns a simple JSON object that the conduit maps to plan_steps by line‑splitting.
  • Chat fallback
    • If the Responses call is rejected because of its payload shape, the client falls back to /codex/chat/completions with messages and a minimal tools array. (Not normally needed now that the Responses payload is correct.)
  • CLI integration
    • cargo run -p xtask -- wish codex plan --file data/wishes/wishcraft-docs.yaml --live --out /tmp/wish-docs.plan.json
    • Uses ~/.codex/auth.json; no API key required.
  • Registry + docs
    • Conduit id: openai.codex.v2025.plan in data/conduits/registry.yaml.
    • Docs overview added: docs/wish.md.
  • Warnings fixed
    • Removed a stray mut binding from the SSE parser.
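
To make the "payload parity" bullet concrete, here is a minimal sketch of building the Responses wire payload with the fields listed above. The builder function, the role: "user" field, and the string-based JSON assembly are illustrative assumptions; the real conduit mirrors codex-rs ResponsesApiRequest types and would use a JSON library such as serde_json.

```rust
// Sketch of the Responses wire payload described above. Field names come
// from the issue (model, instructions, input, tool_choice, parallel_tool_calls,
// store, stream, include); the helper itself is hypothetical.
fn build_responses_payload(model: &str, instructions: &str, user_text: &str) -> String {
    // Minimal JSON string escaping for illustration only; a real client
    // should rely on a proper JSON serializer.
    fn esc(s: &str) -> String {
        s.replace('\\', "\\\\").replace('"', "\\\"").replace('\n', "\\n")
    }
    format!(
        concat!(
            "{{\"model\":\"{}\",\"instructions\":\"{}\",",
            "\"input\":[{{\"type\":\"message\",\"role\":\"user\",",
            "\"content\":[{{\"type\":\"input_text\",\"text\":\"{}\"}}]}}],",
            "\"tool_choice\":\"auto\",\"parallel_tool_calls\":false,",
            "\"store\":false,\"stream\":true,\"include\":[]}}"
        ),
        esc(model),
        esc(instructions),
        esc(user_text)
    )
}

fn main() {
    let payload = build_responses_payload("gpt-5", "You are a planner.", "Plan the docs wish.");
    assert!(payload.contains("\"stream\":true"));
    println!("{payload}");
}
```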

Where the code lives

  • Client/headers/SSE: crates/wishcraft_openai/src/client.rs
  • Conduit/planning prompt + payload: crates/wishcraft_openai/src/conduit.rs
  • Config defaults (base URL, model, CODEX_HOME): crates/wishcraft_openai/src/config.rs
  • CLI wiring: xtask/src/main.rs
  • Vendored Codex sources (reference only): third_party/openai-codex/codex-rs/

Verified behavior

  • Auth: loads access_token + chatgpt-account-id from ~/.codex/auth.json.
  • Endpoint: posts to https://chatgpt.com/backend-api/codex/responses.
  • Result: produces a non‑empty planning artifact at /tmp/wish-docs.plan.json with plan_steps, model, and tokens_used.
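
The plan_steps mapping mentioned above (line‑splitting the aggregated output_text) can be sketched like this; the function name is hypothetical and this is not the actual conduit code in conduit.rs.

```rust
// Illustrative sketch: map a streamed output_text to plan_steps by splitting
// on lines, trimming whitespace, and dropping blanks.
fn to_plan_steps(output_text: &str) -> Vec<String> {
    output_text
        .lines()
        .map(str::trim)
        .filter(|line| !line.is_empty())
        .map(String::from)
        .collect()
}

fn main() {
    let text = "1. Audit existing docs\n\n2. Draft outline\n3. Write sections\n";
    let steps = to_plan_steps(text);
    assert_eq!(steps.len(), 3);
    println!("{steps:?}");
}
```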

TODO (production‑readiness)

  • SSE unit tests: feed canned response.delta, response.output_item.done, and response.completed frames; assert output_text, model, usage are assembled correctly.
  • Error hygiene: surface cf-ray and structured JSON error bodies consistently; add Retry-After handling to the Responses path (handling already exists in the client; expand coverage).
  • Reasoning/text controls: optionally include text.verbosity and/or output schema for specific model families (codex‑rs does this for gpt‑5).
  • Tooling parity: if the planner needs a specific function tool shape later, generate tool JSON via a helper rather than hand‑writing (kept empty for now since planning returns text).
  • Config ergonomics: allow a small TOML under ~/.codex/config.toml or data/config/wishcraft.toml to override model and tuning safely.
  • Apply conduit (separate): safe local apply (write file/patch) and later a high‑risk PR opener with strict gates.
  • Telemetry: add tracing spans (wish_id, request_id/cf-ray, token usage) and rate‑limit snapshots to logs.
  • Docs: add a short developer note under docs/gdd/11-technical/ for Codex conduit specifics and troubleshooting.
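
The SSE unit-test item above could start from something like the following: feed canned frames and assert the aggregated output_text. The toy aggregator here uses plain string matching instead of real JSON parsing and is not the client.rs parser; the frame type names come from this issue.

```rust
// Toy aggregator for canned SSE frames: collect the text carried by
// "response.delta" frames into a single output_text.
fn aggregate(frames: &[&str]) -> String {
    let mut out = String::new();
    for frame in frames {
        // Each SSE frame carries `data: {...}`; a real test would parse the
        // JSON body instead of scanning for substrings.
        if let Some(body) = frame.strip_prefix("data: ") {
            if body.contains("\"type\":\"response.delta\"") {
                if let Some(i) = body.find("\"delta\":\"") {
                    let tail = &body[i + "\"delta\":\"".len()..];
                    if let Some(j) = tail.find('"') {
                        out.push_str(&tail[..j]);
                    }
                }
            }
        }
    }
    out
}

fn main() {
    let frames = [
        r#"data: {"type":"response.delta","delta":"Step 1"}"#,
        r#"data: {"type":"response.delta","delta":" and Step 2"}"#,
        r#"data: {"type":"response.completed","response":{"model":"gpt-5"}}"#,
    ];
    assert_eq!(aggregate(&frames), "Step 1 and Step 2");
}
```

A real test would additionally assert that model and usage are extracted from the response.completed frame, as the TODO describes.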

Repro commands

# Lint wish file
cargo xtask wish lint data/wishes/wishcraft-docs.yaml

# Live plan (uses ~/.codex/auth.json)
cargo run -p xtask -- wish codex plan --file data/wishes/wishcraft-docs.yaml --live --out /tmp/wish-docs.plan.json
cat /tmp/wish-docs.plan.json

Notes

  • We vendor codex-rs for reference only; no workspace linkage.
  • Default model is gpt-5 (env override OPENAI_MODEL).
  • Base URL defaults to https://chatgpt.com/backend-api/codex (env override CHATGPT_BASE_URL).
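
The default-plus-env-override behavior noted above can be sketched as below; the real defaults live in crates/wishcraft_openai/src/config.rs, and the helper names here are hypothetical.

```rust
use std::env;

// Sketch of config resolution: an env override wins, otherwise the default
// applies (OPENAI_MODEL -> gpt-5, CHATGPT_BASE_URL -> the Codex backend URL).
fn resolve(override_var: Option<String>, default: &str) -> String {
    override_var.unwrap_or_else(|| default.to_string())
}

fn model() -> String {
    resolve(env::var("OPENAI_MODEL").ok(), "gpt-5")
}

fn base_url() -> String {
    resolve(
        env::var("CHATGPT_BASE_URL").ok(),
        "https://chatgpt.com/backend-api/codex",
    )
}

fn main() {
    println!("model    = {}", model());
    println!("base_url = {}", base_url());
}
```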
