docs(readme,examples): Update readme example, format examples #71
Open

sinderpl wants to merge 2 commits into charmbracelet:main from
Conversation
Author

cc @meowgorithm as per #60 (comment)
Contributor

@meowgorithm do we want to keep the existing readme example or change it?
birdmanmandbir added a commit to tta-lab/fantasy that referenced this pull request on May 6, 2026
Three changes to bring fantasy's Responses API streaming into parity with OpenAI's official codex CLI (codex-rs/codex-api/src/sse/responses.rs):

1. **Defer activeReasoning cleanup to end-of-stream.** The previous implementation deleted the entry on `response.output_item.done`, which meant any subsequent event for the same reasoning item (e.g. a late delta or a duplicate done) was silently dropped. The official codex parser keeps items addressable until the full stream completes.

2. **Capture done.Item.Summary on output_item.done.** The streaming summary delta path may already populate state.metadata.Summary via reasoning_summary_text.delta events, but the done event carries the authoritative final list. Prefer it when non-empty so partial-summary streams are corrected to the final shape.

3. **Add a response.reasoning_text.delta handler.** Some gpt-5.x reasoning variants stream reasoning via this event channel (raw reasoning text keyed by ItemID + ContentIndex) instead of, or in addition to, reasoning_summary_text.delta. The official codex parser handles both; fantasy previously handled only the summary path, dropping raw reasoning text for affected models.

Background: empirical lenos session 35dd39ec (codex 5.4 multi-turn audit) showed turn 1 captured encrypted_content cleanly via the existing output_item.done capture (PR charmbracelet#71's Fix 2), but follow-up turns and gpt-5.5 high sessions (ab022528) showed state.metadata.EncryptedContent stuck at empty despite the API streaming reasoning text. Investigation against the official codex CLI source and multiple reverse-engineered backend proxies (MetaFARS/codex-relay, hermes-agent issue #5732, satoriweb's protocol notes) confirmed:

- response.completed.output is unreliable on the Codex backend (it can be empty even when output_item.done events delivered the data).
- The reasoning_text.delta event is a separate channel from reasoning_summary_text.delta; both must be handled to capture all thinking text emitted by gpt-5.x reasoning variants.

This commit reverts the Fix 3 attempt (commit 7ce6466, which re-emitted ReasoningEnd from response.completed.output), which was based on the incorrect assumption that completed.output is the source of truth.
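The deferred-cleanup and dual-channel handling described above can be sketched as below. This is a minimal illustration, not fantasy's actual code: the `streamState`/`reasoningItem` types and the `handle`/`finish` functions are hypothetical names; only the event type strings come from the Responses API.

```go
package main

import "fmt"

// reasoningItem accumulates both reasoning channels for one output item.
type reasoningItem struct {
	Text    string // raw reasoning text (response.reasoning_text.delta)
	Summary string // summary text (response.reasoning_summary_text.delta)
}

// streamState keeps items addressable for the whole stream.
type streamState struct {
	active map[string]*reasoningItem // keyed by item ID
}

// handle dispatches one SSE event. Both delta channels are captured,
// and output_item.done deliberately does NOT delete the entry.
func (s *streamState) handle(eventType, itemID, delta string) {
	item, ok := s.active[itemID]
	if !ok {
		item = &reasoningItem{}
		s.active[itemID] = item
	}
	switch eventType {
	case "response.reasoning_text.delta":
		item.Text += delta
	case "response.reasoning_summary_text.delta":
		item.Summary += delta
	case "response.output_item.done":
		// Deferred cleanup: a late delta or duplicate done for this
		// item must still resolve, so the entry stays in s.active.
	}
}

// finish clears state only once the full stream has completed.
func (s *streamState) finish() {
	s.active = map[string]*reasoningItem{}
}

func main() {
	s := &streamState{active: map[string]*reasoningItem{}}
	s.handle("response.reasoning_text.delta", "rs_1", "thinking ")
	s.handle("response.output_item.done", "rs_1", "")
	// A late delta arriving after done is still captured:
	s.handle("response.reasoning_text.delta", "rs_1", "more")
	fmt.Println(s.active["rs_1"].Text) // prints "thinking more"
	s.finish()
}
```

Had `handle` deleted the entry on done, the final delta would have started a fresh, empty item and the earlier text would be lost, which is the failure mode the commit describes.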