Conversation

@aibrahim-oai (Collaborator) commented Oct 17, 2025

Users currently hit a context-window-exceeded error and usually don't know what to do. This PR starts auto compaction at ~90% of the context window.
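A minimal sketch of what the ~90% trigger looks like. The names (`should_auto_compact`, `AUTO_COMPACT_THRESHOLD`) are hypothetical illustrations, not the PR's actual identifiers:

```rust
// Hypothetical sketch of the ~90% auto-compaction trigger.
const AUTO_COMPACT_THRESHOLD: f64 = 0.90;

/// Returns true once token usage crosses ~90% of the context window.
fn should_auto_compact(tokens_used: u64, context_window: u64) -> bool {
    if context_window == 0 {
        return false;
    }
    tokens_used as f64 >= context_window as f64 * AUTO_COMPACT_THRESHOLD
}

fn main() {
    // With a 272_000-token window, compaction kicks in around 244_800 tokens.
    assert!(!should_auto_compact(200_000, 272_000));
    assert!(should_auto_compact(245_000, 272_000));
}
```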

@chatgpt-codex-connector (Contributor) bot left a comment

💡 Codex Review

Here are some automated review suggestions for this pull request.


_ if slug.starts_with("gpt-5-codex") => Some(ModelInfo::new(258_000, 128_000)),

_ if slug.starts_with("gpt-5") => Some(ModelInfo::new(272_000, 128_000)),
_ if slug.starts_with("gpt-5") => Some(ModelInfo::new(258_000, 128_000)),
Collaborator

Can we set the context size to the real value and infer the other?
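One way to read this suggestion, as a hedged sketch rather than the PR's actual code: store the model's real advertised context window in the table and derive the compaction limit from it, instead of hard-coding a pre-shrunk number. `auto_compact_limit` here is a hypothetical helper:

```rust
// Hypothetical sketch: keep the real context window in the model table and
// derive the auto-compaction limit, rather than baking in a reduced value.
#[allow(dead_code)]
struct ModelInfo {
    context_window: u64,    // the model's real advertised window
    max_output_tokens: u64,
}

impl ModelInfo {
    const fn new(context_window: u64, max_output_tokens: u64) -> Self {
        Self { context_window, max_output_tokens }
    }

    /// Usable window after reserving ~10% headroom, in integer math.
    const fn auto_compact_limit(&self) -> u64 {
        self.context_window / 10 * 9
    }
}

fn main() {
    let gpt5 = ModelInfo::new(272_000, 128_000);
    assert_eq!(gpt5.auto_compact_limit(), 244_800);
}
```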

]);
let post_auto_compact_turn = sse(vec![ev_completed_with_tokens("r4", 10)]);

let request_log = mount_sse_sequence(
Collaborator

nit: you can mount these one by one and only store the mocks you care about in variables. That avoids needing to index into request_log.

@jif-oai (Collaborator) left a comment

Could you also add either a better title or a description? It's pretty hard to know exactly what we are reviewing at the moment.

}
}

const fn apply_context_window_margin(context_window: u64) -> u64 {
Collaborator

General comment, but let's try to avoid using unsigned ints. Even if the value is non-negative, the modulo arithmetic is pretty different.
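A small demonstration of the concern (the helper names are hypothetical): with `u64`, subtraction near zero wraps around instead of going negative, so an overshoot either panics in debug builds or silently wraps in release builds, whereas `i64` keeps it representable:

```rust
// Hypothetical helpers showing why unsigned subtraction is hazardous here.
fn remaining_unsigned(context_window: u64, used: u64) -> u64 {
    // saturating_sub avoids the wrap, but silently discards the overshoot.
    context_window.saturating_sub(used)
}

fn remaining_signed(context_window: i64, used: i64) -> i64 {
    // With i64 the overshoot stays representable and can be checked for.
    context_window - used
}

fn main() {
    assert_eq!(remaining_unsigned(100, 150), 0);  // overshoot lost
    assert_eq!(remaining_signed(100, 150), -50);  // overshoot visible
    // Raw u64 subtraction would wrap modulo 2^64 to a huge value:
    assert_eq!(100u64.wrapping_sub(150), u64::MAX - 49);
}
```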

"gpt-4o-2024-05-13" => Some(ModelInfo::new(128_000, 4_096)),

// https://platform.openai.com/docs/models/gpt-4o?snapshot=gpt-4o-2024-11-20
"gpt-4o-2024-11-20" => Some(ModelInfo::new(128_000, 16_384)),
Collaborator

Are we sure all the models support compaction well? Mainly asking about the OSS models.

Collaborator (Author)

They don't have many options anyway; they are about to hit the error.

@aibrahim-oai changed the title from "compact" to "Auto compact at ~90%" on Oct 20, 2025
@aibrahim-oai merged commit 049a61b into main on Oct 20, 2025
20 checks passed
@aibrahim-oai deleted the compact-limits branch on October 20, 2025 at 18:29
@github-actions bot locked and limited conversation to collaborators on Oct 20, 2025