Auto compact at ~90% #5292
Conversation
💡 Codex Review
Here are some automated review suggestions for this pull request.
```diff
 _ if slug.starts_with("gpt-5-codex") => Some(ModelInfo::new(258000, 128_000)),
-_ if slug.starts_with("gpt-5") => Some(ModelInfo::new(272_000, 128_000)),
+_ if slug.starts_with("gpt-5") => Some(ModelInfo::new(258000, 128_000)),
```
Can we set the context size to the real value and infer the other one?
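A hypothetical sketch of what that suggestion could look like (the 90% factor and the `auto_compact_limit` method are assumptions for illustration, not the PR's actual code): store the model's real context window and derive the auto-compact threshold from it, instead of hard-coding a reduced window size.

```rust
// Hypothetical sketch: keep the true context window in ModelInfo and
// infer the compaction threshold, rather than storing a shrunken value.
#[allow(dead_code)]
struct ModelInfo {
    context_window: u64,
    max_output_tokens: u64,
}

impl ModelInfo {
    fn new(context_window: u64, max_output_tokens: u64) -> Self {
        Self { context_window, max_output_tokens }
    }

    /// Trigger auto-compaction at ~90% of the real window.
    fn auto_compact_limit(&self) -> u64 {
        self.context_window / 10 * 9
    }
}

fn main() {
    let info = ModelInfo::new(272_000, 128_000);
    // 90% of 272_000 is 244_800.
    assert_eq!(info.auto_compact_limit(), 244_800);
    println!("limit = {}", info.auto_compact_limit());
}
```

This keeps the documented window size as the single source of truth, so a margin change only touches one place.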
codex-rs/core/tests/suite/compact.rs (Outdated)
```rust
]);
let post_auto_compact_turn = sse(vec![ev_completed_with_tokens("r4", 10)]);

let request_log = mount_sse_sequence(
```
nit: you can mount these one by one and store only the mocks you care about in variables. That avoids needing to index into request_log.
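A self-contained toy illustrating the suggested test structure (all names here are invented; the real test uses an SSE mock server): register each mocked response on its own and keep a named handle only for the one you assert on, instead of collecting everything into one request log and indexing into it.

```rust
// Toy stand-in for a mock-server response handle; counts how many
// requests were routed to it.
#[derive(Default)]
struct Mock {
    hits: u32,
}

impl Mock {
    fn record_request(&mut self) {
        self.hits += 1;
    }
}

fn main() {
    // One mock per turn; only the auto-compact turn gets a meaningful name.
    let _first_turn = Mock::default();
    let mut auto_compact_turn = Mock::default();
    let _post_turn = Mock::default();

    // Simulate the conversation driving one request to the compact mock.
    auto_compact_turn.record_request();

    // Assert directly on the mock under test; no request_log[i] indexing.
    assert_eq!(auto_compact_turn.hits, 1);
    println!("ok");
}
```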
jif-oai left a comment
Could you also add either a better title or a description? It's pretty hard to know exactly what we are reviewing atm.
codex-rs/core/src/client.rs (Outdated)
```rust
    }
}

const fn apply_context_window_margin(context_window: u64) -> u64 {
```
General comment, but let's try to avoid using unsigned ints. Even if the value is non-negative, the modulo (wrapping) arithmetic behaves quite differently.
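A minimal sketch of why this matters here (the values are made up for illustration): subtracting a margin larger than the window goes cleanly negative with `i64` and can be clamped, while the same subtraction on `u64` panics in debug builds or wraps to a huge nonsense value in release builds.

```rust
fn main() {
    let context_window: i64 = 10_000;
    let margin: i64 = 12_000; // margin accidentally larger than the window

    // Signed arithmetic: the result goes negative and can be clamped to 0.
    let remaining = (context_window - margin).max(0);
    assert_eq!(remaining, 0);

    // The equivalent u64 subtraction wraps modulo 2^64 (a plain `-` would
    // panic in debug builds), producing an enormous bogus "remaining" value.
    let wrapped = (context_window as u64).wrapping_sub(margin as u64);
    assert!(wrapped > u64::MAX / 2);
    println!("remaining = {remaining}, wrapped = {wrapped}");
}
```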
| "gpt-4o-2024-05-13" => Some(ModelInfo::new(128_000, 4_096)), | ||
|
|
||
| // https://platform.openai.com/docs/models/gpt-4o?snapshot=gpt-4o-2024-11-20 | ||
| "gpt-4o-2024-11-20" => Some(ModelInfo::new(128_000, 16_384)), |
Are we sure all the models support compaction well? Mainly asking for the OSS models.
They don't have many options anyway; they are about to hit the error.
Users currently hit a context-window-exceeded error and usually don't know what to do. This change starts auto-compaction at ~90% of the context window.
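The ~90% trigger described above can be sketched as a simple threshold check (the function name and the exact comparison are illustrative assumptions, not the PR's actual API); multiplying before dividing avoids integer truncation:

```rust
// Illustrative sketch: fire auto-compaction once tokens used reach 90%
// of the model's context window.
fn should_auto_compact(tokens_used: u64, context_window: u64) -> bool {
    // tokens_used / window >= 9/10, rearranged to stay in integers.
    tokens_used * 10 >= context_window * 9
}

fn main() {
    let window = 272_000;
    assert!(!should_auto_compact(200_000, window)); // ~74% used: keep going
    assert!(should_auto_compact(245_000, window)); // past ~90%: compact now
    println!("ok");
}
```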