
Conversation

@danesherbs
Contributor

**What:** Adds support for `gpt-3.5-turbo-16k` to `n_ctx_from_model_name`.
**Why:** Currently `n_ctx_from_model_name` returns 4096 for `gpt-3.5-turbo-16k` instead of its 16,384-token context window.

Comment on lines +84 to +87
for model_prefix in {"gpt-3.5-turbo-", "gpt-4-"}:
    if model_name.startswith(model_prefix):
        return True

Contributor Author


We may also want to throw an error when there's no exact match, or be more restrictive about when prefix matching applies, to avoid similar errors in the future.
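A minimal sketch of the lookup shape this comment suggests: check exact matches first, fall back to prefix matching only for known families, and fail loudly otherwise. The table entries and the 8192 default for `gpt-4-` here are illustrative assumptions, not the actual evals implementation:

```python
# Illustrative sketch only -- context sizes below are assumptions
# for this example, not the evals library's actual tables.
N_CTX_BY_MODEL = {
    "gpt-3.5-turbo-16k": 16384,
    "gpt-3.5-turbo": 4096,
}

# Prefixes where falling back to a family default is considered safe.
PREFIX_N_CTX = {
    "gpt-3.5-turbo-": 4096,
    "gpt-4-": 8192,
}

def n_ctx_from_model_name(model_name: str) -> int:
    # 1. Prefer an exact match so variants like -16k aren't
    #    shadowed by a shorter prefix.
    if model_name in N_CTX_BY_MODEL:
        return N_CTX_BY_MODEL[model_name]
    # 2. Restrict prefix matching to known model families.
    for prefix, n_ctx in PREFIX_N_CTX.items():
        if model_name.startswith(prefix):
            return n_ctx
    # 3. Raise instead of silently returning a wrong default.
    raise ValueError(f"Unknown model name: {model_name!r}")
```

With this ordering, `gpt-3.5-turbo-16k` resolves via the exact-match table before the `gpt-3.5-turbo-` prefix can return 4096, which is exactly the bug this PR fixes.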

Contributor

@logankilpatrick logankilpatrick left a comment


Nice, thank you!

@logankilpatrick logankilpatrick merged commit bbe26f8 into openai:main Jan 3, 2024
jacobbieker pushed a commit to withmartian/-ARCHIVED--router-evals that referenced this pull request Jan 9, 2024
**What:** Adds support for `gpt-3.5-turbo-16k` to
`n_ctx_from_model_name`.
**Why:** Currently `n_ctx_from_model_name` returns 4096 for
`gpt-3.5-turbo-16k`.

Co-authored-by: Ian McKenzie <[email protected]>
Linmj-Judy pushed a commit to TablewareBox/evals that referenced this pull request Feb 27, 2024


2 participants