fix(providers): support per-model request_timeout in model_list #733
xiaket merged 4 commits into sipeed:main
Conversation
LGTM
Clean implementation:
- ✅ Backward compatible (<=0 uses default 120s)
- ✅ Progressive API design (new functions instead of signature changes)
- ✅ Good test coverage (parsing, default, override, propagation)
- ✅ Documentation updated (EN/CN README and migration guide)
Note for future: Ollama provider doesn't use openai_compat package, so local model timeouts need separate handling. This can be addressed in a follow-up.
```go
	return NewProviderWithMaxTokensFieldAndTimeout(apiKey, apiBase, proxy, maxTokensField, 0)
}

func NewProviderWithMaxTokensFieldAndTimeout(
```
I think more idiomatic Go would use the Functional Options pattern to configure the behaviour of a new struct, as it would make adding options easier.
This can be addressed as a future improvement and should not block merging this PR.
Just to be clear, I'm wishing for something like the following; please consider it pseudo code.
```go
func WithMaxTokensField(field string) Option {
	return func(p *Provider) {
		p.maxTokensField = field
	}
}

func WithRequestTimeout(timeout time.Duration) Option {
	return func(p *Provider) {
		p.httpClient.Timeout = timeout
	}
}
```
Can I bother you with the addition of these lines into other READMEs? AI generated translation is fine.
Thanks for the detailed review — all mentioned items are now addressed in the latest updates.

1) Functional Options for provider construction
Implemented for
2) request_timeout migration consistency (including Ollama path)
3) Sync docs across translated READMEs
Synced
This includes the Quick Start JSON field, explanatory notes, and the Custom Proxy/API example with validation.
Commits pushed:
@0xYiliu Per-model request_timeout is a really useful addition. Local models like Ollama can be significantly slower than cloud APIs, so the fixed 120s default was definitely causing unnecessary failures. The backward-compatible fallback logic is clean too. We're building a PicoClaw Dev Group on Discord for contributors to connect and collaborate. If you'd like to join, send an email to
…ed#733)

* fix(providers): support per-model request_timeout in model_list
* fix(lint): format provider constructors for golines
* refactor(providers): adopt functional options and preserve timeout migration
* docs(readme): sync request_timeout guidance across translated docs

---------

Co-authored-by: Yiliu <yiliu@affiliate-guide.com>
📝 Description
Add per-model `request_timeout` support for OpenAI-compatible HTTP providers to fix slow local model timeout failures (Issue #637). This keeps backward compatibility with the previous 120s default while allowing model-specific overrides from `model_list`.

🗣️ Type of Change
🤖 AI Code Generation
🔗 Related Issue
Fixes #637
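For context, a `model_list` entry using the new field might look like the fragment below. The surrounding field names and the seconds unit are assumptions based on the PR description, not a confirmed schema; check the project's `config.json` documentation for the real shape:

```json
{
  "model_list": [
    {
      "model_name": "qwen2.5:32b",
      "api_base": "http://localhost:11434/v1",
      "request_timeout": 600
    }
  ]
}
```

Omitting `request_timeout` (or setting it to `0` or a negative value) keeps the 120s default.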
📚 Technical Context (Skip for Docs)
- Make `request_timeout` configurable in `config.json` (#637)
- Add `request_timeout` to `ModelConfig` and propagate it through provider factory -> HTTP provider -> OpenAI-compatible provider constructor.
- Use `<= 0` fallback semantics to preserve existing default timeout behavior (120s).

🧪 Test Environment
- `model_list` (openai/anthropic-protocol branch in factory path)

📸 Evidence (Optional)
Click to view Logs/Screenshots
- `make check` (environment note: local runner lacks `golangci-lint` binary)
- `GOLANGCI_LINT=echo make check` passed

☑️ Checklist