fix(providers): support per-model request_timeout in model_list#733

Merged
xiaket merged 4 commits into sipeed:main from 0xYiliu:fix/issue-637-request-timeout on Feb 26, 2026

Conversation

Contributor

@0xYiliu 0xYiliu commented Feb 24, 2026

📝 Description

Add per-model request_timeout support for OpenAI-compatible HTTP providers to fix slow local model timeout failures (Issue #637). This keeps backward compatibility with the previous 120s default while allowing model-specific overrides from model_list.
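
As a sketch of the intended configuration (`request_timeout` and `model_list` come from this PR; the other field names and the seconds unit are assumptions for illustration, not the project's exact schema):

```json
{
  "model_list": [
    {
      "model_name": "local-slow-model",
      "api_base": "http://localhost:8000/v1",
      "request_timeout": 600
    },
    {
      "model_name": "gpt-4o",
      "api_base": "https://api.openai.com/v1"
    }
  ]
}
```

An entry that omits `request_timeout` keeps the previous 120s default.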

🗣️ Type of Change

  • 🐞 Bug fix (non-breaking change which fixes an issue)
  • ✨ New feature (non-breaking change which adds functionality)
  • 📖 Documentation update
  • ⚡ Code refactoring (no functional changes, no API changes)

🤖 AI Code Generation

  • 🤖 Fully AI-generated (100% AI, 0% Human)
  • 🛠️ Mostly AI-generated (AI draft, Human verified/modified)
  • 👨‍💻 Mostly Human-written (Human lead, AI assisted or none)

🔗 Related Issue

Fixes #637

📚 Technical Context (Skip for Docs)

🧪 Test Environment

  • Hardware: x86_64 PC
  • OS: Linux
  • Model/Provider: OpenAI-compatible providers via model_list (openai/anthropic-protocol branch in factory path)
  • Channels: N/A

📸 Evidence (Optional)

  • GOLANGCI_LINT=echo make check passed (the local runner lacks the golangci-lint binary, so the lint step was stubbed out):
    • deps verified
    • fmt target executed
    • go vet passed
    • go test ./... passed (including new timeout tests)

☑️ Checklist

  • My code/docs follow the style of this project.
  • I have performed a self-review of my own changes.
  • I have updated the documentation accordingly.

Collaborator

@yinwm yinwm left a comment


LGTM
Clean implementation:

  • ✅ Backward compatible (<=0 uses default 120s)
  • ✅ Progressive API design (new functions instead of signature changes)
  • ✅ Good test coverage (parsing, default, override, propagation)
  • ✅ Documentation updated (EN/CN README and migration guide)

Note for future: Ollama provider doesn't use openai_compat package, so local model timeouts need separate handling. This can be addressed in a follow-up.

      return NewProviderWithMaxTokensFieldAndTimeout(apiKey, apiBase, proxy, maxTokensField, 0)
  }

  func NewProviderWithMaxTokensFieldAndTimeout(
Collaborator


I think more idiomatic Go would use the Functional Options pattern to configure the behaviour of a new struct, as it would make adding options easier.

This can be addressed as a future improvement and should not block merging this PR.

Collaborator


Just to be clear, I'm wishing for something like the following; please consider the code below to be pseudocode.

  // An Option mutates a Provider during construction.
  type Option func(p *Provider)

  func WithMaxTokensField(field string) Option {
      return func(p *Provider) {
          p.maxTokensField = field
      }
  }

  func WithRequestTimeout(timeout time.Duration) Option {
      return func(p *Provider) {
          p.httpClient.Timeout = timeout
      }
  }

Collaborator


Can I bother you with the addition of these lines to the other READMEs? AI-generated translation is fine.

Contributor Author

0xYiliu commented Feb 25, 2026

Thanks for the detailed review — all mentioned items are now addressed in the latest updates.

1) Functional Options for provider construction

Implemented for openai_compat.Provider:

  • Added option pattern (WithMaxTokensField, WithRequestTimeout)
  • Kept backward compatibility by preserving existing constructor wrappers
  • Updated HTTP provider construction path to use options
  • Added tests covering option behavior and timeout semantics

2) request_timeout migration consistency (including Ollama path)

  • Added request_timeout support to legacy providers config (ProviderConfig)
  • Ensured migration from legacy providers -> model_list carries RequestTimeout across providers
  • Added migration test for timeout propagation

3) Sync docs across translated READMEs

Synced request_timeout guidance to:

  • README.ja.md
  • README.pt-br.md
  • README.vi.md
  • README.fr.md

This includes Quick Start JSON field, explanatory notes, and Custom Proxy/API example with request_timeout.

Validation:

  • make check passes locally after the changes.

Commits pushed:

  • 9d3f0af refactor(providers): adopt functional options and preserve timeout migration
  • 01ba912 docs(readme): sync request_timeout guidance across translated docs

@0xYiliu 0xYiliu requested a review from xiaket February 26, 2026 07:22
Collaborator

@xiaket xiaket left a comment


LGTM

@xiaket xiaket merged commit 438f764 into sipeed:main Feb 26, 2026
2 checks passed
Contributor

Orgmar commented Feb 27, 2026

@0xYiliu Per-model request_timeout is a really useful addition. Local models like Ollama can be significantly slower than cloud APIs, so the fixed 120s default was definitely causing unnecessary failures. The backward-compatible fallback logic is clean too.

We're building a PicoClaw Dev Group on Discord for contributors to connect and collaborate. If you'd like to join, send an email to support@sipeed.com with the subject [Join PicoClaw Dev Group] 0xYiliu and we'll send you the invite link.

liangzhang-keepmoving pushed a commit to liangzhang-keepmoving/picoclaw that referenced this pull request Mar 2, 2026
…ed#733)

* fix(providers): support per-model request_timeout in model_list

* fix(lint): format provider constructors for golines

* refactor(providers): adopt functional options and preserve timeout migration

* docs(readme): sync request_timeout guidance across translated docs

---------

Co-authored-by: Yiliu <yiliu@affiliate-guide.com>
hyperwd pushed a commit to hyperwd/picoclaw that referenced this pull request Mar 5, 2026
…ed#733)

* fix(providers): support per-model request_timeout in model_list

* fix(lint): format provider constructors for golines

* refactor(providers): adopt functional options and preserve timeout migration

* docs(readme): sync request_timeout guidance across translated docs

---------

Co-authored-by: Yiliu <yiliu@affiliate-guide.com>

Successfully merging this pull request may close these issues.

[Feature] Make hardcoded request_timeout configurable in config.json
