feat: support custom request headers and preserve reasoning_content #889
chanchann wants to merge 6 commits into sipeed:main
Conversation
nikolasdehor
left a comment
Good feature overall — custom headers and reasoning_content preservation are both valuable. A few concerns:

1. Security: header override — Custom headers are applied after Authorization via `Header.Set`, which means a config entry like `headers: {"Authorization": ...}` would silently override the API key. Consider skipping reserved headers (`Authorization`, `Content-Type`) or documenting this behavior.
2. Conflict with #876 — This PR and #876 both add `reasoning_content` to `openaiMessage` with different field names (`Reasoning` vs `ReasoningContent`). These need to be coordinated.
3. Title — The title suggests Kimi-specific support but the changes are generic. Consider renaming.
4. Tests — Please add tests for custom headers, reserved header override behavior, and `reasoning_content` round-tripping.
Thanks for the detailed review! Here's how I'm addressing each point:

1. Reserved header protection: Good catch. I'll skip reserved headers (`Authorization`, `Content-Type`) when applying custom headers.
2. Conflict with #876: I'll rename the Go struct field so the naming matches #876 and the two changes can be coordinated.
3. Title: Will rename to the generic "feat: support custom request headers and preserve reasoning_content".
4. Tests: Will add tests covering custom headers, reserved header override behavior, and `reasoning_content` round-tripping.

Will push an updated commit shortly.
@chanchann please fix Linter and tests
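For the tests requested in (4), one way to verify custom headers actually reach the wire is a round-trip through `httptest` (helper and names hypothetical; this only sketches the requested coverage):

```go
package main

import (
	"fmt"
	"net/http"
	"net/http/httptest"
)

// userAgentSeenByServer starts a throwaway server, sends one request with the
// given custom headers applied, and reports the User-Agent the server saw.
func userAgentSeenByServer(custom map[string]string) string {
	var got string
	srv := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		got = r.Header.Get("User-Agent")
	}))
	defer srv.Close()

	req, _ := http.NewRequest("GET", srv.URL, nil)
	for k, v := range custom {
		req.Header.Set(k, v)
	}
	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		panic(err)
	}
	resp.Body.Close()
	return got
}

func main() {
	fmt.Println(userAgentSeenByServer(map[string]string{"User-Agent": "claude-code/1.0.0"}))
}
```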
📝 Description
Add support for custom HTTP request headers in HTTP-based providers. Users can now specify arbitrary headers (e.g., `User-Agent`, vendor-specific tokens) per model entry in `config.json`. This is particularly useful for providers like Kimi for Coding that require a custom `User-Agent` header for API access.

Additionally, preserve `reasoning_content` across multi-turn conversations so that models which return chain-of-thought reasoning (e.g., DeepSeek-R1, Kimi) correctly echo it back in subsequent requests.

🗣️ Type of Change
🤖 AI Code Generation
🔗 Related Issue
📚 Technical Context
Reference URL: N/A
Reasoning:
Some HTTP-based LLM providers require non-standard headers beyond `Authorization` — for example, Kimi's coding endpoint expects a specific `User-Agent` to identify the client. Previously there was no way to inject arbitrary headers without modifying source code. This change threads a `headers` map from the config file all the way through to the HTTP request layer.

Models that perform chain-of-thought reasoning (e.g., DeepSeek-R1, Kimi) return a `reasoning_content` field alongside the main `content`. Without preserving it across turns, the model loses its reasoning context on every follow-up message, degrading response quality.

Changed files
- `pkg/config/config.go`: add `Headers map[string]string` field to `ModelConfig`; new `headers` key in `config.json` model entries
- `pkg/providers/openai_compat/provider.go`: add `customHeaders` field to `Provider`; add `WithCustomHeaders()` option; apply headers on every request; add `reasoning_content` to `openaiMessage` and populate it in `stripSystemParts`
- `pkg/providers/http_provider.go`: add `NewHTTPProviderWithOptions()` (supersedes `NewHTTPProviderWithMaxTokensFieldAndRequestTimeout`); old function becomes a thin wrapper
- `pkg/providers/factory_provider.go`: use `NewHTTPProviderWithOptions()` and pass `cfg.Headers`
- `pkg/tools/toolloop.go`: copy `ReasoningContent` from the response into the assistant message appended to history
Config example

```json
{
  "agents": {
    "defaults": { "model": "kimi-for-coding" }
  },
  "model_list": [
    {
      "model_name": "kimi-for-coding",
      "model": "openai/kimi-for-coding",
      "api_base": "https://api.kimi.com/coding/v1",
      "api_key": "sk-kimi-xxx",
      "headers": { "User-Agent": "claude-code/1.0.0" }
    }
  ]
}
```

The `headers` field is optional and omitted by default (`omitempty`), so existing configurations are unaffected.

🧪 Test Environment
openai/kimi-for-coding

📸 Evidence (Optional)
☑️ Checklist