Feature hasn't been suggested before.
Describe the enhancement you want to request
### Problem
When using OpenCode with a LiteLLM proxy, users must manually define every model in `opencode.json`. LiteLLM proxies already expose all available models via their OpenAI-compatible `/models` endpoint. If the proxy serves 20+ models, the config becomes tedious to maintain and goes stale as models are added or removed on the proxy side.
### Proposed solution
Add an `autoload: true` option that works alongside `litellmProxy: true`. When both are set, OpenCode fetches the list of available models from the proxy's `/models` endpoint at startup. When only `litellmProxy: true` is set (without `autoload`), models must be defined manually, as before.
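To make the fetch step concrete, here is a minimal sketch of turning an OpenAI-compatible `/models` response into the kind of models map shown in the manual config below. This is not OpenCode's actual implementation; `parseModelsList` and the type names are hypothetical.

```typescript
// Shape of an OpenAI-compatible /v1/models response, as emitted by
// LiteLLM proxies. Only the fields used here are declared.
interface ModelsResponse {
  object: string;
  data: { id: string }[];
}

// A single model entry as it would appear under "models" in opencode.json.
type ModelConfig = { name: string };

// Map each model id reported by the proxy into a config entry keyed by id,
// defaulting the display name to the id itself.
function parseModelsList(resp: ModelsResponse): Record<string, ModelConfig> {
  const models: Record<string, ModelConfig> = {};
  for (const m of resp.data) {
    models[m.id] = { name: m.id };
  }
  return models;
}
```

At startup, the result of `parseModelsList` would be merged into the provider's model list instead of requiring each entry by hand.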
Manual models only (`litellmProxy: true`):
```json
{
  "provider": {
    "MyLiteLLMProxy": {
      "npm": "@ai-sdk/openai-compatible",
      "name": "My LiteLLM Proxy",
      "options": {
        "baseURL": "https://litellm.example.com/v1",
        "litellmProxy": true
      },
      "models": {
        "gpt-4": { "name": "GPT-4" },
        "anthropic/claude-opus-4-6": { "name": "anthropic/claude-opus-4-6" },
        "deepseek-chat": { "name": "DeepSeek Chat" }
      }
    }
  }
}
```
Auto-load all models (`litellmProxy: true` + `autoload: true`):
```json
{
  "provider": {
    "my-proxy": {
      "npm": "@ai-sdk/openai-compatible",
      "name": "My LiteLLM Proxy",
      "options": {
        "baseURL": "https://litellm.example.com/v1",
        "litellmProxy": true,
        "autoload": true,
        "apiKey": "sk-key"
      }
    }
  }
}
```
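For context, a `/models` response from an OpenAI-compatible proxy looks roughly like this (model ids and `owned_by` values are illustrative):

```json
{
  "object": "list",
  "data": [
    { "id": "gpt-4", "object": "model", "owned_by": "openai" },
    { "id": "deepseek-chat", "object": "model", "owned_by": "deepseek" }
  ]
}
```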
Key behaviors:

- `litellmProxy: true` marks the provider as a LiteLLM proxy
- `autoload: true` opts into fetching models from the `/models` endpoint at startup
- Both must be set for auto-loading (or the provider ID contains "litellm" plus `autoload: true`)
- Manually configured models are never overridden (user config takes precedence)
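The precedence rule in the last point can be sketched in a few lines, assuming models are plain maps keyed by model id (`mergeModels` is a hypothetical helper, not existing OpenCode code): spreading the manual map last guarantees user entries win over autoloaded duplicates.

```typescript
type ModelConfig = { name: string };

// Merge autoloaded models with manually configured ones. Because `manual`
// is spread last, any model id defined in opencode.json overwrites the
// autoloaded entry with the same id; new autoloaded ids pass through.
function mergeModels(
  autoloaded: Record<string, ModelConfig>,
  manual: Record<string, ModelConfig>,
): Record<string, ModelConfig> {
  return { ...autoloaded, ...manual };
}
```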
### Why this belongs in OpenCode
- "Support for new providers" is listed as an accepted contribution type in CONTRIBUTING.md
- LiteLLM is a widely used proxy for teams running multiple LLM providers behind a single gateway
- This removes friction for self-hosted and enterprise setups where model lists change frequently
- Zero impact on existing users: `autoload` is opt-in alongside `litellmProxy`