Conversation

@ldc861117 (Owner)

This commit implements comprehensive support for custom VLM and Embedding model configurations, including local providers that don't require API keys.

Key Changes:

  • Enhanced LLMProvider enum with Ollama, LocalAI, LlamaCPP, and Custom providers
  • Added an is_api_key_optional() method for provider-aware API key validation (sketched after this list)
  • Unified validation and rollback mechanism for both VLM and Embedding clients
  • Improved settings.py to delegate API key validation to LLMClient
  • Added comprehensive LLM Configuration Guide with examples
  • Provided an Ollama example configuration file (an illustrative variant appears at the end of this description)
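
A minimal sketch of what the provider-aware check could look like. Only the LLMProvider and is_api_key_optional() names come from the change list above; the member values and the exact set of key-optional providers are assumptions.

```python
from enum import Enum


class LLMProvider(str, Enum):
    # Hosted provider shown for contrast (API key required); illustrative only.
    OPENAI = "openai"
    # Local providers added by this PR; the string values are assumed.
    OLLAMA = "ollama"
    LOCALAI = "localai"
    LLAMACPP = "llamacpp"
    CUSTOM = "custom"

    def is_api_key_optional(self) -> bool:
        """True for providers that can run without an API key."""
        return self in {
            LLMProvider.OLLAMA,
            LLMProvider.LOCALAI,
            LLMProvider.LLAMACPP,
            LLMProvider.CUSTOM,
        }
```

With something like this in place, settings validation can skip the API-key check whenever provider.is_api_key_optional() is true, instead of rejecting every empty key.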

Benefits:

  • Support for local LLM providers without API keys (Ollama, LocalAI, etc.)
  • Graceful rollback on reinitialization failures (see the pattern sketched below)
  • More flexible and extensible provider support
  • Better error messages and validation
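
A rough sketch of the save/validate/restore pattern behind the rollback; every name here (ClientHolder, build, validate) is hypothetical, since the PR does not show the actual LLMClient internals.

```python
from dataclasses import dataclass
from typing import Any, Callable


@dataclass
class ClientHolder:
    """Stands in for whatever object owns the live VLM/Embedding client."""
    client: Any = None


def reinitialize(holder: ClientHolder,
                 build: Callable[[], Any],
                 validate: Callable[[Any], None]) -> None:
    """Swap in a new client, restoring the old one if anything fails."""
    previous = holder.client
    try:
        candidate = build()        # may raise on a bad config
        validate(candidate)        # e.g. a lightweight ping or model-list call
        holder.client = candidate  # commit only after validation passes
    except Exception:
        holder.client = previous   # graceful rollback to the last good client
        raise
```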

Fixes: API key validation that prevented the use of local models
Addresses: User request for custom VLM and embedding model configuration
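
As an illustration of a key-less local setup (this is not the example file shipped with the PR; the field names and model choices are assumptions, though http://localhost:11434 is Ollama's default port):

```python
# Illustrative settings for pointing both clients at a local Ollama server.
vlm_settings = {
    "provider": "ollama",
    "base_url": "http://localhost:11434/v1",
    "model": "llava",              # an Ollama vision model, assumed pulled locally
    "api_key": None,               # accepted, since Ollama's key is optional
}

embedding_settings = {
    "provider": "ollama",
    "base_url": "http://localhost:11434/v1",
    "model": "nomic-embed-text",   # an Ollama embedding model
    "api_key": None,
}
```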

Co-authored-by: Droid <[email protected]>
@ldc861117 merged commit 67721e8 into main on Oct 18, 2025
@ldc861117 deleted the feature/flexible-llm-config branch on October 18, 2025 at 01:16