[FEAT]: Allow Multiple Connections to the Same Type of LLM Provider #4667
base: master
Conversation
Phase 1: Database schema and backend models

- Add llm_connections table to support multiple provider configs
- Add chatConnectionId/agentConnectionId fields to workspaces table
- Implement LLMConnection model with full CRUD operations
- Add LLMConfigEncryption utility for secure API key storage
- Update Workspace model to support new connection fields
- Add comprehensive unit tests for the LLMConnection model
- Maintain backward compatibility with existing chatProvider fields

This enables workspaces to use different LLM provider instances (e.g., multiple LiteLLM proxies with different API keys) for access control and billing purposes.

Related to: Mintplex-Labs#4493
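For orientation, here is a minimal sketch of the create path this schema implies. The import paths, the encrypt() signature, and the Prisma model name are assumptions; only LLMConnection, LLMConfigEncryption, and the llm_connections table come from the commit itself.

```js
const prisma = require("../utils/prisma"); // assumed client import path
const { LLMConfigEncryption } = require("../utils/LLMConfigEncryption"); // assumed path

const LLMConnection = {
  // Encrypt sensitive provider config (API keys etc.) before persisting a
  // row to the new llm_connections table.
  create: async function ({ name, provider, config = {} }) {
    const encryptedConfig = LLMConfigEncryption.encrypt(JSON.stringify(config));
    return await prisma.llm_connections.create({
      data: { name, provider, config: encryptedConfig },
    });
  },
};

module.exports = { LLMConnection };
```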
Phase 2: Provider Refactoring

- Update getLLMProvider() to accept connectionId, connection, or config parameters
- Refactor the function to be async for the database connection lookup
- Update LiteLLM provider to accept a config object as third parameter
- Update Ollama provider to accept a config object as third parameter
- Add authToken encryption for the Ollama provider
- Maintain backward compatibility with environment-variable mode
- All 30+ providers now support an optional config parameter

Providers can now be instantiated in three ways (sketched below):

1. NEW: With connectionId (loads from the llm_connections table)
2. NEW: With a config object (direct configuration)
3. LEGACY: With a provider string (reads from environment variables)

This enables workspaces to use different provider instances with their own credentials and endpoints.

Related to: Mintplex-Labs#4493
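The three modes might look like this in practice. The options-object signature is an assumption; the parameter names come from the commit:

```js
// 1. NEW: by connection id (async lookup against the llm_connections table)
const fromConnection = await getLLMProvider({ connectionId: 3 });

// 2. NEW: by direct config object (no database lookup)
const fromConfig = await getLLMProvider({
  provider: "litellm",
  config: { basePath: "http://localhost:4000", apiKey: "sk-..." },
});

// 3. LEGACY: by provider string (reads from environment variables)
const fromEnv = await getLLMProvider({ provider: "ollama" });
```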
Phase 3: Backend API

Create /v1/llm-connections endpoints for CRUD operations:

- GET /v1/llm-connections - list all connections (admin only)
- GET /v1/llm-connections/:id - get a single connection
- POST /v1/llm-connections/new - create a new connection
- POST /v1/llm-connections/:id/update - update a connection
- DELETE /v1/llm-connections/:id - soft-delete a connection
- POST /v1/llm-connections/:id/set-default - set as default
- POST /v1/llm-connections/:id/test - test a connection

All endpoints are protected with admin role validation, sensitive fields (API keys) are redacted in responses, and proper error handling and validation are in place throughout.

Admins can now manage LLM connections via the REST API with encrypted credential storage and workspace protection; a usage sketch follows.

Related to: Mintplex-Labs#4493
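A hypothetical call against the create endpoint, assuming a JSON body with the fields named across these commits and bearer-token admin auth:

```js
const res = await fetch("/v1/llm-connections/new", {
  method: "POST",
  headers: {
    "Content-Type": "application/json",
    Authorization: `Bearer ${adminToken}`, // admin role required
  },
  body: JSON.stringify({
    name: "Billing proxy",
    provider: "litellm",
    config: { basePath: "https://litellm.internal:4000", apiKey: "sk-..." },
  }),
});
const { connection } = await res.json(); // API keys come back redacted
```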
Implements the admin interface for creating, editing, and managing multiple LLM provider connections.

Changes:
- Add System.llmConnections API client with full CRUD operations (see the sketch after this list)
- Create LLMConnections settings page with table view
- Create ConnectionRow component for displaying connections
- Create ConnectionModal for create/edit with dynamic forms
- Create DynamicConfigForm for provider-specific configuration
- Add /settings/llm-connections route to App.jsx
- Add "LLM Connections" navigation item to the settings sidebar
- Add llmConnections() path to utils/paths.js

Features:
- List all LLM connections with provider, default status, and dates
- Create new connections with provider-specific config forms
- Edit existing connections (name and provider locked)
- Delete connections (prevented for default connections)
- Set a connection as the default for its provider
- Test-connection functionality
- Support for LiteLLM, Ollama, OpenAI, and Anthropic providers
- API key masking and secure handling
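As a rough illustration of the client surface, assuming it mirrors the REST endpoints above; every method name below is a guess, only the System.llmConnections namespace comes from the commit:

```js
import System from "@/models/system"; // assumed frontend import path

async function demo() {
  const connections = await System.llmConnections.list();
  console.log(`Found ${connections.length} connections`);

  const created = await System.llmConnections.create({
    name: "Team proxy",
    provider: "litellm",
    config: { basePath: "http://localhost:4000", apiKey: "sk-..." },
  });
  await System.llmConnections.setDefault(created.id);
}
```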
…tch models

- Removed separate LLM Connections page and merged it into LLM Preference
- Added unified table showing System Default + user connections
- Implemented test connection button with model auto-discovery
- Auto-fetch models when basePath is provided
- Auto-select first model if none selected
- Fixed authentication middleware (validApiKey → validatedRequest)
- Standardized provider configs to use "basePath" consistently
- Matched styling to ApiKeys page (border-white/10, text-xs)
- Removed unused routes and menu items
- fetchModels() now returns the models array for immediate checking
- Test connection shows accurate success/fail status using the returned value
- Removed auto-fetch error toasts (only manual tests show errors)
- 10-second timeout maintained for connection tests
- Properly re-throws errors for handleTestConnection to catch
LiteLLM requires the API key to be sent via the X-Litellm-Key header instead of the standard Authorization: Bearer header.

Changes:
- Updated liteLLMModels() to use the X-Litellm-Key custom header
- Updated the LiteLLM provider class to use X-Litellm-Key for both connection-based config and legacy env-var config
- Uses a dummy key to satisfy the OpenAI SDK requirement while actual auth happens via the custom header

This fixes test connection and model discovery for LiteLLM connections.
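A sketch of that header arrangement using the OpenAI SDK's defaultHeaders option; the exact header value format (for example, whether a Bearer prefix is included) is an assumption to verify against the PR:

```js
const { OpenAI } = require("openai");

// The OpenAI SDK insists on an apiKey, so a dummy value satisfies the
// constructor while the real credential travels in the custom header.
const client = new OpenAI({
  baseURL: "http://localhost:4000/v1",
  apiKey: "dummy-key",
  // Value format assumed; the commit only specifies the header name.
  defaultHeaders: { "X-Litellm-Key": `Bearer ${process.env.LITE_LLM_API_KEY}` },
});
```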
- Test connection now debounced to 500ms to prevent rapid clicking
- Added persistent connection status indicator with emoji icons
- Shows success/fail with descriptive message explaining why
- Success shows: "✅ Connected • Successfully connected to {provider} • Found X models"
- Failure shows: "❌ Connection Failed • {error message}"
- Status clears when basePath or apiKey changes
- Green background for success, red for failure
- Uses CheckCircle/XCircle icons from phosphor
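A minimal sketch of the 500ms debounce described above, with hypothetical fetchModels() and setStatus() helpers standing in for the component's real ones:

```js
let debounceTimer = null;

function handleTestConnection() {
  clearTimeout(debounceTimer); // collapse rapid clicks into a single test
  debounceTimer = setTimeout(async () => {
    try {
      const models = await fetchModels(); // returns the models array (see above)
      setStatus({ ok: true, message: `Found ${models.length} models` });
    } catch (error) {
      setStatus({ ok: false, message: error.message }); // shown in red
    }
  }, 500);
}
```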
Major enhancements to the LLM connections management interface.

Frontend improvements:
- Add searchable provider selection dropdown with logos and descriptions
- Support all 31 LLM providers (OpenAI, Anthropic, Ollama, LiteLLM, etc.)
- Implement test connection button that validates before saving
- Add visual connection status indicators with success/error feedback
- Make connection modal form scrollable with improved spacing
- Add side-by-side layout for model selection and token limit fields
- Enable connection name editing for existing connections
- Add provider-specific field configurations in a separate module
- Implement useSavedApiKey flag for editing connections with redacted keys

Backend improvements:
- Fix Prisma null parameter issue in the llmConnection.where() method (see the sketch after this list)
- Add support for using stored API keys when testing edited connections
- Add connectionId and useSavedApiKey parameters to the custom-models endpoint
- Improve LiteLLM provider with self-signed certificate support
- Auto-append /v1 path for LiteLLM endpoints
- Add detailed logging for connection testing and model fetching

Bug fixes:
- Fix connections not appearing in the list (Prisma take: null error)
- Fix browser caching issues with cache-busting timestamps
- Resolve circular dependency by extracting provider configs
- Fix API key redaction when testing existing connections
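The take: null fix likely amounts to omitting the pagination key when it is unset, since the Prisma client rejects `take: null`. A sketch, with the import path and function name assumed:

```js
const prisma = require("../utils/prisma"); // assumed client import path

async function whereConnections(clause = {}, limit = null) {
  return await prisma.llm_connections.findMany({
    where: clause,
    ...(limit !== null ? { take: limit } : {}), // omit `take` entirely when unset
  });
}
```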
Update workspace LLM settings to use the new LLM connections system.

Features added:
- Fetch and display available LLM connections in workspace settings
- Show connections alongside the "System Default" option in the dropdown
- Add a "Create New Connection" option that opens the connections page
- Save chatConnectionId when selecting a connection
- Display connection info with the default model when selected
- Add an optional model override field for connections
- Maintain backward compatibility with chatProvider for system LLMs

UI improvements:
- Loading state while fetching connections
- Empty state when no connections are available
- Connection badge showing which connection is in use
- Model override input for fine-grained control

Backend integration:
- Use the chatConnectionId field from the workspace schema
- Clear chatProvider when a connection is selected
- Support chatModelOverride for connection-specific overrides
- Add getLLMProviderForWorkspace utility to centralize connection lookup (see the sketch after this list)
- Update all chat workflows (stream, embed, API) to use connections
- Add agent support for agentConnectionId and agentModelOverride
- Update workspace and agent LLM selection UI to show connections
- Fix ***REDACTED*** encryption bug when editing connections
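A hedged sketch of the centralized lookup; the chatConnectionId and chatModelOverride fields come from the commits, while the fallback wiring is an assumption:

```js
async function getLLMProviderForWorkspace(workspace) {
  if (workspace.chatConnectionId) {
    // Connection-based path: credentials come from the llm_connections table
    return await getLLMProvider({
      connectionId: workspace.chatConnectionId,
      model: workspace.chatModelOverride ?? undefined,
    });
  }
  // Legacy path: provider string resolved against environment variables
  return await getLLMProvider({ provider: workspace.chatProvider });
}
```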
- Install Jest v30.2.0 as a dev dependency
- Add jest.config.js with test environment and coverage settings
- Add jest.setup.js to configure test environment variables and storage
- Add "test" script to package.json
- Update openaiCompatible.test.js to use getLLMProviderForWorkspace

All 12 test suites now pass, with 90 tests passing.
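An illustrative jest.config.js consistent with that description; the actual settings in the PR may differ:

```js
module.exports = {
  testEnvironment: "node",
  setupFiles: ["<rootDir>/jest.setup.js"], // test env vars + storage location
  collectCoverage: true,
  coverageDirectory: "coverage",
  testMatch: ["**/__tests__/**/*.test.js", "**/?(*.)test.js"],
};
```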
- Remove debug console.log statements from endpoints/system.js
- Convert TODO comment to an informative note in the llm-connections API
- Apply linter formatting to modified files
- Update default provider from litellm to openai
- Affects new connection creation in the LLM connections UI
I understand the want here; we know it is a desired feature, but as you noted, it's a pretty massive paradigm shift in how we manage the selected LLM. From a cursory review, there are a ton of changes unrelated to the core of what this PR should be:
Some other notes:
Either way, this is a really large PR because it has to be, but in its current state this is such a massive change that it is going to need some love for sure. I'll dive into this in the future and see what we can do.
Big thanks @timothycarambat! I have addressed a few of the easy comments you left. For migration testing, I'm happy to assist; if you had a checklist of scenarios I could follow, that would be super handy.
Pull Request Type
Relevant Issues
resolves #4493
What is in this change?
Firstly, I apologise for the size of this one. I appreciate that it is a bit of a paradigm shift.
This PR implements support for multiple LLM provider connections per workspace, resolving issue #4493.
Problem: AnythingLLM currently restricts users to a single instance per LLM provider type (e.g., one Ollama server, one LiteLLM proxy). Users must manually reconfigure settings to switch between different instances, creating friction for teams managing multiple deployments.
Solution: Introduces a connection-based architecture that allows:
- Multiple named connections per provider type, with API keys stored encrypted
- Per-workspace connection selection for both chat (chatConnectionId) and agents (agentConnectionId), with optional model overrides
- Admin-only management of connections via the settings UI and the /v1/llm-connections REST API
- Full backward compatibility with the existing environment-variable configuration
This is especially useful for enterprise use cases; for example, I use LiteLLM proxies with different API keys for access control and billing purposes.
Additional Information
Developer Validations
I ran `yarn lint` from the root of the repo & committed changes