Conversation

@socialviolation commented Nov 20, 2025

Pull Request Type

  • ✨ feat

Relevant Issues

resolves #4493

What is in this change?

Firstly, I apologise for the size of this one. I appreciate that it is a bit of a paradigm shift.

This PR implements support for multiple LLM provider connections per workspace, resolving issue #4493.

Problem: AnythingLLM currently restricts users to a single instance per LLM provider type (e.g., one Ollama server, one LiteLLM proxy). Users must manually reconfigure settings to switch between instances, which creates friction for teams managing multiple deployments.

Solution: Introduces a connection-based architecture that allows:

  • Admins to pre-configure multiple LLM connections (e.g., multiple Ollama servers with different API keys/URLs)
  • Workspace managers to select specific connections for chat and agent models
  • Different workspaces to simultaneously use different instances of the same provider type
[screenshot: 2025-11-20_21-30-07]
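
For a sense of the data model, a stored connection might look roughly like the object below. This is illustrative only; field names are inferred from the commit messages further down, not the exact schema.

```js
// Hypothetical shape of a stored LLM connection (field names are
// inferred from this PR's commit messages, not the actual schema).
const exampleConnection = {
  id: 1,
  name: "Team A LiteLLM",
  provider: "litellm",
  isDefault: false,
  config: {
    basePath: "https://litellm.internal.example.com/v1",
    apiKey: "<stored encrypted, redacted in API responses>",
    model: "gpt-4o",
  },
};
```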

This is especially useful for enterprise use cases, as I use LiteLLM to:

  • abstract API keys away from teams
  • assign cost controls to teams
  • assign/hide MCP servers and tools for specific teams
  • have the option of using multiple accounts from the same provider, so rate limits hit by one team don't affect all 'OpenAI' teams

Developer Validations

  • I ran yarn lint from the root of the repo & committed changes
  • Relevant documentation has been updated
  • I have tested my code functionality
  • Docker build succeeds locally

socialviolation and others added 16 commits November 18, 2025 20:13
Phase 1: Database schema and backend models

- Add llm_connections table to support multiple provider configs
- Add chatConnectionId/agentConnectionId fields to workspaces table
- Implement LLMConnection model with full CRUD operations
- Add LLMConfigEncryption utility for secure API key storage
- Update Workspace model to support new connection fields
- Add comprehensive unit tests for LLMConnection model
- Maintain backward compatibility with existing chatProvider fields

This enables workspaces to use different LLM provider instances
(e.g., multiple LiteLLM proxies with different API keys) for
access control and billing purposes.

Related to: Mintplex-Labs#4493

πŸ€– Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <[email protected]>
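
The Phase 1 commit mentions an LLMConfigEncryption utility for API keys at rest. A minimal sketch of such a helper, assuming Node's built-in crypto with AES-256-GCM and a key derived from an app secret; the PR's actual key derivation and payload layout may differ.

```js
// Sketch of an API-key encryption helper like the LLMConfigEncryption
// utility described above. The key source (SIG_KEY) is an assumption.
const crypto = require("crypto");

const KEY = crypto
  .createHash("sha256")
  .update(process.env.SIG_KEY || "dev-only-secret")
  .digest(); // 32-byte key derived from an app-level secret

function encrypt(plaintext) {
  const iv = crypto.randomBytes(12); // 96-bit nonce, standard for GCM
  const cipher = crypto.createCipheriv("aes-256-gcm", KEY, iv);
  const enc = Buffer.concat([cipher.update(plaintext, "utf8"), cipher.final()]);
  const tag = cipher.getAuthTag();
  return [iv, tag, enc].map((b) => b.toString("base64")).join(".");
}

function decrypt(payload) {
  const [iv, tag, enc] = payload.split(".").map((s) => Buffer.from(s, "base64"));
  const decipher = crypto.createDecipheriv("aes-256-gcm", KEY, iv);
  decipher.setAuthTag(tag);
  return Buffer.concat([decipher.update(enc), decipher.final()]).toString("utf8");
}
```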
Phase 2: Provider Refactoring

- Update getLLMProvider() to accept connectionId, connection, or config parameters
- Refactor function to be async for database connection lookup
- Update LiteLLM provider to accept config object as third parameter
- Update Ollama provider to accept config object as third parameter
- Add authToken encryption for Ollama provider
- Maintain backward compatibility with environment variable mode
- All 30+ providers now support optional config parameter

Providers can now be instantiated in three ways:
1. NEW: With connectionId (loads from llm_connections table)
2. NEW: With config object (direct configuration)
3. LEGACY: With provider string (reads from environment variables)

This enables workspaces to use different provider instances with
their own credentials and endpoints.

Related to: Mintplex-Labs#4493

πŸ€– Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <[email protected]>
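
As a sketch of the three call paths the Phase 2 commit describes: module paths, the LLMConnection lookup, and the provider registry are simplified stand-ins for the real code.

```js
// Sketch of the async getLLMProvider() dispatch described in Phase 2.
const LLMConnection = require("../models/llmConnection"); // path assumed
const { OllamaAILLM } = require("../AiProviders/ollama"); // path assumed

const PROVIDERS = { ollama: OllamaAILLM /* , ...30+ more providers */ };

async function getLLMProvider({ provider, connectionId, config, model } = {}) {
  let connection = null;
  // 1. NEW: load a stored connection from the llm_connections table.
  if (connectionId) connection = await LLMConnection.get({ id: connectionId });

  // 2. NEW: a direct config object, or the one from the resolved connection.
  const resolvedConfig = config ?? connection?.config ?? null;

  // 3. LEGACY: no config at all, so the provider class reads env vars itself.
  const name = connection?.provider ?? provider ?? process.env.LLM_PROVIDER;
  const ProviderClass = PROVIDERS[name];
  if (!ProviderClass) throw new Error(`Unknown LLM provider: ${name}`);
  return new ProviderClass(null, model, resolvedConfig); // config as 3rd arg per this PR
}
```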
Phase 3: Backend API

- Create /v1/llm-connections endpoints for CRUD operations
- GET /v1/llm-connections - List all connections (admin only)
- GET /v1/llm-connections/:id - Get single connection
- POST /v1/llm-connections/new - Create new connection
- POST /v1/llm-connections/:id/update - Update connection
- DELETE /v1/llm-connections/:id - Soft delete connection
- POST /v1/llm-connections/:id/set-default - Set as default
- POST /v1/llm-connections/:id/test - Test connection
- All endpoints protected with admin role validation
- Sensitive fields (API keys) redacted in responses
- Proper error handling and validation

Admins can now manage LLM connections via REST API with
encrypted credential storage and workspace protection.

Related to: Mintplex-Labs#4493

πŸ€– Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <[email protected]>
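
A quick usage example against these endpoints. Only the endpoint paths come from the commit; the /api prefix, auth header, and response shape are assumptions.

```js
// Example of driving the new admin endpoints with fetch (Node 18+).
const BASE = "http://localhost:3001/api/v1/llm-connections";
const headers = {
  "Content-Type": "application/json",
  Authorization: "Bearer <admin api key>",
};

async function demo() {
  // Create a connection
  const { connection } = await fetch(`${BASE}/new`, {
    method: "POST",
    headers,
    body: JSON.stringify({
      name: "Team A LiteLLM",
      provider: "litellm",
      config: { basePath: "https://litellm.internal:4000", apiKey: "sk-..." },
    }),
  }).then((r) => r.json());

  // Validate it works, then promote it to the default for its provider
  await fetch(`${BASE}/${connection.id}/test`, { method: "POST", headers });
  await fetch(`${BASE}/${connection.id}/set-default`, { method: "POST", headers });
}
```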
Implements the admin interface for creating, editing, and managing
multiple LLM provider connections.

Changes:
- Add System.llmConnections API client with full CRUD operations
- Create LLMConnections settings page with table view
- Create ConnectionRow component for displaying connections
- Create ConnectionModal for create/edit with dynamic forms
- Create DynamicConfigForm for provider-specific configuration
- Add /settings/llm-connections route to App.jsx
- Add "LLM Connections" navigation item to settings sidebar
- Add llmConnections() path to utils/paths.js

Features:
- List all LLM connections with provider, default status, and dates
- Create new connections with provider-specific config forms
- Edit existing connections (name and provider locked)
- Delete connections (prevented for default connections)
- Set connection as default for its provider
- Test connection functionality
- Support for LiteLLM, Ollama, OpenAI, and Anthropic providers
- API key masking and secure handling

πŸ€– Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <[email protected]>
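
The System.llmConnections client mentioned above presumably wraps those endpoints; a rough sketch, with the real client's auth headers omitted.

```js
// Sketch of the System.llmConnections client shape implied above.
// Paths mirror the Phase 3 endpoints; everything else is assumed.
const API_BASE = "/api/v1/llm-connections";
const baseHeaders = () => ({ "Content-Type": "application/json" }); // real client adds auth

const llmConnections = {
  all: async () => (await fetch(API_BASE, { headers: baseHeaders() })).json(),
  create: async (data) =>
    (
      await fetch(`${API_BASE}/new`, {
        method: "POST",
        headers: baseHeaders(),
        body: JSON.stringify(data),
      })
    ).json(),
  delete: async (id) =>
    (await fetch(`${API_BASE}/${id}`, { method: "DELETE", headers: baseHeaders() })).ok,
};
```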
…tch models

- Removed separate LLM Connections page and merged into LLM Preference
- Added unified table showing System Default + user connections
- Implemented test connection button with model auto-discovery
- Auto-fetch models when basePath is provided
- Auto-select first model if none selected
- Fixed authentication middleware (validApiKey β†’ validatedRequest)
- Standardized provider configs to use "basePath" consistently
- Matched styling to ApiKeys page (border-white/10, text-xs)
- Removed unused routes and menu items

πŸ€– Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <[email protected]>
- fetchModels() now returns the models array for immediate checking
- Test connection shows accurate success/fail status using returned value
- Removed auto-fetch error toasts (only manual test shows errors)
- 10 second timeout maintained for connection tests
- Properly re-throws errors for handleTestConnection to catch

πŸ€– Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <[email protected]>
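
A sketch of the fetchModels() change this commit describes; the URL shape and headers are assumptions (LiteLLM gets a custom header in the next commit).

```js
// The model list is returned, not just stored in state, so
// handleTestConnection can verify success directly.
async function fetchModels(basePath, apiKey) {
  const res = await fetch(`${basePath}/models`, {
    headers: { Authorization: `Bearer ${apiKey}` },
  });
  // Re-thrown so the manual test button can show an accurate failure.
  if (!res.ok) throw new Error(`Model fetch failed: HTTP ${res.status}`);
  const { data } = await res.json(); // OpenAI-compatible { data: [...] } shape
  return data; // callers can immediately check data.length
}
```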
LiteLLM requires the API key to be sent via the X-Litellm-Key header
instead of the standard Authorization: Bearer header.

Changes:
- Updated liteLLMModels() to use X-Litellm-Key custom header
- Updated LiteLLM provider class to use X-Litellm-Key for both
  connection-based config and legacy env var config
- Uses dummy-key for OpenAI SDK requirement while actual auth
  happens via custom header

This fixes test connection and model discovery for LiteLLM connections.

πŸ€– Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <[email protected]>
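
A sketch of what this looks like with the openai v4 SDK, which accepts defaultHeaders. The exact header value format follows this PR's description rather than LiteLLM docs, so treat it as an assumption.

```js
// The real key rides in X-Litellm-Key while the SDK's mandatory
// apiKey option gets a dummy value.
const { OpenAI } = require("openai");

function liteLLMClient({ basePath, apiKey }) {
  return new OpenAI({
    baseURL: basePath, // e.g. "https://litellm.internal:4000/v1"
    apiKey: "dummy-key", // satisfies the SDK; not used for auth
    defaultHeaders: { "X-Litellm-Key": apiKey }, // value format (raw vs "Bearer ...") may vary by proxy config
  });
}
```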
- Test connection now debounced to 500ms to prevent rapid clicking
- Added persistent connection status indicator with emoji icons
- Shows success/fail with descriptive message explaining why
- Success shows: "βœ“ Connected β€’ Successfully connected to {provider} β€’ Found X models"
- Failure shows: "βœ— Connection Failed β€’ {error message}"
- Status clears when basePath or apiKey changes
- Green background for success, red for failure
- Uses CheckCircle/XCircle icons from phosphor

πŸ€– Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <[email protected]>
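
The debounce itself is simple; a minimal version matching the 500 ms behavior described above.

```js
// Hand-rolled trailing-edge debounce: rapid clicks collapse into one
// call 500 ms after the last click.
function debounce(fn, wait = 500) {
  let timer = null;
  return (...args) => {
    clearTimeout(timer);
    timer = setTimeout(() => fn(...args), wait);
  };
}

// Usage: wrap the click handler (the body here is a stand-in).
const handleTestConnection = debounce(() => {
  console.log("testing connection...");
}, 500);
```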
Major enhancements to the LLM connections management interface:

Frontend improvements:
- Add searchable provider selection dropdown with logos and descriptions
- Support all 31 LLM providers (OpenAI, Anthropic, Ollama, LiteLLM, etc.)
- Implement test connection button that validates before saving
- Add visual connection status indicators with success/error feedback
- Make connection modal form scrollable with improved spacing
- Add side-by-side layout for model selection and token limit fields
- Enable connection name editing for existing connections
- Add provider-specific field configurations in separate module
- Implement useSavedApiKey flag for editing connections with redacted keys

Backend improvements:
- Fix Prisma null parameter issue in llmConnection.where() method
- Add support for using stored API keys when testing edited connections
- Add connectionId and useSavedApiKey parameters to custom-models endpoint
- Improve LiteLLM provider with self-signed certificate support
- Auto-append /v1 path for LiteLLM endpoints
- Add detailed logging for connection testing and model fetching

Bug fixes:
- Fix connections not appearing in list (Prisma take: null error)
- Fix browser caching issues with cache-busting timestamps
- Resolve circular dependency by extracting provider configs
- Fix API key redaction when testing existing connections

πŸ€– Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <[email protected]>
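
The Prisma `take: null` fix noted above follows a common pattern: only include pagination arguments when they are actually set. A sketch, with the client import path and soft-delete field assumed.

```js
// Prisma's findMany() throws if take/orderBy are passed as null,
// so only spread them in when provided.
const prisma = require("../utils/prisma"); // path assumed

async function where(clause = {}, limit = null, orderBy = null) {
  return prisma.llm_connections.findMany({
    where: { ...clause, deletedAt: null }, // exclude soft-deleted rows (field assumed)
    ...(limit !== null ? { take: limit } : {}),
    ...(orderBy !== null ? { orderBy } : {}),
  });
}
```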
Update workspace LLM settings to use the new LLM connections system:

Features added:
- Fetch and display available LLM connections in workspace settings
- Show connections alongside "System Default" option in dropdown
- Add "Create New Connection" option that opens connections page
- Save chatConnectionId when selecting a connection
- Display connection info with default model when selected
- Add optional model override field for connections
- Maintain backward compatibility with chatProvider for system LLMs

UI improvements:
- Loading state while fetching connections
- Empty state when no connections available
- Connection badge showing which connection is in use
- Model override input for fine-grained control

Backend integration:
- Use chatConnectionId field from workspace schema
- Clear chatProvider when connection is selected
- Support chatModelOverride for connection-specific overrides

πŸ€– Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <[email protected]>
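
A sketch of the save path this implies when a user picks a connection for chat. Field semantics follow the commit; the helper itself and the Workspace.update call are hypothetical stand-ins.

```js
const Workspace = require("../models/workspace"); // path assumed

async function selectChatConnection(workspace, connection, modelOverride = null) {
  return Workspace.update(workspace.id, {
    chatConnectionId: connection.id,
    chatProvider: null, // legacy field cleared so the connection takes precedence
    chatModelOverride: modelOverride, // optional per-workspace override
  });
}
```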
- Add getLLMProviderForWorkspace utility to centralize connection lookup
- Update all chat workflows (stream, embed, API) to use connections
- Add agent support for agentConnectionId and agentModelOverride
- Update workspace and agent LLM selection UI to show connections
- Fix ***REDACTED*** encryption bug when editing connections

πŸ€– Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <[email protected]>
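
A rough shape for the getLLMProviderForWorkspace utility described here, reusing the getLLMProvider sketch from Phase 2 above; the fallback order is an assumption.

```js
// Field and function names follow the commits; fallback logic assumed.
async function getLLMProviderForWorkspace(workspace, { isAgent = false } = {}) {
  const connectionId = isAgent
    ? workspace.agentConnectionId
    : workspace.chatConnectionId;
  const modelOverride = isAgent
    ? workspace.agentModelOverride
    : workspace.chatModelOverride;

  // Prefer an explicit connection; otherwise fall back to the legacy
  // per-workspace provider, then the system default from the env.
  if (connectionId) return getLLMProvider({ connectionId, model: modelOverride });
  return getLLMProvider({
    provider: workspace.chatProvider ?? process.env.LLM_PROVIDER,
    model: workspace.chatModel,
  });
}
```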
- Install Jest v30.2.0 as dev dependency
- Add jest.config.js with test environment and coverage settings
- Add jest.setup.js to configure test environment variables and storage
- Add "test" script to package.json
- Update openaiCompatible.test.js to use getLLMProviderForWorkspace

All 12 test suites now pass with 90 tests passing.

πŸ€– Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <[email protected]>
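
A jest.config.js along these lines; the actual coverage settings in the PR may differ.

```js
// jest.config.js (sketch)
module.exports = {
  testEnvironment: "node",
  setupFiles: ["<rootDir>/jest.setup.js"], // env vars + storage paths
  collectCoverage: true,
  collectCoverageFrom: ["models/**/*.js", "utils/**/*.js"],
  testMatch: ["**/__tests__/**/*.test.js", "**/?(*.)test.js"],
};
```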
- Remove debug console.log statements from endpoints/system.js
- Convert TODO comment to informative note in llm-connections API
- Apply linter formatting to modified files
- Update default provider from litellm to openai
- Affects new connection creation in LLM connections UI

πŸ€– Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <[email protected]>
@timothycarambat (Member) commented:

I understand the want here; we know it is a desired feature, but as you noted, it's a pretty massive paradigm shift in how we manage the selected LLM. From a cursory review, there are a ton of changes unrelated to the core of what this PR should be:

  • Testing setup config as opposed to just isolated unit tests
  • LiteLLM unsigned certificate changes

Some other notes:

  • This does not touch the onboarding screens, but I presume there must be some logic for migration or backward compatibility. It might work out of the box for a new install, but we have about 5M installs or more that could wind up with a broken instance if this feature is not backward compatible enough for them to pull the update and be instantly up to date.
  • UI looks OK from what I can currently see

Either way, this is a really large PR because it has to be, but in its current state it is such a massive change that it is going to need some love for sure. I'll dive into this in the future and see what we can do.

@socialviolation (Author) commented:

Big thanks @timothycarambat! I've addressed a few of the easy comments you left.

For migration testing, I'm happy to assist. If you have a checklist of scenarios I could follow, that would be super handy.

Successfully merging this pull request may close issue #4493: [FEAT]: Allow Multiple Connections to the Same Type of LLM Provider.