[FEAT]: Allow Multiple Connections to the Same Type of LLM Provider #4667
socialviolation wants to merge 24 commits into Mintplex-Labs:master from
Conversation
Phase 1: Database schema and backend models

- Add llm_connections table to support multiple provider configs
- Add chatConnectionId/agentConnectionId fields to workspaces table
- Implement LLMConnection model with full CRUD operations
- Add LLMConfigEncryption utility for secure API key storage
- Update Workspace model to support new connection fields
- Add comprehensive unit tests for LLMConnection model
- Maintain backward compatibility with existing chatProvider fields

This enables workspaces to use different LLM provider instances (e.g., multiple LiteLLM proxies with different API keys) for access control and billing purposes.

Related to: Mintplex-Labs#4493

🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude <noreply@anthropic.com>
Phase 2: Provider refactoring

- Update getLLMProvider() to accept connectionId, connection, or config parameters
- Refactor function to be async for database connection lookup
- Update LiteLLM provider to accept config object as third parameter
- Update Ollama provider to accept config object as third parameter
- Add authToken encryption for Ollama provider
- Maintain backward compatibility with environment variable mode
- All 30+ providers now support optional config parameter

Providers can now be instantiated in three ways:

1. NEW: With connectionId (loads from llm_connections table)
2. NEW: With config object (direct configuration)
3. LEGACY: With provider string (reads from environment variables)

This enables workspaces to use different provider instances with their own credentials and endpoints.

Related to: Mintplex-Labs#4493

🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude <noreply@anthropic.com>
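The selection order among those instantiation paths can be sketched as a simple priority cascade. This is a hypothetical illustration only: the helper name `resolveProviderSource` and the returned shape are made up here, not the actual signature in `server/utils/helpers/index.js`.

```javascript
// Illustrative sketch of the resolution priority the commit describes.
async function resolveProviderSource({
  connectionId = null, // 1. NEW: load config from the llm_connections table
  connection = null,   // 2. NEW: an already-loaded connection record
  config = null,       // 3. NEW: a direct config object
  provider = null,     // 4. LEGACY: provider string backed by env vars
} = {}) {
  if (connectionId !== null) return { mode: "connectionId" };
  if (connection !== null) return { mode: "connection" };
  if (config !== null) return { mode: "config" };
  return {
    mode: "legacy",
    provider: provider ?? process.env.LLM_PROVIDER ?? "openai",
  };
}
```

The function is async because the `connectionId` path needs a database lookup; the legacy path keeps the old env-variable fallback so existing deployments behave unchanged.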
Phase 3: Backend API

- Create /v1/llm-connections endpoints for CRUD operations
  - GET /v1/llm-connections - List all connections (admin only)
  - GET /v1/llm-connections/:id - Get single connection
  - POST /v1/llm-connections/new - Create new connection
  - POST /v1/llm-connections/:id/update - Update connection
  - DELETE /v1/llm-connections/:id - Soft delete connection
  - POST /v1/llm-connections/:id/set-default - Set as default
  - POST /v1/llm-connections/:id/test - Test connection
- All endpoints protected with admin role validation
- Sensitive fields (API keys) redacted in responses
- Proper error handling and validation

Admins can now manage LLM connections via REST API with encrypted credential storage and workspace protection.

Related to: Mintplex-Labs#4493

🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude <noreply@anthropic.com>
Implements the admin interface for creating, editing, and managing multiple LLM provider connections.

Changes:
- Add System.llmConnections API client with full CRUD operations
- Create LLMConnections settings page with table view
- Create ConnectionRow component for displaying connections
- Create ConnectionModal for create/edit with dynamic forms
- Create DynamicConfigForm for provider-specific configuration
- Add /settings/llm-connections route to App.jsx
- Add "LLM Connections" navigation item to settings sidebar
- Add llmConnections() path to utils/paths.js

Features:
- List all LLM connections with provider, default status, and dates
- Create new connections with provider-specific config forms
- Edit existing connections (name and provider locked)
- Delete connections (prevented for default connections)
- Set connection as default for its provider
- Test connection functionality
- Support for LiteLLM, Ollama, OpenAI, and Anthropic providers
- API key masking and secure handling

🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude <noreply@anthropic.com>
…tch models

- Removed separate LLM Connections page and merged into LLM Preference
- Added unified table showing System Default + user connections
- Implemented test connection button with model auto-discovery
- Auto-fetch models when basePath is provided
- Auto-select first model if none selected
- Fixed authentication middleware (validApiKey → validatedRequest)
- Standardized provider configs to use "basePath" consistently
- Matched styling to ApiKeys page (border-white/10, text-xs)
- Removed unused routes and menu items

🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude <noreply@anthropic.com>
- fetchModels() now returns the models array for immediate checking
- Test connection shows accurate success/fail status using returned value
- Removed auto-fetch error toasts (only manual test shows errors)
- 10 second timeout maintained for connection tests
- Properly re-throws errors for handleTestConnection to catch

🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude <noreply@anthropic.com>
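The return-and-rethrow pattern described above can be sketched as follows. All names here are illustrative stand-ins (the real code lives in the frontend connection form), and `fetchJson` is an assumed placeholder for the actual HTTP request:

```javascript
// Sketch: fetchModels returns the models array so callers can inspect it
// immediately; errors propagate up instead of being swallowed.
async function fetchModels(fetchJson) {
  try {
    const { models = [] } = await fetchJson();
    return models; // returned so the caller can check the result right away
  } catch (e) {
    // No error toast here; re-throw so handleTestConnection can report it.
    throw e;
  }
}

async function handleTestConnection(fetchJson) {
  try {
    const models = await fetchModels(fetchJson);
    return { ok: true, message: `Found ${models.length} models` };
  } catch (e) {
    return { ok: false, message: e.message };
  }
}
```

The key point is that only the manual test path surfaces errors to the user; background auto-fetching stays silent.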
LiteLLM requires the API key to be sent via the X-Litellm-Key header instead of the standard Authorization: Bearer header.

Changes:
- Updated liteLLMModels() to use X-Litellm-Key custom header
- Updated LiteLLM provider class to use X-Litellm-Key for both connection-based config and legacy env var config
- Uses dummy-key for OpenAI SDK requirement while actual auth happens via custom header

This fixes test connection and model discovery for LiteLLM connections.

🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude <noreply@anthropic.com>
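The header arrangement can be sketched like this. `litellmClientOptions` is a hypothetical helper (not the PR's actual code), and the exact value format expected in `X-Litellm-Key` should be verified against the LiteLLM proxy's own configuration:

```javascript
// Sketch: build client options for a LiteLLM proxy where auth goes through
// a custom header rather than the standard Authorization header.
function litellmClientOptions(basePath, apiKey) {
  return {
    baseURL: basePath,
    // The OpenAI SDK insists on an apiKey value, but LiteLLM ignores it
    // here; real authentication happens via the custom header below.
    apiKey: "dummy-key",
    defaultHeaders: { "X-Litellm-Key": apiKey },
  };
}
```

These options would then be spread into the OpenAI-compatible client constructor so every request carries the custom header.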
- Test connection now debounced to 500ms to prevent rapid clicking
- Added persistent connection status indicator with emoji icons
- Shows success/fail with descriptive message explaining why
- Success shows: "✓ Connected • Successfully connected to {provider} • Found X models"
- Failure shows: "✗ Connection Failed • {error message}"
- Status clears when basePath or apiKey changes
- Green background for success, red for failure
- Uses CheckCircle/XCircle icons from phosphor
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude <noreply@anthropic.com>
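The 500 ms debounce on the test button can be sketched generically; this is an illustrative utility, not the component's actual code:

```javascript
// Sketch: collapse rapid repeated clicks into one invocation after the
// trailing quiet period.
function debounce(fn, waitMs = 500) {
  let timer = null;
  return (...args) => {
    clearTimeout(timer); // restart the window on every rapid click
    timer = setTimeout(() => fn(...args), waitMs);
  };
}
```

In the form, the wrapped handler would be the one that fires the connection test, so hammering the button only issues a single request.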
Major enhancements to the LLM connections management interface:

Frontend improvements:
- Add searchable provider selection dropdown with logos and descriptions
- Support all 31 LLM providers (OpenAI, Anthropic, Ollama, LiteLLM, etc.)
- Implement test connection button that validates before saving
- Add visual connection status indicators with success/error feedback
- Make connection modal form scrollable with improved spacing
- Add side-by-side layout for model selection and token limit fields
- Enable connection name editing for existing connections
- Add provider-specific field configurations in separate module
- Implement useSavedApiKey flag for editing connections with redacted keys

Backend improvements:
- Fix Prisma null parameter issue in llmConnection.where() method
- Add support for using stored API keys when testing edited connections
- Add connectionId and useSavedApiKey parameters to custom-models endpoint
- Improve LiteLLM provider with self-signed certificate support
- Auto-append /v1 path for LiteLLM endpoints
- Add detailed logging for connection testing and model fetching

Bug fixes:
- Fix connections not appearing in list (Prisma take: null error)
- Fix browser caching issues with cache-busting timestamps
- Resolve circular dependency by extracting provider configs
- Fix API key redaction when testing existing connections

🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude <noreply@anthropic.com>
Update workspace LLM settings to use the new LLM connections system:

Features added:
- Fetch and display available LLM connections in workspace settings
- Show connections alongside "System Default" option in dropdown
- Add "Create New Connection" option that opens connections page
- Save chatConnectionId when selecting a connection
- Display connection info with default model when selected
- Add optional model override field for connections
- Maintain backward compatibility with chatProvider for system LLMs

UI improvements:
- Loading state while fetching connections
- Empty state when no connections available
- Connection badge showing which connection is in use
- Model override input for fine-grained control

Backend integration:
- Use chatConnectionId field from workspace schema
- Clear chatProvider when connection is selected
- Support chatModelOverride for connection-specific overrides

🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude <noreply@anthropic.com>
- Add getLLMProviderForWorkspace utility to centralize connection lookup
- Update all chat workflows (stream, embed, API) to use connections
- Add agent support for agentConnectionId and agentModelOverride
- Update workspace and agent LLM selection UI to show connections
- Fix ***REDACTED*** encryption bug when editing connections

🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude <noreply@anthropic.com>
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude <noreply@anthropic.com>
- Install Jest v30.2.0 as dev dependency
- Add jest.config.js with test environment and coverage settings
- Add jest.setup.js to configure test environment variables and storage
- Add "test" script to package.json
- Update openaiCompatible.test.js to use getLLMProviderForWorkspace

All 12 test suites now pass with 90 tests passing.

🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude <noreply@anthropic.com>
- Remove debug console.log statements from endpoints/system.js
- Convert TODO comment to informative note in llm-connections API
- Apply linter formatting to modified files
- Update default provider from litellm to openai
- Affects new connection creation in LLM connections UI

🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude <noreply@anthropic.com>
I understand the want here; we know it is a desired feature, but as you noted, it's a pretty massive paradigm shift in how we manage the selected LLM. From a cursory review, there are a ton of unrelated changes to the core of what this PR should be:
Some other notes:
Either way, this is a really large PR because it has to be, but in its current state, this is such a massive change that it is going to need some love for sure. I'll dive into this in the future and see what we can do.
Big thanks @timothycarambat! Have addressed a few of the easy comments you left. For migration testing - I'm happy to assist. If you had a checklist of scenarios I could follow, that would be super handy.
Brings in 23 commits from upstream including:

- New LLM providers: ZAi, GiteeAI
- New embedding engine: OpenRouter
- New data connector: PaperlessNgx
- Ollama className property and improvements
- Anthropic, Bedrock, and agent provider updates
- Various bug fixes and locale updates

Conflicts resolved:
- server/utils/AiProviders/ollama/index.js: merged config-based constructor with upstream's className property
- server/utils/helpers/index.js: added new zai/giteeai providers using connection-based providerConfig pattern

🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude <noreply@anthropic.com>
The production deployment uses PostgreSQL, not SQLite. The schema must use PostgreSQL as the datasource provider for the Docker image to work correctly with the DATABASE_URL environment variable.

🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude <noreply@anthropic.com>
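For reference, a Prisma datasource block for that setup would look like the following. This is a sketch of the standard Prisma schema syntax, not necessarily the exact block in this PR:

```prisma
datasource db {
  provider = "postgresql"
  url      = env("DATABASE_URL")
}
```

Switching `provider` here is exactly the change flagged later in the review as a risk for existing SQLite deployments, since Prisma requires the datasource provider and the migration history to agree.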
The migration lock file must match the datasource provider.

🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude <noreply@anthropic.com>
- Add http.Agent for HTTP URLs in liteLLMModels() for model fetching
- Add http.Agent for HTTP URLs in LiteLLM class constructor for chat completions
- Maintains https.Agent with rejectUnauthorized:false for HTTPS URLs
- Allows internal service connections without TLS (e.g., http://litellm:4000)
Changes Summary

This PR introduces a connection-based architecture for managing multiple LLM provider connections per workspace. It adds a new database model (llm_connections), API endpoints for CRUD operations on connections, frontend UI for managing connections, and modifies workspace settings to use connection IDs instead of legacy provider strings. Sensitive API keys are encrypted at rest using a dedicated LLMConfigEncryption utility.

Type: feature

Components Affected: Database schema (Prisma), LLMConnection model, API endpoints (/v1/llm-connections), Frontend LLM preference pages, Workspace settings, Agent configuration, Chat handlers (stream, embed, api), LLM provider helper utilities, LiteLLM and Ollama providers

Architecture Impact
Risk Areas:

- Database migration complexity - adding foreign keys and new table requires careful rollout
- PostgreSQL datasource change in Prisma schema may break existing SQLite deployments
- Encryption key management - LLMConfigEncryption relies on EncryptionManager which needs proper key setup
- API key redaction logic (REDACTED) must preserve original values correctly during updates
- Connection deletion blocked when workspaces are using it - could create orphan handling issues
- Backward compatibility with legacy provider/model fields during transition period
- Admin-only access control on connection management endpoints

Suggestions
Full review in progress...

Powered by diffray
    const isHttps = baseURL?.startsWith('https://');
    return isHttps
      ? new https.Agent({ rejectUnauthorized: false })
      : new http.Agent();
    };
🟠 HIGH - HTTP agent allows insecure connections
Category: security
Description:
Creates HTTP agents for plaintext connections and sets rejectUnauthorized: false, disabling certificate validation which allows MITM attacks.
Suggestion:
Remove rejectUnauthorized: false from https.Agent configuration. For self-signed certs, use NODE_EXTRA_CA_CERTS or proper CA configuration.
Confidence: 95%
Rule: soc2_enforce_tls_and_hsts
    // Determine if HTTP or HTTPS and create appropriate agent
    const isHttps = baseURL?.startsWith('https://');
    const agent = isHttps
      ? new https.Agent({ rejectUnauthorized: false })
      : new http.Agent();
🟠 HIGH - HTTP agent disables certificate validation
Category: security
Description:
liteLLMModels function creates HTTP agents with rejectUnauthorized: false for HTTPS connections, disabling SSL certificate validation.
Suggestion:
Remove rejectUnauthorized: false. For self-signed certificates, use proper CA configuration instead.
Confidence: 95%
Rule: soc2_enforce_tls_and_hsts
     * @param {{provider: string | null, model: string | null} | null} params - Initialize params for LLMs provider
     * @returns {BaseLLMProvider}
     */
    function getLLMProvider({ provider = null, model = null } = {}) {
      const LLMSelection = provider ?? process.env.LLM_PROVIDER ?? "openai";
    async function getLLMProvider({
🟡 MEDIUM - JSDoc description contradicts function behavior
Category: docs
Description:
JSDoc says 'via system or via defined provider' but function actually handles connectionId, connection, config, OR legacy provider (4 priority paths).
Suggestion:
Update JSDoc to: 'Returns the LLMProvider based on connection, config, or legacy provider settings, with its embedder attached.'
Confidence: 80%
Rule: ts_jsdoc_description_mismatch
     * @param {{provider: string | null, model: string | null} | null} params - Initialize params for LLMs provider
     * @returns {BaseLLMProvider}
     */
🟡 MEDIUM - JSDoc @param doesn't match function signature
Category: docs
Description:
@param documents only 'provider' and 'model' but function signature has 5 destructured parameters.
Suggestion:
Add @param documentation for connectionId, connection, and config parameters.
Confidence: 90%
Rule: ts_jsdoc_param_mismatch
    /**
     * Gets the LLM provider for a workspace, prioritizing connections over legacy provider settings
     * @param {Object} workspace - The workspace object
     * @param {string} [type='chat'] - The type of provider to get ('chat' or 'agent')
     * @param {string} [modelOverride] - Optional model override (takes precedence over workspace settings)
     * @returns {Promise<BaseLLMProvider>}
     */
🟡 MEDIUM - JSDoc @param missing parameters
Category: docs
Description:
@param documents only 'workspace' but function has 3 parameters: workspace, type='chat', and modelOverride=null.
Suggestion:
Add @param for type and modelOverride: @param {string} [type='chat'] @param {string} [modelOverride=null]
Confidence: 85%
Rule: ts_jsdoc_param_mismatch
    availableLLMs={allOptions}
    settings={settings}
    checked={selectedLLM === llm.value}
    onClick={() => updateLLMChoice(llm.value)}
🔵 LOW - Inline function in loop creates new references per item
Category: performance
Description:
Inside filteredLLMs.map(), an inline arrow function onClick={() => updateLLMChoice(llm.value)} creates a new function reference for each list item on every render.
Suggestion:
For lists with many items, consider passing llm.value to a memoized handler or using a component that accepts value as a prop and handles the click internally.
Confidence: 62%
Rule: react_inline_function_props
    it("should create a connection with encrypted API key", async () => {
      const mockConfig = {
        basePath: "http://test.com",
        apiKey: "sk-test-123",
        defaultModel: "gpt-3.5-turbo",
      };

      const mockCreatedConnection = {
        id: 1,
        name: "Test LiteLLM",
        provider: "litellm",
        config: JSON.stringify({
          basePath: "http://test.com",
          apiKey: "encrypted-value",
          apiKey_encrypted: true,
          defaultModel: "gpt-3.5-turbo",
        }),
        isDefault: false,
        isActive: true,
        createdAt: new Date(),
        lastUpdatedAt: new Date(),
      };

      prisma.llm_connections.create.mockResolvedValue(mockCreatedConnection);
      prisma.llm_connections.updateMany.mockResolvedValue({ count: 0 });

      const { connection, error } = await LLMConnection.create({
        name: "Test LiteLLM",
        provider: "litellm",
        config: mockConfig,
      });

      expect(error).toBeNull();
      expect(connection).toBeDefined();
      expect(connection.name).toBe("Test LiteLLM");
      expect(prisma.llm_connections.create).toHaveBeenCalled();

      // Verify config was encrypted before saving
      const createCall = prisma.llm_connections.create.mock.calls[0][0];
      const savedConfig = JSON.parse(createCall.data.config);
      expect(savedConfig.apiKey).not.toBe("sk-test-123");
    });
🟠 HIGH - Test doesn't verify encryption actually occurred
Category: quality
Description:
The test checks savedConfig.apiKey is not equal to plain text but doesn't verify apiKey_encrypted flag is set to true, or that the encryption format is valid.
Suggestion:
Add assertion: expect(savedConfig.apiKey_encrypted).toBe(true) to verify the encryption flag was properly set during create.
Confidence: 75%
Rule: test_missing_assertion
    it("should encrypt config when updating", async () => {
      const mockExisting = {
        id: 1,
        provider: "litellm",
        config: JSON.stringify({ basePath: "http://old.com" }),
      };

      prisma.llm_connections.findUnique.mockResolvedValue(mockExisting);
      prisma.llm_connections.update.mockResolvedValue({
        ...mockExisting,
        config: JSON.stringify({
          basePath: "http://new.com",
          apiKey: "encrypted-new",
          apiKey_encrypted: true,
        }),
      });

      const { connection, error } = await LLMConnection.update(1, {
        config: {
          basePath: "http://new.com",
          apiKey: "sk-new-key",
        },
      });

      expect(error).toBeNull();
      const updateCall = prisma.llm_connections.update.mock.calls[0][0];
      const savedConfig = JSON.parse(updateCall.data.config);
      expect(savedConfig.apiKey).not.toBe("sk-new-key");
    });
🟠 HIGH - Weak assertion for encryption verification on update
Category: quality
Description:
Similar to create test, update test only verifies apiKey is not equal to plain text but doesn't verify apiKey_encrypted flag is set.
Suggestion:
Add assertion: expect(savedConfig.apiKey_encrypted).toBe(true) to verify the encryption flag was properly set during update.
Confidence: 75%
Rule: test_missing_assertion
    describe("get", () => {
      it("should retrieve and decrypt a connection", async () => {
        const mockConnection = {
          id: 1,
          name: "Test LiteLLM",
          provider: "litellm",
          config: JSON.stringify({
            basePath: "http://test.com",
            apiKey: "encrypted-value",
            apiKey_encrypted: true,
            defaultModel: "gpt-3.5-turbo",
          }),
          isDefault: false,
          isActive: true,
          createdAt: new Date(),
          lastUpdatedAt: new Date(),
        };

        prisma.llm_connections.findUnique.mockResolvedValue(mockConnection);

        const connection = await LLMConnection.get(1);

        expect(connection).toBeDefined();
        expect(connection.id).toBe(1);
        expect(connection.name).toBe("Test LiteLLM");
        expect(prisma.llm_connections.findUnique).toHaveBeenCalledWith({
          where: { id: 1 },
        });
      });
🟡 MEDIUM - Test doesn't verify decryption actually occurred
Category: quality
Description:
The test retrieves a connection with encrypted config but only verifies id and name, not that config was properly decrypted.
Suggestion:
Add assertions to verify connection.config.apiKey is the decrypted value (not encrypted) and apiKey_encrypted flag is removed or false.
Confidence: 70%
Rule: test_missing_assertion
    it("should prevent deletion of connection in use", async () => {
      prisma.workspaces.count
        .mockResolvedValueOnce(2) // chatConnectionId count
        .mockResolvedValueOnce(0); // agentConnectionId count

      const { success, error } = await LLMConnection.delete(1);

      expect(success).toBe(false);
      expect(error).toContain("in use by 2 workspace");
      expect(prisma.llm_connections.update).not.toHaveBeenCalled();
    });
🟡 MEDIUM - Missing test for agentConnectionId preventing deletion
Category: quality
Description:
The delete prevention test only checks chatConnectionId > 0, but implementation also checks agentConnectionId. Model code at lines 269-270 checks both.
Suggestion:
Add a test case where chatConnectionId count is 0 but agentConnectionId count is > 0 to verify deletion is prevented when only agent usage exists.
Confidence: 80%
Rule: test_optional_field_edge_cases
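The edge case can be illustrated with a stand-in for the usage check. This is a hypothetical simplification: the real model runs two prisma.workspaces.count queries, whereas here the counts are injected directly:

```javascript
// Sketch of the deletion guard: both chat and agent usage must be zero
// before a connection may be removed.
async function canDeleteConnection({ chatCount, agentCount }) {
  const inUse = chatCount + agentCount;
  if (inUse > 0)
    return { success: false, error: `Connection is in use by ${inUse} workspace(s)` };
  return { success: true, error: null };
}
```

A test covering the agent-only case would pass `chatCount: 0` with a nonzero `agentCount` and assert that deletion is still refused.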
Review Summary
Validated 88 issues: 33 kept, 55 filtered.
Issues Found: 33. See 25 individual line comment(s) for details.
📊 19 unique issue type(s) across 33 location(s)

📋 Full issue list

🟠 HIGH - HTTP agent allows insecure connections (2 occurrences)
Category: security
Rule: soc2_enforce_tls_and_hsts

🟠 HIGH - Floating promise in OllamaAILLM constructor
Category: bug
Description: cacheContextWindows is an async method called without await in constructor. Errors will be silently swallowed.
Suggestion: Add error handling:
Confidence: 90%

🟠 HIGH - Module-level mutable state - modelContextWindows
Category: bug
Description: Static mutable object shared across all instances can cause race conditions in concurrent scenarios and test pollution.
Suggestion: Add a clear/reset method for testing. Document that this is intentional shared state for caching.
Confidence: 75%

🟠 HIGH - Test doesn't verify encryption actually occurred (4 occurrences)
Category: quality

🟡 MEDIUM - JSDoc description contradicts function behavior
Category: docs
Description: JSDoc says 'via system or via defined provider' but function actually handles connectionId, connection, config, OR legacy provider (4 priority paths).
Suggestion: Update JSDoc to: 'Returns the LLMProvider based on connection, config, or legacy provider settings, with its embedder attached.'
Confidence: 80%

🟡 MEDIUM - JSDoc @param doesn't match function signature (2 occurrences)
Category: docs

🟡 MEDIUM - JSDoc @returns doesn't match actual return type (2 occurrences)
Category: docs

🟡 MEDIUM - Sequential database count queries for deletion check (3 occurrences)
Category: performance

🟡 MEDIUM - N+1 query pattern in whereWithUsers
Category: performance
Description: The whereWithUsers method iterates over workspaces and executes a separate database query for each workspace to fetch user IDs. For N workspaces, this executes N+1 queries.
Suggestion: Use Prisma's include option to eager-load workspace_users in the initial query, reducing from O(N) queries to O(1).
Confidence: 95%

🟡 MEDIUM - Encryption manager instantiated in loop
Category: performance
Description: In the where method, a new LLMConfigEncryption instance is created outside the map but is only used inside map. However, looking at actual code, the encryptionManager is created at line 117 BEFORE the map call, so this is NOT instantiated in the loop.
Suggestion: The current code already has the encryptionManager outside the map function. No change needed.

🟡 MEDIUM - Synchronous file write in request handler (3 occurrences)
Category: performance

🟡 MEDIUM - Incorrect HTTP status code for POST creation (2 occurrences)
Category: quality

🟡 MEDIUM - Missing dependency in useEffect
Category: bug
Description: The useEffect hook references 'onConnectionStatusChange' in the hook body but doesn't include it in the dependency array. This can cause stale closures where the effect uses an outdated version of the function.
Suggestion: Add 'onConnectionStatusChange' to the dependency array: [config.basePath, config.apiKey, onConnectionStatusChange]
Confidence: 80%

🟡 MEDIUM - API key retrieval pattern should ensure proper access control (2 occurrences)
Category: security

🟡 MEDIUM - Missing test for agentConnectionId preventing deletion
Category: quality
Description: The delete prevention test only checks chatConnectionId > 0, but implementation also checks agentConnectionId. Model code at lines 269-270 checks both.
Suggestion: Add a test case where chatConnectionId count is 0 but agentConnectionId count is > 0 to verify deletion is prevented when only agent usage exists.
Confidence: 80%

🔵 LOW - Console.error in production code
Category: quality
Description: Console.error used for error logging without structured logging. Error is re-thrown so handling exists, but logging could be improved.
Suggestion: Consider using a structured logging service or remove console.error since error is re-thrown.
Confidence: 70%

🔵 LOW - Hardcoded URL path suffix (2 occurrences)
Category: quality

🔵 LOW - Magic string constant for redacted API key check
Category: quality
Description: Line 40 checks if config.apiKey === 'REDACTED' to determine if the saved API key should be used. This magic string comparison is fragile.
Suggestion: Use a defined constant for the redacted value: const REDACTED_PLACEHOLDER = 'REDACTED';
Confidence: 70%

🔵 LOW - Inline function in loop creates new references per item (2 occurrences)
Category: performance

Powered by diffray - AI Code Review Agent
Pull Request Type
Relevant Issues
resolves #4493
What is in this change?
Firstly, I apologise for the size of this one. I appreciate that it is a bit of a paradigm shift.
This PR implements support for multiple LLM provider connections per workspace, resolving issue #4493.
Problem: AnythingLLM currently restricts users to a single instance per LLM provider type (e.g., one Ollama server, one LiteLLM proxy). Users must manually reconfigure settings to switch between different instances, creating friction for teams managing multiple deployments.
Solution: Introduces a connection-based architecture that allows:
This is especially useful for enterprise use cases, as I use LiteLLM to:
Additional Information
Developer Validations
yarn lint from the root of the repo & committed changes