
[FEAT]: Allow Multiple Connections to the Same Type of LLM Provider #4667

Open
socialviolation wants to merge 24 commits into Mintplex-Labs:master from socialviolation:master

Conversation

@socialviolation

@socialviolation socialviolation commented Nov 20, 2025

Pull Request Type

  • ✨ feat
  • 🐛 fix
  • ♻️ refactor
  • 💄 style
  • 🔨 chore
  • 📝 docs

Relevant Issues

resolves #4493

What is in this change?

Firstly, I apologise for the size of this one. I appreciate that it is a bit of a paradigm shift.

This PR implements support for multiple LLM provider connections per workspace, resolving issue #4493.

Problem: AnythingLLM currently restricts users to a single instance per LLM provider type (e.g., one Ollama server, one LiteLLM proxy). Users must manually reconfigure settings to switch between different instances, creating friction for teams managing multiple deployments.

Solution: Introduces a connection-based architecture that allows:

  • Admins to pre-configure multiple LLM connections (e.g., multiple Ollama servers with different API keys/URLs)
  • Workspace managers to select specific connections for chat and agent models
  • Different workspaces to simultaneously use different instances of the same provider type
[Screenshot: screenshot-2025-11-20_21-30-07]

This is especially useful for enterprise use cases, as I use LiteLLM to:

  • abstract API keys away from teams
  • assign cost controls to teams
  • assign/hide MCP/tools to specific teams
  • use multiple accounts from the same provider, so rate limits from one team don't affect all OpenAI teams

Additional Information

Developer Validations

  • I ran yarn lint from the root of the repo & committed changes
  • Relevant documentation has been updated
  • I have tested my code functionality
  • Docker build succeeds locally

socialviolation and others added 16 commits November 18, 2025 20:13
Phase 1: Database schema and backend models

- Add llm_connections table to support multiple provider configs
- Add chatConnectionId/agentConnectionId fields to workspaces table
- Implement LLMConnection model with full CRUD operations
- Add LLMConfigEncryption utility for secure API key storage
- Update Workspace model to support new connection fields
- Add comprehensive unit tests for LLMConnection model
- Maintain backward compatibility with existing chatProvider fields

This enables workspaces to use different LLM provider instances
(e.g., multiple LiteLLM proxies with different API keys) for
access control and billing purposes.

Related to: Mintplex-Labs#4493

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
Phase 2: Provider Refactoring

- Update getLLMProvider() to accept connectionId, connection, or config parameters
- Refactor function to be async for database connection lookup
- Update LiteLLM provider to accept config object as third parameter
- Update Ollama provider to accept config object as third parameter
- Add authToken encryption for Ollama provider
- Maintain backward compatibility with environment variable mode
- All 30+ providers now support optional config parameter

Providers can now be instantiated in three ways:
1. NEW: With connectionId (loads from llm_connections table)
2. NEW: With config object (direct configuration)
3. LEGACY: With provider string (reads from environment variables)

This enables workspaces to use different provider instances with
their own credentials and endpoints.
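The three instantiation paths above can be sketched roughly as follows. This is an illustrative stand-in, not the PR's actual code: `getLLMProvider`'s real signature and the `loadConnection` helper are assumptions for the sketch.

```javascript
// Hypothetical sketch of the three provider-resolution paths described above.
// `loadConnection` stands in for a lookup against the llm_connections table.
async function getLLMProvider({
  connectionId = null,
  connection = null,
  config = null,
  provider = null,
} = {}) {
  // 1. NEW: resolve a stored connection by id
  if (connectionId !== null && !connection) {
    connection = await loadConnection(connectionId);
  }
  // 2. NEW: direct config object (or config loaded from a connection)
  if (connection) return { source: "connection", config: connection.config };
  if (config) return { source: "config", config };
  // 3. LEGACY: fall back to environment variables
  return {
    source: "env",
    provider: provider ?? process.env.LLM_PROVIDER ?? "openai",
  };
}

async function loadConnection(id) {
  // Stub for illustration; the real code would query the database
  // and decrypt the stored config.
  return { id, config: { basePath: "http://litellm:4000" } };
}
```

The priority ordering (connection before config before environment) is what preserves backward compatibility: callers that pass nothing still get the env-var behavior.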

Related to: Mintplex-Labs#4493

Phase 3: Backend API

- Create /v1/llm-connections endpoints for CRUD operations
- GET /v1/llm-connections - List all connections (admin only)
- GET /v1/llm-connections/:id - Get single connection
- POST /v1/llm-connections/new - Create new connection
- POST /v1/llm-connections/:id/update - Update connection
- DELETE /v1/llm-connections/:id - Soft delete connection
- POST /v1/llm-connections/:id/set-default - Set as default
- POST /v1/llm-connections/:id/test - Test connection
- All endpoints protected with admin role validation
- Sensitive fields (API keys) redacted in responses
- Proper error handling and validation

Admins can now manage LLM connections via REST API with
encrypted credential storage and workspace protection.
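As a rough illustration of how an admin client could call these endpoints: the paths come from the list above, while the base URL, the Bearer-token header format, and the `buildCreateConnectionRequest` helper are assumptions for the sketch.

```javascript
// Hypothetical request builder for the /v1/llm-connections endpoints above.
// Only the endpoint path is from the PR; the auth header is an assumption.
function buildCreateConnectionRequest(apiBase, apiKey, connection) {
  return {
    url: `${apiBase}/v1/llm-connections/new`,
    options: {
      method: "POST",
      headers: {
        Authorization: `Bearer ${apiKey}`,
        "Content-Type": "application/json",
      },
      body: JSON.stringify(connection),
    },
  };
}

// Example payload; fetch(req.url, req.options) would send it.
const req = buildCreateConnectionRequest("http://localhost:3001/api", "my-key", {
  name: "Team A LiteLLM",
  provider: "litellm",
  config: { basePath: "http://litellm:4000", apiKey: "sk-..." },
});
```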

Related to: Mintplex-Labs#4493

Implements the admin interface for creating, editing, and managing
multiple LLM provider connections.

Changes:
- Add System.llmConnections API client with full CRUD operations
- Create LLMConnections settings page with table view
- Create ConnectionRow component for displaying connections
- Create ConnectionModal for create/edit with dynamic forms
- Create DynamicConfigForm for provider-specific configuration
- Add /settings/llm-connections route to App.jsx
- Add "LLM Connections" navigation item to settings sidebar
- Add llmConnections() path to utils/paths.js

Features:
- List all LLM connections with provider, default status, and dates
- Create new connections with provider-specific config forms
- Edit existing connections (name and provider locked)
- Delete connections (prevented for default connections)
- Set connection as default for its provider
- Test connection functionality
- Support for LiteLLM, Ollama, OpenAI, and Anthropic providers
- API key masking and secure handling

…tch models

- Removed separate LLM Connections page and merged into LLM Preference
- Added unified table showing System Default + user connections
- Implemented test connection button with model auto-discovery
- Auto-fetch models when basePath is provided
- Auto-select first model if none selected
- Fixed authentication middleware (validApiKey → validatedRequest)
- Standardized provider configs to use "basePath" consistently
- Matched styling to ApiKeys page (border-white/10, text-xs)
- Removed unused routes and menu items

- fetchModels() now returns the models array for immediate checking
- Test connection shows accurate success/fail status using returned value
- Removed auto-fetch error toasts (only manual test shows errors)
- 10 second timeout maintained for connection tests
- Properly re-throws errors for handleTestConnection to catch

LiteLLM requires the API key to be sent via the X-Litellm-Key header
instead of the standard Authorization: Bearer header.

Changes:
- Updated liteLLMModels() to use X-Litellm-Key custom header
- Updated LiteLLM provider class to use X-Litellm-Key for both
  connection-based config and legacy env var config
- Uses dummy-key for OpenAI SDK requirement while actual auth
  happens via custom header

This fixes test connection and model discovery for LiteLLM connections.

- Test connection now debounced to 500ms to prevent rapid clicking
- Added persistent connection status indicator with emoji icons
- Shows success/fail with descriptive message explaining why
- Success shows: "✓ Connected • Successfully connected to {provider} • Found X models"
- Failure shows: "✗ Connection Failed • {error message}"
- Status clears when basePath or apiKey changes
- Green background for success, red for failure
- Uses CheckCircle/XCircle icons from phosphor

Major enhancements to the LLM connections management interface:

Frontend improvements:
- Add searchable provider selection dropdown with logos and descriptions
- Support all 31 LLM providers (OpenAI, Anthropic, Ollama, LiteLLM, etc.)
- Implement test connection button that validates before saving
- Add visual connection status indicators with success/error feedback
- Make connection modal form scrollable with improved spacing
- Add side-by-side layout for model selection and token limit fields
- Enable connection name editing for existing connections
- Add provider-specific field configurations in separate module
- Implement useSavedApiKey flag for editing connections with redacted keys

Backend improvements:
- Fix Prisma null parameter issue in llmConnection.where() method
- Add support for using stored API keys when testing edited connections
- Add connectionId and useSavedApiKey parameters to custom-models endpoint
- Improve LiteLLM provider with self-signed certificate support
- Auto-append /v1 path for LiteLLM endpoints
- Add detailed logging for connection testing and model fetching

Bug fixes:
- Fix connections not appearing in list (Prisma take: null error)
- Fix browser caching issues with cache-busting timestamps
- Resolve circular dependency by extracting provider configs
- Fix API key redaction when testing existing connections

Update workspace LLM settings to use the new LLM connections system:

Features added:
- Fetch and display available LLM connections in workspace settings
- Show connections alongside "System Default" option in dropdown
- Add "Create New Connection" option that opens connections page
- Save chatConnectionId when selecting a connection
- Display connection info with default model when selected
- Add optional model override field for connections
- Maintain backward compatibility with chatProvider for system LLMs

UI improvements:
- Loading state while fetching connections
- Empty state when no connections available
- Connection badge showing which connection is in use
- Model override input for fine-grained control

Backend integration:
- Use chatConnectionId field from workspace schema
- Clear chatProvider when connection is selected
- Support chatModelOverride for connection-specific overrides

- Add getLLMProviderForWorkspace utility to centralize connection lookup
- Update all chat workflows (stream, embed, API) to use connections
- Add agent support for agentConnectionId and agentModelOverride
- Update workspace and agent LLM selection UI to show connections
- Fix ***REDACTED*** encryption bug when editing connections

- Install Jest v30.2.0 as dev dependency
- Add jest.config.js with test environment and coverage settings
- Add jest.setup.js to configure test environment variables and storage
- Add "test" script to package.json
- Update openaiCompatible.test.js to use getLLMProviderForWorkspace

All 12 test suites now pass with 90 tests passing.

- Remove debug console.log statements from endpoints/system.js
- Convert TODO comment to informative note in llm-connections API
- Apply linter formatting to modified files
- Update default provider from litellm to openai
- Affects new connection creation in LLM connections UI

@timothycarambat
Member

I understand the want here; we know it is a desired feature, but as you noted, it's a pretty massive paradigm shift in how we manage the selected LLM. From a cursory review, there are a ton of changes unrelated to the core of what this PR should be:

  • Testing setup config as opposed to just isolated unit tests
  • LiteLLM unsigned certificate changes

Some other notes:

  • This does not touch the onboarding screens, but I presume there must be some logic for migration or backward compatibility. This might work out of the box for a new install, but we have about 5M installs or more that might wind up with a broken instance if this feature is not backward compatible to where they can pull in and be instantly up to date.
  • UI looks OK from what I can currently see

Either way, this is a really large PR because it has to be, but in its current state it is such a massive change that it is going to need some love for sure. I'll dive into this in the future and see what we can do.

@socialviolation
Author

Big thanks @timothycarambat! Have addressed a few of the easy comments you left.

For migration testing - I'm happy to assist. If you had a checklist of scenarios I could follow that would be super handy.

socialviolation and others added 5 commits November 26, 2025 20:36
Brings in 23 commits from upstream including:
- New LLM providers: ZAi, GiteeAI
- New embedding engine: OpenRouter
- New data connector: PaperlessNgx
- Ollama className property and improvements
- Anthropic, Bedrock, and agent provider updates
- Various bug fixes and locale updates

Conflicts resolved:
- server/utils/AiProviders/ollama/index.js: merged config-based
  constructor with upstream's className property
- server/utils/helpers/index.js: added new zai/giteeai providers
  using connection-based providerConfig pattern

The production deployment uses PostgreSQL, not SQLite. The schema must
use PostgreSQL as the datasource provider for the Docker image to work
correctly with DATABASE_URL environment variable.

The migration lock file must match the datasource provider.

- Add http.Agent for HTTP URLs in liteLLMModels() for model fetching
- Add http.Agent for HTTP URLs in LiteLLM class constructor for chat completions
- Maintains https.Agent with rejectUnauthorized:false for HTTPS URLs
- Allows internal service connections without TLS (e.g., http://litellm:4000)
@diffray-bot

Changes Summary

This PR introduces a connection-based architecture for managing multiple LLM provider connections per workspace. It adds a new database model (llm_connections), API endpoints for CRUD operations on connections, frontend UI for managing connections, and modifies workspace settings to use connection IDs instead of legacy provider strings. Sensitive API keys are encrypted at rest using a dedicated LLMConfigEncryption utility.

Type: feature

Components Affected: Database schema (Prisma), LLMConnection model, API endpoints (/v1/llm-connections), Frontend LLM preference pages, Workspace settings, Agent configuration, Chat handlers (stream, embed, api), LLM provider helper utilities, LiteLLM and Ollama providers

Architecture Impact
  • New Patterns: Connection-based LLM provider management, Encrypted configuration storage with LLMConfigEncryption, Soft-delete pattern for connections, Priority-based provider resolution (connection > legacy provider > env vars)
  • Dependencies: Modified: Prisma schema with new llm_connections table, Modified: workspaces table with new chatConnectionId and agentConnectionId foreign keys
  • Coupling: Workspaces now have optional foreign key relationships to llm_connections for both chat and agent providers. Legacy chatProvider/agentProvider fields are deprecated but maintained for backward compatibility.
  • Breaking Changes: Database migration required (new llm_connections table, workspace columns), Changed from SQLite to PostgreSQL as default Prisma datasource provider

Risk Areas: Database migration complexity - adding foreign keys and new table requires careful rollout, PostgreSQL datasource change in Prisma schema may break existing SQLite deployments, Encryption key management - LLMConfigEncryption relies on EncryptionManager which needs proper key setup, API key redaction logic (REDACTED) must preserve original values correctly during updates, Connection deletion blocked when workspaces are using it - could create orphan handling issues, Backward compatibility with legacy provider/model fields during transition period, Admin-only access control on connection management endpoints

Suggestions
  • Consider adding a migration path for existing SQLite users given the datasource change to PostgreSQL
  • Add API rate limiting on the connection test endpoint to prevent abuse
  • Consider adding connection health monitoring/status tracking
  • Document the encryption key requirements and rotation process
  • Add integration tests for the full workspace->connection->provider flow


Comment on lines +20 to +24
const isHttps = baseURL?.startsWith('https://');
return isHttps
  ? new https.Agent({ rejectUnauthorized: false })
  : new http.Agent();
};


🟠 HIGH - HTTP agent allows insecure connections
Category: security

Description:
Creates HTTP agents for plaintext connections and sets rejectUnauthorized: false, disabling certificate validation which allows MITM attacks.

Suggestion:
Remove rejectUnauthorized: false from https.Agent configuration. For self-signed certs, use NODE_EXTRA_CA_CERTS or proper CA configuration.

Confidence: 95%
Rule: soc2_enforce_tls_and_hsts

Comment on lines +316 to +320
// Determine if HTTP or HTTPS and create appropriate agent
const isHttps = baseURL?.startsWith('https://');
const agent = isHttps
  ? new https.Agent({ rejectUnauthorized: false })
  : new http.Agent();


🟠 HIGH - HTTP agent disables certificate validation
Category: security

Description:
liteLLMModels function creates HTTP agents with rejectUnauthorized: false for HTTPS connections, disabling SSL certificate validation.

Suggestion:
Remove rejectUnauthorized: false. For self-signed certificates, use proper CA configuration instead.

Confidence: 95%
Rule: soc2_enforce_tls_and_hsts

Comment on lines 127 to +130
* @param {{provider: string | null, model: string | null} | null} params - Initialize params for LLMs provider
* @returns {BaseLLMProvider}
*/
function getLLMProvider({ provider = null, model = null } = {}) {
const LLMSelection = provider ?? process.env.LLM_PROVIDER ?? "openai";
async function getLLMProvider({


🟡 MEDIUM - JSDoc description contradicts function behavior
Category: docs

Description:
JSDoc says 'via system or via defined provider' but function actually handles connectionId, connection, config, OR legacy provider (4 priority paths).

Suggestion:
Update JSDoc to: 'Returns the LLMProvider based on connection, config, or legacy provider settings, with its embedder attached.'

Confidence: 80%
Rule: ts_jsdoc_description_mismatch

Comment on lines 127 to 129
* @param {{provider: string | null, model: string | null} | null} params - Initialize params for LLMs provider
* @returns {BaseLLMProvider}
*/


🟡 MEDIUM - JSDoc @param doesn't match function signature
Category: docs

Description:
@param documents only 'provider' and 'model' but function signature has 5 destructured parameters.

Suggestion:
Add @param documentation for connectionId, connection, and config parameters.

Confidence: 90%
Rule: ts_jsdoc_param_mismatch

Comment on lines +537 to +543
/**
* Gets the LLM provider for a workspace, prioritizing connections over legacy provider settings
* @param {Object} workspace - The workspace object
* @param {string} [type='chat'] - The type of provider to get ('chat' or 'agent')
* @param {string} [modelOverride] - Optional model override (takes precedence over workspace settings)
* @returns {Promise<BaseLLMProvider>}
*/


🟡 MEDIUM - JSDoc @param missing parameters
Category: docs

Description:
@param documents only 'workspace' but function has 3 parameters: workspace, type='chat', and modelOverride=null.

Suggestion:
Add @param for type and modelOverride: @param {string} [type='chat'] @param {string} [modelOverride=null]

Confidence: 85%
Rule: ts_jsdoc_param_mismatch

availableLLMs={allOptions}
settings={settings}
checked={selectedLLM === llm.value}
onClick={() => updateLLMChoice(llm.value)}


🔵 LOW - Inline function in loop creates new references per item
Category: performance

Description:
Inside filteredLLMs.map(), an inline arrow function onClick={() => updateLLMChoice(llm.value)} creates a new function reference for each list item on every render.

Suggestion:
For lists with many items, consider passing llm.value to a memoized handler or using a component that accepts value as a prop and handles the click internally.

Confidence: 62%
Rule: react_inline_function_props

Comment on lines +31 to +72
it("should create a connection with encrypted API key", async () => {
  const mockConfig = {
    basePath: "http://test.com",
    apiKey: "sk-test-123",
    defaultModel: "gpt-3.5-turbo",
  };

  const mockCreatedConnection = {
    id: 1,
    name: "Test LiteLLM",
    provider: "litellm",
    config: JSON.stringify({
      basePath: "http://test.com",
      apiKey: "encrypted-value",
      apiKey_encrypted: true,
      defaultModel: "gpt-3.5-turbo",
    }),
    isDefault: false,
    isActive: true,
    createdAt: new Date(),
    lastUpdatedAt: new Date(),
  };

  prisma.llm_connections.create.mockResolvedValue(mockCreatedConnection);
  prisma.llm_connections.updateMany.mockResolvedValue({ count: 0 });

  const { connection, error } = await LLMConnection.create({
    name: "Test LiteLLM",
    provider: "litellm",
    config: mockConfig,
  });

  expect(error).toBeNull();
  expect(connection).toBeDefined();
  expect(connection.name).toBe("Test LiteLLM");
  expect(prisma.llm_connections.create).toHaveBeenCalled();

  // Verify config was encrypted before saving
  const createCall = prisma.llm_connections.create.mock.calls[0][0];
  const savedConfig = JSON.parse(createCall.data.config);
  expect(savedConfig.apiKey).not.toBe("sk-test-123");
});


🟠 HIGH - Test doesn't verify encryption actually occurred
Category: quality

Description:
The test checks savedConfig.apiKey is not equal to plain text but doesn't verify apiKey_encrypted flag is set to true, or that the encryption format is valid.

Suggestion:
Add assertion: expect(savedConfig.apiKey_encrypted).toBe(true) to verify the encryption flag was properly set during create.

Confidence: 75%
Rule: test_missing_assertion

Comment on lines +211 to +239
it("should encrypt config when updating", async () => {
  const mockExisting = {
    id: 1,
    provider: "litellm",
    config: JSON.stringify({ basePath: "http://old.com" }),
  };

  prisma.llm_connections.findUnique.mockResolvedValue(mockExisting);
  prisma.llm_connections.update.mockResolvedValue({
    ...mockExisting,
    config: JSON.stringify({
      basePath: "http://new.com",
      apiKey: "encrypted-new",
      apiKey_encrypted: true,
    }),
  });

  const { connection, error } = await LLMConnection.update(1, {
    config: {
      basePath: "http://new.com",
      apiKey: "sk-new-key",
    },
  });

  expect(error).toBeNull();
  const updateCall = prisma.llm_connections.update.mock.calls[0][0];
  const savedConfig = JSON.parse(updateCall.data.config);
  expect(savedConfig.apiKey).not.toBe("sk-new-key");
});


🟠 HIGH - Weak assertion for encryption verification on update
Category: quality

Description:
Similar to create test, update test only verifies apiKey is not equal to plain text but doesn't verify apiKey_encrypted flag is set.

Suggestion:
Add assertion: expect(savedConfig.apiKey_encrypted).toBe(true) to verify the encryption flag was properly set during update.

Confidence: 75%
Rule: test_missing_assertion

Comment on lines +110 to +138
describe("get", () => {
  it("should retrieve and decrypt a connection", async () => {
    const mockConnection = {
      id: 1,
      name: "Test LiteLLM",
      provider: "litellm",
      config: JSON.stringify({
        basePath: "http://test.com",
        apiKey: "encrypted-value",
        apiKey_encrypted: true,
        defaultModel: "gpt-3.5-turbo",
      }),
      isDefault: false,
      isActive: true,
      createdAt: new Date(),
      lastUpdatedAt: new Date(),
    };

    prisma.llm_connections.findUnique.mockResolvedValue(mockConnection);

    const connection = await LLMConnection.get(1);

    expect(connection).toBeDefined();
    expect(connection.id).toBe(1);
    expect(connection.name).toBe("Test LiteLLM");
    expect(prisma.llm_connections.findUnique).toHaveBeenCalledWith({
      where: { id: 1 },
    });
  });


🟡 MEDIUM - Test doesn't verify decryption actually occurred
Category: quality

Description:
The test retrieves a connection with encrypted config but only verifies id and name, not that config was properly decrypted.

Suggestion:
Add assertions to verify connection.config.apiKey is the decrypted value (not encrypted) and apiKey_encrypted flag is removed or false.

Confidence: 70%
Rule: test_missing_assertion

Comment on lines +263 to +273
it("should prevent deletion of connection in use", async () => {
  prisma.workspaces.count
    .mockResolvedValueOnce(2) // chatConnectionId count
    .mockResolvedValueOnce(0); // agentConnectionId count

  const { success, error } = await LLMConnection.delete(1);

  expect(success).toBe(false);
  expect(error).toContain("in use by 2 workspace");
  expect(prisma.llm_connections.update).not.toHaveBeenCalled();
});


🟡 MEDIUM - Missing test for agentConnectionId preventing deletion
Category: quality

Description:
The delete prevention test only checks chatConnectionId > 0, but implementation also checks agentConnectionId. Model code at lines 269-270 checks both.

Suggestion:
Add a test case where chatConnectionId count is 0 but agentConnectionId count is > 0 to verify deletion is prevented when only agent usage exists.

Confidence: 80%
Rule: test_optional_field_edge_cases

@diffray-bot

Review Summary


Validated 88 issues: 33 kept, 55 filtered

Issues Found: 33

See 25 individual line comment(s) for details.

📊 19 unique issue type(s) across 33 location(s)

📋 Full issue list (click to expand)

🟠 HIGH - HTTP agent allows insecure connections (2 occurrences)

Category: security

📍 View all locations
File Description Suggestion Confidence
server/utils/AiProviders/liteLLM/index.js:20-24 Creates HTTP agents for plaintext connections and sets rejectUnauthorized: false, disabling certific... Remove rejectUnauthorized: false from https.Agent configuration. For self-signed certs, use NODE_EXT... 95%
server/utils/helpers/customModels.js:316-320 liteLLMModels function creates HTTP agents with rejectUnauthorized: false for HTTPS connections, dis... Remove rejectUnauthorized: false. For self-signed certificates, use proper CA configuration instead.... 95%

Rule: soc2_enforce_tls_and_hsts


🟠 HIGH - Floating promise in OllamaAILLM constructor

Category: bug

File: server/utils/AiProviders/ollama/index.js:62

Description: cacheContextWindows is an async method called without await in constructor. Errors will be silently swallowed.

Suggestion: Add error handling: OllamaAILLM.cacheContextWindows(true).catch(e => this.#log('Failed to cache context windows', e));

Confidence: 90%

Rule: node_floating_promise


🟠 HIGH - Module-level mutable state - modelContextWindows

Category: bug

File: server/utils/AiProviders/ollama/index.js:16

Description: Static mutable object shared across all instances can cause race conditions in concurrent scenarios and test pollution.

Suggestion: Add a clear/reset method for testing. Document that this is intentional shared state for caching.

Confidence: 75%

Rule: node_module_level_state


🟠 HIGH - Test doesn't verify encryption actually occurred (4 occurrences)

Category: quality

📍 View all locations
File Description Suggestion Confidence
server/__tests__/models/llmConnection.test.js:31-72 The test checks savedConfig.apiKey is not equal to plain text but doesn't verify apiKey_encrypted fl... Add assertion: expect(savedConfig.apiKey_encrypted).toBe(true) to verify the encryption flag was pro... 75%
server/__tests__/models/llmConnection.test.js:211-239 Similar to create test, update test only verifies apiKey is not equal to plain text but doesn't veri... Add assertion: expect(savedConfig.apiKey_encrypted).toBe(true) to verify the encryption flag was pro... 75%
server/__tests__/models/llmConnection.test.js:110-138 The test retrieves a connection with encrypted config but only verifies id and name, not that config... Add assertions to verify connection.config.apiKey is the decrypted value (not encrypted) and apiKey_... 70%
server/__tests__/utils/chats/openaiCompatible.test.js:137-163 After calling chatSync with regular text messages, the test only asserts 'expect(result).toBeTruthy(... Use the same detailed assertions as the multimodal test (lines 122-134): expect(result).toEqual(expe... 85%

Rule: test_missing_assertion


🟡 MEDIUM - JSDoc description contradicts function behavior

Category: docs

File: server/utils/helpers/index.js:126-129

Description: JSDoc says 'via system or via defined provider' but function actually handles connectionId, connection, config, OR legacy provider (4 priority paths).

Suggestion: Update JSDoc to: 'Returns the LLMProvider based on connection, config, or legacy provider settings, with its embedder attached.'

Confidence: 80%

Rule: ts_jsdoc_description_mismatch


🟡 MEDIUM - JSDoc @param doesn't match function signature (2 occurrences)

Category: docs

📍 View all locations
File Description Suggestion Confidence
server/utils/helpers/index.js:126-128 @param documents only 'provider' and 'model' but function signature has 5 destructured parameters. Add @param documentation for connectionId, connection, and config parameters. 90%
server/utils/helpers/index.js:537-543 @param documents only 'workspace' but function has 3 parameters: workspace, type='chat', and modelOv... Add @param for type and modelOverride: @param {string} [type='chat'] @param {string} [modelOverride=... 85%

Rule: ts_jsdoc_param_mismatch


🟡 MEDIUM - JSDoc @returns doesn't match actual return type (2 occurrences)

Category: docs

📍 View all locations
File Description Suggestion Confidence
server/utils/helpers/customModels.js:815 JSDoc documents 'organization' property but actual return only includes 'id' and 'name'. Update JSDoc to: @returns {Promise<{models: Array<{id: string, name: string}>, error: string | null... 90%
server/utils/agents/ephemeral.js:456 JSDoc documents textResponse as string, but it can be null if no textResponse message found. Update JSDoc to: @returns {{thoughts: string[], textResponse: string | null}} 85%

Rule: ts_jsdoc_returns_mismatch
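For the ephemeral.js case, the nullable return can be made explicit in both the JSDoc and the code. This is an illustrative reconstruction, not the actual parser:

```javascript
/**
 * Extracts thoughts and the final text response from an agent message log.
 * @param {Array<{type: string, content: string}>} messages
 * @returns {{thoughts: string[], textResponse: string | null}} textResponse is
 *   null when no textResponse message is found.
 */
function parseAgentMessages(messages) {
  const thoughts = messages
    .filter((m) => m.type === "thought")
    .map((m) => m.content);
  const final = messages.find((m) => m.type === "textResponse");
  return { thoughts, textResponse: final ? final.content : null };
}
```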


🟡 MEDIUM - Sequential database count queries for deletion check (3 occurrences)

Category: performance

📍 View all locations
- File: server/models/llmConnection.js:266-271
  Description: Two independent count queries executed sequentially could be parallelized with Promise.all.
  Suggestion: Use Promise.all: const [workspacesUsingChat, workspacesUsingAgent] = await Promise.all([prisma.works...
  Confidence: 90%
- File: server/models/llmConnection.js:184-214
  Description: When updating config, performs two sequential queries: findUnique then update. The read is necessary...
  Suggestion: This is architecturally necessary for config merging logic. Consider documenting why sequential quer...
  Confidence: 60%
- File: server/models/llmConnection.js:302-307
  Description: The hardDelete method executes two sequential count queries that could be parallelized using Promise...
  Suggestion: Extract the usage check into a shared method that executes both count queries in parallel using Prom...
  Confidence: 85%

Rule: perf_sequential_api_calls
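The parallelization suggested for the first and third occurrences can be sketched as a shared helper. `prisma` below is a stub standing in for the Prisma client; the real model would run the same two count queries against the workspaces table:

```javascript
// Sketch of running the two usage-count queries concurrently instead of
// sequentially. The stub resolves 2 for chat usage and 0 for agent usage.
const prisma = {
  workspaces: {
    count: async ({ where }) => ("chatConnectionId" in where ? 2 : 0),
  },
};

async function connectionUsage(connectionId) {
  // Both counts are independent, so they can run concurrently.
  const [workspacesUsingChat, workspacesUsingAgent] = await Promise.all([
    prisma.workspaces.count({ where: { chatConnectionId: connectionId } }),
    prisma.workspaces.count({ where: { agentConnectionId: connectionId } }),
  ]);
  return {
    workspacesUsingChat,
    workspacesUsingAgent,
    inUse: workspacesUsingChat + workspacesUsingAgent > 0,
  };
}
```

A helper shaped like this could back both delete and hardDelete, which also addresses the third occurrence's suggestion to share the usage check.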


🟡 MEDIUM - N+1 query pattern in whereWithUsers

Category: performance

File: server/models/workspace.js:471-476

Description: The whereWithUsers method iterates over workspaces and executes a separate database query for each workspace to fetch user IDs. For N workspaces, this executes N+1 queries.

Suggestion: Use Prisma's include option to eager-load workspace_users in the initial query, reducing from O(N) queries to O(1).

Confidence: 95%

Rule: sql_n_plus_one_query
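A sketch of the eager-loading shape, again with a stubbed client; the actual whereWithUsers signature and Prisma schema may differ:

```javascript
// Sketch: one findMany with include returns workspaces with their users
// attached (O(1) round trips), instead of one query per workspace.
// `prisma` is a stub standing in for the Prisma client.
const prisma = {
  workspaces: {
    findMany: async ({ include }) => [
      {
        id: 1,
        name: "Team A",
        workspace_users: include?.workspace_users
          ? [{ user_id: 10 }, { user_id: 11 }]
          : undefined,
      },
      {
        id: 2,
        name: "Team B",
        workspace_users: include?.workspace_users ? [{ user_id: 12 }] : undefined,
      },
    ],
  },
};

async function whereWithUsers() {
  const workspaces = await prisma.workspaces.findMany({
    include: { workspace_users: true },
  });
  // Flatten the eager-loaded join rows into the userIds shape the
  // callers presumably expect.
  return workspaces.map((ws) => ({
    ...ws,
    userIds: ws.workspace_users.map((wu) => wu.user_id),
  }));
}
```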


🟡 MEDIUM - Encryption manager instantiated in loop (false positive)

Category: performance

File: server/models/llmConnection.js:117-124

Description: In the where method, the LLMConfigEncryption instance is created once at line 117, before the map call, and is only referenced inside the map. It is not instantiated in the loop, so this finding is a false positive.

Suggestion: The current code already has the encryptionManager outside the map function. No change needed.

Rule: perf_expensive_in_loop


🟡 MEDIUM - Synchronous file write in request handler (3 occurrences)

Category: performance

📍 View all locations
- File: server/utils/chats/apiChatHandler.js:78-81
  Description: The processDocumentAttachments function uses fs.writeFileSync() within a request handler flow. This ...
  Suggestion: Replace with async fs.promises.writeFile(): await fs.promises.writeFile(filePath, buffer);
  Confidence: 80%
- File: server/endpoints/system.js:719
  Description: The profile picture upload endpoint uses fs.existsSync() and fs.unlinkSync() in the request handler,...
  Suggestion: Replace with async alternatives using fs.promises.access() and fs.promises.unlink().
  Confidence: 75%
- File: server/endpoints/system.js:806
  Description: The remove profile picture endpoint uses fs.existsSync() and fs.unlinkSync() synchronously in the re...
  Suggestion: Replace with async alternatives using fs.promises.access() and fs.promises.unlink().
  Confidence: 75%

Rule: node_blocking_sync_operations


🟡 MEDIUM - Incorrect HTTP status code for POST creation (2 occurrences)

Category: quality

📍 View all locations
- File: server/endpoints/api/llm-connections/index.js:170
  Description: The POST endpoint for creating a new LLM connection returns 200 OK instead of 201 Created. REST conv...
  Suggestion: Change response.status(200).json(...) to response.status(201).json(...)
  Confidence: 85%
- File: server/endpoints/api/llm-connections/index.js:230
  Description: The DELETE endpoint returns 200 OK with a JSON body. While 204 No Content is more RESTful, returning...
  Suggestion: Consider using response.sendStatus(204) or response.status(204).end() if strict REST conventions are...
  Confidence: 60%

Rule: api_wrong_status_code


🟡 MEDIUM - Missing dependency in useEffect

Category: bug

File: frontend/src/pages/GeneralSettings/LLMPreference/DynamicConfigForm/index.jsx:20-32

Description: The useEffect hook references 'onConnectionStatusChange' in the hook body but doesn't include it in the dependency array. This can cause stale closures where the effect uses an outdated version of the function.

Suggestion: Add 'onConnectionStatusChange' to the dependency array: [config.basePath, config.apiKey, onConnectionStatusChange]

Confidence: 80%

Rule: react_missing_dependency
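The hazard is easiest to see outside React. This framework-free sketch simulates an effect that captures its callback once and never re-subscribes because the callback is missing from its dependency list (all names illustrative):

```javascript
// Simulates a useEffect whose dependency array omits the callback: the
// first captured function keeps being used even after "re-renders"
// supply a newer one.
function makeEffectRunner() {
  let capturedCallback = null;
  return {
    runEffect(callback) {
      // Without the dependency, the effect body never re-runs, so the
      // originally captured callback stays in place.
      if (capturedCallback === null) capturedCallback = callback;
    },
    fire() {
      return capturedCallback();
    },
  };
}

const runner = makeEffectRunner();
runner.runEffect(() => "status-from-render-1");
// A later render provides a fresh onConnectionStatusChange equivalent,
// but the effect does not pick it up:
runner.runEffect(() => "status-from-render-2");
console.log(runner.fire()); // → "status-from-render-1" (stale)
```

Adding the callback to the dependency array makes the effect re-run (re-capture) whenever the parent passes a new function, which is exactly what the suggestion proposes.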


🟡 MEDIUM - API key retrieval pattern should ensure proper access control (2 occurrences)

Category: security

📍 View all locations
- File: server/endpoints/system.js:1034-1040
  Description: The /system/custom-models endpoint retrieves stored API keys when useSavedApiKey=true and connection...
  Suggestion: Consider adding audit logging for API key retrievals. The current implementation is acceptable for a...
  Confidence: 65%
- File: server/utils/helpers/customModels.js:224-226
  Description: Multiple functions save validated API keys to process.env after successful model listing. This is in...
  Suggestion: Consider using an in-memory cache object instead of process.env for storing validated API keys. This...
  Confidence: 60%

Rule: sec_missing_authorization_after_auth
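A minimal sketch of the suggested in-memory cache, assuming a simple provider-keyed Map (names illustrative). Unlike process.env, entries stay scoped to this module and can carry metadata such as validation time:

```javascript
// Sketch: module-scoped cache for validated API keys instead of mutating
// process.env, which is global, string-only, and visible to all code.
const validatedKeyCache = new Map();

function rememberValidatedKey(provider, apiKey) {
  validatedKeyCache.set(provider, { apiKey, validatedAt: Date.now() });
}

function getValidatedKey(provider) {
  const entry = validatedKeyCache.get(provider);
  return entry ? entry.apiKey : null;
}

rememberValidatedKey("litellm", "sk-test");
```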


🟡 MEDIUM - Missing test for agentConnectionId preventing deletion

Category: quality

File: server/__tests__/models/llmConnection.test.js:263-273

Description: The delete prevention test only checks chatConnectionId > 0, but the implementation also checks agentConnectionId. The model code at lines 269-270 checks both.

Suggestion: Add a test case where chatConnectionId count is 0 but agentConnectionId count is > 0 to verify deletion is prevented when only agent usage exists.

Confidence: 80%

Rule: test_optional_field_edge_cases


🔵 LOW - Console.error in production code

Category: quality

File: frontend/src/pages/GeneralSettings/LLMPreference/DynamicConfigForm/index.jsx:59

Description: Console.error used for error logging without structured logging. Error is re-thrown so handling exists, but logging could be improved.

Suggestion: Consider using a structured logging service or remove console.error since error is re-thrown.

Confidence: 70%

Rule: fe_console_in_production


🔵 LOW - Hardcoded URL path suffix (2 occurrences)

Category: quality

📍 View all locations
- File: server/utils/AiProviders/liteLLM/index.js:36-38
  Description: The LiteLLM path suffix '/v1' is hardcoded in two places (liteLLM/index.js and customModels.js). Thi...
  Suggestion: Extract the '/v1' path suffix to a shared configuration constant to avoid duplication
  Confidence: 65%
- File: server/utils/helpers/customModels.js:312-314
  Description: The LiteLLM path suffix '/v1' is hardcoded, duplicating the logic in LiteLLM/index.js.
  Suggestion: Extract the '/v1' path suffix to a shared configuration constant
  Confidence: 65%

Rule: qual_hardcoded_urls_js
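One possible shape for the shared constant, with a small normalizing helper; this is illustrative, and the real modules may only need the bare constant:

```javascript
// Sketch: single source of truth for the LiteLLM path suffix, importable
// by both liteLLM/index.js and customModels.js.
const LITELLM_PATH_SUFFIX = "/v1";

function withV1Suffix(basePath) {
  // Normalize trailing slashes, then append the suffix only if missing.
  const trimmed = basePath.replace(/\/+$/, "");
  return trimmed.endsWith(LITELLM_PATH_SUFFIX)
    ? trimmed
    : trimmed + LITELLM_PATH_SUFFIX;
}

console.log(withV1Suffix("http://localhost:4000")); // → "http://localhost:4000/v1"
console.log(withV1Suffix("http://localhost:4000/v1/")); // → "http://localhost:4000/v1"
```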


🔵 LOW - Magic string constant for redacted API key check

Category: quality

File: frontend/src/pages/GeneralSettings/LLMPreference/DynamicConfigForm/index.jsx:40

Description: Line 40 checks if config.apiKey === 'REDACTED' to determine if the saved API key should be used. This magic string comparison is fragile.

Suggestion: Use a defined constant for the redacted value: const REDACTED_PLACEHOLDER = 'REDACTED';

Confidence: 70%

Rule: ts_add_null_checks_before_accessing_propert
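A sketch of the named sentinel; ideally the constant would live in a shared module imported by both the form and wherever the server emits the redacted value (names illustrative):

```javascript
// Sketch: name the sentinel instead of comparing against a magic string
// inline at the call site.
const REDACTED_PLACEHOLDER = "REDACTED";

function shouldUseSavedApiKey(config) {
  // Optional chaining also guards against a missing config object.
  return config?.apiKey === REDACTED_PLACEHOLDER;
}
```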


🔵 LOW - Inline function in loop creates new references per item (2 occurrences)

Category: performance

📍 View all locations
- File: frontend/src/pages/WorkspaceSettings/AgentConfig/AgentLLMSelection/index.jsx:260
  Description: Inside filteredLLMs.map(), an inline arrow function onClick={() => updateLLMChoice(llm.value)} cre...
  Suggestion: For lists with many items, consider passing llm.value to a memoized handler or using a component tha...
  Confidence: 62%
- File: frontend/src/pages/WorkspaceSettings/ChatSettings/WorkspaceLLMSelection/index.jsx:207
  Description: Inside filteredLLMs.map(), an inline arrow function onClick={() => updateLLMChoice(llm.value)} cre...
  Suggestion: For lists with many items, consider passing llm.value to a memoized handler or using a component tha...
  Confidence: 62%

Rule: react_inline_function_props
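In React this is typically done with useCallback plus a per-item component; the underlying idea of reusing one function reference per item key can be shown framework-free (all names illustrative):

```javascript
// Sketch: cache one handler per item value so repeated "renders" reuse
// the same function reference instead of allocating a new closure.
function makeHandlerCache(updateLLMChoice) {
  const cache = new Map();
  return function getHandler(value) {
    if (!cache.has(value)) cache.set(value, () => updateLLMChoice(value));
    return cache.get(value);
  };
}

const calls = [];
const getHandler = makeHandlerCache((v) => calls.push(v));
const h1 = getHandler("ollama-prod");
const h2 = getHandler("ollama-prod");
console.log(h1 === h2); // → true: same reference across "renders"
h1(); // invokes updateLLMChoice with the cached value
```

Stable references matter mostly when the child is memoized (e.g. React.memo); otherwise the allocation cost per render is usually negligible, which is consistent with the finding's low confidence.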


Powered by diffray - AI Code Review Agent
