2 changes: 2 additions & 0 deletions backend/src/api/admin/index.ts
@@ -7,6 +7,7 @@ import { adminEmbeddings } from "./embeddings";
import { adminModels } from "./models";
import { adminProviders } from "./providers";
import { adminRateLimits } from "./rateLimits";
import { adminStats } from "./stats";
import { adminUpstream } from "./upstream";
import { adminUsage } from "./usage";

@@ -27,6 +28,7 @@ export const routes = new Elysia({
  .use(adminProviders)
  .use(adminModels)
  .use(adminEmbeddings)
  .use(adminStats)
  .get("/", () => true, {
    detail: { description: "Check whether the admin secret is valid." },
  })
255 changes: 173 additions & 82 deletions backend/src/api/admin/providers.ts
@@ -8,6 +8,170 @@ import {
  updateProvider,
  listModelsByProvider,
} from "@/db";
import type { ProviderTypeEnumType } from "@/db/schema";

// ============================================
// Provider Test Strategy Pattern
// ============================================

export interface ProviderTestResult {
  success: boolean;
  message?: string;
  models: { id: string; owned_by?: string }[];
}

interface Provider {
  id: number;
  name: string;
  type: ProviderTypeEnumType;
  baseUrl: string;
  apiKey: string | null;
  apiVersion: string | null;
}

type ProviderTestFn = (provider: Provider) => Promise<ProviderTestResult>;

/**
 * Check if an error indicates that the OpenAI models endpoint is unavailable.
 * This helper detects 404/405 errors which indicate the endpoint doesn't exist
 * but the connection itself may be working.
 */
function isModelEndpointUnavailable(error: Error & { status?: number }): boolean {
  const errorMessage = error.message || "";
  return (
    error.status === 404 ||
    error.status === 405 ||
    errorMessage.includes("404") ||
    errorMessage.includes("405") ||
    errorMessage.includes("Not Found") ||
    errorMessage.includes("Method Not Allowed")
  );
}

/**
 * Test Anthropic provider connection by sending a minimal messages request.
 * Anthropic doesn't have a /models endpoint, so we test auth via messages API.
 */
async function testAnthropicConnection(
  provider: Provider,
): Promise<ProviderTestResult> {
  const baseUrl = provider.baseUrl.endsWith("/")
    ? provider.baseUrl.slice(0, -1)
    : provider.baseUrl;

  const response = await fetch(`${baseUrl}/messages`, {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      "anthropic-version": provider.apiVersion || "2023-06-01",
      ...(provider.apiKey && { "x-api-key": provider.apiKey }),
    },
    body: JSON.stringify({
      model: "claude-3-haiku-20240307", // Use a common model for testing
      messages: [{ role: "user", content: "Hi" }],
      max_tokens: 1,
    }),
  });
Comment on lines +62 to +74

⚠️ Potential issue | 🟠 Major

Missing request timeout; the connection test can hang indefinitely.

The fetch call sets no timeout, so if the target server never responds the request can hang indefinitely, hurting user experience and tying up system resources.

🔧 Suggested fix: add a timeout via AbortController
 async function testAnthropicConnection(
   provider: Provider,
 ): Promise<ProviderTestResult> {
   const baseUrl = provider.baseUrl.endsWith("/")
     ? provider.baseUrl.slice(0, -1)
     : provider.baseUrl;

+  const controller = new AbortController();
+  const timeoutId = setTimeout(() => controller.abort(), 10000); // 10 second timeout
+
+  let response: Response;
+  try {
-  const response = await fetch(`${baseUrl}/messages`, {
+    response = await fetch(`${baseUrl}/messages`, {
       method: "POST",
       headers: {
         "Content-Type": "application/json",
         "anthropic-version": provider.apiVersion || "2023-06-01",
         ...(provider.apiKey && { "x-api-key": provider.apiKey }),
       },
       body: JSON.stringify({
         model: "claude-3-haiku-20240307",
         messages: [{ role: "user", content: "Hi" }],
         max_tokens: 1,
       }),
+      signal: controller.signal,
     });
+  } finally {
+    clearTimeout(timeoutId);
+  }
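A further, optional refactor that is not part of this PR: the same timeout concern would apply to any provider test issuing raw fetch calls, so the AbortController wiring could live in a small shared helper. A minimal sketch, assuming a 10-second default and the native fetch/AbortController available in Bun and Node 18+ (the helper name is illustrative):

// Hypothetical helper, not in the PR: wraps fetch with an abort-based timeout.
async function fetchWithTimeout(
  input: string | URL,
  init: RequestInit = {},
  timeoutMs = 10_000,
): Promise<Response> {
  const controller = new AbortController();
  const timeoutId = setTimeout(() => controller.abort(), timeoutMs);
  try {
    // Forward the caller's options, attaching our abort signal.
    return await fetch(input, { ...init, signal: controller.signal });
  } finally {
    clearTimeout(timeoutId);
  }
}

testAnthropicConnection could then call fetchWithTimeout(`${baseUrl}/messages`, { ... }) and let the resulting abort error surface through the existing outer catch as a 502.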


  if (!response.ok) {
    const text = await response.text();
    // Check if the error is just about invalid model (which means auth is working)
    if (response.status === 400 && text.includes("model")) {
      return {
        success: true,
        message: "Connection successful (API key valid)",
        models: [],
      };
    }
    throw new Error(`API error: ${response.status} ${text}`);
Comment on lines +76 to +86

⚠️ Potential issue | 🟡 Minor

The error-detection logic is fragile.

Line 79 decides that authentication succeeded by checking text.includes("model"). This check is too broad: any 400 error whose body happens to contain the word "model" would be misread as a successful auth test. Prefer matching on a specific error code or error type.

🔧 Suggested fix: more precise error detection
   if (!response.ok) {
     const text = await response.text();
-    // Check if the error is just about invalid model (which means auth is working)
-    if (response.status === 400 && text.includes("model")) {
+    // Check for specific Anthropic error types that indicate auth is working
+    // Error types: invalid_request_error with model-related messages
+    if (response.status === 400) {
+      try {
+        const errorBody = JSON.parse(text);
+        if (errorBody.error?.type === "invalid_request_error" && 
+            errorBody.error?.message?.toLowerCase().includes("model")) {
+          return {
+            success: true,
+            message: "Connection successful (API key valid)",
+            models: [],
+          };
+        }
+      } catch {
+        // If JSON parsing fails, fall through to throw
+      }
+    }
-      return {
-        success: true,
-        message: "Connection successful (API key valid)",
-        models: [],
-      };
-    }
     throw new Error(`API error: ${response.status} ${text}`);
   }
📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change
-  if (!response.ok) {
-    const text = await response.text();
-    // Check if the error is just about invalid model (which means auth is working)
-    if (response.status === 400 && text.includes("model")) {
-      return {
-        success: true,
-        message: "Connection successful (API key valid)",
-        models: [],
-      };
-    }
-    throw new Error(`API error: ${response.status} ${text}`);
+  if (!response.ok) {
+    const text = await response.text();
+    // Check for specific Anthropic error types that indicate auth is working
+    // Error types: invalid_request_error with model-related messages
+    if (response.status === 400) {
+      try {
+        const errorBody = JSON.parse(text);
+        if (
+          errorBody.error?.type === "invalid_request_error" &&
+          errorBody.error?.message?.toLowerCase().includes("model")
+        ) {
+          return {
+            success: true,
+            message: "Connection successful (API key valid)",
+            models: [],
+          };
+        }
+      } catch {
+        // If JSON parsing fails, fall through to throw
+      }
+    }
+    throw new Error(`API error: ${response.status} ${text}`);
+  }
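For reference, Anthropic error responses are documented to use a nested error envelope roughly like the sketch below; the exact fields are an assumption from public docs rather than something verified in this PR, but it is the shape the JSON.parse branch above is matching against:

// Rough shape of an Anthropic 400 error body (assumed, for illustration only).
// A body with error.type === "invalid_request_error" and a model-related message
// is what the suggestion above treats as "auth OK, model invalid".
interface AnthropicErrorBody {
  type: "error";
  error: {
    type: string;    // e.g. "invalid_request_error"
    message: string; // e.g. a message naming the unknown model id
  };
}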

  }

  return {
    success: true,
    message: "Connection successful",
    models: [],
  };
}

/**
 * Test OpenAI Responses provider connection.
 * The /models endpoint might not be available for all deployments.
 */
async function testOpenAIResponsesConnection(
  provider: Provider,
): Promise<ProviderTestResult> {
  const client = new OpenAI({
    baseURL: provider.baseUrl,
    apiKey: provider.apiKey || "not-required",
  });

  try {
    const models = await client.models.list();
    return {
      success: true,
      models: models.data.map((m) => ({
        id: m.id,
        owned_by: m.owned_by,
      })),
    };
  } catch (e) {
    // Check if it's a 404/405 (endpoint not available) vs real connection error
    const error = e as Error & { status?: number };

    if (isModelEndpointUnavailable(error)) {
      return {
        success: true,
        message: "Connection configured (models endpoint not available)",
        models: [],
      };
    }

    // Re-throw actual connection errors to be handled by outer catch
    throw e;
  }
Comment on lines +118 to +131

medium

The error handling logic to detect if a models endpoint is unavailable (checking for 404/405 errors) is duplicated here and in the /remote-models endpoint handler (lines 353-372). To improve maintainability and reduce code duplication, you could extract this logic into a shared helper function.

For example:

function isOpenAIModelEndpointError(error: Error & { status?: number }): boolean {
  const errorMessage = error.message || "";
  return (
    error.status === 404 ||
    error.status === 405 ||
    errorMessage.includes("404") ||
    errorMessage.includes("405") ||
    errorMessage.includes("Not Found") ||
    errorMessage.includes("Method Not Allowed")
  );
}

You could then use this helper in both places to simplify the code.

}

/**
 * Test standard OpenAI-compatible provider connection (openai, azure, ollama).
 * Uses the /models endpoint to verify connection and list available models.
 */
async function testDefaultOpenAIConnection(
  provider: Provider,
): Promise<ProviderTestResult> {
  const client = new OpenAI({
    baseURL: provider.baseUrl,
    apiKey: provider.apiKey || "not-required",
  });

  const models = await client.models.list();
  return {
    success: true,
    models: models.data.map((m) => ({
      id: m.id,
      owned_by: m.owned_by,
    })),
  };
}
Comment on lines +134 to +154

⚠️ Potential issue | 🟡 Minor

Standard OpenAI-compatible providers don't handle an unavailable models endpoint.

testDefaultOpenAIConnection is used for providers such as openai, azure, and ollama, but unlike testOpenAIResponsesConnection it does not handle the case where the /models endpoint is unavailable. If the /models endpoint of one of these providers is unavailable, the request fails straight to a 502 error.

Consider unifying the handling:

🔧 Suggested change
 async function testDefaultOpenAIConnection(
   provider: Provider,
 ): Promise<ProviderTestResult> {
   const client = new OpenAI({
     baseURL: provider.baseUrl,
     apiKey: provider.apiKey || "not-required",
   });

-  const models = await client.models.list();
-  return {
-    success: true,
-    models: models.data.map((m) => ({
-      id: m.id,
-      owned_by: m.owned_by,
-    })),
-  };
+  try {
+    const models = await client.models.list();
+    return {
+      success: true,
+      models: models.data.map((m) => ({
+        id: m.id,
+        owned_by: m.owned_by,
+      })),
+    };
+  } catch (e) {
+    const error = e as Error & { status?: number };
+    if (isModelEndpointUnavailable(error)) {
+      return {
+        success: true,
+        message: "Connection configured (models endpoint not available)",
+        models: [],
+      };
+    }
+    throw e;
+  }
 }
🤖 Prompt for AI Agents
In `@backend/src/api/admin/providers.ts` around lines 134 - 154, The
testDefaultOpenAIConnection function doesn't handle the case where the /models
endpoint is unavailable (client.models.list), causing a 502 to bubble up; update
testDefaultOpenAIConnection to mirror the error handling in
testOpenAIResponsesConnection by wrapping the client.models.list call in a
try/catch, detect endpoint-unavailable errors and return a ProviderTestResult
with success: false and a clear reason/message (and any relevant error details)
instead of throwing; ensure you reference the OpenAI instance creation and the
client.models.list call so the catch covers that operation and returns the
standardized failure shape used elsewhere.


/**
 * Map provider types to their specific test functions.
 * Providers not in this map will use the default OpenAI test.
 */
const providerTestHandlers: Partial<Record<ProviderTypeEnumType, ProviderTestFn>> = {
  anthropic: testAnthropicConnection,
  "openai-responses": testOpenAIResponsesConnection,
};

/**
 * Get the appropriate test function for a provider type.
 */
function getProviderTestFn(type: ProviderTypeEnumType): ProviderTestFn {
  return providerTestHandlers[type] ?? testDefaultOpenAIConnection;
}

// ============================================
// Provider Routes
// ============================================

export const adminProviders = new Elysia({ prefix: "/providers" })
// List all providers
@@ -144,87 +308,8 @@ export const adminProviders = new Elysia({ prefix: "/providers" })
}

try {
// For Anthropic, send a minimal messages request to test the connection
if (provider.type === "anthropic") {
const baseUrl = provider.baseUrl.endsWith("/")
? provider.baseUrl.slice(0, -1)
: provider.baseUrl;

const response = await fetch(`${baseUrl}/messages`, {
method: "POST",
headers: {
"Content-Type": "application/json",
"anthropic-version": provider.apiVersion || "2023-06-01",
...(provider.apiKey && { "x-api-key": provider.apiKey }),
},
body: JSON.stringify({
model: "claude-3-haiku-20240307", // Use a common model for testing
messages: [{ role: "user", content: "Hi" }],
max_tokens: 1,
}),
});

if (!response.ok) {
const text = await response.text();
// Check if the error is just about invalid model (which means auth is working)
if (response.status === 400 && text.includes("model")) {
return {
success: true,
message: "Connection successful (API key valid)",
models: [],
};
}
throw new Error(`API error: ${response.status} ${text}`);
}

return {
success: true,
message: "Connection successful",
models: [],
};
}

// For openai-responses, try the standard /models endpoint first
// since most deployments share the same OpenAI account
if (provider.type === "openai-responses") {
const client = new OpenAI({
baseURL: provider.baseUrl,
apiKey: provider.apiKey || "not-required",
});

try {
const models = await client.models.list();
return {
success: true,
models: models.data.map((m) => ({
id: m.id,
owned_by: m.owned_by,
})),
};
} catch {
// If /models doesn't work, just report success for connection test
return {
success: true,
message: "Connection configured (models endpoint not available)",
models: [],
};
}
}

// For other types (openai, azure, ollama), use the standard approach
const client = new OpenAI({
baseURL: provider.baseUrl,
apiKey: provider.apiKey || "not-required",
});

const models = await client.models.list();
return {
success: true,
models: models.data.map((m) => ({
id: m.id,
owned_by: m.owned_by,
})),
};
const testFn = getProviderTestFn(provider.type);
return await testFn(provider);
} catch (e) {
return status(502, {
success: false,
@@ -274,13 +359,19 @@ export const adminProviders = new Elysia({ prefix: "/providers" })
})),
};
} catch (e) {
const error = e as Error & { status?: number };

// For openai-responses, the /models endpoint might not be available
if (provider.type === "openai-responses") {
if (
provider.type === "openai-responses" &&
isModelEndpointUnavailable(error)
) {
return status(400, {
error: "Models list endpoint not available for this provider. Please configure models manually.",
unsupported: true,
});
}

return status(502, {
error: e instanceof Error ? e.message : "Unknown error",
});
Comment on lines 361 to 377

⚠️ Potential issue | 🟠 Major

Other OpenAI-compatible providers still return a 502 error when their /models endpoint is unavailable.

Currently only the openai-responses and anthropic types get special handling (a 400 response) when the endpoint is unavailable. If the /models endpoint of another OpenAI-compatible provider (such as openai, azure, or ollama) is unavailable or returns 404/405, the request returns a 502 instead of the friendlier 400 hint. Consider handling endpoint unavailability uniformly for all OpenAI-compatible providers, or explaining in a code comment why certain provider types are exempt.

🤖 Prompt for AI Agents
In `@backend/src/api/admin/providers.ts` around lines 361 - 377, Current logic
only treats provider.type === "openai-responses" and "anthropic" specially when
isModelEndpointUnavailable(error) is true, causing other OpenAI-compatible
providers to return 502; change the conditional to detect all OpenAI-compatible
providers (e.g., using or adding a helper like isOpenAICompatible(provider.type)
that returns true for "openai", "azure", "ollama", "openai-responses", etc.)
and, when isModelEndpointUnavailable(error) is true, return status(400,
{...unsupported:true}) instead of falling through to the status(502) branch;
alternatively, if any providers must be exempted, add a clear code comment
explaining why those types are excluded and explicitly list them in the
condition.
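A minimal sketch of the helper this suggests, assuming uniform handling is wanted (the exact set of type literals is taken from the provider types discussed in this file and may need adjusting against ProviderTypeEnumType in the schema):

// Hypothetical helper, not in the PR: provider types whose /models endpoint may
// legitimately be missing even though the connection itself works.
const OPENAI_COMPATIBLE_TYPES: ProviderTypeEnumType[] = [
  "openai",
  "azure",
  "ollama",
  "openai-responses",
];

function isOpenAICompatible(type: ProviderTypeEnumType): boolean {
  return OPENAI_COMPATIBLE_TYPES.includes(type);
}

// The 400 branch in the /remote-models catch block would then read:
//   if (isOpenAICompatible(provider.type) && isModelEndpointUnavailable(error)) { ... }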
