feat(providers): add Azure OpenAI provider #219

nagarjunr wants to merge 1 commit into rohitg00:main
Conversation
📝 Walkthrough

Adds Azure OpenAI provider support to the system. Updates configuration detection to recognize Azure OpenAI environment variables and implements a new provider class.
🚥 Pre-merge checks: ✅ 5 passed
Actionable comments posted: 2
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@src/config.ts`:
- Around line 90-101: The fallback provider list is filtering out "azure-openai"
because VALID_PROVIDERS doesn't include it; update VALID_PROVIDERS to include
"azure-openai" so loadFallbackConfig will honor
FALLBACK_PROVIDERS="azure-openai". Locate the VALID_PROVIDERS constant and add
the string "azure-openai" (and ensure any related enum/array used by
loadFallbackConfig and detectProvider is updated consistently) so that
loadFallbackConfig and detectProvider treat Azure as a valid fallback provider.
In `@src/providers/azure-openai.ts`:
- Around line 38-52: The fetch call that sets response (the POST to url with
headers using this.apiKey and body containing model: this.deploymentName and
max_tokens: this.maxTokens) needs a request timeout to avoid hanging; wrap the
fetch in an AbortController, start a timer (configurable or a new
this.requestTimeout) that calls controller.abort() after the timeout, pass
signal: controller.signal into fetch, and clear the timer after fetch completes;
also ensure the calling method handles the abort/timeout error (AbortError)
appropriately.
📒 Files selected for processing (5)
- src/config.ts
- src/providers/azure-openai.ts
- src/providers/index.ts
- src/types.ts
- src/viewer/index.html
```ts
if (
  hasRealValue(env["AZURE_OPENAI_API_KEY"]) &&
  hasRealValue(env["AZURE_OPENAI_ENDPOINT"]) &&
  hasRealValue(env["AZURE_OPENAI_DEPLOYMENT"])
) {
  return {
    provider: "azure-openai",
    model: env["AZURE_OPENAI_DEPLOYMENT"],
    maxTokens,
    baseURL: env["AZURE_OPENAI_ENDPOINT"],
  };
}
```
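For context, here is a self-contained sketch of the all-or-nothing detection this block implements. `detectAzure` is a hypothetical stand-in for the relevant branch of `detectProvider`, and `hasRealValue` is assumed to reject missing or blank values; the project's actual helper may be stricter (e.g. rejecting placeholder strings).

```typescript
// Hypothetical sketch of the all-or-nothing Azure detection shown above.
type Env = Record<string, string | undefined>;

// Assumption: "real value" means a non-empty, non-whitespace string.
function hasRealValue(value: string | undefined): boolean {
  return typeof value === "string" && value.trim().length > 0;
}

interface AzureConfig {
  provider: "azure-openai";
  model: string;
  baseURL: string;
}

function detectAzure(env: Env): AzureConfig | null {
  if (
    hasRealValue(env["AZURE_OPENAI_API_KEY"]) &&
    hasRealValue(env["AZURE_OPENAI_ENDPOINT"]) &&
    hasRealValue(env["AZURE_OPENAI_DEPLOYMENT"])
  ) {
    return {
      provider: "azure-openai",
      model: env["AZURE_OPENAI_DEPLOYMENT"]!,
      baseURL: env["AZURE_OPENAI_ENDPOINT"]!,
    };
  }
  // Partial config falls through to the next provider check (noop).
  return null;
}
```

This matches the test plan's requirement that no partial Azure config is accepted.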
Add azure-openai to fallback allowlist.
detectProvider now supports Azure, but loadFallbackConfig() still filters fallback providers through VALID_PROVIDERS without "azure-openai", so FALLBACK_PROVIDERS=azure-openai is silently dropped.
Suggested fix

```diff
 const VALID_PROVIDERS = new Set([
   "anthropic",
+  "azure-openai",
   "gemini",
   "openrouter",
   "agent-sdk",
   "minimax",
 ]);
```
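To see why the missing entry matters, here is a minimal sketch of the allowlist filtering. `parseFallbackProviders` is a hypothetical stand-in for the filtering inside `loadFallbackConfig`; the real code in src/config.ts differs, but the behavior is the point: any provider name absent from `VALID_PROVIDERS` is silently dropped.

```typescript
// Illustrative allowlist, including the entry the review asks to add.
const VALID_PROVIDERS = new Set([
  "anthropic",
  "azure-openai", // without this entry, FALLBACK_PROVIDERS=azure-openai is ignored
  "gemini",
  "openrouter",
  "agent-sdk",
  "minimax",
]);

// Hypothetical stand-in for the FALLBACK_PROVIDERS parsing/filtering step.
function parseFallbackProviders(raw: string | undefined): string[] {
  if (!raw) return [];
  return raw
    .split(",")
    .map((name) => name.trim())
    .filter((name) => VALID_PROVIDERS.has(name)); // unknown names are dropped silently
}
```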
```ts
const response = await fetch(url, {
  method: "POST",
  headers: {
    "Content-Type": "application/json",
    "api-key": this.apiKey,
  },
  body: JSON.stringify({
    model: this.deploymentName,
    max_tokens: this.maxTokens,
    messages: [
      { role: "system", content: systemPrompt },
      { role: "user", content: userPrompt },
    ],
  }),
});
```
Add a request timeout for Azure API calls.
External calls here can hang indefinitely; this can block compression/summarization paths when Azure stalls.
Suggested fix

```diff
 export class AzureOpenAIProvider implements MemoryProvider {
   name = "azure-openai";
+  private readonly requestTimeoutMs = 30_000;
   private apiKey: string;
   private endpoint: string;
   private deploymentName: string;
   private apiVersion: string;
   private maxTokens: number;
@@
-    const response = await fetch(url, {
-      method: "POST",
-      headers: {
-        "Content-Type": "application/json",
-        "api-key": this.apiKey,
-      },
-      body: JSON.stringify({
-        model: this.deploymentName,
-        max_tokens: this.maxTokens,
-        messages: [
-          { role: "system", content: systemPrompt },
-          { role: "user", content: userPrompt },
-        ],
-      }),
-    });
+    const controller = new AbortController();
+    const timeout = setTimeout(() => controller.abort(), this.requestTimeoutMs);
+    let response: Response;
+    try {
+      response = await fetch(url, {
+        method: "POST",
+        headers: {
+          "Content-Type": "application/json",
+          "api-key": this.apiKey,
+        },
+        signal: controller.signal,
+        body: JSON.stringify({
+          model: this.deploymentName,
+          max_tokens: this.maxTokens,
+          messages: [
+            { role: "system", content: systemPrompt },
+            { role: "user", content: userPrompt },
+          ],
+        }),
+      });
+    } catch (error) {
+      if (error instanceof Error && error.name === "AbortError") {
+        throw new Error(`azure-openai request timed out after ${this.requestTimeoutMs}ms`);
+      }
+      throw error;
+    } finally {
+      clearTimeout(timeout);
+    }
```
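The suggested fix inlines the AbortController at the call site. An alternative is a small reusable helper, sketched here under the same 30-second default; `fetchWithTimeout` is a hypothetical name, not part of the project's API.

```typescript
// Sketch of a reusable timeout wrapper around fetch. Assumes a runtime with
// global fetch and AbortController (Node 18+, browsers).
async function fetchWithTimeout(
  url: string,
  init: RequestInit,
  timeoutMs = 30_000,
): Promise<Response> {
  const controller = new AbortController();
  const timer = setTimeout(() => controller.abort(), timeoutMs);
  try {
    // The caller's init is preserved; only the signal is added.
    return await fetch(url, { ...init, signal: controller.signal });
  } catch (error) {
    if (error instanceof Error && error.name === "AbortError") {
      throw new Error(`request timed out after ${timeoutMs}ms`);
    }
    throw error;
  } finally {
    clearTimeout(timer); // avoid a stray abort after a successful response
  }
}
```

A helper like this would let other providers share the same timeout behavior instead of repeating the AbortController wiring per call.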
Summary
Adds support for Azure OpenAI Service as an LLM provider for compression, summarization, and graph extraction.
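The new provider targets Azure's deployment-scoped chat completions route. A minimal sketch of that URL construction follows; `buildAzureChatUrl` is an illustrative helper, not the provider's actual method, and the API version used in the test is a placeholder.

```typescript
// Sketch of the Azure-style URL described in the changes below:
// {endpoint}/openai/deployments/{name}/chat/completions?api-version=...
function buildAzureChatUrl(
  endpoint: string,
  deploymentName: string,
  apiVersion: string,
): string {
  const base = endpoint.replace(/\/+$/, ""); // tolerate a trailing slash
  return `${base}/openai/deployments/${deploymentName}/chat/completions?api-version=${apiVersion}`;
}
```

Requests to this URL authenticate with an `api-key` header rather than `Authorization: Bearer`, as noted in the changes.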
Changes
- src/providers/azure-openai.ts (new) — `AzureOpenAIProvider` class. Constructs the Azure-style URL (`/openai/deployments/{name}/chat/completions?api-version=...`), uses the `api-key` header instead of `Authorization: Bearer`, and otherwise follows the same OpenAI-compatible request/response shape as the existing `OpenRouterProvider`.
- src/types.ts — adds `"azure-openai"` to the `ProviderType` union.
- src/config.ts — detects all three required Azure env vars and returns a `provider: "azure-openai"` config; updates `detectLlmProviderKind()` to return `"llm"` when Azure vars are set; updates the "no key found" error message.
- src/providers/index.ts — wires the `case "azure-openai"` branch into `createBaseProvider`.
- src/viewer/index.html — updates the "No LLM provider key set" flag banner to show both Anthropic and Azure setup instructions.

Usage
Add to `~/.agentmemory/.env`. Then restart:

`npx @agentmemory/agentmemory`

Test plan

- `provider` shows as `azure-openai` in the health endpoint response
- Missing `AZURE_OPENAI_ENDPOINT` or `AZURE_OPENAI_DEPLOYMENT` falls through to noop (no partial config accepted)
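For reference, an illustrative `~/.agentmemory/.env` fragment; the values shown are placeholders, not real credentials.

```shell
# ~/.agentmemory/.env: placeholder values, not real credentials
AZURE_OPENAI_API_KEY=your-azure-openai-key
AZURE_OPENAI_ENDPOINT=https://your-resource.openai.azure.com
AZURE_OPENAI_DEPLOYMENT=your-deployment-name
```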