
feat(providers): add Azure OpenAI provider #219

Open
nagarjunr wants to merge 1 commit into rohitg00:main from nagarjunr:feat/azure-openai-provider

Conversation

@nagarjunr nagarjunr commented Apr 30, 2026

Summary

Adds support for Azure OpenAI Service as an LLM provider for compression, summarization, and graph extraction.

Changes

  • src/providers/azure-openai.ts (new) — AzureOpenAIProvider class. Constructs the Azure-style URL (/openai/deployments/{name}/chat/completions?api-version=...), uses the api-key header instead of Authorization: Bearer, and otherwise follows the same OpenAI-compatible request/response shape as the existing OpenRouterProvider.

  • src/types.ts — adds "azure-openai" to the ProviderType union.

  • src/config.ts — detects all three required Azure env vars and returns provider: "azure-openai" config; updates detectLlmProviderKind() to return "llm" when Azure vars are set; updates the "no key found" error message.

  • src/providers/index.ts — wires the case "azure-openai" branch into createBaseProvider.

  • src/viewer/index.html — updates the "No LLM provider key set" flag banner to show both Anthropic and Azure setup instructions.
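
As a rough sketch of the request shape described in the azure-openai.ts bullet above, the Azure-specific pieces are the deployment-scoped URL and the api-key header. Function names here (buildAzureChatUrl, buildAzureHeaders) are illustrative, not identifiers from this PR:

```typescript
// Sketch of the Azure-style request plumbing described above.
// These function names are illustrative, not code from this PR.
function buildAzureChatUrl(
  endpoint: string,
  deployment: string,
  apiVersion: string = "2024-08-01-preview",
): string {
  // Azure routes by deployment name in the path, plus an api-version query param,
  // unlike openai.com where the model is selected in the request body alone
  return `${endpoint}/openai/deployments/${deployment}/chat/completions?api-version=${apiVersion}`;
}

function buildAzureHeaders(apiKey: string): Record<string, string> {
  // Azure OpenAI authenticates with an `api-key` header,
  // not the `Authorization: Bearer` scheme used by openai.com
  return { "Content-Type": "application/json", "api-key": apiKey };
}
```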

Usage

Add to ~/.agentmemory/.env:

AZURE_OPENAI_API_KEY=<your-azure-api-key>
AZURE_OPENAI_ENDPOINT=https://<resource-name>.openai.azure.com
AZURE_OPENAI_DEPLOYMENT=<deployment-name>

# Optional — defaults to 2024-08-01-preview
AZURE_OPENAI_API_VERSION=2024-08-01-preview

Then restart: npx @agentmemory/agentmemory
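
Detection is all-or-nothing: a partial set of Azure variables is ignored rather than producing a broken provider. A minimal sketch of that behavior (detectAzureConfig is a hypothetical name, not the PR's actual detectProvider):

```typescript
// Sketch of all-or-nothing Azure env detection, with an illustrative
// function name; not the actual detectProvider implementation.
interface AzureConfig {
  provider: "azure-openai";
  apiKey: string;
  endpoint: string;
  deployment: string;
  apiVersion: string;
}

function detectAzureConfig(env: Record<string, string | undefined>): AzureConfig | null {
  const apiKey = env["AZURE_OPENAI_API_KEY"];
  const endpoint = env["AZURE_OPENAI_ENDPOINT"];
  const deployment = env["AZURE_OPENAI_DEPLOYMENT"];
  // All three required vars must be present; partial config is ignored
  if (!apiKey || !endpoint || !deployment) return null;
  return {
    provider: "azure-openai",
    apiKey,
    endpoint,
    deployment,
    apiVersion: env["AZURE_OPENAI_API_VERSION"] ?? "2024-08-01-preview",
  };
}
```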

Test plan

  • Server starts without error when all three Azure env vars are set
  • provider shows as azure-openai in health endpoint response
  • Compression runs successfully against an Azure-hosted deployment
  • Missing AZURE_OPENAI_ENDPOINT or AZURE_OPENAI_DEPLOYMENT falls through to noop (no partial config accepted)
  • Viewer flag banner shows Azure setup instructions when provider is noop
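
The noop-fallthrough bullet above could also be captured as an automated assertion; a minimal sketch, where resolveProviderKind is a hypothetical helper rather than code from this PR:

```typescript
// Hypothetical helper mirroring the manual test plan above:
// any missing Azure variable must fall through to the noop provider.
type ProviderKind = "azure-openai" | "noop";

function resolveProviderKind(env: Record<string, string | undefined>): ProviderKind {
  const haveAll =
    !!env["AZURE_OPENAI_API_KEY"] &&
    !!env["AZURE_OPENAI_ENDPOINT"] &&
    !!env["AZURE_OPENAI_DEPLOYMENT"];
  return haveAll ? "azure-openai" : "noop";
}

// Partial config (endpoint missing) is rejected outright
if (resolveProviderKind({ AZURE_OPENAI_API_KEY: "k", AZURE_OPENAI_DEPLOYMENT: "d" }) !== "noop") {
  throw new Error("partial Azure config should fall through to noop");
}
```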

Summary by CodeRabbit

  • New Features

    • Added Azure OpenAI as a supported LLM provider option
  • Documentation

    • Updated configuration guidance to include Azure OpenAI environment variables as a valid setup alternative


vercel Bot commented Apr 30, 2026

Someone is attempting to deploy a commit to rohitg00's projects Team on Vercel.

A member of the Team first needs to authorize it.


coderabbitai Bot commented Apr 30, 2026

📝 Walkthrough

Adds Azure OpenAI provider support to the system. Updates configuration detection to recognize Azure OpenAI environment variables, implements a new AzureOpenAIProvider class for making API calls, integrates the provider into the factory function, extends the ProviderType union, and updates the UI warning banner to mention Azure OpenAI as a configuration option.

Changes

  • Type definitions and configuration (src/types.ts, src/config.ts) — Added "azure-openai" to the ProviderType union. Updated detectProvider to recognize the Azure OpenAI environment variables (AZURE_OPENAI_API_KEY, AZURE_OPENAI_ENDPOINT, AZURE_OPENAI_DEPLOYMENT) and create the corresponding provider config. Updated detectLlmProviderKind to treat Azure OpenAI env vars as evidence of an available LLM provider.
  • Provider implementation (src/providers/azure-openai.ts, src/providers/index.ts) — Added a new AzureOpenAIProvider class implementing MemoryProvider, handling Azure OpenAI API authentication and chat-completions requests. Updated createBaseProvider to instantiate AzureOpenAIProvider when the provider type is "azure-openai".
  • UI updates (src/viewer/index.html) — Updated the "No LLM provider key set" warning banner to advertise both Anthropic and Azure OpenAI as valid configuration options, including setup instructions for the Azure OpenAI environment variables.

Estimated code review effort

🎯 3 (Moderate) | ⏱️ ~20 minutes


Poem

🐰✨ Azure skies now open wide,
A new provider hops inside!
Config detection, API calls flow—
Another LLM's ready to go! 🌈

🚥 Pre-merge checks — ✅ 5 passed

  • Description Check — ✅ Passed: skipped because CodeRabbit's high-level summary is enabled.
  • Title Check — ✅ Passed: the title accurately describes the main change: adding Azure OpenAI as a new LLM provider across the codebase.
  • Docstring Coverage — ✅ Passed: no functions found in the changed files, so docstring coverage was skipped.
  • Linked Issues Check — ✅ Passed: skipped because no linked issues were found for this pull request.
  • Out of Scope Changes Check — ✅ Passed: skipped because no linked issues were found for this pull request.



@coderabbitai coderabbitai Bot left a comment


Actionable comments posted: 2

🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@src/config.ts`:
- Around line 90-101: The fallback provider list is filtering out "azure-openai"
because VALID_PROVIDERS doesn't include it; update VALID_PROVIDERS to include
"azure-openai" so loadFallbackConfig will honor
FALLBACK_PROVIDERS="azure-openai". Locate the VALID_PROVIDERS constant and add
the string "azure-openai" (and ensure any related enum/array used by
loadFallbackConfig and detectProvider is updated consistently) so that
loadFallbackConfig and detectProvider treat Azure as a valid fallback provider.

In `@src/providers/azure-openai.ts`:
- Around line 38-52: The fetch call that sets response (the POST to url with
headers using this.apiKey and body containing model: this.deploymentName and
max_tokens: this.maxTokens) needs a request timeout to avoid hanging; wrap the
fetch in an AbortController, start a timer (configurable or a new
this.requestTimeout) that calls controller.abort() after the timeout, pass
signal: controller.signal into fetch, and clear the timer after fetch completes;
also ensure the calling method handles the abort/timeout error (AbortError)
appropriately.

ℹ️ Review info
⚙️ Run configuration

Configuration used: defaults

Review profile: CHILL

Plan: Pro

Run ID: c8f28d4a-c694-410d-9771-0ac2dc18e39b

📥 Commits

Reviewing files that changed from the base of the PR and between 94fc119 and f87612f.

📒 Files selected for processing (5)
  • src/config.ts
  • src/providers/azure-openai.ts
  • src/providers/index.ts
  • src/types.ts
  • src/viewer/index.html

Comment thread src/config.ts
Comment on lines +90 to +101
if (
  hasRealValue(env["AZURE_OPENAI_API_KEY"]) &&
  hasRealValue(env["AZURE_OPENAI_ENDPOINT"]) &&
  hasRealValue(env["AZURE_OPENAI_DEPLOYMENT"])
) {
  return {
    provider: "azure-openai",
    model: env["AZURE_OPENAI_DEPLOYMENT"],
    maxTokens,
    baseURL: env["AZURE_OPENAI_ENDPOINT"],
  };
}

⚠️ Potential issue | 🟡 Minor | ⚡ Quick win

Add azure-openai to fallback allowlist.

detectProvider now supports Azure, but loadFallbackConfig() still filters fallback providers through VALID_PROVIDERS without "azure-openai", so FALLBACK_PROVIDERS=azure-openai is silently dropped.

Suggested fix
 const VALID_PROVIDERS = new Set([
   "anthropic",
+  "azure-openai",
   "gemini",
   "openrouter",
   "agent-sdk",
   "minimax",
 ]);

Comment on lines +38 to +52
const response = await fetch(url, {
  method: "POST",
  headers: {
    "Content-Type": "application/json",
    "api-key": this.apiKey,
  },
  body: JSON.stringify({
    model: this.deploymentName,
    max_tokens: this.maxTokens,
    messages: [
      { role: "system", content: systemPrompt },
      { role: "user", content: userPrompt },
    ],
  }),
});

⚠️ Potential issue | 🟠 Major | ⚡ Quick win

Add a request timeout for Azure API calls.

External calls here can hang indefinitely; this can block compression/summarization paths when Azure stalls.

Suggested fix
 export class AzureOpenAIProvider implements MemoryProvider {
   name = "azure-openai";
+  private readonly requestTimeoutMs = 30_000;
   private apiKey: string;
   private endpoint: string;
   private deploymentName: string;
   private apiVersion: string;
   private maxTokens: number;
@@
-    const response = await fetch(url, {
-      method: "POST",
-      headers: {
-        "Content-Type": "application/json",
-        "api-key": this.apiKey,
-      },
-      body: JSON.stringify({
-        model: this.deploymentName,
-        max_tokens: this.maxTokens,
-        messages: [
-          { role: "system", content: systemPrompt },
-          { role: "user", content: userPrompt },
-        ],
-      }),
-    });
+    const controller = new AbortController();
+    const timeout = setTimeout(() => controller.abort(), this.requestTimeoutMs);
+    let response: Response;
+    try {
+      response = await fetch(url, {
+        method: "POST",
+        headers: {
+          "Content-Type": "application/json",
+          "api-key": this.apiKey,
+        },
+        signal: controller.signal,
+        body: JSON.stringify({
+          model: this.deploymentName,
+          max_tokens: this.maxTokens,
+          messages: [
+            { role: "system", content: systemPrompt },
+            { role: "user", content: userPrompt },
+          ],
+        }),
+      });
+    } catch (error) {
+      if (error instanceof Error && error.name === "AbortError") {
+        throw new Error(`azure-openai request timed out after ${this.requestTimeoutMs}ms`);
+      }
+      throw error;
+    } finally {
+      clearTimeout(timeout);
+    }
