
fix(providers): support Azure AI Foundry (Anthropic) endpoint and corporate proxy #220

Open
nagarjunr wants to merge 6 commits into rohitg00:main from nagarjunr:local/all-fixes

Conversation


@nagarjunr nagarjunr commented Apr 30, 2026

Summary

  • Azure OpenAI / Azure AI Foundry provider: new LLM provider for Azure-hosted models. Auto-detects the deployment type from the endpoint URL (see the sketch after this list): standard Azure OpenAI ({resource}.openai.azure.com) uses the OpenAI chat completions format; Azure AI Foundry Anthropic ({resource}.services.ai.azure.com/anthropic) uses the Anthropic Messages API format with x-api-key / anthropic-version headers.
  • Corporate proxy support: the built-in fetch in Node.js 18+ ignores HTTP_PROXY/HTTPS_PROXY, causing DNS failures in corporate networks. The provider tunnels through an HTTP CONNECT proxy via tunnel-agent + node-fetch when proxy env vars are present. Proxy credentials (user:pass@host) are extracted from the URL and forwarded as Proxy-Authorization.
  • Foundry URL double-suffix guard: the endpoint is normalized so /v1/messages is not appended when it already ends with that path.
  • VALID_PROVIDERS gap: azure-openai was missing from the runtime allowlist used by loadFallbackConfig(), so FALLBACK_PROVIDERS=azure-openai was silently dropped even though ProviderType includes the value.
  • Viewer tab bar / graph canvas fixes: the tab bar no longer crushes the content area, and the graph canvas fills the viewport height correctly.
  • JSDoc coverage: full JSDoc on all exported and private members of AzureOpenAIProvider to satisfy the 80% docstring coverage check.
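
For illustration, the endpoint detection and double-suffix guard reduce to roughly the following sketch (not the PR's exact code: the constructor shape and the anthropic-version value are assumptions; only the isFoundry getter is named in the commits):

class AzureOpenAIProvider {
  constructor(
    private readonly apiKey: string,
    private readonly endpoint: string,
    private readonly deployment: string,
  ) {}

  // Foundry Anthropic endpoints live under {resource}.services.ai.azure.com.
  private get isFoundry(): boolean {
    return this.endpoint.includes(".services.ai.azure.com");
  }

  // Append /v1/messages only when the endpoint does not already end with it.
  private foundryUrl(): string {
    const base = this.endpoint.replace(/\/+$/, "");
    return base.endsWith("/v1/messages") ? base : `${base}/v1/messages`;
  }

  // Foundry takes Anthropic-style headers; standard Azure OpenAI takes api-key.
  private buildHeaders(): Record<string, string> {
    return this.isFoundry
      ? { "x-api-key": this.apiKey, "anthropic-version": "2023-06-01", "content-type": "application/json" }
      : { "api-key": this.apiKey, "content-type": "application/json" };
  }
}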

Changes

  • src/providers/azure-openai.ts: new provider with proxy tunnel, dual-format request/response, Foundry URL normalization, full JSDoc
  • src/providers/index.ts: wire up AzureOpenAIProvider
  • src/config.ts: Azure env var detection; azure-openai added to VALID_PROVIDERS
  • src/types.ts: ProviderType union includes azure-openai
  • src/viewer/index.html: tab bar overflow fix; graph canvas height fix
  • README.md: Azure OpenAI + Foundry rows in provider table; AZURE_OPENAI_* env vars with proxy note
  • CHANGELOG.md: [Unreleased] entries for all changes

Environment variables

AZURE_OPENAI_API_KEY=...
AZURE_OPENAI_ENDPOINT=https://<resource>.openai.azure.com          # Standard Azure OpenAI
# or
AZURE_OPENAI_ENDPOINT=https://<resource>.services.ai.azure.com/anthropic  # Azure AI Foundry (Anthropic)
AZURE_OPENAI_DEPLOYMENT=<deployment-name>
AZURE_OPENAI_API_VERSION=2024-08-01-preview   # optional, Azure OpenAI only

# Corporate proxy (optional — auto-detected from env)
HTTPS_PROXY=http://user:pass@proxy.corp.example.com:3128
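
For the authenticated case, here is a minimal sketch of how the credentials embedded in HTTPS_PROXY are recovered (the PR hands them to the tunnel agent as proxyAuth; the Basic header shown here is the equivalent wire form):

// Sketch: extract user:pass from the proxy URL for the CONNECT handshake.
const proxyUrl = process.env.HTTPS_PROXY || process.env.HTTP_PROXY;
if (proxyUrl) {
  const parsed = new URL(proxyUrl);
  if (parsed.username) {
    const credentials =
      `${decodeURIComponent(parsed.username)}:${decodeURIComponent(parsed.password)}`;
    // tunnel-agent accepts this string as proxyAuth; as a raw header it would be:
    const proxyAuthorization = `Basic ${Buffer.from(credentials).toString("base64")}`;
  }
}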

Test plan

  • Graph extraction works in corporate proxy environment (HTTP_PROXY/HTTPS_PROXY set)
  • Authenticated proxy (user:pass@host:port) forwards Proxy-Authorization correctly
  • Azure AI Foundry Anthropic endpoint returns correct responses
  • Standard Azure OpenAI endpoint works unchanged
  • FALLBACK_PROVIDERS=azure-openai is not silently dropped
  • Endpoint with full /v1/messages suffix does not double-append
  • Viewer graph canvas renders nodes/edges without overflow
  • Tab bar does not crush when many sessions are open

Nagarjun Rajendran added 3 commits April 30, 2026 12:02
- tunnel-agent + node-fetch proxy tunnel when HTTP_PROXY/HTTPS_PROXY set
- detect Foundry Anthropic endpoints via isFoundry getter
- use Anthropic Messages API format for Foundry, OpenAI format otherwise
- handle both response content shapes

vercel Bot commented Apr 30, 2026

Someone is attempting to deploy a commit to the rohitg00's projects Team on Vercel.

A member of the Team first needs to authorize it.


coderabbitai Bot commented Apr 30, 2026

📝 Walkthrough

Adds Azure OpenAI as a new LLM provider: environment-variable detection (AZURE_OPENAI_*), a new AzureOpenAIProvider supporting Azure Chat and Anthropic/Foundry message shapes with proxy-aware fetch, provider registration, docs/CHANGELOG/README updates, and viewer UI layout and banner text tweaks.

Changes

  • Type System (src/types.ts): added "azure-openai" to the ProviderType union.
  • Configuration & Detection (src/config.ts): detectProvider now recognizes Azure when AZURE_OPENAI_DEPLOYMENT, AZURE_OPENAI_ENDPOINT, and AZURE_OPENAI_API_KEY are set; stderr guidance updated; detectLlmProviderKind treats the Azure triad as "llm"; VALID_PROVIDERS includes "azure-openai".
  • Provider Implementation (src/providers/azure-openai.ts): new exported AzureOpenAIProvider implementing MemoryProvider; normalizes the endpoint, stores API key/deployment/max tokens/API version, auto-detects Foundry/Anthropic endpoints, builds requests for Anthropic Messages or Azure chat completions, uses proxy-aware fetch via tunnel-agent + node-fetch when HTTP_PROXY/HTTPS_PROXY is present, validates responses, and extracts text content or surfaces detailed errors.
  • Provider Factory (src/providers/index.ts): createBaseProvider gains an "azure-openai" branch that reads env/config (endpoint, deployment, apiVersion, apiKey, maxTokens) and constructs AzureOpenAIProvider.
  • UI / Viewer (src/viewer/index.html): feature-flag banner updated to list Azure environment variables and an Anthropic export flow; CSS tweaks (flex-shrink: 0, graph container sizing/overflow) prevent header/tab/footer collapsing and ensure the graph fills available height.
  • Docs & Changelog (README.md, CHANGELOG.md): documented Azure OpenAI and Azure Foundry (Anthropic) configuration, example .env entries including optional AZURE_OPENAI_API_VERSION, proxy guidance, and viewer/UI fixes.
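
The detection branch described in the Configuration & Detection entry plausibly looks like this (a sketch; detectAzure is a hypothetical name and the return shape is inferred from the sequence diagram below):

// Sketch: Azure triad check as described for detectProvider() in src/config.ts.
function detectAzure(): { provider: "azure-openai"; baseURL: string; model: string } | undefined {
  const { AZURE_OPENAI_DEPLOYMENT, AZURE_OPENAI_ENDPOINT, AZURE_OPENAI_API_KEY } = process.env;
  if (AZURE_OPENAI_DEPLOYMENT && AZURE_OPENAI_ENDPOINT && AZURE_OPENAI_API_KEY) {
    return { provider: "azure-openai", baseURL: AZURE_OPENAI_ENDPOINT, model: AZURE_OPENAI_DEPLOYMENT };
  }
  return undefined;
}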

Sequence Diagram

sequenceDiagram
    participant App as Application
    participant Config as Config Detection
    participant Factory as Provider Factory
    participant Provider as AzureOpenAIProvider
    participant Service as Azure/Foundry

    App->>Config: detectProvider()
    activate Config
    Config->>Config: check AZURE_OPENAI_DEPLOYMENT/ENDPOINT/API_KEY
    alt Azure vars present
        Config-->>App: provider="azure-openai" + baseURL & model
    end
    deactivate Config

    App->>Factory: createBaseProvider(config)
    activate Factory
    Factory->>Provider: new AzureOpenAIProvider(apiKey, endpoint, deployment, maxTokens, apiVersion?)
    Factory-->>App: AzureOpenAIProvider instance
    deactivate Factory

    App->>Provider: compress(systemPrompt, userPrompt)
    activate Provider
    Provider->>Provider: buildRequest(Foundry or Azure chat shape)
    Provider->>Provider: buildFetchOptions(include proxy agent if set)
    Provider->>Service: POST /v1/messages or /openai/deployments/.../chat/completions?api-version=...
    Service-->>Provider: Response (2xx or error)
    Provider->>Provider: extractContent() or throw with payload snippet
    Provider-->>App: compressed/summarized string
    deactivate Provider

Estimated code review effort

🎯 4 (Complex) | ⏱️ ~45 minutes


Poem

🐇 I found the keys and hopped inside,
Endpoints hummed where models hide,
I tunneled through the proxy mist,
Shaped messages and shortened gist,
Hoppy bytes — compress and glide! ✨

🚥 Pre-merge checks | ✅ 4 | ❌ 1

❌ Failed checks (1 warning)

  • Docstring Coverage (⚠️ Warning): docstring coverage is 0.00%, below the required threshold of 80.00%. Resolution: write docstrings for the functions missing them.

✅ Passed checks (4 passed)

  • Title check: the title accurately reflects the main changes (Azure AI Foundry (Anthropic) endpoint support and corporate proxy tunneling); it is specific, concise, and directly related to the primary objectives of the changeset.
  • Linked Issues check: skipped because no linked issues were found for this pull request.
  • Out of Scope Changes check: skipped because no linked issues were found for this pull request.
  • Description check: skipped; CodeRabbit's high-level summary is enabled.


@coderabbitai coderabbitai Bot left a comment


Actionable comments posted: 3

🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@src/config.ts`:
- Around line 90-101: The fallback allowlist omits "azure-openai" so
loadFallbackConfig() filters out FALLBACK_PROVIDERS entries even though
ProviderType and createBaseProvider() support it; update the VALID_PROVIDERS
set/array (used by loadFallbackConfig) to include "azure-openai" so that
specifying FALLBACK_PROVIDERS=azure-openai actually registers the provider, and
ensure any related validation logic around loadFallbackConfig() and
VALID_PROVIDERS acknowledges the new string value.

In `@src/providers/azure-openai.ts`:
- Around line 59-75: The buildRequest method unconditionally appends
"/v1/messages" when this.isFoundry is true which can produce a double-suffixed
URL; update buildRequest to normalize this.endpoint for Foundry by trimming
trailing slashes and checking if it already ends with "v1/messages" (or
"/anthropic/v1/messages") and only append "/v1/messages" when missing, then
construct the url accordingly; use the existing symbols (buildRequest,
this.isFoundry, this.endpoint, url) to locate and change the logic so both forms
(base "/anthropic" or full "/anthropic/v1/messages") are handled without
producing duplicate suffixes.
- Around line 6-23: The proxy handling in buildFetchOptions silently falls back
when requiring node-fetch (ERR_REQUIRE_ESM) and does not support proxy
authentication; update buildFetchOptions to (1) avoid requiring ESM-only
node-fetch—use the global fetch/undici or dynamic import strategy instead so the
configured agent is actually used, (2) replace tunnel-agent with the maintained
tunnel package (or another maintained tunneling library) that accepts proxyAuth,
and (3) extract proxy credentials from proxyUrl (URL.username/URL.password) or
env vars and pass them as proxyAuth into the tunnel/tunneling options so
authenticated corporate proxies work; keep returning { fetchFn, agent } from
buildFetchOptions and document that proxy support is best-effort if a suitable
tunneling lib cannot be loaded.

ℹ️ Review info
⚙️ Run configuration

Configuration used: defaults

Review profile: CHILL

Plan: Pro

Run ID: c3e8465e-b1aa-4a8f-b437-9e76cc06acc3

📥 Commits

Reviewing files that changed from the base of the PR and between 94fc119 and 725c731.

📒 Files selected for processing (5)
  • src/config.ts
  • src/providers/azure-openai.ts
  • src/providers/index.ts
  • src/types.ts
  • src/viewer/index.html

Comment thread src/providers/azure-openai.ts
Comment on lines +6 to +23
function buildFetchOptions(): { fetchFn: typeof fetch; agent?: unknown } {
  const proxyUrl = process.env.HTTPS_PROXY || process.env.https_proxy ||
    process.env.HTTP_PROXY || process.env.http_proxy;
  if (!proxyUrl) return { fetchFn: fetch };
  try {
    const tunnel = require("tunnel-agent") as {
      httpsOverHttp: (opts: { proxy: { host: string; port: number } }) => unknown;
    };
    const nodeFetch = require("node-fetch") as typeof fetch;
    const parsed = new URL(proxyUrl);
    const agent = tunnel.httpsOverHttp({
      proxy: { host: parsed.hostname, port: parseInt(parsed.port || "3128") },
    });
    return { fetchFn: nodeFetch, agent };
  } catch {
    return { fetchFn: fetch };
  }
}

⚠️ Potential issue | 🟠 Major

🧩 Analysis chain

🏁 Script executed:

# Check if file exists and examine package.json configuration
cat -n package.json | head -20

Repository: rohitg00/agentmemory

Length of output: 962


🏁 Script executed:

# Examine the actual file to see the complete context
cat -n src/providers/azure-openai.ts

Repository: rohitg00/agentmemory

Length of output: 5095


🏁 Script executed:

# Check for any dependencies and versions in package.json
rg '"(node-fetch|tunnel-agent)"' package.json

Repository: rohitg00/agentmemory

Length of output: 46


🏁 Script executed:

# Try different search patterns for dependencies
fd package.json | head -10

Repository: rohitg00/agentmemory

Length of output: 169


🏁 Script executed:

# Look for lock files that show dependencies
fd -e "lock.json" -e "yarn.lock" -e "pnpm-lock.yaml" | head -5

Repository: rohitg00/agentmemory

Length of output: 46


🏁 Script executed:

# Search more broadly in package.json
cat package.json | grep -A 50 "dependencies"

Repository: rohitg00/agentmemory

Length of output: 617


🌐 Web query:

node-fetch 3.x require() CommonJS compatibility ESM

💡 Result:

node-fetch 3.x is an ESM-only module and does not support direct import via require in CommonJS environments. Attempting const fetch = require('node-fetch') will throw ERR_REQUIRE_ESM. Official recommendation: Use node-fetch@2 for CommonJS compatibility, which receives critical bug fixes. Workaround for using 3.x in CommonJS: dynamic async import. const fetch = (...args) => import('node-fetch').then(({default: fetch}) => fetch(...args)); Note: Node.js 18+ has native fetch, making node-fetch unnecessary in modern environments.



🌐 Web query:

tunnel-agent httpsOverHttp HTTPS proxy authentication support

💡 Result:

The npm package tunnel-agent (https://www.npmjs.com/package/tunnel-agent), a standalone HTTP proxy tunneling agent formerly part of the request module, does not explicitly document support for HTTPS over HTTP proxy authentication in its README or official sources. Its README is minimal, lacking usage examples or configuration details for proxy auth. However, practical usage in projects shows it is used for HTTPS over HTTP tunneling, e.g.: const agent = require('tunnel-agent').httpsOverHttp({ proxy: { host: 'proxyhost', port: 8080 } }); No explicit proxyAuth option like 'user:password' is shown in docs for tunnel-agent. In contrast, the tunnel package (https://www.npmjs.com/package/tunnel, koichik/node-tunnel) explicitly supports httpsOverHttp with proxy authentication via proxy: { ..., proxyAuth: 'user:password' } for Basic auth, as documented in its README and GitHub repo. Example: var tunnelingAgent = tunnel.httpsOverHttp({ proxy: { host: proxyHost, port: proxyPort, proxyAuth: 'user:password' } }); This adds Proxy-Authorization header for the CONNECT method to the HTTP proxy. tunnel-agent (last updated 2017, still widely downloaded) likely supports proxy auth implicitly by passing auth via headers or options to Node's http.request for the CONNECT, but lacks explicit documentation. For clear HTTPS over HTTP proxy auth support, use tunnel package instead. Note: tunnel-agent repo has open issues and minimal docs; consider modern alternatives like https-proxy-agent for better maintenance and features.



Proxy support will silently degrade and lacks authentication credentials.

The code attempts require("node-fetch") which fails with ERR_REQUIRE_ESM since node-fetch 3.x is ESM-only; this silently falls back to native fetch without the configured proxy agent, causing requests to bypass the proxy entirely. Additionally, the tunnel-agent library (last updated 2017, minimally documented) receives only host and port—it has no support for proxy authentication credentials, making it unsuitable for authenticated corporate proxies. For environments requiring proxy auth, the fallback to native fetch is unreliable unless CONNECT tunneling works without credentials (rare in corporate settings). Consider using the tunnel package instead, which explicitly supports proxyAuth, or clarify that this proxy handling is best-effort and document the limitations.
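
Per the query result above, a sketch of the suggested switch to the tunnel package (illustrative only; the example proxy URL stands in for whatever HTTPS_PROXY/HTTP_PROXY provides):

import tunnel from "tunnel"; // maintained alternative to tunnel-agent

const proxyUrl = process.env.HTTPS_PROXY ?? process.env.HTTP_PROXY ?? "http://user:pass@proxy.corp.example.com:3128";
const parsed = new URL(proxyUrl);
const proxyAuth = parsed.username
  ? `${decodeURIComponent(parsed.username)}:${decodeURIComponent(parsed.password)}`
  : undefined;
// httpsOverHttp issues a CONNECT through the HTTP proxy; proxyAuth adds Proxy-Authorization.
const agent = tunnel.httpsOverHttp({
  proxy: {
    host: parsed.hostname,
    port: parseInt(parsed.port || "3128", 10),
    ...(proxyAuth ? { proxyAuth } : {}),
  },
});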

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/providers/azure-openai.ts` around lines 6 - 23, The proxy handling in
buildFetchOptions silently falls back when requiring node-fetch
(ERR_REQUIRE_ESM) and does not support proxy authentication; update
buildFetchOptions to (1) avoid requiring ESM-only node-fetch—use the global
fetch/undici or dynamic import strategy instead so the configured agent is
actually used, (2) replace tunnel-agent with the maintained tunnel package (or
another maintained tunneling library) that accepts proxyAuth, and (3) extract
proxy credentials from proxyUrl (URL.username/URL.password) or env vars and pass
them as proxyAuth into the tunnel/tunneling options so authenticated corporate
proxies work; keep returning { fetchFn, agent } from buildFetchOptions and
document that proxy support is best-effort if a suitable tunneling lib cannot be
loaded.

Comment thread src/providers/azure-openai.ts
…S gap

- Extract proxy credentials from URL (username:password) and pass as
  proxyAuth to tunnel-agent so authenticated corporate proxies work
- Normalize Foundry endpoint: skip appending /v1/messages if endpoint
  already ends with that path to avoid double-suffixed URLs
- Add azure-openai to VALID_PROVIDERS so FALLBACK_PROVIDERS=azure-openai
  is not silently filtered out (ProviderType already included it)

@coderabbitai coderabbitai Bot left a comment


Actionable comments posted: 1

♻️ Duplicate comments (1)
src/providers/azure-openai.ts (1)

10-30: ⚠️ Potential issue | 🟠 Major | ⚡ Quick win

Proxy fallback can silently disable proxying in common ESM/HTTPS-proxy setups.

When proxy env vars are set, this path can still bypass the proxy: require("node-fetch") may fail in ESM/node-fetch@3 environments, and httpsOverHttp is hardcoded even if the proxy URL is https://. The catch then silently falls back to direct fetch.

🔧 Suggested fix
-function buildFetchOptions(): { fetchFn: typeof fetch; agent?: unknown } {
+async function buildFetchOptions(): Promise<{ fetchFn: typeof fetch; agent?: unknown }> {
   const proxyUrl = process.env.HTTPS_PROXY || process.env.https_proxy ||
                    process.env.HTTP_PROXY  || process.env.http_proxy;
   if (!proxyUrl) return { fetchFn: fetch };
   try {
     const tunnel = require("tunnel-agent") as {
       httpsOverHttp: (opts: { proxy: { host: string; port: number; proxyAuth?: string } }) => unknown;
+      httpsOverHttps: (opts: { proxy: { host: string; port: number; proxyAuth?: string } }) => unknown;
     };
-    const nodeFetch = require("node-fetch") as typeof fetch;
+    const { default: nodeFetch } = (await import("node-fetch")) as { default: typeof fetch };
     const parsed = new URL(proxyUrl);
     const proxyAuth =
       parsed.username
         ? `${decodeURIComponent(parsed.username)}:${decodeURIComponent(parsed.password)}`
         : undefined;
-    const agent = tunnel.httpsOverHttp({
+    const makeTunnel = parsed.protocol === "https:" ? tunnel.httpsOverHttps : tunnel.httpsOverHttp;
+    const agent = makeTunnel({
       proxy: {
         host: parsed.hostname,
         port: parseInt(parsed.port || "3128"),
         ...(proxyAuth ? { proxyAuth } : {}),
       },
     });
     return { fetchFn: nodeFetch, agent };
   } catch {
     return { fetchFn: fetch };
   }
 }
@@
-    const { fetchFn, agent } = buildFetchOptions();
+    const { fetchFn, agent } = await buildFetchOptions();
#!/bin/bash
set -euo pipefail

echo "== dependency versions =="
fd -HI 'package.json' -x sh -c 'echo "--- $1"; jq -r \
  ".dependencies[\"node-fetch\"], .devDependencies[\"node-fetch\"], .dependencies[\"tunnel-agent\"], .devDependencies[\"tunnel-agent\"]" "$1"' sh {}

echo "== lockfile resolved node-fetch versions (if npm lock exists) =="
fd -HI 'package-lock.json' -x jq -r '.. | objects | select(has("node_modules/node-fetch")) | .["node_modules/node-fetch"].version' {}

echo "== proxy implementation usage =="
rg -n -C2 'buildFetchOptions|require\("node-fetch"\)|httpsOverHttp|httpsOverHttps' src/providers/azure-openai.ts

As per coding guidelines, "Use TypeScript and ESM only with "type": "module" in package.json".

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/providers/azure-openai.ts` around lines 10 - 30, The current proxy setup
can silently bypass proxies because require("node-fetch") may fail in ESM and
httpsOverHttp is hardcoded; update the block that handles proxyUrl to (1)
attempt a dynamic import of node-fetch via await
import("node-fetch").then(m=>m.default||m) if require fails, falling back only
after imports fail, (2) choose the tunnel method based on parsed.protocol (use
tunnel.httpsOverHttps for "https:" and tunnel.httpsOverHttp for "http:"), (3)
preserve proxyAuth and port logic when building agent, and (4) if any
import/agent construction fails, surface a warning/error instead of silently
returning { fetchFn: fetch } so callers know proxying was disabled; refer to the
symbols tunnel, nodeFetch, parsed, proxyAuth, agent, and fetchFn in
azure-openai.ts when making these changes.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@src/providers/azure-openai.ts`:
- Around line 113-122: The call to fetch in the private async call(systemPrompt,
userPrompt) method has no timeout, so wrap the outbound request with an
AbortController: create an AbortController, start a setTimeout to call
controller.abort() after a reasonable timeout, pass controller.signal into the
fetch options (alongside agent), and clear the timer after fetch resolves;
ensure the fetch invocation from buildFetchOptions() uses that signal and
surface an appropriate error when aborted so summarize()/compress() callers fail
fast instead of hanging.

---

Duplicate comments:
In `@src/providers/azure-openai.ts`:
- Around line 10-30: The current proxy setup can silently bypass proxies because
require("node-fetch") may fail in ESM and httpsOverHttp is hardcoded; update the
block that handles proxyUrl to (1) attempt a dynamic import of node-fetch via
await import("node-fetch").then(m=>m.default||m) if require fails, falling back
only after imports fail, (2) choose the tunnel method based on parsed.protocol
(use tunnel.httpsOverHttps for "https:" and tunnel.httpsOverHttp for "http:"),
(3) preserve proxyAuth and port logic when building agent, and (4) if any
import/agent construction fails, surface a warning/error instead of silently
returning { fetchFn: fetch } so callers know proxying was disabled; refer to the
symbols tunnel, nodeFetch, parsed, proxyAuth, agent, and fetchFn in
azure-openai.ts when making these changes.

ℹ️ Review info
⚙️ Run configuration

Configuration used: defaults

Review profile: CHILL

Plan: Pro

Run ID: 02067ebc-ee08-4ecf-8672-6cdd27b961fa

📥 Commits

Reviewing files that changed from the base of the PR and between 49d4acf and 093a35c.

📒 Files selected for processing (2)
  • src/config.ts
  • src/providers/azure-openai.ts

Comment on lines +113 to +122
private async call(systemPrompt: string, userPrompt: string): Promise<string> {
  const { url, headers, body } = this.buildRequest(systemPrompt, userPrompt);

  const { fetchFn, agent } = buildFetchOptions();
  const response = await (fetchFn as (url: string, opts: Record<string, unknown>) => Promise<Response>)(url, {
    method: "POST",
    headers,
    body: JSON.stringify(body),
    ...(agent ? { agent } : {}),
  });

⚠️ Potential issue | 🟠 Major

🧩 Analysis chain

🏁 Script executed:

#!/bin/bash
set -euo pipefail
echo "Node engine constraint:"
jq -r '.engines.node // "engines.node not set"' package.json
echo
echo "Fetch call sites and timeout/signal usage in provider:"
rg -n -C2 'fetch\(|signal:|AbortController|AbortSignal\.timeout' src/providers/azure-openai.ts

Repository: rohitg00/agentmemory

Length of output: 153


🏁 Script executed:

wc -l src/providers/azure-openai.ts

Repository: rohitg00/agentmemory

Length of output: 98


🏁 Script executed:

cat -n src/providers/azure-openai.ts | head -150

Repository: rohitg00/agentmemory

Length of output: 5506


🏁 Script executed:

rg -n 'AbortController|AbortSignal|timeout|signal' src/providers/ -A 2 -B 2

Repository: rohitg00/agentmemory

Length of output: 46


Add a request timeout for outbound model calls.

This external call has no timeout/signal, so a stalled upstream can hang summarize() or compress() calls indefinitely. Use AbortController with a timeout to prevent this.

⏱️ Timeout pattern
   private async call(systemPrompt: string, userPrompt: string): Promise<string> {
     const { url, headers, body } = this.buildRequest(systemPrompt, userPrompt);

     const { fetchFn, agent } = buildFetchOptions();
+    const controller = new AbortController();
+    const timeout = setTimeout(() => controller.abort(), 30_000);
     const response = await (fetchFn as (url: string, opts: Record<string, unknown>) => Promise<Response>)(url, {
       method: "POST",
       headers,
       body: JSON.stringify(body),
+      signal: controller.signal,
       ...(agent ? { agent } : {}),
     });
+    clearTimeout(timeout);
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/providers/azure-openai.ts` around lines 113 - 122, The call to fetch in
the private async call(systemPrompt, userPrompt) method has no timeout, so wrap
the outbound request with an AbortController: create an AbortController, start a
setTimeout to call controller.abort() after a reasonable timeout, pass
controller.signal into the fetch options (alongside agent), and clear the timer
after fetch resolves; ensure the fetch invocation from buildFetchOptions() uses
that signal and surface an appropriate error when aborted so
summarize()/compress() callers fail fast instead of hanging.


@coderabbitai coderabbitai Bot left a comment


Actionable comments posted: 2

♻️ Duplicate comments (1)
src/providers/azure-openai.ts (1)

165-174: ⚠️ Potential issue | 🟠 Major | ⚡ Quick win

Restore the outbound request timeout.

This fetch call still has no AbortController/timeout, so a stalled Azure endpoint or proxy can hang compress()/summarize() indefinitely. This is the same failure mode already called out in the previous review and it remains unresolved here.

⏱️ Suggested fix
   private async call(systemPrompt: string, userPrompt: string): Promise<string> {
     const { url, headers, body } = this.buildRequest(systemPrompt, userPrompt);

     const { fetchFn, agent } = buildFetchOptions();
+    const controller = new AbortController();
+    const timeout = setTimeout(() => controller.abort(), 30_000);
     const response = await (fetchFn as (url: string, opts: Record<string, unknown>) => Promise<Response>)(url, {
       method: "POST",
       headers,
       body: JSON.stringify(body),
+      signal: controller.signal,
       ...(agent ? { agent } : {}),
     });
+    clearTimeout(timeout);
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/providers/azure-openai.ts` around lines 165 - 174, The outbound fetch in
AzureOpenAIProvider.call is missing request timeout handling, which can hang
compress()/summarize(); wrap the POST call with an AbortController and a timeout
(e.g., setTimeout that calls controller.abort()) and pass controller.signal into
the fetch options; use the existing buildFetchOptions() result (agent) and
include signal alongside headers/body, and clear the timeout on success to avoid
leaks. Ensure the controller is created inside call() so each request has its
own timeout and propagate any abort errors appropriately.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@src/providers/azure-openai.ts`:
- Around line 27-36: The proxy agent creation hard-codes port "3128" when
parsed.port is empty; update the logic around parsed and the
tunnel.httpsOverHttp call to compute a defaultPort based on parsed.protocol (use
'443' for 'https:' and '80' otherwise) and use parseInt(parsed.port ||
defaultPort) for the proxy port; adjust the proxy object passed to
tunnel.httpsOverHttp (and any related variables like proxyAuth and parsed) so
the agent uses the protocol-derived default instead of 3128.
- Around line 151-158: The extractContent method currently returns only the
first Anthropic text block when this.isFoundry is true; update the logic in
extractContent (the isFoundry branch that reads the local variable content) to
collect all entries where b.type === "text" and concatenate them (in order) into
a single string (e.g., join their .text values with an empty string or a
newline) instead of using .find(); preserve the existing return type (string |
undefined) by returning undefined when there are no text blocks and the
concatenated string when there are.

---

Duplicate comments:
In `@src/providers/azure-openai.ts`:
- Around line 165-174: The outbound fetch in AzureOpenAIProvider.call is missing
request timeout handling, which can hang compress()/summarize(); wrap the POST
call with an AbortController and a timeout (e.g., setTimeout that calls
controller.abort()) and pass controller.signal into the fetch options; use the
existing buildFetchOptions() result (agent) and include signal alongside
headers/body, and clear the timeout on success to avoid leaks. Ensure the
controller is created inside call() so each request has its own timeout and
propagate any abort errors appropriately.

ℹ️ Review info
⚙️ Run configuration

Configuration used: defaults

Review profile: CHILL

Plan: Pro

Run ID: 3cfb9b63-402b-4fad-838f-8d297a3e01bc

📥 Commits

Reviewing files that changed from the base of the PR and between 093a35c and a0329c7.

📒 Files selected for processing (1)
  • src/providers/azure-openai.ts

Comment on lines +27 to +36
const parsed = new URL(proxyUrl);
const proxyAuth =
  parsed.username
    ? `${decodeURIComponent(parsed.username)}:${decodeURIComponent(parsed.password)}`
    : undefined;
const agent = tunnel.httpsOverHttp({
  proxy: {
    host: parsed.hostname,
    port: parseInt(parsed.port || "3128"),
    ...(proxyAuth ? { proxyAuth } : {}),

⚠️ Potential issue | 🟠 Major

🧩 Analysis chain

🏁 Script executed:

fd -t f azure-openai.ts

Repository: rohitg00/agentmemory

Length of output: 94


🏁 Script executed:

cat -n src/providers/azure-openai.ts | sed -n '1,60p'

Repository: rohitg00/agentmemory

Length of output: 2883


🏁 Script executed:

cat -n src/providers/azure-openai.ts | sed -n '18,43p'

Repository: rohitg00/agentmemory

Length of output: 1215


🏁 Script executed:

rg -n "3128|proxyUrl|proxy" src/providers/azure-openai.ts

Repository: rohitg00/agentmemory

Length of output: 939


Derive the proxy port default from the URL scheme instead of hard-coding 3128.

Line 35 defaults to 3128 when the proxy URL omits an explicit port. This breaks compatibility with corporate proxies using standard defaults (port 80 for http://, port 443 for https://). The correct approach is:

const defaultPort = parsed.protocol === 'https:' ? '443' : '80';
port: parseInt(parsed.port || defaultPort)
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/providers/azure-openai.ts` around lines 27 - 36, The proxy agent creation
hard-codes port "3128" when parsed.port is empty; update the logic around parsed
and the tunnel.httpsOverHttp call to compute a defaultPort based on
parsed.protocol (use '443' for 'https:' and '80' otherwise) and use
parseInt(parsed.port || defaultPort) for the proxy port; adjust the proxy object
passed to tunnel.httpsOverHttp (and any related variables like proxyAuth and
parsed) so the agent uses the protocol-derived default instead of 3128.

Comment on lines +151 to +158
private extractContent(data: Record<string, unknown>): string | undefined {
  if (this.isFoundry) {
    const content = data.content as Array<{ type: string; text: string }> | undefined;
    return content?.find((b) => b.type === "text")?.text;
  }
  const choices = data.choices as Array<{ message: { content: string } }> | undefined;
  return choices?.[0]?.message?.content;
}

⚠️ Potential issue | 🟠 Major

🧩 Analysis chain

🏁 Script executed:

cat -n src/providers/azure-openai.ts | head -180

Repository: rohitg00/agentmemory

Length of output: 8153


🏁 Script executed:

# Check remaining lines of the file
sed -n '180,$p' src/providers/azure-openai.ts | head -30

Repository: rohitg00/agentmemory

Length of output: 357


🏁 Script executed:

# Search for test files related to this provider
find . -type f -name "*.test.*" -o -name "*.spec.*" | grep -i azure

Repository: rohitg00/agentmemory

Length of output: 46


🏁 Script executed:

# Search for any references to extractContent or response handling
rg "extractContent|content.*text|Anthropic.*response" --type ts -A 2 -B 2

Repository: rohitg00/agentmemory

Length of output: 23026


🏁 Script executed:

cat -n src/providers/anthropic.ts

Repository: rohitg00/agentmemory

Length of output: 2223


🏁 Script executed:

# Search for any Azure AI Foundry or Anthropic API response documentation in comments
rg "content.*block|text.*block|anthropic.*response" -i --type ts -B 2 -A 2

Repository: rohitg00/agentmemory

Length of output: 2801


🏁 Script executed:

# Check if there are any type definitions or SDK imports that show the response shape
rg "type.*content.*array|Array.*type.*text" --type ts

Repository: rohitg00/agentmemory

Length of output: 596


🏁 Script executed:

# Let's check if package.json has anthropic SDK version
cat package.json | grep -A 5 -B 5 anthropic

Repository: rohitg00/agentmemory

Length of output: 372


🏁 Script executed:

# Search for any test or mock responses from Azure Foundry/Anthropic
rg "foundry|isFoundry" --type ts -B 5 -A 10

Repository: rohitg00/agentmemory

Length of output: 3120


🏁 Script executed:

# Check if there's any documentation or comments about response format
rg "Messages API|content\[" --type ts -i

Repository: rohitg00/agentmemory

Length of output: 2307


🌐 Web query:

Anthropic Messages API response content multiple text blocks

💡 Result:

In Anthropic's Messages API, the response content is an array of content blocks (Vec). Each block has a type, such as "text" for generated text. Currently, responses primarily contain "text" blocks, but can include multiple blocks like "tool_use" when tools are involved. For example: [{"type": "text", "text": "Hi, I'm Claude."}]. With structured outputs, the JSON is returned in response.content[0].text as a single text block. To extract text, iterate over the array, filter by type "text", and concatenate the texts.



🏁 Script executed:

# Check the return type signature and how it's used
rg "extractContent" --type ts -B 2 -A 5

Repository: rohitg00/agentmemory

Length of output: 1204


Join all text blocks from Anthropic responses instead of returning only the first.

The Anthropic Messages API response can contain multiple content blocks with type === "text", especially when combined with tool use or other features. The current implementation using .find() returns only the first text block, discarding subsequent ones. Per Anthropic API documentation, all text blocks should be concatenated.

🔧 Suggested fix
  private extractContent(data: Record<string, unknown>): string | undefined {
    if (this.isFoundry) {
      const content = data.content as Array<{ type: string; text: string }> | undefined;
-     return content?.find((b) => b.type === "text")?.text;
+     return content
+       ?.filter((b) => b.type === "text")
+       .map((b) => b.text)
+       .join("");
    }
    const choices = data.choices as Array<{ message: { content: string } }> | undefined;
    return choices?.[0]?.message?.content;
  }
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/providers/azure-openai.ts` around lines 151 - 158, The extractContent
method currently returns only the first Anthropic text block when this.isFoundry
is true; update the logic in extractContent (the isFoundry branch that reads the
local variable content) to collect all entries where b.type === "text" and
concatenate them (in order) into a single string (e.g., join their .text values
with an empty string or a newline) instead of using .find(); preserve the
existing return type (string | undefined) by returning undefined when there are
no text blocks and the concatenated string when there are.
