fix(providers): support Azure AI Foundry (Anthropic) endpoint and corporate proxy #220
nagarjunr wants to merge 6 commits into rohitg00:main from
Conversation
- tunnel-agent + node-fetch proxy tunnel when HTTP_PROXY/HTTPS_PROXY set
- detect Foundry Anthropic endpoints via isFoundry getter
- use Anthropic Messages API format for Foundry, OpenAI format otherwise
- handle both response content shapes
Someone is attempting to deploy a commit to the "rohitg00's projects" team on Vercel. A member of the team first needs to authorize it.
📝 Walkthrough
Adds Azure OpenAI as a new LLM provider: environment-variable detection (AZURE_OPENAI_*), a new AzureOpenAIProvider supporting Azure Chat and Anthropic/Foundry message shapes with proxy-aware fetch, provider registration, docs/CHANGELOG/README updates, and viewer UI layout and banner text tweaks.
Sequence Diagram
```mermaid
sequenceDiagram
participant App as Application
participant Config as Config Detection
participant Factory as Provider Factory
participant Provider as AzureOpenAIProvider
participant Service as Azure/Foundry
App->>Config: detectProvider()
activate Config
Config->>Config: check AZURE_OPENAI_DEPLOYMENT/ENDPOINT/API_KEY
alt Azure vars present
Config-->>App: provider="azure-openai" + baseURL & model
end
deactivate Config
App->>Factory: createBaseProvider(config)
activate Factory
Factory->>Provider: new AzureOpenAIProvider(apiKey, endpoint, deployment, maxTokens, apiVersion?)
Factory-->>App: AzureOpenAIProvider instance
deactivate Factory
App->>Provider: compress(systemPrompt, userPrompt)
activate Provider
Provider->>Provider: buildRequest(Foundry or Azure chat shape)
Provider->>Provider: buildFetchOptions(include proxy agent if set)
Provider->>Service: POST /v1/messages or /openai/deployments/.../chat/completions?api-version=...
Service-->>Provider: Response (2xx or error)
Provider->>Provider: extractContent() or throw with payload snippet
Provider-->>App: compressed/summarized string
deactivate Provider
```
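The detection step in the diagram hinges on the three `AZURE_OPENAI_*` variables all being present. A minimal sketch of that check in TypeScript, with a hypothetical function name and return shape (the real `detectProvider()` in `src/config.ts` may differ):

```ts
// Hypothetical sketch of the detection step from the diagram above.
// Env var names come from the walkthrough; the return shape is an assumption.
interface DetectedProvider {
  provider: "azure-openai";
  baseURL: string;
  model: string;
}

function detectAzureProvider(env = process.env): DetectedProvider | undefined {
  const endpoint = env.AZURE_OPENAI_ENDPOINT;
  const deployment = env.AZURE_OPENAI_DEPLOYMENT;
  const apiKey = env.AZURE_OPENAI_API_KEY;
  // Fall through to other providers unless all three variables are set.
  if (!endpoint || !deployment || !apiKey) return undefined;
  return { provider: "azure-openai", baseURL: endpoint, model: deployment };
}
```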
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~45 minutes
🚥 Pre-merge checks: ✅ 4 passed | ❌ 1 failed (1 warning)
Actionable comments posted: 3
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@src/config.ts`:
- Around line 90-101: The fallback allowlist omits "azure-openai" so
loadFallbackConfig() filters out FALLBACK_PROVIDERS entries even though
ProviderType and createBaseProvider() support it; update the VALID_PROVIDERS
set/array (used by loadFallbackConfig) to include "azure-openai" so that
specifying FALLBACK_PROVIDERS=azure-openai actually registers the provider, and
ensure any related validation logic around loadFallbackConfig() and
VALID_PROVIDERS acknowledges the new string value.
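A minimal sketch of that allowlist change; the entries and helper shown here are hypothetical stand-ins for the real `src/config.ts` symbols:

```ts
// Hypothetical sketch: the real VALID_PROVIDERS likely holds more entries.
const VALID_PROVIDERS = ["openai", "anthropic", "azure-openai"] as const;
type KnownProvider = (typeof VALID_PROVIDERS)[number];

// Fallback parsing in the spirit of loadFallbackConfig(): unknown names are
// dropped, so "azure-openai" must be in the allowlist to survive this filter.
function parseFallbackProviders(raw: string): KnownProvider[] {
  return raw
    .split(",")
    .map((p) => p.trim())
    .filter((p): p is KnownProvider =>
      (VALID_PROVIDERS as readonly string[]).includes(p)
    );
}

// parseFallbackProviders("azure-openai") -> ["azure-openai"] once the entry exists
```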
In `@src/providers/azure-openai.ts`:
- Around line 59-75: The buildRequest method unconditionally appends
"/v1/messages" when this.isFoundry is true which can produce a double-suffixed
URL; update buildRequest to normalize this.endpoint for Foundry by trimming
trailing slashes and checking if it already ends with "v1/messages" (or
"/anthropic/v1/messages") and only append "/v1/messages" when missing, then
construct the url accordingly; use the existing symbols (buildRequest,
this.isFoundry, this.endpoint, url) to locate and change the logic so both forms
(base "/anthropic" or full "/anthropic/v1/messages") are handled without
producing duplicate suffixes.
- Around line 6-23: The proxy handling in buildFetchOptions silently falls back
when requiring node-fetch (ERR_REQUIRE_ESM) and does not support proxy
authentication; update buildFetchOptions to (1) avoid requiring ESM-only
node-fetch—use the global fetch/undici or dynamic import strategy instead so the
configured agent is actually used, (2) replace tunnel-agent with the maintained
tunnel package (or another maintained tunneling library) that accepts proxyAuth,
and (3) extract proxy credentials from proxyUrl (URL.username/URL.password) or
env vars and pass them as proxyAuth into the tunnel/tunneling options so
authenticated corporate proxies work; keep returning { fetchFn, agent } from
buildFetchOptions and document that proxy support is best-effort if a suitable
tunneling lib cannot be loaded.
🪄 Autofix (Beta)
Fix all unresolved CodeRabbit comments on this PR:
- Push a commit to this branch (recommended)
- Create a new PR with the fixes
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: c3e8465e-b1aa-4a8f-b437-9e76cc06acc3
📒 Files selected for processing (5)
src/config.ts, src/providers/azure-openai.ts, src/providers/index.ts, src/types.ts, src/viewer/index.html
```ts
function buildFetchOptions(): { fetchFn: typeof fetch; agent?: unknown } {
  const proxyUrl = process.env.HTTPS_PROXY || process.env.https_proxy ||
    process.env.HTTP_PROXY || process.env.http_proxy;
  if (!proxyUrl) return { fetchFn: fetch };
  try {
    const tunnel = require("tunnel-agent") as {
      httpsOverHttp: (opts: { proxy: { host: string; port: number } }) => unknown;
    };
    const nodeFetch = require("node-fetch") as typeof fetch;
    const parsed = new URL(proxyUrl);
    const agent = tunnel.httpsOverHttp({
      proxy: { host: parsed.hostname, port: parseInt(parsed.port || "3128") },
    });
    return { fetchFn: nodeFetch, agent };
  } catch {
    return { fetchFn: fetch };
  }
}
```
🧩 Analysis chain
🏁 Script executed:
```bash
# Check if file exists and examine package.json configuration
cat -n package.json | head -20
```
Repository: rohitg00/agentmemory
Length of output: 962

🏁 Script executed:
```bash
# Examine the actual file to see the complete context
cat -n src/providers/azure-openai.ts
```
Repository: rohitg00/agentmemory
Length of output: 5095

🏁 Script executed:
```bash
# Check for any dependencies and versions in package.json
rg '"(node-fetch|tunnel-agent)"' package.json
```
Repository: rohitg00/agentmemory
Length of output: 46

🏁 Script executed:
```bash
# Try different search patterns for dependencies
fd package.json | head -10
```
Repository: rohitg00/agentmemory
Length of output: 169

🏁 Script executed:
```bash
# Look for lock files that show dependencies
fd -e "lock.json" -e "yarn.lock" -e "pnpm-lock.yaml" | head -5
```
Repository: rohitg00/agentmemory
Length of output: 46

🏁 Script executed:
```bash
# Search more broadly in package.json
cat package.json | grep -A 50 "dependencies"
```
Repository: rohitg00/agentmemory
Length of output: 617
🌐 Web query:
node-fetch 3.x require() CommonJS compatibility ESM
💡 Result:
node-fetch 3.x is an ESM-only module and does not support direct import via require in CommonJS environments. Attempting const fetch = require('node-fetch') will throw ERR_REQUIRE_ESM. Official recommendation: Use node-fetch@2 for CommonJS compatibility, which receives critical bug fixes. Workaround for using 3.x in CommonJS: dynamic async import. const fetch = (...args) => import('node-fetch').then(({default: fetch}) => fetch(...args)); Note: Node.js 18+ has native fetch, making node-fetch unnecessary in modern environments.
Citations:
- 1: https://www.npmjs.com/package/node-fetch/v/3.3.2
- 2: Loading node-fetch@3 in CJS and ESM node-fetch/node-fetch#1279
- 3: https://github.com/node-fetch/node-fetch/tree/v3.1.1
- 4: https://github.com/node-fetch/node-fetch/blob/main/docs/v3-UPGRADE-GUIDE.md
- 5: https://www.npmjs.com/package/node-fetch?activeTab=readme
- 6: https://github.com/node-fetch/node-fetch
🌐 Web query:
tunnel-agent httpsOverHttp HTTPS proxy authentication support
💡 Result:
The npm package tunnel-agent (https://www.npmjs.com/package/tunnel-agent), a standalone HTTP proxy tunneling agent formerly part of the request module, does not explicitly document support for HTTPS over HTTP proxy authentication in its README or official sources. Its README is minimal, lacking usage examples or configuration details for proxy auth. However, practical usage in projects shows it is used for HTTPS over HTTP tunneling, e.g.: const agent = require('tunnel-agent').httpsOverHttp({ proxy: { host: 'proxyhost', port: 8080 } }); No explicit proxyAuth option like 'user:password' is shown in docs for tunnel-agent. In contrast, the tunnel package (https://www.npmjs.com/package/tunnel, koichik/node-tunnel) explicitly supports httpsOverHttp with proxy authentication via proxy: { ..., proxyAuth: 'user:password' } for Basic auth, as documented in its README and GitHub repo. Example: var tunnelingAgent = tunnel.httpsOverHttp({ proxy: { host: proxyHost, port: proxyPort, proxyAuth: 'user:password' } }); This adds Proxy-Authorization header for the CONNECT method to the HTTP proxy. tunnel-agent (last updated 2017, still widely downloaded) likely supports proxy auth implicitly by passing auth via headers or options to Node's http.request for the CONNECT, but lacks explicit documentation. For clear HTTPS over HTTP proxy auth support, use tunnel package instead. Note: tunnel-agent repo has open issues and minimal docs; consider modern alternatives like https-proxy-agent for better maintenance and features.
Citations:
- 1: https://github.com/request/tunnel-agent
- 2: https://www.npmjs.com/package/tunnel-agent
- 3: https://www.npmjs.com/package/tunnel
- 4: https://github.com/koichik/node-tunnel
- 5: Need to be able to proxy request.ts get and post calls the call through tunnel Unleash/unleash-node-sdk#443
Proxy support can silently degrade, and proxy authentication credentials are not supported.
The code attempts require("node-fetch") which fails with ERR_REQUIRE_ESM since node-fetch 3.x is ESM-only; this silently falls back to native fetch without the configured proxy agent, causing requests to bypass the proxy entirely. Additionally, the tunnel-agent library (last updated 2017, minimally documented) receives only host and port—it has no support for proxy authentication credentials, making it unsuitable for authenticated corporate proxies. For environments requiring proxy auth, the fallback to native fetch is unreliable unless CONNECT tunneling works without credentials (rare in corporate settings). Consider using the tunnel package instead, which explicitly supports proxyAuth, or clarify that this proxy handling is best-effort and document the limitations.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src/providers/azure-openai.ts` around lines 6 - 23, The proxy handling in
buildFetchOptions silently falls back when requiring node-fetch
(ERR_REQUIRE_ESM) and does not support proxy authentication; update
buildFetchOptions to (1) avoid requiring ESM-only node-fetch—use the global
fetch/undici or dynamic import strategy instead so the configured agent is
actually used, (2) replace tunnel-agent with the maintained tunnel package (or
another maintained tunneling library) that accepts proxyAuth, and (3) extract
proxy credentials from proxyUrl (URL.username/URL.password) or env vars and pass
them as proxyAuth into the tunnel/tunneling options so authenticated corporate
proxies work; keep returning { fetchFn, agent } from buildFetchOptions and
document that proxy support is best-effort if a suitable tunneling lib cannot be
loaded.
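Taken together, the findings above point at a direction like the following sketch: a dynamic import of node-fetch (avoiding ERR_REQUIRE_ESM), the maintained `tunnel` package with its documented `proxyAuth` option, and a protocol-derived tunnel method and default port. This is a hedged sketch of the reviewer's suggestion, not the PR's actual code; the `tunnel` API usage follows its README as summarized in the web query above:

```ts
// Sketch only: assumes `npm install tunnel node-fetch` and esModuleInterop.
import tunnel from "tunnel";

async function buildFetchOptions(): Promise<{ fetchFn: typeof fetch; agent?: unknown }> {
  const proxyUrl =
    process.env.HTTPS_PROXY ?? process.env.https_proxy ??
    process.env.HTTP_PROXY ?? process.env.http_proxy;
  if (!proxyUrl) return { fetchFn: fetch };
  try {
    // node-fetch@3 is ESM-only: dynamic import works where require() throws
    // ERR_REQUIRE_ESM. node-fetch (unlike the global undici-based fetch)
    // accepts an `agent` option, so the tunnel agent is actually honored.
    const { default: nodeFetch } = await import("node-fetch");
    const parsed = new URL(proxyUrl);
    const proxyAuth = parsed.username
      ? `${decodeURIComponent(parsed.username)}:${decodeURIComponent(parsed.password)}`
      : undefined;
    // Pick the tunnel method from the proxy URL scheme instead of hardcoding it.
    const makeAgent = parsed.protocol === "https:" ? tunnel.httpsOverHttps : tunnel.httpsOverHttp;
    const agent = makeAgent({
      proxy: {
        host: parsed.hostname,
        // Default port derived from the scheme rather than hard-coded 3128.
        port: parseInt(parsed.port || (parsed.protocol === "https:" ? "443" : "80"), 10),
        ...(proxyAuth ? { proxyAuth } : {}),
      },
    });
    return { fetchFn: nodeFetch as unknown as typeof fetch, agent };
  } catch (err) {
    // Surface the degradation instead of silently bypassing the proxy.
    console.warn("proxy configured but tunneling setup failed, using direct fetch:", err);
    return { fetchFn: fetch };
  }
}
```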
…S gap
- Extract proxy credentials from URL (username:password) and pass as proxyAuth to tunnel-agent so authenticated corporate proxies work
- Normalize Foundry endpoint: skip appending /v1/messages if endpoint already ends with that path to avoid double-suffixed URLs
- Add azure-openai to VALID_PROVIDERS so FALLBACK_PROVIDERS=azure-openai is not silently filtered out (ProviderType already included it)
Actionable comments posted: 1
♻️ Duplicate comments (1)
src/providers/azure-openai.ts (1)
10-30: ⚠️ Potential issue | 🟠 Major | ⚡ Quick win

Proxy fallback can silently disable proxying in common ESM/HTTPS-proxy setups.

When proxy env vars are set, this path can still bypass the proxy: `require("node-fetch")` may fail in ESM/node-fetch@3 environments, and `httpsOverHttp` is hardcoded even if the proxy URL is `https://`. The catch then silently falls back to direct `fetch`.

🔧 Suggested fix
```diff
-function buildFetchOptions(): { fetchFn: typeof fetch; agent?: unknown } {
+async function buildFetchOptions(): Promise<{ fetchFn: typeof fetch; agent?: unknown }> {
   const proxyUrl = process.env.HTTPS_PROXY || process.env.https_proxy ||
     process.env.HTTP_PROXY || process.env.http_proxy;
   if (!proxyUrl) return { fetchFn: fetch };
   try {
     const tunnel = require("tunnel-agent") as {
       httpsOverHttp: (opts: { proxy: { host: string; port: number; proxyAuth?: string } }) => unknown;
+      httpsOverHttps: (opts: { proxy: { host: string; port: number; proxyAuth?: string } }) => unknown;
     };
-    const nodeFetch = require("node-fetch") as typeof fetch;
+    const { default: nodeFetch } = (await import("node-fetch")) as { default: typeof fetch };
     const parsed = new URL(proxyUrl);
     const proxyAuth = parsed.username
       ? `${decodeURIComponent(parsed.username)}:${decodeURIComponent(parsed.password)}`
       : undefined;
-    const agent = tunnel.httpsOverHttp({
+    const makeTunnel = parsed.protocol === "https:" ? tunnel.httpsOverHttps : tunnel.httpsOverHttp;
+    const agent = makeTunnel({
       proxy: {
         host: parsed.hostname,
         port: parseInt(parsed.port || "3128"),
         ...(proxyAuth ? { proxyAuth } : {}),
       },
     });
     return { fetchFn: nodeFetch, agent };
   } catch {
     return { fetchFn: fetch };
   }
 }
@@
-  const { fetchFn, agent } = buildFetchOptions();
+  const { fetchFn, agent } = await buildFetchOptions();
```

🏁 Verification script
```bash
#!/bin/bash
set -euo pipefail
echo "== dependency versions =="
fd -HI 'package.json' -x sh -c 'echo "--- $1"; jq -r \
  ".dependencies[\"node-fetch\"], .devDependencies[\"node-fetch\"], .dependencies[\"tunnel-agent\"], .devDependencies[\"tunnel-agent\"]" "$1"' sh {}
echo "== lockfile resolved node-fetch versions (if npm lock exists) =="
fd -HI 'package-lock.json' -x jq -r '.. | objects | select(has("node_modules/node-fetch")) | .["node_modules/node-fetch"].version' {}
echo "== proxy implementation usage =="
rg -n -C2 'buildFetchOptions|require\("node-fetch"\)|httpsOverHttp|httpsOverHttps' src/providers/azure-openai.ts
```

As per coding guidelines, "Use TypeScript and ESM only with "type": "module" in package.json".
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/providers/azure-openai.ts` around lines 10 - 30, The current proxy setup can silently bypass proxies because require("node-fetch") may fail in ESM and httpsOverHttp is hardcoded; update the block that handles proxyUrl to (1) attempt a dynamic import of node-fetch via await import("node-fetch").then(m=>m.default||m) if require fails, falling back only after imports fail, (2) choose the tunnel method based on parsed.protocol (use tunnel.httpsOverHttps for "https:" and tunnel.httpsOverHttp for "http:"), (3) preserve proxyAuth and port logic when building agent, and (4) if any import/agent construction fails, surface a warning/error instead of silently returning { fetchFn: fetch } so callers know proxying was disabled; refer to the symbols tunnel, nodeFetch, parsed, proxyAuth, agent, and fetchFn in azure-openai.ts when making these changes.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@src/providers/azure-openai.ts`:
- Around line 113-122: The call to fetch in the private async call(systemPrompt,
userPrompt) method has no timeout, so wrap the outbound request with an
AbortController: create an AbortController, start a setTimeout to call
controller.abort() after a reasonable timeout, pass controller.signal into the
fetch options (alongside agent), and clear the timer after fetch resolves;
ensure the fetch invocation from buildFetchOptions() uses that signal and
surface an appropriate error when aborted so summarize()/compress() callers fail
fast instead of hanging.
---
Duplicate comments:
In `@src/providers/azure-openai.ts`:
- Around line 10-30: The current proxy setup can silently bypass proxies because
require("node-fetch") may fail in ESM and httpsOverHttp is hardcoded; update the
block that handles proxyUrl to (1) attempt a dynamic import of node-fetch via
await import("node-fetch").then(m=>m.default||m) if require fails, falling back
only after imports fail, (2) choose the tunnel method based on parsed.protocol
(use tunnel.httpsOverHttps for "https:" and tunnel.httpsOverHttp for "http:"),
(3) preserve proxyAuth and port logic when building agent, and (4) if any
import/agent construction fails, surface a warning/error instead of silently
returning { fetchFn: fetch } so callers know proxying was disabled; refer to the
symbols tunnel, nodeFetch, parsed, proxyAuth, agent, and fetchFn in
azure-openai.ts when making these changes.
🪄 Autofix (Beta)
Fix all unresolved CodeRabbit comments on this PR:
- Push a commit to this branch (recommended)
- Create a new PR with the fixes
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: 02067ebc-ee08-4ecf-8672-6cdd27b961fa
📒 Files selected for processing (2)
src/config.ts, src/providers/azure-openai.ts
```ts
private async call(systemPrompt: string, userPrompt: string): Promise<string> {
  const { url, headers, body } = this.buildRequest(systemPrompt, userPrompt);

  const { fetchFn, agent } = buildFetchOptions();
  const response = await (fetchFn as (url: string, opts: Record<string, unknown>) => Promise<Response>)(url, {
    method: "POST",
    headers,
    body: JSON.stringify(body),
    ...(agent ? { agent } : {}),
  });
```
🧩 Analysis chain
🏁 Script executed:
```bash
#!/bin/bash
set -euo pipefail
echo "Node engine constraint:"
jq -r '.engines.node // "engines.node not set"' package.json
echo
echo "Fetch call sites and timeout/signal usage in provider:"
rg -n -C2 'fetch\(|signal:|AbortController|AbortSignal\.timeout' src/providers/azure-openai.ts
```
Repository: rohitg00/agentmemory
Length of output: 153

🏁 Script executed:
```bash
wc -l src/providers/azure-openai.ts
```
Repository: rohitg00/agentmemory
Length of output: 98

🏁 Script executed:
```bash
cat -n src/providers/azure-openai.ts | head -150
```
Repository: rohitg00/agentmemory
Length of output: 5506

🏁 Script executed:
```bash
rg -n 'AbortController|AbortSignal|timeout|signal' src/providers/ -A 2 -B 2
```
Repository: rohitg00/agentmemory
Length of output: 46
Add a request timeout for outbound model calls.
This external call has no timeout/signal, so a stalled upstream can hang summarize() or compress() calls indefinitely. Use AbortController with a timeout to prevent this.
⏱️ Timeout pattern
```diff
 private async call(systemPrompt: string, userPrompt: string): Promise<string> {
   const { url, headers, body } = this.buildRequest(systemPrompt, userPrompt);
   const { fetchFn, agent } = buildFetchOptions();
+  const controller = new AbortController();
+  const timeout = setTimeout(() => controller.abort(), 30_000);
   const response = await (fetchFn as (url: string, opts: Record<string, unknown>) => Promise<Response>)(url, {
     method: "POST",
     headers,
     body: JSON.stringify(body),
+    signal: controller.signal,
     ...(agent ? { agent } : {}),
   });
+  clearTimeout(timeout);
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src/providers/azure-openai.ts` around lines 113 - 122, The call to fetch in
the private async call(systemPrompt, userPrompt) method has no timeout, so wrap
the outbound request with an AbortController: create an AbortController, start a
setTimeout to call controller.abort() after a reasonable timeout, pass
controller.signal into the fetch options (alongside agent), and clear the timer
after fetch resolves; ensure the fetch invocation from buildFetchOptions() uses
that signal and surface an appropriate error when aborted so
summarize()/compress() callers fail fast instead of hanging.
Actionable comments posted: 2
♻️ Duplicate comments (1)
src/providers/azure-openai.ts (1)
165-174: ⚠️ Potential issue | 🟠 Major | ⚡ Quick win

Restore the outbound request timeout.

This fetch call still has no AbortController/timeout, so a stalled Azure endpoint or proxy can hang `compress()`/`summarize()` indefinitely. This is the same failure mode already called out in the previous review and it remains unresolved here.

⏱️ Suggested fix
```diff
 private async call(systemPrompt: string, userPrompt: string): Promise<string> {
   const { url, headers, body } = this.buildRequest(systemPrompt, userPrompt);
   const { fetchFn, agent } = buildFetchOptions();
+  const controller = new AbortController();
+  const timeout = setTimeout(() => controller.abort(), 30_000);
   const response = await (fetchFn as (url: string, opts: Record<string, unknown>) => Promise<Response>)(url, {
     method: "POST",
     headers,
     body: JSON.stringify(body),
+    signal: controller.signal,
     ...(agent ? { agent } : {}),
   });
+  clearTimeout(timeout);
```
Verify each finding against the current code and only fix it if needed. In `@src/providers/azure-openai.ts` around lines 165 - 174, The outbound fetch in AzureOpenAIProvider.call is missing request timeout handling, which can hang compress()/summarize(); wrap the POST call with an AbortController and a timeout (e.g., setTimeout that calls controller.abort()) and pass controller.signal into the fetch options; use the existing buildFetchOptions() result (agent) and include signal alongside headers/body, and clear the timeout on success to avoid leaks. Ensure the controller is created inside call() so each request has its own timeout and propagate any abort errors appropriately.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@src/providers/azure-openai.ts`:
- Around line 27-36: The proxy agent creation hard-codes port "3128" when
parsed.port is empty; update the logic around parsed and the
tunnel.httpsOverHttp call to compute a defaultPort based on parsed.protocol (use
'443' for 'https:' and '80' otherwise) and use parseInt(parsed.port ||
defaultPort) for the proxy port; adjust the proxy object passed to
tunnel.httpsOverHttp (and any related variables like proxyAuth and parsed) so
the agent uses the protocol-derived default instead of 3128.
- Around line 151-158: The extractContent method currently returns only the
first Anthropic text block when this.isFoundry is true; update the logic in
extractContent (the isFoundry branch that reads the local variable content) to
collect all entries where b.type === "text" and concatenate them (in order) into
a single string (e.g., join their .text values with an empty string or a
newline) instead of using .find(); preserve the existing return type (string |
undefined) by returning undefined when there are no text blocks and the
concatenated string when there are.
---
Duplicate comments:
In `@src/providers/azure-openai.ts`:
- Around line 165-174: The outbound fetch in AzureOpenAIProvider.call is missing
request timeout handling, which can hang compress()/summarize(); wrap the POST
call with an AbortController and a timeout (e.g., setTimeout that calls
controller.abort()) and pass controller.signal into the fetch options; use the
existing buildFetchOptions() result (agent) and include signal alongside
headers/body, and clear the timeout on success to avoid leaks. Ensure the
controller is created inside call() so each request has its own timeout and
propagate any abort errors appropriately.
🪄 Autofix (Beta)
Fix all unresolved CodeRabbit comments on this PR:
- Push a commit to this branch (recommended)
- Create a new PR with the fixes
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: 3cfb9b63-402b-4fad-838f-8d297a3e01bc
📒 Files selected for processing (1)
src/providers/azure-openai.ts
```ts
const parsed = new URL(proxyUrl);
const proxyAuth =
  parsed.username
    ? `${decodeURIComponent(parsed.username)}:${decodeURIComponent(parsed.password)}`
    : undefined;
const agent = tunnel.httpsOverHttp({
  proxy: {
    host: parsed.hostname,
    port: parseInt(parsed.port || "3128"),
    ...(proxyAuth ? { proxyAuth } : {}),
```
🧩 Analysis chain
🏁 Script executed:
```bash
fd -t f azure-openai.ts
```
Repository: rohitg00/agentmemory
Length of output: 94

🏁 Script executed:
```bash
cat -n src/providers/azure-openai.ts | sed -n '1,60p'
```
Repository: rohitg00/agentmemory
Length of output: 2883

🏁 Script executed:
```bash
cat -n src/providers/azure-openai.ts | sed -n '18,43p'
```
Repository: rohitg00/agentmemory
Length of output: 1215

🏁 Script executed:
```bash
rg -n "3128|proxyUrl|proxy" src/providers/azure-openai.ts
```
Repository: rohitg00/agentmemory
Length of output: 939
Derive the proxy port default from the URL scheme instead of hard-coding 3128.
Line 35 defaults to 3128 when the proxy URL omits an explicit port. This breaks compatibility with corporate proxies using standard defaults (port 80 for http://, port 443 for https://). The correct approach is:
```ts
const defaultPort = parsed.protocol === 'https:' ? '443' : '80';
port: parseInt(parsed.port || defaultPort)
```
Verify each finding against the current code and only fix it if needed.
In `@src/providers/azure-openai.ts` around lines 27 - 36, The proxy agent creation
hard-codes port "3128" when parsed.port is empty; update the logic around parsed
and the tunnel.httpsOverHttp call to compute a defaultPort based on
parsed.protocol (use '443' for 'https:' and '80' otherwise) and use
parseInt(parsed.port || defaultPort) for the proxy port; adjust the proxy object
passed to tunnel.httpsOverHttp (and any related variables like proxyAuth and
parsed) so the agent uses the protocol-derived default instead of 3128.
```ts
private extractContent(data: Record<string, unknown>): string | undefined {
  if (this.isFoundry) {
    const content = data.content as Array<{ type: string; text: string }> | undefined;
    return content?.find((b) => b.type === "text")?.text;
  }
  const choices = data.choices as Array<{ message: { content: string } }> | undefined;
  return choices?.[0]?.message?.content;
}
```
🧩 Analysis chain
🏁 Script executed:
```bash
cat -n src/providers/azure-openai.ts | head -180
```
Repository: rohitg00/agentmemory
Length of output: 8153

🏁 Script executed:
```bash
# Check remaining lines of the file
sed -n '180,$p' src/providers/azure-openai.ts | head -30
```
Repository: rohitg00/agentmemory
Length of output: 357

🏁 Script executed:
```bash
# Search for test files related to this provider
find . -type f -name "*.test.*" -o -name "*.spec.*" | grep -i azure
```
Repository: rohitg00/agentmemory
Length of output: 46

🏁 Script executed:
```bash
# Search for any references to extractContent or response handling
rg "extractContent|content.*text|Anthropic.*response" --type ts -A 2 -B 2
```
Repository: rohitg00/agentmemory
Length of output: 23026

🏁 Script executed:
```bash
cat -n src/providers/anthropic.ts
```
Repository: rohitg00/agentmemory
Length of output: 2223

🏁 Script executed:
```bash
# Search for any Azure AI Foundry or Anthropic API response documentation in comments
rg "content.*block|text.*block|anthropic.*response" -i --type ts -B 2 -A 2
```
Repository: rohitg00/agentmemory
Length of output: 2801

🏁 Script executed:
```bash
# Check if there are any type definitions or SDK imports that show the response shape
rg "type.*content.*array|Array.*type.*text" --type ts
```
Repository: rohitg00/agentmemory
Length of output: 596

🏁 Script executed:
```bash
# Let's check if package.json has anthropic SDK version
cat package.json | grep -A 5 -B 5 anthropic
```
Repository: rohitg00/agentmemory
Length of output: 372

🏁 Script executed:
```bash
# Search for any test or mock responses from Azure Foundry/Anthropic
rg "foundry|isFoundry" --type ts -B 5 -A 10
```
Repository: rohitg00/agentmemory
Length of output: 3120

🏁 Script executed:
```bash
# Check if there's any documentation or comments about response format
rg "Messages API|content\[" --type ts -i
```
Repository: rohitg00/agentmemory
Length of output: 2307
🌐 Web query:
Anthropic Messages API response content multiple text blocks
💡 Result:
In Anthropic's Messages API, the response content is an array of content blocks. Each block has a type, such as "text" for generated text. Currently, responses primarily contain "text" blocks, but can include multiple blocks like "tool_use" when tools are involved. For example: [{"type": "text", "text": "Hi, I'm Claude."}]. With structured outputs, the JSON is returned in response.content[0].text as a single text block. To extract text, iterate over the array, filter by type "text", and concatenate the texts.
Citations:
- 1: https://docs.anthropic.com/claude/reference/messages
- 2: https://docs.rs/anthropic/latest/anthropic/types/struct.MessagesResponse.html
- 3: https://docs.anthropic.com/claude/reference/complete
- 4: https://docs.anthropic.com/en/docs/build-with-claude/structured-outputs
- 5: https://github.com/codewithpassion/typescript-anthropic-tool-use-example/blob/main/Post.md
🏁 Script executed:
```bash
# Check the return type signature and how it's used
rg "extractContent" --type ts -B 2 -A 5
```
Repository: rohitg00/agentmemory
Length of output: 1204
Join all text blocks from Anthropic responses instead of returning only the first.
The Anthropic Messages API response can contain multiple content blocks with type === "text", especially when combined with tool use or other features. The current implementation using .find() returns only the first text block, discarding subsequent ones. Per Anthropic API documentation, all text blocks should be concatenated.
🔧 Suggested fix
```diff
 private extractContent(data: Record<string, unknown>): string | undefined {
   if (this.isFoundry) {
     const content = data.content as Array<{ type: string; text: string }> | undefined;
-    return content?.find((b) => b.type === "text")?.text;
+    return content
+      ?.filter((b) => b.type === "text")
+      .map((b) => b.text)
+      .join("");
   }
   const choices = data.choices as Array<{ message: { content: string } }> | undefined;
   return choices?.[0]?.message?.content;
 }
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src/providers/azure-openai.ts` around lines 151 - 158, The extractContent
method currently returns only the first Anthropic text block when this.isFoundry
is true; update the logic in extractContent (the isFoundry branch that reads the
local variable content) to collect all entries where b.type === "text" and
concatenate them (in order) into a single string (e.g., join their .text values
with an empty string or a newline) instead of using .find(); preserve the
existing return type (string | undefined) by returning undefined when there are
no text blocks and the concatenated string when there are.
Summary
- Azure OpenAI (`{resource}.openai.azure.com`) uses OpenAI chat completions format; Azure AI Foundry Anthropic (`{resource}.services.ai.azure.com/anthropic`) uses Anthropic Messages API format with `x-api-key`/`anthropic-version` headers.
- Native `fetch` ignores `HTTP_PROXY`/`HTTPS_PROXY`, causing DNS failures in corporate networks. The provider tunnels through an HTTP CONNECT proxy via `tunnel-agent` + `node-fetch` when proxy env vars are present. Proxy credentials (`user:pass@host`) are extracted from the URL and forwarded as `Proxy-Authorization`.
- `/v1/messages` is not appended when the endpoint already ends with that path.
- `VALID_PROVIDERS` gap: `azure-openai` was missing from the runtime allowlist used by `loadFallbackConfig()`, so `FALLBACK_PROVIDERS=azure-openai` was silently dropped despite `ProviderType` including the value.
- Docstrings were added to `AzureOpenAIProvider` to satisfy the 80% docstring coverage check.
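A sketch contrasting the two request shapes described above; the URL patterns and header names come from this PR's summary and sequence diagram, while the concrete values (deployment name, API version strings) are illustrative assumptions:

```ts
// Illustrative only; placeholder values, and the provider's exact field names may differ.
const endpoint = "https://my-resource.openai.azure.com";
const foundryEndpoint = "https://my-resource.services.ai.azure.com/anthropic";
const deployment = "my-deployment";
const apiKey = process.env.AZURE_OPENAI_API_KEY ?? "";
const apiVersion = "2024-02-01"; // assumed example value

// Classic Azure OpenAI: OpenAI chat-completions shape, api-key header.
const azureRequest = {
  url: `${endpoint}/openai/deployments/${deployment}/chat/completions?api-version=${apiVersion}`,
  headers: { "api-key": apiKey, "Content-Type": "application/json" },
  body: {
    messages: [
      { role: "system", content: "You are a summarizer." },
      { role: "user", content: "Summarize this." },
    ],
    max_tokens: 1024,
  },
};

// Azure AI Foundry (Anthropic): Messages API shape, x-api-key + anthropic-version headers.
const foundryRequest = {
  url: `${foundryEndpoint}/v1/messages`,
  headers: {
    "x-api-key": apiKey,
    "anthropic-version": "2023-06-01", // assumed example value
    "Content-Type": "application/json",
  },
  body: {
    model: deployment,
    system: "You are a summarizer.",
    messages: [{ role: "user", content: "Summarize this." }],
    max_tokens: 1024,
  },
};
```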
Changes
- `src/providers/azure-openai.ts`: new provider implementation
- `src/providers/index.ts`: registers `AzureOpenAIProvider`
- `src/config.ts`: `azure-openai` added to `VALID_PROVIDERS`
- `src/types.ts`: `ProviderType` union includes `azure-openai`
- `src/viewer/index.html`: layout and banner text tweaks
- `README.md`: `AZURE_OPENAI_*` env vars with proxy note
- `CHANGELOG.md`: `[Unreleased]` entries for all changes

Environment variables
Test plan
- Verified behind a corporate proxy (`HTTP_PROXY`/`HTTPS_PROXY` set)
- Authenticated proxy URL (`user:pass@host:port`) forwards `Proxy-Authorization` correctly
- `FALLBACK_PROVIDERS=azure-openai` is not silently dropped
- An existing `/v1/messages` suffix does not double-append
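A hypothetical manual smoke test covering this plan; the constructor arguments follow the sequence diagram above (`apiKey, endpoint, deployment, maxTokens`), and the import path is an assumption:

```ts
// Hypothetical smoke test; not part of the PR. Run with the AZURE_OPENAI_*
// variables set, optionally behind HTTPS_PROXY=http://user:pass@host:port.
import { AzureOpenAIProvider } from "../src/providers/azure-openai.js";

async function smokeTest(): Promise<void> {
  const provider = new AzureOpenAIProvider(
    process.env.AZURE_OPENAI_API_KEY ?? "",
    process.env.AZURE_OPENAI_ENDPOINT ?? "",
    process.env.AZURE_OPENAI_DEPLOYMENT ?? "",
    1024,
  );
  // With a proxy configured this should tunnel via CONNECT and forward
  // Proxy-Authorization; without one, it should call the endpoint directly.
  const result = await provider.compress("You are a summarizer.", "Summarize: hello world.");
  console.log(result.slice(0, 200));
}

smokeTest().catch((err) => {
  console.error("Smoke test failed:", err);
  process.exit(1);
});
```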