From 3737be89edef6409dea24a1fe783f2f1ec224679 Mon Sep 17 00:00:00 2001
From: Akhil Madhu Menon
Date: Mon, 4 May 2026 12:47:14 +0530
Subject: [PATCH 1/5] Updated Azure Foundry docs to include supported target URI formats

---
 integrations/llms/azure-foundry.mdx | 11 +++++++++++
 1 file changed, 11 insertions(+)

diff --git a/integrations/llms/azure-foundry.mdx b/integrations/llms/azure-foundry.mdx
index c825e0a4..b480fd8b 100644
--- a/integrations/llms/azure-foundry.mdx
+++ b/integrations/llms/azure-foundry.mdx
@@ -207,6 +207,17 @@ You can Learn more about these [Azure Entra Resources here](https://learn.micros
 
+### Supported Target URI formats
+
+Use the **Target URI** from your Azure AI Foundry deployment details as the Azure Foundry URL in Portkey. Portkey supports the following endpoint formats:
+
+- **AI Services**: `https://your-resource-name.services.ai.azure.com/models`
+- **Project-scoped OpenAI v1**: `https://your-resource-name.services.ai.azure.com/api/projects/your-project-name/openai/v1`
+- **Managed**: `https://your-model-name.region.inference.ml.azure.com/score`
+- **Serverless**: `https://your-model-name.region.models.ai.azure.com`
+
+For project-scoped OpenAI deployments, enter the full `/api/projects/<your-project-name>/openai/v1` URL exactly as shown in Azure AI Foundry. Portkey appends the request route, such as `/chat/completions`, when sending traffic to the deployment.
+
 ## Adding Multiple Models to Your Azure AI Foundry Provider
 
 You can deploy multiple models through a single Azure AI Foundry provider by using Portkey's custom models feature.

From e1211892b3b685d514ac4ce459c847f27939a4a8 Mon Sep 17 00:00:00 2001
From: Akhil Madhu Menon
Date: Mon, 4 May 2026 13:36:09 +0530
Subject: [PATCH 2/5] Removed redundant changes

---
 integrations/llms/azure-foundry.mdx | 11 ++---------
 1 file changed, 2 insertions(+), 9 deletions(-)

diff --git a/integrations/llms/azure-foundry.mdx b/integrations/llms/azure-foundry.mdx
index b480fd8b..34e48d42 100644
--- a/integrations/llms/azure-foundry.mdx
+++ b/integrations/llms/azure-foundry.mdx
@@ -163,6 +163,7 @@ Required parameters:
 - **Azure Managed ClientID**: Your managed client ID
 - **Azure Foundry URL**: The base endpoint URL for your deployment, formatted according to your deployment type:
   - For AI Services: `https://your-resource-name.services.ai.azure.com/models`
+  - For project-scoped OpenAI v1 deployments: `https://your-resource-name.services.ai.azure.com/api/projects/your-project-name/openai/v1`
   - For Managed: `https://your-model-name.region.inference.ml.azure.com/score`
   - For Serverless: `https://your-model-name.region.models.ai.azure.com`
 
@@ -188,6 +189,7 @@ Required parameters:
 - **Azure Entra Tenant ID**: Your tenant ID
 - **Azure Foundry URL**: The base endpoint URL for your deployment, formatted according to your deployment type:
   - For AI Services: `https://your-resource-name.services.ai.azure.com/models`
+  - For project-scoped OpenAI v1 deployments: `https://your-resource-name.services.ai.azure.com/api/projects/your-project-name/openai/v1`
   - For Managed: `https://your-model-name.region.inference.ml.azure.com/score`
   - For Serverless: `https://your-model-name.region.models.ai.azure.com`
 
@@ -207,15 +209,6 @@ You can Learn more about these [Azure Entra Resources here](https://learn.micros
 
-### Supported Target URI formats
-
-Use the **Target URI** from your Azure AI Foundry deployment details as the Azure Foundry URL in Portkey. Portkey supports the following endpoint formats:
-
-- **AI Services**: `https://your-resource-name.services.ai.azure.com/models`
-- **Project-scoped OpenAI v1**: `https://your-resource-name.services.ai.azure.com/api/projects/your-project-name/openai/v1`
-- **Managed**: `https://your-model-name.region.inference.ml.azure.com/score`
-- **Serverless**: `https://your-model-name.region.models.ai.azure.com`
-
 For project-scoped OpenAI deployments, enter the full `/api/projects/<your-project-name>/openai/v1` URL exactly as shown in Azure AI Foundry. Portkey appends the request route, such as `/chat/completions`, when sending traffic to the deployment.
 
 ## Adding Multiple Models to Your Azure AI Foundry Provider

From ef8ffa19e727602986806b6e9560fafb1ca89391 Mon Sep 17 00:00:00 2001
From: Akhil Madhu Menon
Date: Mon, 4 May 2026 14:57:48 +0530
Subject: [PATCH 3/5] Added project-scoped OpenAI endpoint

---
 integrations/llms/azure-foundry.mdx | 1 +
 1 file changed, 1 insertion(+)

diff --git a/integrations/llms/azure-foundry.mdx b/integrations/llms/azure-foundry.mdx
index 34e48d42..af4449c0 100644
--- a/integrations/llms/azure-foundry.mdx
+++ b/integrations/llms/azure-foundry.mdx
@@ -143,6 +143,7 @@ Portkey supports three authentication methods for Azure AI Foundry. For most use
 2. Click on the deployment to view details
 3. Copy the **API Key** from the authentication section
 4. Copy the **Target URI** - this is your endpoint URL
+   - For project-scoped OpenAI v1 deployments, this can be the full `https://your-resource-name.services.ai.azure.com/api/projects/your-project-name/openai/v1` URL.
 5. Note the **API Version** from your deployment URL
 6. **Azure Deployment Name** (Optional): Only required for Managed Services deployments

From 44db08f0e7a2d8977ccb718006d9bb817d934a17 Mon Sep 17 00:00:00 2001
From: Akhil Madhu Menon
Date: Mon, 4 May 2026 15:01:58 +0530
Subject: [PATCH 4/5] Added Copilot's suggestion

---
 integrations/llms/azure-foundry.mdx | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/integrations/llms/azure-foundry.mdx b/integrations/llms/azure-foundry.mdx
index af4449c0..ac8ee057 100644
--- a/integrations/llms/azure-foundry.mdx
+++ b/integrations/llms/azure-foundry.mdx
@@ -210,7 +210,7 @@ You can Learn more about these [Azure Entra Resources here](https://learn.micros
 
-For project-scoped OpenAI deployments, enter the full `/api/projects/<your-project-name>/openai/v1` URL exactly as shown in Azure AI Foundry. Portkey appends the request route, such as `/chat/completions`, when sending traffic to the deployment.
+For project-scoped OpenAI v1 deployments, enter the full URL ending with `/api/projects/<your-project-name>/openai/v1`, such as `https://your-resource-name.services.ai.azure.com/api/projects/your-project-name/openai/v1`, exactly as shown in Azure AI Foundry. Portkey appends the request route, such as `/chat/completions`, when sending traffic to the deployment.
 
 ## Adding Multiple Models to Your Azure AI Foundry Provider

From 07db849699799e6362e31844238e4aaa3b4d0fd8 Mon Sep 17 00:00:00 2001
From: Akhil Madhu Menon
Date: Mon, 4 May 2026 15:05:35 +0530
Subject: [PATCH 5/5] Removed redundant instruction

---
 integrations/llms/azure-foundry.mdx | 1 -
 1 file changed, 1 deletion(-)

diff --git a/integrations/llms/azure-foundry.mdx b/integrations/llms/azure-foundry.mdx
index ac8ee057..588d91b8 100644
--- a/integrations/llms/azure-foundry.mdx
+++ b/integrations/llms/azure-foundry.mdx
@@ -143,7 +143,6 @@ Portkey supports three authentication methods for Azure AI Foundry. For most use
 2. Click on the deployment to view details
 3. Copy the **API Key** from the authentication section
 4. Copy the **Target URI** - this is your endpoint URL
-   - For project-scoped OpenAI v1 deployments, this can be the full `https://your-resource-name.services.ai.azure.com/api/projects/your-project-name/openai/v1` URL.
 5. Note the **API Version** from your deployment URL
 6. **Azure Deployment Name** (Optional): Only required for Managed Services deployments
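
---

For reviewers, here is a minimal sketch of how the behavior these patches document plays out at call time, assuming the `portkey-ai` Python SDK; the API key, virtual key slug, and model name below are placeholders, not values from the patches:

```python
# Minimal sketch: calling a model behind an Azure AI Foundry provider through
# Portkey. Assumes the `portkey-ai` Python SDK; credentials are placeholders.
from portkey_ai import Portkey

client = Portkey(
    api_key="PORTKEY_API_KEY",        # placeholder Portkey API key
    virtual_key="azure-foundry-xyz",  # hypothetical slug for the Azure AI Foundry provider
)

# The provider's Azure Foundry URL was configured with a project-scoped
# Target URI such as:
#   https://your-resource-name.services.ai.azure.com/api/projects/your-project-name/openai/v1
# Portkey appends the request route (/chat/completions) to that base URL,
# so the call itself only needs the model name.
response = client.chat.completions.create(
    model="gpt-4o",  # example name of the model deployed in Azure AI Foundry
    messages=[{"role": "user", "content": "Hello from Portkey"}],
)
print(response.choices[0].message.content)
```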