feat (Azure): support Azure deployment selection #3260
nanaya-tachibana wants to merge 2 commits into ChatGPTNextWeb:main from
Conversation
Use `-` to hide all internal models and `+` to add Azure deployments.
With the latest rename feature, a meaningful display name can be set via `{deployment-id}:{model-display-name}`.
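As an illustration, the `-`/`+` and `{deployment-id}:{model-display-name}` syntax described above could be parsed like this. This is only a sketch: the type and function names are invented for the example and are not the project's actual implementation.

```typescript
// Illustrative parser for the CUSTOM_MODELS-style syntax described above.
// Names here (ModelEntry, parseCustomModels) are assumptions, not real project code.
type ModelEntry =
  | { action: "hide"; name: string }
  | { action: "add"; deploymentId: string; displayName: string };

function parseCustomModels(value: string): ModelEntry[] {
  return value.split(",").map((raw): ModelEntry => {
    const entry = raw.trim();
    if (entry.startsWith("-")) {
      // "-foo" hides a model; "-all" would hide every internal model.
      return { action: "hide", name: entry.slice(1) };
    }
    // "+{deployment-id}:{model-display-name}"; the display name is optional.
    const body = entry.startsWith("+") ? entry.slice(1) : entry;
    const [deploymentId, displayName] = body.split(":");
    return { action: "add", deploymentId, displayName: displayName ?? deploymentId };
  });
}

console.log(parseCustomModels("-all,+my-gpt4-deploy:GPT-4"));
```

With this sketch, `"-all,+my-gpt4-deploy:GPT-4"` hides the built-in models and registers one Azure deployment shown as "GPT-4".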
Add this for OpenAI as well, not only Azure. That will be better because in the future it's not only text-based models (e.g. TTS, DALL·E).
Also take for reference: #3206 (comment) and #3206 (comment).
That's a reference summarizing how models should be selected depending on the models chosen by users for Azure. Here is the example:

```typescript
// fix known issue where summarize is not using the current model selected
function getSummarizeModel(currentModel: string, modelConfig: ModelConfig) {
  // should depend on the user's selection
  return currentModel.startsWith("gpt") ? modelConfig.model : currentModel;
}
```

Then simply call it like this:

```typescript
const sessionModelConfig = this.currentSession().mask.modelConfig;
const topicModel = getSummarizeModel(sessionModelConfig.model, sessionModelConfig);
```
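To make the selection rule concrete, here is a minimal self-contained demo of that helper. `ModelConfig` is simplified to the one field used here, so this is a sketch rather than the project's real types.

```typescript
// Simplified stand-in for the project's ModelConfig type.
type ModelConfig = { model: string };

function getSummarizeModel(currentModel: string, modelConfig: ModelConfig): string {
  // GPT models fall back to the configured model for summarizing;
  // other names (e.g. Azure deployment ids) are kept as-is.
  return currentModel.startsWith("gpt") ? modelConfig.model : currentModel;
}

const cfg: ModelConfig = { model: "gpt-3.5-turbo" };
console.log(getSummarizeModel("gpt-4", cfg));           // "gpt-3.5-turbo"
console.log(getSummarizeModel("my-azure-deploy", cfg)); // "my-azure-deploy"
```

The point of the fix is the second case: an Azure deployment id that does not start with `gpt` is used directly instead of being silently replaced.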
From my current understanding of the code, an advanced configuration is probably needed to achieve that. For example:

```json
[
  {
    "api_provider": "openai",
    "base_url": "https://api.openai.com",
    "api_key": "XXXXXX",
    "models": [...]
  },
  {
    "api_provider": "openai",
    "base_url": "https://proxy.foo.bar",
    "api_key": "XXXXXX",
    "models": [...]
  },
  {
    "api_provider": "azure",
    "base_url": "https://{resource-name-1}.openai.azure.com/openai/deployments/",
    "api_key": "XXXXXX",
    "models": [...]
  },
  {
    "api_provider": "azure",
    "base_url": "https://{resource-name-2}.openai.azure.com/openai/deployments/",
    "api_key": "XXXXXX",
    "models": [...]
  }
]
```
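For illustration, that configuration could be modeled with types like the following. This is a sketch under assumptions: `ProviderConfig` and `findProvider` are invented names, and the model lists are made-up examples standing in for the elided `[...]` values.

```typescript
// Hypothetical TypeScript shape for the proposed configuration.
interface ProviderConfig {
  api_provider: "openai" | "azure";
  base_url: string;
  api_key: string;
  models: string[];
}

// One possible resolution strategy (not the PR's actual implementation):
// pick the first provider entry that offers the requested model.
function findProvider(configs: ProviderConfig[], model: string): ProviderConfig | undefined {
  return configs.find((c) => c.models.includes(model));
}

const configs: ProviderConfig[] = [
  { api_provider: "openai", base_url: "https://api.openai.com", api_key: "XXXXXX", models: ["gpt-3.5-turbo"] },
  { api_provider: "azure", base_url: "https://example.openai.azure.com/openai/deployments/", api_key: "XXXXXX", models: ["gpt-4"] },
];
console.log(findProvider(configs, "gpt-4")?.api_provider); // "azure"
```

The request URL and key would then come from whichever entry resolves, which is what makes multiple OpenAI proxies and multiple Azure resources coexist.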
I didn't quite get your point. The OpenAI APIs seem quite consistent: all of them share the same base URL, and the model is passed as a request parameter, so there is no need to generate the URL based on the model. I saw there is a PR working on TTS (#3258 (comment)). Let me check how the server side will be implemented and adapt some changes if possible.
Sorry, I was replying to those first comments from my phone.
I think your proposed schema is quite good. We might further add a "name" field to it, to give each model group a semantic meaning, e.g.
The same model name could be reused in different groups, so users can decide which provider to use for the gpt-3.5-turbo model, per their preference.
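A sketch of what that suggestion could look like: the same model id appears in two groups, and the user's preferred group name decides which provider serves it. All group names and values below are invented for illustration.

```typescript
// Proposed schema with the suggested "name" field added.
interface ModelGroup {
  name: string;
  api_provider: "openai" | "azure";
  models: string[];
}

const groups: ModelGroup[] = [
  { name: "openai-direct", api_provider: "openai", models: ["gpt-3.5-turbo"] },
  { name: "azure-eastus", api_provider: "azure", models: ["gpt-3.5-turbo"] },
];

// Resolve "which provider serves this model?" by the user's preferred group.
function resolve(groups: ModelGroup[], model: string, preferredGroup: string): ModelGroup | undefined {
  return groups.find((g) => g.name === preferredGroup && g.models.includes(model));
}

console.log(resolve(groups, "gpt-3.5-turbo", "azure-eastus")?.api_provider); // "azure"
```

Without the `name` field, the two `gpt-3.5-turbo` entries would be indistinguishable, which is exactly the ambiguity the suggestion addresses.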
Oh, that makes sense to me. Thanks |
I also have the same issue.
I have already created a PR (Pull Request) which resolves the issue. The two changes are as follows:

I'd like to add support for Azure deployment selection, to give the same model-switching experience when using the Azure OpenAI service.
This patch probably won't break anything when `CUSTOM_MODELS` is not set. Users can use `-` to hide all internal models and `+` to add Azure deployments in `CUSTOM_MODELS`. `AZURE_URL`.