Merged
92 commits
- `13e9709` docs: add LLM Service introduction (ai-bankofai, Mar 15, 2026)
- `3c38154` docs: add Quick Start for LLM Service (ai-bankofai, Mar 15, 2026)
- `d801547` Create pricing-and-usage.md (ai-bankofai, Mar 15, 2026)
- `ef88757` Create chatgpt-5-2.md (ai-bankofai, Mar 15, 2026)
- `7d5c48c` Create chatgpt-5-mini.md (ai-bankofai, Mar 15, 2026)
- `0589a75` Create chatgpt-5-nano.md (ai-bankofai, Mar 15, 2026)
- `0599568` Create claude-opus-4-6.md (ai-bankofai, Mar 15, 2026)
- `19d6d35` Create claude-opus-4-5.md (ai-bankofai, Mar 15, 2026)
- `10c9387` Create claude-sonnet-4-6.md (ai-bankofai, Mar 15, 2026)
- `bcee033` Create claude-sonnet-4-5.md (ai-bankofai, Mar 15, 2026)
- `b628118` Create claude-haiku-4-5.md (ai-bankofai, Mar 15, 2026)
- `5c05d99` Create gemini-3-1-pro.md (ai-bankofai, Mar 15, 2026)
- `ca45afc` Create gemini-3-flash.md (ai-bankofai, Mar 15, 2026)
- `fedd20c` Create chat-completion.md (ai-bankofai, Mar 15, 2026)
- `2c2caa7` Create integration-guide.md (ai-bankofai, Mar 15, 2026)
- `ef2d68b` Update integration-guide.md (ai-bankofai, Mar 15, 2026)
- `b46cff4` Update integration-guide.md (ai-bankofai, Mar 15, 2026)
- `b26284f` Create ne-click-script-tutorial.md (ai-bankofai, Mar 15, 2026)
- `d34d935` Rename ne-click-script-tutorial.md to one-click-script-tutorial.md (ai-bankofai, Mar 15, 2026)
- `3db8666` Update one-click-script-tutorial.md (ai-bankofai, Mar 15, 2026)
- `a9b0079` Add files via upload (ai-bankofai, Mar 16, 2026)
- `c2b0274` Update sidebars.js (ai-bankofai, Mar 16, 2026)
- `b711bc9` Add files via upload (ai-bankofai, Mar 16, 2026)
- `3f3bd9e` Add files via upload (ai-bankofai, Mar 16, 2026)
- `5409581` Add files via upload (ai-bankofai, Mar 16, 2026)
- `f76f57e` Delete docs/llm-service/api/ai_studio_code.md (ai-bankofai, Mar 16, 2026)
- `787c9df` Delete docs/llm-service/api/ai_studio_code.yaml (ai-bankofai, Mar 16, 2026)
- `0511de1` Update swagger.json (ai-bankofai, Mar 16, 2026)
- `40de1f6` Delete docs/llm-service/api/swagger.json (ai-bankofai, Mar 16, 2026)
- `7268078` Delete docs/llm-service/api/Bankofai API.md (ai-bankofai, Mar 16, 2026)
- `07e3799` Add files via upload (ai-bankofai, Mar 16, 2026)
- `da0d9c2` Add files via upload (ai-bankofai, Mar 16, 2026)
- `ad59fc7` Update API.md (ai-bankofai, Mar 16, 2026)
- `5ee977a` Delete docs/llm-service/api/Bankofai API.md (ai-bankofai, Mar 16, 2026)
- `b796721` Delete docs/llm-service/api/chat-completion.md (ai-bankofai, Mar 16, 2026)
- `a0643c5` Update one-click-script-tutorial.md (ai-bankofai, Mar 16, 2026)
- `6f38a51` Update one-click-script-tutorial.md (ai-bankofai, Mar 16, 2026)
- `6287159` Update one-click-script-tutorial.md (ai-bankofai, Mar 16, 2026)
- `ee522cf` Create glm-5.md (ai-bankofai, Mar 16, 2026)
- `6862b97` Create kimi-k2.5.md (ai-bankofai, Mar 16, 2026)
- `6a6b9c9` Create minimax-m2.5.md (ai-bankofai, Mar 16, 2026)
- `4b1d7b3` Update integration-guide.md (ai-bankofai, Mar 17, 2026)
- `7282f61` Update integration-guide.md (ai-bankofai, Mar 17, 2026)
- `ec0cb42` Update integration-guide.md (ai-bankofai, Mar 17, 2026)
- `8cc3f88` Update integration-guide.md (ai-bankofai, Mar 17, 2026)
- `da9e087` Update integration-guide.md (ai-bankofai, Mar 17, 2026)
- `b720194` Update integration-guide.md (ai-bankofai, Mar 17, 2026)
- `61fd4fb` Update integration-guide.md (ai-bankofai, Mar 17, 2026)
- `42981f2` Update integration-guide.md (ai-bankofai, Mar 17, 2026)
- `c9a5647` Create llm-service (ai-bankofai, Mar 17, 2026)
- `6db77e9` Create introduction.md (ai-bankofai, Mar 17, 2026)
- `2f89493` Delete i18n/zh-Hans/docusaurus-plugin-content-docs/current/llm-service (ai-bankofai, Mar 17, 2026)
- `4311b5a` Create introduction.md (ai-bankofai, Mar 17, 2026)
- `273c72f` Delete i18n/zh-Hans/docusaurus-plugin-content-docs/llm-service directory (ai-bankofai, Mar 17, 2026)
- `82c5e05` Create quick-start.md (ai-bankofai, Mar 17, 2026)
- `28ea31b` Update quick-start.md (ai-bankofai, Mar 17, 2026)
- `c31773a` Create pricing-and-usage.md (ai-bankofai, Mar 17, 2026)
- `f2c52dc` Create chatgpt-5-2.md (ai-bankofai, Mar 17, 2026)
- `e30d121` Create chatgpt-5-mini.md (ai-bankofai, Mar 17, 2026)
- `8b9c1fe` Create chatgpt-5-nano.md (ai-bankofai, Mar 17, 2026)
- `b8bbe37` Update chatgpt-5-nano.md (ai-bankofai, Mar 17, 2026)
- `d8ef698` Create claude-haiku-4-5.md (ai-bankofai, Mar 17, 2026)
- `d931aa5` Create claude-opus-4-5.md (ai-bankofai, Mar 17, 2026)
- `3b538c9` Create claude-opus-4-6.md (ai-bankofai, Mar 17, 2026)
- `77f827d` Create claude-sonnet-4-5.md (ai-bankofai, Mar 17, 2026)
- `ceca1df` Create claude-sonnet-4-6.md (ai-bankofai, Mar 17, 2026)
- `4235cb2` Create gemini-3-1-pro.md (ai-bankofai, Mar 17, 2026)
- `c666fce` Create gemini-3-flash.md (ai-bankofai, Mar 17, 2026)
- `70e6277` Create glm-5.md (ai-bankofai, Mar 17, 2026)
- `ef5dd5f` Create kimi-k2.5.md (ai-bankofai, Mar 17, 2026)
- `c3035af` Create minimax-m2.5.md (ai-bankofai, Mar 17, 2026)
- `76342f2` Create integration-guide.md (ai-bankofai, Mar 17, 2026)
- `7a97ce6` Create one-click-script-tutorial.md (ai-bankofai, Mar 17, 2026)
- `a8439ce` Update one-click-script-tutorial.md (ai-bankofai, Mar 17, 2026)
- `47a6647` Create API.md (ai-bankofai, Mar 17, 2026)
- `347d776` Update API.md (ai-bankofai, Mar 17, 2026)
- `5e5c8c8` Merge pull request #22 from BofAI/update-mcp-server (Will-Guan, Mar 17, 2026)
- `9823854` config sidebars (jizhen181-dot, Mar 18, 2026)
- `860e0a7` fix llm-service-->llmservice (jizhen181-dot, Mar 18, 2026)
- `304c34a` Merge branch 'main' into ai-bankofai-patch-1 (Will-Guan, Mar 18, 2026)
- `776a7cd` Merge pull request #13 from BofAI/ai-bankofai-patch-1 (Will-Guan, Mar 18, 2026)
- `fdee564` Update API.md (ai-bankofai, Mar 18, 2026)
- `5267b06` Update API.md (ai-bankofai, Mar 18, 2026)
- `7a029d4` Update integration-guide.md (ai-bankofai, Mar 18, 2026)
- `a5949c4` Update integration-guide.md (ai-bankofai, Mar 18, 2026)
- `85f18bb` add docker.yml (jizhen181-dot, Mar 18, 2026)
- `422c990` add tag (jizhen181-dot, Mar 18, 2026)
- `cd6b170` fix (jizhen181-dot, Mar 18, 2026)
- `77685f0` fix (jizhen181-dot, Mar 18, 2026)
- `1e36b52` fix (jizhen181-dot, Mar 18, 2026)
- `d85d693` fix (jizhen181-dot, Mar 18, 2026)
- `de8fdca` fix (jizhen181-dot, Mar 18, 2026)
81 changes: 81 additions & 0 deletions .github/workflows/docker.yml
@@ -0,0 +1,81 @@
name: Build and Push Docker Image

on:
  push:
    branches:
      - main
      - master
      - ai-bankofai-patch-1
    tags:
      - 'test'
  pull_request:
    branches:
      - main
      - master
      - ai-bankofai-patch-1
  workflow_dispatch:

env:
  IMAGE_NAME: bankofai/docs

jobs:
  build-and-push:
    runs-on: ubuntu-latest
    permissions:
      contents: read

    steps:
      - name: Checkout repository
        uses: actions/checkout@v4

      - name: Set up Docker Buildx
        uses: docker/setup-buildx-action@v3

      - name: Debug Docker Hub secrets
        if: github.event_name != 'pull_request'
        env:
          DOCKERHUB_USERNAME: ${{ secrets.DOCKERHUB_USERNAME }}
          DOCKERHUB_TOKEN: ${{ secrets.DOCKERHUB_TOKEN }}
        run: |
          echo "username=[$DOCKERHUB_USERNAME]"
          echo "token_length=${#DOCKERHUB_TOKEN}"

      - name: Log in to Docker Hub
        if: github.event_name != 'pull_request'
        env:
          DOCKERHUB_USERNAME: ${{ secrets.DOCKERHUB_USERNAME }}
          DOCKERHUB_TOKEN: ${{ secrets.DOCKERHUB_TOKEN }}
        run: |
          echo "$DOCKERHUB_TOKEN" | docker login -u "$DOCKERHUB_USERNAME" --password-stdin

      - name: Extract Docker metadata
        id: meta
        uses: docker/metadata-action@v5
        with:
          images: ${{ env.IMAGE_NAME }}
          tags: |
            type=raw,value=test

      - name: Determine APP_ENV
        id: app_env
        run: |
          if [[ "${{ github.ref }}" == "refs/heads/main" || "${{ github.ref }}" == "refs/heads/master" ]]; then
            echo "APP_ENV=production" >> $GITHUB_OUTPUT
          elif [[ "${{ github.ref }}" == refs/tags/* ]]; then
            echo "APP_ENV=production" >> $GITHUB_OUTPUT
          else
            echo "APP_ENV=development" >> $GITHUB_OUTPUT
          fi

      - name: Build and push Docker image
        uses: docker/build-push-action@v6
        with:
          context: .
          push: ${{ github.event_name != 'pull_request' }}
          tags: ${{ steps.meta.outputs.tags }}
          labels: ${{ steps.meta.outputs.labels }}
          build-args: |
            APP_ENV=${{ steps.app_env.outputs.APP_ENV }}
          cache-from: type=gha
          cache-to: type=gha,mode=max
          platforms: linux/amd64
236 changes: 236 additions & 0 deletions docs/llmservice/api/API.md
@@ -0,0 +1,236 @@
# AI API (OpenAI Compatible)
Chat completion. Auth: Bearer token. Non-stream: JSON with choices[].message.content. Stream: SSE chunks with choices[].delta.content.

## Version: 1.0

### BaseURL
https://api.bankofai.io/

### Available authorizations
#### bearerAuth (HTTP, bearer)
Bearer `<token>`, e.g. Bearer sk-xxx
Bearer format: JWT

---
## Model List

### [GET] /v1/models
**List models (OpenAI compatible)**

List available models. Auth: Bearer token. Response: object, success, data.

#### Responses

| Code | Description | Schema |
| ---- | ----------- | ------ |
| 200 | object: list; success: true; data: array of { id, object, created, owned_by } | **application/json**: [V1ModelsResponse](#v1modelsresponse)<br/> |
| 400 | Bad Request - invalid parameters or malformed body | **application/json**: [ErrorResponse](#errorresponse)<br/> |
| 401 | Unauthorized - invalid or missing authentication | **application/json**: [ErrorResponse](#errorresponse)<br/> |
| 403 | Forbidden - access denied, insufficient quota, or banned | **application/json**: [ErrorResponse](#errorresponse)<br/> |
| 429 | Too Many Requests - rate limit exceeded | **application/json**: [ErrorResponse](#errorresponse)<br/> |
| 500 | Internal Server Error | **application/json**: [ErrorResponse](#errorresponse)<br/> |

##### Security

| Security Schema | Scopes |
| --------------- | ------ |
| bearerAuth | |
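
The endpoint above can be called with any HTTP client; here is a minimal Python sketch using only the standard library. The `sk-xxx` token is a placeholder, and the response shape assumed by `model_ids` follows the `V1ModelsResponse` schema below.

```python
import json
import urllib.request

BASE_URL = "https://api.bankofai.io"


def list_models(token: str) -> dict:
    """GET /v1/models with a Bearer token; returns the parsed JSON body."""
    req = urllib.request.Request(
        f"{BASE_URL}/v1/models",
        headers={"Authorization": f"Bearer {token}"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read().decode("utf-8"))


def model_ids(body: dict) -> list:
    """Extract just the model IDs from a V1ModelsResponse-shaped body."""
    return [item["id"] for item in body.get("data", [])]


# Example:
#   body = list_models("sk-xxx")   # replace with your real token
#   print(model_ids(body))
```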

---
## Chat Completions

### [POST] /v1/chat/completions
**Create chat completion (OpenAI compatible)**

Chat completion. Auth: Bearer token. Non-stream: JSON with choices[].message.content. Stream: SSE chunks with choices[].delta.content.

#### Request Body

| Required | Schema |
| -------- | ------ |
| Yes | **application/json**: [ChatCompletionsRequest](#chatcompletionsrequest)<br/> |

#### Responses

| Code | Description | Schema |
| ---- | ----------- | ------ |
| 200 | Success. Schema differs by stream mode. | **application/json**: [ChatCompletionsResponse](#chatcompletionsresponse)<br/>**text/event-stream**: [ChatCompletionsResponse](#chatcompletionsresponse)<br/> |
| 400 | Bad Request - invalid parameters, malformed body, or invalid request | **application/json**: [ErrorResponse](#errorresponse)<br/> |
| 401 | Unauthorized - invalid or missing authentication | **application/json**: [ErrorResponse](#errorresponse)<br/> |
| 403 | Forbidden - access denied, insufficient quota, or model access restricted | **application/json**: [ErrorResponse](#errorresponse)<br/> |
| 429 | Too Many Requests - rate limit exceeded | **application/json**: [ErrorResponse](#errorresponse)<br/> |
| 500 | Internal Server Error | **application/json**: [ErrorResponse](#errorresponse)<br/> |
| 502 | Bad Gateway - upstream service error | **application/json**: [ErrorResponse](#errorresponse)<br/> |
| 503 | Service Unavailable - overloaded or no available channel | **application/json**: [ErrorResponse](#errorresponse)<br/> |

##### Security

| Security Schema | Scopes |
| --------------- | ------ |
| bearerAuth | |
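
To make the request and response shapes concrete, here is a minimal non-streaming sketch in Python (standard library only). The token is a placeholder; optional sampling parameters such as `temperature` and `max_tokens` are passed through unchanged, per the `ChatCompletionsRequest` schema below.

```python
import json
import urllib.request

BASE_URL = "https://api.bankofai.io"


def chat_completion(token: str, model: str, messages: list, **options) -> dict:
    """POST /v1/chat/completions (non-streaming) and return the parsed body."""
    payload = {"model": model, "messages": messages, **options}
    req = urllib.request.Request(
        f"{BASE_URL}/v1/chat/completions",
        data=json.dumps(payload).encode("utf-8"),
        headers={
            "Authorization": f"Bearer {token}",
            "Content-Type": "application/json",
        },
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read().decode("utf-8"))


def reply_text(body: dict) -> str:
    """Pull the assistant text out of choices[0].message.content."""
    return body["choices"][0]["message"]["content"]


# Example:
#   body = chat_completion("sk-xxx", "gpt-4",
#                          [{"role": "user", "content": "Hello"}],
#                          temperature=0.7, max_tokens=256)
#   print(reply_text(body))
```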

---
### Schemas

#### ErrorResponse

| Name | Type | Description | Required |
| ---- | ---- | ----------- | -------- |
| error | { **"message"**: string, **"type"**: string, **"param"**: string, null, **"code"**: string, null } | | No |

#### V1ModelsResponse

| Name | Type | Description | Required |
| ---- | ---- | ----------- | -------- |
| object | string | *Example:* `"list"` | No |
| success | boolean | *Example:* `true` | No |
| data | [ [V1ModelItem](#v1modelitem) ] | | No |

#### V1ModelItem

| Name | Type | Description | Required |
| ---- | ---- | ----------- | -------- |
| id | string | *Example:* `"gpt-4"` | No |
| object | string | *Example:* `"model"` | No |
| created | integer | *Example:* `1626777600` | No |
| owned_by | string | *Example:* `"openai"` | No |

#### ChatCompletionsRequest

| Name | Type | Description | Required |
| ---- | ---- | ----------- | -------- |
| model | string | ID of the model to use (e.g. gpt-4).<br/>*Example:* `"gpt-4"` | Yes |
| messages | [ [ChatMessage](#chatmessage) ] | List of messages in the conversation. | Yes |
| stream | boolean | If true, partial message deltas will be sent as server-sent events. Default false. | No |
| max_tokens | integer | Maximum number of tokens that can be generated in the completion. | No |
| temperature | number | Sampling temperature between 0 and 2. Higher = more random. Default 1. | No |
| top_p | number | Nucleus sampling: consider tokens with top_p probability mass. Default 1. | No |
| stop | | Up to 4 sequences where the API will stop generating. String or array of strings. | No |
| n | integer | How many chat completion choices to generate. Default 1. | No |
| frequency_penalty | number | -2.0 to 2.0. Penalize repeated tokens. Default 0. | No |
| presence_penalty | number | -2.0 to 2.0. Penalize tokens that appear in the text so far. Default 0. | No |
| seed | integer | Random seed for deterministic sampling (if supported by model). | No |
| response_format | [ChatResponseFormat](#chatresponseformat) | | No |
| tools | [ [ChatTool](#chattool) ] | List of tools the model may call. Each has type "function" and function { name, description?, parameters? }. | No |
| tool_choice | | Controls tool usage: either a string mode or a [ToolChoiceObject](#toolchoiceobject) naming the specific function to call. | No |
| user | string | Optional end-user identifier for abuse monitoring. | No |

#### ChatMessage

| Name | Type | Description | Required |
| ---- | ---- | ----------- | -------- |
| role | string | "system" \| "user" \| "assistant" \| "tool". System sets behavior; user/assistant are conversation; tool is tool result.<br/>*Example:* `"user"` | No |
| content | string | Message content. For tool role, the result of the tool call.<br/>*Example:* `"Hello"` | No |
| name | string | Optional name for the message author (e.g. to disambiguate multiple users). | No |
| tool_call_id | string | When role is "tool", the id of the tool call this result is for. Required for tool messages. | No |
| tool_calls | [ [ChatToolCallItem](#chattoolcallitem) ] | When role is "assistant" and the model called tools, array of { id, type, function: { name, arguments } }. | No |

#### ChatResponseFormat

Specify output format: { "type": "text" } or { "type": "json_object" } or json_schema.

| Name | Type | Description | Required |
| ---- | ---- | ----------- | -------- |
| type | string | "text" or "json_object". | No |
| json_schema | | When type is json_schema, optional schema for the output. | No |

#### ChatTool

| Name | Type | Description | Required |
| ---- | ---- | ----------- | -------- |
| type | string | Must be "function".<br/>*Example:* `"function"` | No |
| function | [ChatToolFunction](#chattoolfunction) | Function definition (name, description, parameters). | No |

#### ChatToolFunction

| Name | Type | Description | Required |
| ---- | ---- | ----------- | -------- |
| name | string | Name of the function. | No |
| description | string | Optional description for the model. | No |
| parameters | | Optional JSON schema for the function arguments. | No |

#### ChatToolCallItem

| Name | Type | Description | Required |
| ---- | ---- | ----------- | -------- |
| id | string | ID of the tool call. | No |
| type | string | "function".<br/>*Example:* `"function"` | No |
| function | [ChatToolCallFunction](#chattoolcallfunction) | Name and arguments of the call. | No |

#### ChatToolCallFunction

| Name | Type | Description | Required |
| ---- | ---- | ----------- | -------- |
| name | string | Name of the function to call. | No |
| arguments | string | JSON string of the arguments. | No |
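
Note that `arguments` arrives as a JSON *string*, not an object, so callers must decode it before dispatching. A small sketch (the `get_weather` name in the example is purely illustrative):

```python
import json


def parse_tool_calls(message: dict) -> list:
    """Decode an assistant message's tool_calls into (name, args) pairs.

    Each item looks like {id, type, function: {name, arguments}}, where
    `arguments` is a JSON-encoded string that must be parsed separately.
    """
    calls = []
    for call in message.get("tool_calls", []):
        fn = call["function"]
        calls.append((fn["name"], json.loads(fn["arguments"])))
    return calls


# Example:
#   msg = {"role": "assistant",
#          "tool_calls": [{"id": "call_1", "type": "function",
#                          "function": {"name": "get_weather",
#                                       "arguments": "{\"city\": \"Paris\"}"}}]}
#   parse_tool_calls(msg)  -> [("get_weather", {"city": "Paris"})]
```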

#### ToolChoiceObject

Precise mode: specifies the particular function to call.

| Name | Type | Description | Required |
| ---- | ---- | ----------- | -------- |
| type | string, <br/>**Available values:** "function" | Must be "function".<br/>*Enum:* `"function"`<br/>*Example:* `"function"` | Yes |
| function | [ToolChoiceFunction](#toolchoicefunction) | Function definition to call. | Yes |

#### ToolChoiceFunction

| Name | Type | Description | Required |
| ---- | ---- | ----------- | -------- |
| name | string | Name of the function to call. | Yes |

#### ChatCompletionsResponse

Non-stream: object=chat.completion, choices[].message, usage. Stream: object=chat.completion.chunk, choices[].delta; final chunk has usage.

| Name | Type | Description | Required |
| ---- | ---- | ----------- | -------- |
| id | string | *Example:* `"chatcmpl-xxx"` | No |
| object | string, <br/>**Available values:** "chat.completion", "chat.completion.chunk" | "chat.completion" (non-stream) or "chat.completion.chunk" (stream).<br/>*Enum:* `"chat.completion"`, `"chat.completion.chunk"` | No |
| created | integer | *Example:* `1677652288` | No |
| model | string | *Example:* `"gpt-4"` | No |
| service_tier | string | *Example:* `"default"` | No |
| system_fingerprint | string, null | | No |
| choices | [ [ChatChoice](#chatchoice) ] | Empty in final usage chunk. | No |
| usage | | Non-stream: always present. Stream: null until final chunk. | No |
| obfuscation | string | | No |

#### ChatMessageContent

Non-stream choices[].message. Full assistant message with role, content, refusal, annotations.

| Name | Type | Description | Required |
| ---- | ---- | ----------- | -------- |
| role | string | *Example:* `"assistant"` | No |
| content | string | Assistant reply text. | No |
| refusal | string, null | Refusal reason when model declines; null otherwise. | No |
| annotations | [ ] | Citations, references, etc. | No |

#### ChatChoice

Non-stream: message. Stream: delta. finish_reason null until last content chunk.

| Name | Type | Description | Required |
| ---- | ---- | ----------- | -------- |
| index | integer | | No |
| message | [ChatMessageContent](#chatmessagecontent) | Non-stream only. Full assistant message. | No |
| delta | [ChatChoiceDelta](#chatchoicedelta) | Stream only. Incremental content; empty {} on stop. | No |
| finish_reason | string, null | Null until done; e.g. "stop", "length", "tool_calls". | No |

#### ChatChoiceDelta

| Name | Type | Description | Required |
| ---- | ---- | ----------- | -------- |
| content | string | | No |
| role | string | | No |
| tool_calls | [ [ChatToolCallItem](#chattoolcallitem) ] | | No |
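
Putting the streaming schema together: each SSE event carries one `chat.completion.chunk`, and the full reply is reassembled by concatenating `choices[].delta.content`. A minimal parser sketch follows; it assumes the OpenAI-style `data: ` line prefix and `[DONE]` sentinel, which this page does not spell out.

```python
import json


def collect_stream(lines) -> str:
    """Accumulate assistant text from SSE lines of chat.completion.chunk
    objects, stopping at the [DONE] sentinel."""
    parts = []
    for line in lines:
        if not line.startswith("data: "):
            continue  # skip blank keep-alives and SSE comments
        payload = line[len("data: "):].strip()
        if payload == "[DONE]":
            break
        chunk = json.loads(payload)
        for choice in chunk.get("choices", []):
            delta = choice.get("delta", {})
            if delta.get("content"):
                parts.append(delta["content"])
    return "".join(parts)
```

In a real client the `lines` iterable would come from the HTTP response body of a request sent with `"stream": true`.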

#### ChatUsage

| Name | Type | Description | Required |
| ---- | ---- | ----------- | -------- |
| prompt_tokens | integer | Number of tokens in the prompt. | No |
| completion_tokens | integer | Number of tokens in the completion. | No |
| total_tokens | integer | Total tokens (prompt + completion). | No |
| prompt_tokens_details | { **"cached_tokens"**: integer, **"audio_tokens"**: integer } | | No |
| completion_tokens_details | { **"reasoning_tokens"**: integer, **"audio_tokens"**: integer, **"accepted_prediction_tokens"**: integer, **"rejected_prediction_tokens"**: integer } | | No |
19 changes: 19 additions & 0 deletions docs/llmservice/introduction.md
@@ -0,0 +1,19 @@
# Welcome to LLM Service

## About LLM Service

LLM Service is a professional AI service module within the Bank of AI ecosystem, built on top-tier blockchain infrastructure. It is dedicated to providing users with an efficient, user-friendly, and creative AI interaction experience. As a core AI service infrastructure of Bank of AI, this service leverages the decentralization, security, and high efficiency of blockchain technology to introduce a brand-new AI service model.

The service's core features include:

* **Multi-Model AI Chat:** We integrate various industry-leading Large Language Models (LLMs), allowing users to select the most suitable model based on their specific needs.
* **Powerful Integrated AI Services:** We offer comprehensive AI-related API services, enabling users to access and integrate them rapidly and easily within the Bank of AI framework.
* **Web3 Native Experience:** Through seamless integration with mainstream Web3 wallets, we provide an end-to-end native experience, from login to payment.

## Why Choose LLM Service?

Choosing our LLM Service means enjoying the unique advantages of a secure blockchain ecosystem alongside meticulously designed features.

* **Multi-chain Ecosystem Advantages:** As part of the Bank of AI ecosystem, users can make payments using mainstream tokens on supported chains, benefiting from fast transaction confirmations and low fees.
* **Low Cost & High Efficiency:** By optimizing resources and ensuring efficient on-chain interactions, we deliver highly cost-effective AI services to users.
* **Security & Privacy Protection:** We utilize a decentralized login method. Users can complete authentication simply by signing with their Web3 wallet, ensuring greater security and privacy for all AI interactions.
30 changes: 30 additions & 0 deletions docs/llmservice/models/chatgpt-5-2.md
@@ -0,0 +1,30 @@
# ChatGPT-5.2

## Overview
ChatGPT-5.2 is the latest generation of the flagship large language model developed by OpenAI. Building upon the powerful capabilities of the 5.1 version, it further optimizes the speed of multimodal processing and the execution efficiency of complex tasks, making it the ideal choice for professional users seeking ultimate performance and efficiency.

## Key Features
* **Efficient Multimodal Processing:** Significantly improves the parsing and generation speed of image and video content compared to 5.1, achieving a smoother multimodal interaction experience.
* **Enhanced Task Execution Efficiency:** Optimizes the internal reasoning engine, allowing for faster and more accurate conclusions when handling long-chain, multi-step complex tasks.
* **Stronger Interference Resistance:** Exhibits greater robustness and accuracy when processing inputs containing significant noise or ambiguous instructions.

## Best Use Cases
* **Real-time Data Analysis and Visualization:** Capable of quickly processing real-time data streams and generating complex charts and visualization reports.
* **Complex Project Management and Planning:** Assists with task decomposition, resource allocation, and risk assessment for efficient decision support.
* **High-Frequency, High-Precision Professional Consulting:** Suitable for professional fields requiring fast and accurate responses, such as financial trading analysis and legal document retrieval.

## Capabilities and Limitations

| Capability | Detailed Description |
| :--- | :--- |
| **Reasoning Ability** | Extremely Strong. Maintains a leading position in complex logical reasoning and scientific computation, with improved efficiency. |
| **Creative Ability** | Extremely Strong. Can generate high-quality, in-depth content, particularly excelling in structured and professional texts. |
| **Multimodal Ability** | Comprehensive and Efficient. Supports input and understanding of images, videos, and audio, and can quickly generate high-quality image content. |
| **Response Speed** | Medium to Slow. Improved compared to 5.1, but still a deep analysis model, not suitable for extremely low-latency scenarios. |
| **Context Window** | Huge. Supports a context window of millions of tokens. |

## Credits and Pricing

| Model | Input (Credits/Token) | Output (Credits/Token) |
| :--- | :--- | :--- |
| **ChatGPT-5.2** | 1.75 | 14.00 |
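
Assuming the rates above are per token, the credit cost of one call can be read straight off the `usage` block returned by `/v1/chat/completions`. An illustrative sketch:

```python
# Rates from the pricing table above; treat as illustrative if pricing changes.
INPUT_RATE = 1.75    # credits per input (prompt) token
OUTPUT_RATE = 14.00  # credits per output (completion) token


def call_cost(usage: dict) -> float:
    """Credit cost of one ChatGPT-5.2 completion, from its usage block."""
    return (usage["prompt_tokens"] * INPUT_RATE
            + usage["completion_tokens"] * OUTPUT_RATE)


# e.g. a 1,200-token prompt with a 300-token reply:
# 1200 * 1.75 + 300 * 14.00 = 2100 + 4200 = 6300 credits
```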