Conversation
## Background Since the change from `useChat` to the `Chat` class, message handling has changed. The "Wire up the UI" section already contains working code: https://v5.ai-sdk.dev/docs/getting-started/nuxt#wire-up-the-ui. Copy that logic into the migration doc. Subject: https://v5.ai-sdk.dev/docs/migration-guides/migration-guide-5-0#usechat-replaced-with-chat-class ## Summary Add `e.preventDefault();` to the `handleSubmit` method.
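A minimal sketch of why the fix matters, using an illustrative `ChatLike` interface rather than the SDK's actual `Chat` class: without `e.preventDefault()`, the browser performs a full form submission (page reload) before `sendMessage` can run.

```typescript
// Hypothetical minimal shape of the chat object; the real Chat class has more members.
interface ChatLike {
  sendMessage(message: { text: string }): void;
}

// Builds a form submit handler. The e.preventDefault() call is the line the
// migration doc was missing: it stops the browser's default form submission.
export function makeHandleSubmit(chat: ChatLike, getInput: () => string) {
  return (e: { preventDefault: () => void }) => {
    e.preventDefault(); // prevent page reload before sendMessage runs
    chat.sendMessage({ text: getInput() });
  };
}
```

In a Nuxt component, a handler built this way would be bound to `@submit` on the form.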
## Background Anthropic's official Claude 4 model IDs use the format `claude-opus-4-20250514` and `claude-sonnet-4-20250514`, but the AI SDK was using the format `claude-4-opus-20250514` and `claude-4-sonnet-20250514`. ## Summary Updated Claude 4 model ID to match Anthropic's official format: - `claude-4-opus-20250514` --> `claude-opus-4-20250514` - `claude-4-sonnet-20250514` --> `claude-sonnet-4-20250514` Changes made to: - TypeScript model ID types in anthropic provider (AnthropicMessagesModelId type) - Provider documentation - Cookbook examples - Model reference tables - Migration guide examples ## Verification Verified against [Anthropic's official documentation](https://docs.anthropic.com/en/docs/about-claude/models/overview) that the correct model IDs are `claude-opus-4-20250514` and `claude-sonnet-4-20250514`. ## Future Work I didn't change the AI Gateway model ids since those are obviously vercel-specific ids that are routed internally, but note that those also use the older format (claude-4-sonnet rather than claude-sonnet-4)
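The rename is mechanical: the tier name and the `4` swap places. A hedged helper (illustrative only, not part of the SDK) that maps the old SDK format to Anthropic's official format:

```typescript
// Maps the old AI SDK Claude 4 ID format (claude-4-<tier>-<date>) to
// Anthropic's official format (claude-<tier>-4-<date>).
// Illustrative helper; not part of the SDK's public API.
export function toOfficialClaude4Id(id: string): string {
  const match = id.match(/^claude-4-(opus|sonnet)-(\d{8})$/);
  if (!match) return id; // already in the official format, or not a Claude 4 ID
  const [, tier, date] = match;
  return `claude-${tier}-4-${date}`;
}
```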
…rt (vercel#7088) ## Background https://ai.google.dev/gemini-api/docs/openai#thinking ## Summary Add a `reasoningEffort` option. ## Related Issues Fixes vercel#7087
## Background Over 60% of users prefer an OOP Agent abstraction over the functional `generateText` / `streamText` abstractions. ## Summary Add an experimental `Agent` abstraction. ## Verification Ran new examples.
## Background The system prompt is expected to be reused in multiple agent calls. ## Summary Add system option to agent constructor.
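The benefit of a constructor-level `system` option can be shown with a toy stand-in (this is not the SDK's `Agent` implementation; the class, its members, and the `GenerateFn` type are illustrative): shared settings are supplied once and reused on every call.

```typescript
// Toy stand-in for the experimental Agent abstraction: the system prompt is
// supplied once at construction and merged into every call, instead of being
// repeated in each generateText/streamText invocation.
type GenerateFn = (opts: { system?: string; prompt: string }) => string;

export class ToyAgent {
  constructor(
    private readonly generate: GenerateFn,
    private readonly system?: string,
  ) {}

  run(prompt: string): string {
    // The constructor-level system prompt applies to every call.
    return this.generate({ system: this.system, prompt });
  }
}
```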
# Releases

## ai@5.0.0-beta.8

### Patch Changes

- 6909543: feat (ai): support system parameter in Agent constructor
- c8fce91: feat (ai): add experimental Agent abstraction
- 9121250: Expose provider metadata as an attribute on exported OTEL spans
- Updated dependencies [97fedf9]
  - @ai-sdk/gateway@1.0.0-beta.4

## @ai-sdk/anthropic@2.0.0-beta.4

### Patch Changes

- fdff8a4: fix(provider/anthropic): correct Claude 4 model ID format
- 84577c8: fix (providers/anthropic): remove fine grained tool streaming beta

## @ai-sdk/cerebras@1.0.0-beta.3

### Patch Changes

- Updated dependencies [7b069ed]
  - @ai-sdk/openai-compatible@1.0.0-beta.3

## @ai-sdk/deepinfra@1.0.0-beta.3

### Patch Changes

- Updated dependencies [7b069ed]
  - @ai-sdk/openai-compatible@1.0.0-beta.3

## @ai-sdk/deepseek@1.0.0-beta.3

### Patch Changes

- Updated dependencies [7b069ed]
  - @ai-sdk/openai-compatible@1.0.0-beta.3

## @ai-sdk/fireworks@1.0.0-beta.3

### Patch Changes

- Updated dependencies [7b069ed]
  - @ai-sdk/openai-compatible@1.0.0-beta.3

## @ai-sdk/gateway@1.0.0-beta.4

### Patch Changes

- 97fedf9: feat (providers/gateway): include description and pricing info in model list

## @ai-sdk/google-vertex@3.0.0-beta.7

### Patch Changes

- Updated dependencies [fdff8a4]
- Updated dependencies [84577c8]
  - @ai-sdk/anthropic@2.0.0-beta.4

## @ai-sdk/langchain@1.0.0-beta.8

### Patch Changes

- Updated dependencies [6909543]
- Updated dependencies [c8fce91]
- Updated dependencies [9121250]
  - ai@5.0.0-beta.8

## @ai-sdk/llamaindex@1.0.0-beta.8

### Patch Changes

- Updated dependencies [6909543]
- Updated dependencies [c8fce91]
- Updated dependencies [9121250]
  - ai@5.0.0-beta.8

## @ai-sdk/openai-compatible@1.0.0-beta.3

### Patch Changes

- 7b069ed: allow any string as reasoningEffort

## @ai-sdk/react@2.0.0-beta.8

### Patch Changes

- Updated dependencies [6909543]
- Updated dependencies [c8fce91]
- Updated dependencies [9121250]
  - ai@5.0.0-beta.8

## @ai-sdk/rsc@1.0.0-beta.8

### Patch Changes

- Updated dependencies [6909543]
- Updated dependencies [c8fce91]
- Updated dependencies [9121250]
  - ai@5.0.0-beta.8

## @ai-sdk/svelte@3.0.0-beta.8

### Patch Changes

- Updated dependencies [6909543]
- Updated dependencies [c8fce91]
- Updated dependencies [9121250]
  - ai@5.0.0-beta.8

## @ai-sdk/togetherai@1.0.0-beta.3

### Patch Changes

- Updated dependencies [7b069ed]
  - @ai-sdk/openai-compatible@1.0.0-beta.3

## @ai-sdk/vercel@1.0.0-beta.3

### Patch Changes

- Updated dependencies [7b069ed]
  - @ai-sdk/openai-compatible@1.0.0-beta.3

## @ai-sdk/vue@2.0.0-beta.9

### Patch Changes

- Updated dependencies [6909543]
- Updated dependencies [c8fce91]
- Updated dependencies [9121250]
  - ai@5.0.0-beta.8

## @ai-sdk/xai@2.0.0-beta.3

### Patch Changes

- Updated dependencies [7b069ed]
  - @ai-sdk/openai-compatible@1.0.0-beta.3

Co-authored-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
…7065) ## Background While getting started with MCP and the AI SDK, I stumbled over the suboptimal env variable naming in the Nuxt getting-started section. The env variable is only available at build time, not at runtime, unless the `NUXT_` prefix is used. So I added the `NUXT_*` prefix to the doc, to keep other readers from stumbling over the same point. Official doc about this topic: https://nuxt.com/docs/guide/going-further/runtime-config#environment-variables The note about the default env name is kept unchanged. In https://github.com/vercel/ai/blob/main/examples/nuxt-openai/.env.example the right name is already used. ## Summary Change the env variable name in the Nuxt getting-started section to `NUXT_*`.
…ages (vercel#7100) ## Background Converting UI to model messages currently throws an error for incomplete tool calls. ## Summary Add an option to ignore incomplete tool calls. ## Related Issues Fixes vercel#7097
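What such an option has to do internally can be sketched as a pure filter over simplified message parts (the part shapes here are illustrative, not the SDK's actual `UIMessage` types): a tool call is incomplete until its output has arrived.

```typescript
// Simplified UI message part: a tool call is "incomplete" until it has output.
type Part =
  | { type: 'text'; text: string }
  | { type: 'tool-call'; toolName: string; state: 'input-available' | 'output-available' };

// Drops tool-call parts that never received output, keeping everything else,
// so conversion to model messages does not have to throw on them.
export function dropIncompleteToolCalls(parts: Part[]): Part[] {
  return parts.filter(
    p => p.type !== 'tool-call' || p.state === 'output-available',
  );
}
```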
…el#7073) ## Background The Bedrock provider was sending `reasoningConfig` in the request body when reasoning was enabled, but according to [the Bedrock API documentation](https://docs.aws.amazon.com/bedrock/latest/userguide/claude-messages-extended-thinking.html#claude-messages-use-extended-thinking), the correct field name should be `thinking`. This caused "Extra inputs are not permitted" errors when using reasoning capabilities with Bedrock models. (vercel#6617) ## Summary - **Fixed field name transformation**: Changed `reasoningConfig` to `thinking` in `additionalModelRequestFields` when reasoning is enabled - **Added filtering logic**: Prevented `reasoningConfig` from being accidentally sent at the top level by filtering it out from `providerOptions.bedrock` - **Updated comments**: Changed code comments to reflect the correct field name (`thinking` instead of `reasoningConfig`) - **Added comprehensive tests**: Added tests for both `doGenerate` and `doStream` to verify the transformation works correctly ### After Fix <details> <summary>generate-text/amazon-bedrock-reasoning-chatbot.ts</summary> ```shell ➜ ai-core git:(fix/bedrock/reasoningConfig) pnpm tsx src/generate-text/amazon-bedrock-reasoning-chatbot.ts You: Hello DefaultStepResult { content: [ { type: 'reasoning', text: `The user has simply greeted me with "Hello". There is no specific request or question that would require the use of the available weatherTool function. At this point, I should just respond with a friendly greeting and possibly indicate that I'm here to help.`, providerMetadata: [Object] }, { type: 'text', text: "Hello! How can I help you today? I'm here to assist you with information, answer questions, or help with various tasks.
Is there something specific you'd like to know or discuss?" } ], finishReason: 'stop', usage: { inputTokens: 455, outputTokens: 104, totalTokens: 559, cachedInputTokens: 0 }, warnings: [], request: {}, response: { id: 'aitxt-uvy3GgPrVyR7uWaoY6AgHkch', timestamp: 2025-07-05T10:37:19.527Z, modelId: 'us.anthropic.claude-3-7-sonnet-20250219-v1:0', headers: { connection: 'keep-alive', 'content-length': '1051', 'content-type': 'application/json', date: 'Sat, 05 Jul 2025 10:37:19 GMT', 'x-amzn-requestid': 'OMITTED' }, body: undefined, messages: [ [Object] ] }, providerMetadata: { bedrock: { usage: [Object] } } } The user has simply greeted me with "Hello". There is no specific request or question that would require the use of the available weatherTool function. At this point, I should just respond with a friendly greeting and possibly indicate that I'm here to help. Hello! How can I help you today? I'm here to assist you with information, answer questions, or help with various tasks. Is there something specific you'd like to know or discuss? ``` </details> <details> <summary>generate-text/amazon-bedrock-reasoning.ts</summary> ```shell ➜ ai-core git:(fix/bedrock/reasoningConfig) pnpm tsx src/generate-text/amazon-bedrock-reasoning.ts Reasoning: [ { type: 'reasoning', text: 'Let me count the number of "r"s in the word "strawberry".\n' + '\n' + 'The word "strawberry" is spelled:\n' + 's-t-r-a-w-b-e-r-r-y\n' + '\n' + 'Going through each letter:\n' + '- "s": not an "r"\n' + '- "t": not an "r"\n' + '- "r": this is an "r" (first one)\n' + '- "a": not an "r"\n' + '- "w": not an "r"\n' + '- "b": not an "r"\n' + '- "e": not an "r"\n' + '- "r": this is an "r" (second one)\n' + '- "r": this is an "r" (third one)\n' + '- "y": not an "r"\n' + '\n' + 'So there are 3 occurrences of the letter "r" in the word "strawberry".', providerMetadata: { bedrock: [Object] } } ] Text: There are 3 "r"s in the word "strawberry". 
Warnings: [ { type: 'unsupported-setting', setting: 'temperature', details: 'temperature is not supported when thinking is enabled' } ] ``` </details> <details> <summary>stream-text/amazon-bedrock-reasoning-chatbot.ts</summary> ```shell ➜ ai-core git:(fix/bedrock/reasoningConfig) pnpm tsx src/stream-text/amazon-bedrock-reasoning-chatbot.ts You: Hello Assistant: URL https://bedrock-runtime.us-east-1.amazonaws.com/model/us.anthropic.claude-3-7-sonnet-20250219-v1%3A0/converse-stream Headers { "content-type": "application/json", "authorization": "<OMITTED>", "x-amz-date": "20250705T103848Z", "x-amz-security-token": "<OMITTED>" } Body { "system": [], "messages": [ { "role": "user", "content": [ { "text": "Hello" } ] } ], "additionalModelRequestFields": { "thinking": { "type": "enabled", "budget_tokens": 2048 } }, "inferenceConfig": { "maxOutputTokens": 6144 }, "toolConfig": { "tools": [ { "toolSpec": { "name": "weather", "description": "Get the weather in a location", "inputSchema": { "json": { "$schema": "http://json-schema.org/draft-07/schema#", "type": "object", "properties": { "location": { "description": "The location to get the weather for", "type": "string" } }, "required": [ "location" ], "additionalProperties": false } } } } ], "toolChoice": { "auto": {} } } } > The user has just greeted me with "Hello". This is a general greeting and doesn't contain any specific request that would require using the available tools. The only tool available to me is the "weather" function, which requires a location parameter to provide weather information. Since the user hasn't asked about weather or mentioned any location, I don't need to use any tools at this point. I should simply respond with a greeting and perhaps let them know what I can help with, particularly mentioning that I can provide weather information since that's the capability I have.Hello! How can I help you today? I can provide information such as checking the weather for a specific location. 
Is there something specific you'd like to know? You: ``` </details> <details> <summary>stream-text/amazon-bedrock-reasoning-fullstream.ts</summary> ```shell ➜ ai-core git:(fix/bedrock/reasoningConfig) pnpm tsx src/stream-text/amazon-bedrock-reasoning-fullstream.ts REASONING: The user is asking about the weather in San Francisco. I have a weather function available that takes a location parameter. San Francisco is a valid location, so I can call this function to get the weather information. Required parameter: - location: "San Francisco" All required parameters are available, so I can proceed with the function call. TEXT: I'll check the current weather in San Francisco for you. Tool call: 'weather' {"location":"San Francisco"} Tool response: 'weather' {"location":"San Francisco","temperature":75}The current temperature in San Francisco is 75 degrees Fahrenheit.% ``` </details> <details> <summary>stream-text/amazon-bedrock-reasoning.ts</summary> ```shell ➜ ai-core git:(fix/bedrock/reasoningConfig) pnpm tsx src/stream-text/amazon-bedrock-reasoning.ts To answer this question, I need to count the number of times the letter "r" appears in the word "strawberry". Let me go through the word letter by letter: s - not an "r" t - not an "r" r - this is an "r" a - not an "r" w - not an "r" b - not an "r" e - not an "r" r - this is an "r" r - this is an "r" y - not an "r" So I count 3 instances of the letter "r" in the word "strawberry".There are 3 letter "r"s in the word "strawberry". Warnings: [ { type: 'unsupported-setting', setting: 'temperature', details: 'temperature is not supported when thinking is enabled' } ] ``` </details> ## Tasks - [x] Tests have been added / updated (for bug fixes / features) - [ ] Documentation has been added / updated (for bug fixes / features) - [x] A _patch_ changeset for relevant packages has been added (for bug fixes / features - run `pnpm changeset` in the project root) - [x] Formatting issues have been fixed (run `pnpm prettier-fix` in the project root) ## Related Issues Fixes vercel#6617 --------- Co-authored-by: Aryuth Ekkul <aryuth.ekkul@trilogy.com>
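The shape of this fix can be sketched as a pure transform (field names follow the Bedrock docs linked in the PR; the option and reasoning types here are simplified, not the provider's actual code): reasoning settings go under `thinking`, and any stray `reasoningConfig` key is filtered out.

```typescript
// Simplified reasoning configuration, per Bedrock's extended-thinking docs.
type ReasoningConfig = { type: 'enabled'; budget_tokens: number };

// Builds additionalModelRequestFields: reasoning settings are sent under
// Bedrock's expected `thinking` key, and the incorrect `reasoningConfig`
// key is dropped so it is never sent at the top level.
export function buildAdditionalFields(
  bedrockOptions: Record<string, unknown>,
  reasoning?: ReasoningConfig,
): Record<string, unknown> {
  const { reasoningConfig: _ignored, ...rest } = bedrockOptions;
  return reasoning ? { ...rest, thinking: reasoning } : rest;
}
```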
# Releases

## ai@5.0.0-beta.9

### Patch Changes

- 86cfc72: feat (ai): add ignoreIncompleteToolCalls option to convertToModelMessages

## @ai-sdk/amazon-bedrock@3.0.0-beta.4

### Patch Changes

- a10bf62: Fixes "Extra inputs are not permitted" error when using reasoning with Bedrock

## @ai-sdk/langchain@1.0.0-beta.9

### Patch Changes

- Updated dependencies [86cfc72]
  - ai@5.0.0-beta.9

## @ai-sdk/llamaindex@1.0.0-beta.9

### Patch Changes

- Updated dependencies [86cfc72]
  - ai@5.0.0-beta.9

## @ai-sdk/react@2.0.0-beta.9

### Patch Changes

- Updated dependencies [86cfc72]
  - ai@5.0.0-beta.9

## @ai-sdk/rsc@1.0.0-beta.9

### Patch Changes

- Updated dependencies [86cfc72]
  - ai@5.0.0-beta.9

## @ai-sdk/svelte@3.0.0-beta.9

### Patch Changes

- Updated dependencies [86cfc72]
  - ai@5.0.0-beta.9

## @ai-sdk/vue@2.0.0-beta.10

### Patch Changes

- Updated dependencies [86cfc72]
  - ai@5.0.0-beta.9

Co-authored-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Hi, Patrick here from DeepMind. I wanted to update the Google Generative AI docs to use newer models and simplify some stuff. This PR includes the following changes: - Update some links - Add a simple generate text example - Use gemini-2.5-flash as default (same as on the Gemini API docs) - Clarify thinking budgets and document includeThoughts - Remove dynamic retrieval (this is only supported with the old 1.5 and considered as legacy) - Update model name in image generation snippet - Mention support for Streaming - Simplify Search example and merge with "Sources" section - Add info about implicit and explicit caching (this is copied over from the v4 docs and looks newer) - Copy Gemma section from v4 docs and move the section to the bottom - Remove info about tuned models as this is no longer actively supported in the Gemini API --------- Co-authored-by: Lars Grammel <lars.grammel@gmail.com> Co-authored-by: Philipp Schmid <32632186+philschmid@users.noreply.github.com>
…ware (vercel#7107) ## Background If the extract reasoning middleware is used for a request but the model's response does not include `<think></think>` tags, the middleware will emit an unexpected `reasoning-start` event. ## Summary Ensure that the `reasoning-start` event is only emitted when the model actually produces reasoning content.
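The fix can be sketched as a tiny stream transform that only opens a reasoning part once a literal `<think>` tag is actually seen (chunk and event shapes here are simplified, not the middleware's real stream-part types):

```typescript
// Emits 'reasoning-start' only after a literal <think> tag appears in the
// incoming text; plain text passes through untouched.
type Event = { type: 'reasoning-start' } | { type: 'text'; text: string };

export function extractReasoningEvents(chunks: string[]): Event[] {
  const events: Event[] = [];
  let inReasoning = false;
  for (const chunk of chunks) {
    if (!inReasoning && chunk.includes('<think>')) {
      inReasoning = true;
      // Only opened when real reasoning content exists.
      events.push({ type: 'reasoning-start' });
    }
    events.push({ type: 'text', text: chunk });
  }
  return events;
}
```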
## background Added the new Mistral Medium 3 models so users can use them. ## summary - add Mistral Medium 3 model support - `mistral-medium-latest`, `mistral-medium-2505` ## tasks - [x] model ids added to mistral-chat-options.ts - [x] capability tables updated in all docs - [x] example implementation created
## Background Updating docs pages for v5 ## Tasks - [x] Formatting issues have been fixed (run `pnpm prettier-fix` in the project root)
## Background Updating docs pages for v5 ## Tasks - [x] Formatting issues have been fixed (run `pnpm prettier-fix` in the project root)
## Background Updating docs pages for v5 ## Tasks - [x] Formatting issues have been fixed (run `pnpm prettier-fix` in the project root)
## Background Add docs page for transports with `useChat`. ## Tasks - [x] Formatting issues have been fixed (run `pnpm prettier-fix` in the project root)
## Background Updating docs pages for v5 ## Tasks - [x] Formatting issues have been fixed (run `pnpm prettier-fix` in the project root)
vercel#7120) ## Background Amazon Nova Canvas now supports predefined visual styles for image generation via the style parameter. AI SDK didn’t forward this parameter, so users couldn’t take advantage of the feature. ## Summary Adds conditional support for style in the image-generation request: ``` ...(providerOptions?.bedrock?.style ? { style: providerOptions.bedrock.style } : {}), ``` ## Verification 1. Building the package with `pnpm build` 2. Running tests with `pnpm test` in the `packages/amazon-bedrock` directory 3. Confirmed request body contains the `style` field only when specified. ## Future Work - Add docs & examples for using providerOptions.bedrock.style. - Consider a TypeScript enum / union for autocomplete. - Validate style values against the supported set. ## Reference [AWS Documentation - Amazon Nova Canvas Request and Response Structure](https://docs.aws.amazon.com/nova/latest/userguide/image-gen-styles.html)
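The conditional spread in the summary can be exercised in isolation (the request shape is reduced to the `style` handling; the `PHOTOREALISM` value is one example from the AWS docs linked below, not an exhaustive list):

```typescript
// Builds a (simplified) Nova Canvas request body, including `style`
// only when the caller provided one via provider options.
export function buildImageParams(providerOptions?: {
  bedrock?: { style?: string };
}): Record<string, unknown> {
  return {
    // ...other image generation parameters would go here...
    ...(providerOptions?.bedrock?.style
      ? { style: providerOptions.bedrock.style }
      : {}),
  };
}
```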
## Background Updating docs pages for v5 ## Tasks - [x] Formatting issues have been fixed (run `pnpm prettier-fix` in the project root)
## Background Updating docs pages for v5 ## Tasks - [x] Formatting issues have been fixed (run `pnpm prettier-fix` in the project root)
## Background Updating docs pages for v5 ## Tasks - [x] Formatting issues have been fixed (run `pnpm prettier-fix` in the project root)
## Background Updating docs pages for v5 ## Tasks - [x] Formatting issues have been fixed (run `pnpm prettier-fix` in the project root)
## Background Updating docs pages for v5 ## Tasks - [x] Formatting issues have been fixed (run `pnpm prettier-fix` in the project root)
## Background Updating docs pages for v5 ## Tasks - [x] Formatting issues have been fixed (run `pnpm prettier-fix` in the project root)
## Background Updating docs pages for v5 ## Tasks - [x] Formatting issues have been fixed (run `pnpm prettier-fix` in the project root)
## Background Updating docs pages for v5 ## Tasks - [x] Formatting issues have been fixed (run `pnpm prettier-fix` in the project root)
## background Updated the Perplexity provider page docs to follow the newly updated v5 version.
google and google-vertex providers
…ercel#7591) ## Background Anthropic supports several different flavors of prompt caching in beta. These are priced differently. It would be lovely to give tracing providers a way to accurately track this. ## Summary Adds the raw returned Anthropic `usage` object to provider metadata. A subset of this is returned as `providerMetadata.anthropic.cache_creation_input_tokens`, but this is not enough granularity to tell whether it was a 5 minute cache or a 1 hr cache. ## Verification See added example. ## Related Issues Continues vercel#7566 --------- Co-authored-by: jacoblee93 <jacoblee93@gmail.com>
## Description This PR adds documentation for the [ai-sdk-provider-gemini-cli](https://github.com/ben-vargas/ai-sdk-provider-gemini-cli) community provider, which enables developers to access Google's Gemini models through the official @google/gemini-cli-core library and Google Cloud Code endpoints. ## What does this PR do? - Adds documentation for the Gemini CLI provider at `content/providers/03-community-providers/100-gemini-cli.mdx` - Updates the community providers list in `content/docs/02-foundations/02-providers-and-models.mdx` ## Key Features The Gemini CLI provider offers flexible authentication and full AI SDK compatibility: - **OAuth authentication** - Uses existing Gemini CLI credentials (ideal for GCA license holders) - **API key authentication** - Standard authentication method with Gemini API keys - **Full streaming support** - Real-time text generation with `streamText` - **Structured object generation** - Support for `generateObject` and `streamObject` with Zod schemas - **Model flexibility** - Access to both Gemini 2.5 Pro and Flash models with 64K output tokens ## Notes This provider is particularly useful for developers who have a Gemini Code Assist (GCA) subscription and want to use their existing license rather than paid API keys. It also supports standard API key authentication for maximum flexibility. ## Testing - [x] Documentation follows the existing community provider format - [x] All installation methods (npm, pnpm, yarn) are documented - [x] Model capabilities table accurately reflects the provider's features - [x] Links to the provider repository are working - [x] Authentication methods are clearly documented with examples
Actually, I'm gonna merge in the |
## Background An important reference for `useChat` in v5 is missing. Without the `prepareReconnectToStreamRequest` option, `resumeStream` doesn't work for setups that don't match the defaults. ## Summary Fixed the `prepareSendMessagesRequest` reference and added a `prepareReconnectToStreamRequest` reference. No changes other than documentation. ## Future Work I noticed that many parts of the actual behavior didn't match the documentation. Let me know specifically which parts need documentation updates, and I'll be happy to help. Feel free to leave a comment if needed.
## Background We have added docs for SigNoz with the Vercel AI SDK for observability [here](https://signoz.io/docs/llm/vercel-ai-sdk-monitoring/). We would like to add our name to the providers/observability docs. ## Summary Added signoz.mdx to content/providers/05-observability. (DOCS ONLY)
google and google-vertex providers: refactor googleSearch to be a provider tool
@lgrammel, this one should be good to go. I'll revisit the bedrock anthropic PR later this week. I initially got a bit confused with the three different versions of this package, but I think this should unify it all to the v5 concepts. Tested with vertex, but should work with genai if it works there. One thing I WASN'T sure of is whether or not it's preferred to import |
## background Users with FastMCP servers (version 2.10.6+) were unable to connect because they use MCP protocol version 2025-06-18, but the AI SDK only supported versions up to 2025-03-26, causing connection failures with "Server's protocol version is not supported" errors. ## summary - add 2025-06-18 to supported MCP protocol versions - update mock transport to use latest protocol version ## tasks - [x] updated SUPPORTED_PROTOCOL_VERSIONS array in types.ts - [x] set LATEST_PROTOCOL_VERSION to 2025-06-18 - [x] updated mock transport default protocol version - [x] verified all tests pass ## related issues vercel#7575
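Version negotiation of this kind boils down to a membership check. A hedged sketch mirroring the constants the PR describes (the exact array contents in the SDK may differ; the versions listed are the published MCP protocol revisions):

```typescript
// Supported MCP protocol versions after the change; the server's reported
// version must be one of these or the connection is rejected.
export const SUPPORTED_PROTOCOL_VERSIONS = [
  '2025-06-18',
  '2025-03-26',
  '2024-11-05',
] as const;

export const LATEST_PROTOCOL_VERSION = SUPPORTED_PROTOCOL_VERSIONS[0];

export function isSupportedProtocolVersion(version: string): boolean {
  return (SUPPORTED_PROTOCOL_VERSIONS as readonly string[]).includes(version);
}
```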
## Background Updating docs pages for v5 ## Tasks - [x] Formatting issues have been fixed (run `pnpm prettier-fix` in the project root)
## Background Updating docs pages for v5 ## Tasks - [x] Formatting issues have been fixed (run `pnpm prettier-fix` in the project root)
## Background Updating docs pages for v5 ## Tasks - [x] Formatting issues have been fixed (run `pnpm prettier-fix` in the project root)
## Background Updating docs pages for v5 ## Tasks - [x] Formatting issues have been fixed (run `pnpm prettier-fix` in the project root)
Hey @Und3rf10w, the main branch is V5 now. We already have the |
## Background Updating docs pages for v5 ## Tasks - [x] Formatting issues have been fixed (run `pnpm prettier-fix` in the project root)
## Background Updating docs pages for v5 ## Tasks - [x] Formatting issues have been fixed (run `pnpm prettier-fix` in the project root)
## Background Updating docs pages for v5 ## Tasks - [x] Formatting issues have been fixed (run `pnpm prettier-fix` in the project root)
submodule action broken, fixes vercel#7588
## Background Updating docs pages for v5 ## Tasks - [x] Formatting issues have been fixed (run `pnpm prettier-fix` in the project root)
## Background Updating docs pages for v5 ## Tasks - [x] Formatting issues have been fixed (run `pnpm prettier-fix` in the project root)
…history (vercel#7600) ## background Users integrating Amazon Bedrock with multi-step agents hit validation errors when setting `activeTools: []` or `toolChoice: 'none'` in conversations that previously used tools. Bedrock requires toolConfig to be present when conversation contains tool content, but rejects empty tools arrays. ## summary - add placeholder tool when activeTools is empty but conversation has tool content - handle both `activeTools: []` and `toolChoice: 'none'` scenarios - include helpful warning about workaround ## tasks - [x] placeholder tool logic in bedrock-prepare-tools.ts - [x] updated test expectations for both scenarios - [x] warning messages for user awareness ## future work * remove workaround if Amazon Bedrock API supports empty tools with conversation history related issue vercel#7528
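The workaround can be sketched as: when the conversation already contains tool content but the effective tool list is empty, inject a no-op placeholder tool so Bedrock's validator accepts the request. Names and shapes below are illustrative, not the provider's actual `bedrock-prepare-tools.ts` code:

```typescript
// Simplified tool spec for illustration.
type ToolSpec = { name: string; description: string };

// If Bedrock would reject an empty tools array (because earlier messages
// contain tool results), add a placeholder tool and surface a warning.
export function prepareBedrockTools(
  tools: ToolSpec[],
  conversationHasToolContent: boolean,
): { tools: ToolSpec[]; warnings: string[] } {
  if (tools.length > 0 || !conversationHasToolContent) {
    return { tools, warnings: [] };
  }
  return {
    tools: [
      {
        name: 'placeholder_tool_do_not_use',
        description: 'Placeholder; Bedrock requires a non-empty tool list here.',
      },
    ],
    warnings: [
      'activeTools is empty but the conversation contains tool content; added a placeholder tool as a Bedrock workaround.',
    ],
  };
}
```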
Update ai sdk cli page link via index page
lol that explains a lot. Let me redo this one then real quick
Superseded by #7608
## Background

This change was necessary to add support for the Code Execution feature available in Google's Gemini models. This allows the model to generate and execute Python code to perform calculations and solve problems, enhancing its capabilities. The existing tool preparation logic was also refactored to accommodate multiple tool types (e.g., functions, search, and code execution) in a single API call, which is a prerequisite for using code execution alongside other tools.

The previous implementation had separate logic paths for different tools and options (like `useSearchGrounding`). This update unifies the handling of all native Google tools (`codeExecution`, `googleSearch`, and the newly added `urlContext`) into a single, robust system. This simplifies the provider's internal logic, makes the tool system more extensible, and provides a more consistent developer experience.

## Summary

This pull request integrates provider-defined tool handling for the Google provider, introduces the `urlContext` tool, and formalizes the usage of the `googleSearch` and `codeExecution` tools.

Key changes include:

- Native Google tools (`codeExecution`, `googleSearch`, `urlContext`) are now treated as first-class provider-defined tools, passed explicitly in the `tools` array.
- `urlContext` tool: a `urlContext` tool has been added, allowing the model to use web pages provided in the prompt as context.
- `googleSearch` as a tool: the `useSearchGrounding` provider option has been removed. Grounding with Google Search is now enabled by explicitly adding the `googleSearch` tool to a request.
- Tool exclusivity rules: the `codeExecution` tool cannot be used with any other tools. `googleSearch` and `urlContext` cannot be combined with standard `function` tools, but they can be used together.
- `prepareTools` logic: the internal `prepareTools` function has been rewritten to support the new validation rules and correctly format the `tools` array for the Google API, ensuring each provider-defined tool is sent as a distinct object.
- Removed: `useSearchGrounding`.

## Verification

I manually verified the changes by setting up the new examples for Google Vertex AI. I ran both the `generate-text/google-vertex-code-execution.ts` and `stream-text/google-vertex-code-execution.ts` examples against the `gemini-2.5-pro` model. In both cases, the model successfully used the code execution tool to first calculate the 20th Fibonacci number and then find the nearest palindrome. The console output correctly displayed the tool calls and their corresponding results, followed by the final generated text, confirming that the end-to-end flow is working as expected.

The existing test suite has been significantly updated to cover the new tool handling logic. This includes tests for tool exclusivity rules (e.g., `codeExecution` cannot be combined with other tools), valid tool combinations (`googleSearch` and `urlContext`), and the removal of the `useSearchGrounding` option. All 109 tests are now passing, ensuring the refactoring is robust and correct.

## Tasks

- A changeset for relevant packages has been added (run `pnpm changeset` in the project root)
- Formatting issues have been fixed (run `pnpm prettier-fix` in the project root)

## Future Work

The non-streaming (`doGenerate`) implementation for matching a `codeExecutionResult` to its `executableCode` part relies on finding the last tool call that has not yet received a result. While functional, this could be made more robust if the API provided a direct correlation ID in the future.

The documentation could use some updating to match the new methodology.
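The exclusivity rules this PR describes can be condensed into a small validation sketch (a toy re-statement of the rules, not the provider's actual `prepareTools` implementation):

```typescript
// Validates Google tool combinations per the rules in this PR:
//  - codeExecution must be the only tool in the request;
//  - googleSearch and urlContext may be combined with each other,
//    but not with standard function tools.
type ToolKind = 'codeExecution' | 'googleSearch' | 'urlContext' | 'function';

// Returns an error message for invalid combinations, or null if valid.
export function validateGoogleTools(kinds: ToolKind[]): string | null {
  if (kinds.includes('codeExecution') && kinds.length > 1) {
    return 'codeExecution cannot be combined with any other tools';
  }
  const hasNative = kinds.includes('googleSearch') || kinds.includes('urlContext');
  if (hasNative && kinds.includes('function')) {
    return 'googleSearch/urlContext cannot be combined with function tools';
  }
  return null; // combination is valid
}
```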
## Related Issues
Closes #3205
Closes #6354
Closes #6365