chore(ollama,groq): Filtering Parameters in bind_tools for Ollama and Groq #34167
Conversation
CodSpeed Performance Report: Merging #34167 will not alter performance.
@ccurme is this looking good?
ccurme left a comment:
Regarding Ollama (and possibly Groq): we should add handling for response_format, similar to what's done here for Anthropic:
langchain/libs/partners/anthropic/langchain_anthropic/chat_models.py
Lines 2189 to 2204 in 05ba853
| if "response_format" in payload: | |
| # response_format present when using agents.create_agent's ProviderStrategy | |
| # --- | |
| # ProviderStrategy converts to OpenAI-style format, which passes kwargs to | |
| # ChatAnthropic, ending up in our payload | |
| response_format = payload.pop("response_format") | |
| if ( | |
| isinstance(response_format, dict) | |
| and response_format.get("type") == "json_schema" | |
| and "schema" in response_format.get("json_schema", {}) | |
| ): | |
| response_format = cast(dict, response_format["json_schema"]["schema"]) | |
| # Convert OpenAI-style response_format to Anthropic's output_format | |
| payload["output_format"] = _convert_to_anthropic_output_format( | |
| response_format | |
| ) |
|
Thanks!
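For illustration, here is a minimal sketch of what analogous handling could look like on the Ollama side. This is not code from the PR: the helper name `_normalize_response_format` is hypothetical, and it assumes Ollama's `format` parameter accepts a bare JSON schema dict (as it does for structured outputs in recent Ollama versions).

    from typing import Any, cast


    def _normalize_response_format(kwargs: dict[str, Any]) -> None:
        # Sketch: mirror the Anthropic handling above by mapping an
        # OpenAI-style ``response_format`` kwarg (injected by
        # agents.create_agent's ProviderStrategy) onto Ollama's request shape.
        if "response_format" not in kwargs:
            return
        response_format = kwargs.pop("response_format")
        if (
            isinstance(response_format, dict)
            and response_format.get("type") == "json_schema"
            and "schema" in response_format.get("json_schema", {})
        ):
            # Unwrap the OpenAI envelope down to the bare JSON schema
            response_format = cast(dict, response_format["json_schema"]["schema"])
        # Assumption: Ollama accepts a JSON schema directly via ``format``
        kwargs["format"] = response_format

Called wherever the request payload is assembled, this would presumably let ProviderStrategy's OpenAI-style response_format flow through to Ollama's structured-output support instead of being filtered out.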
Linked issue: openai/gpt-oss-120b model incompatible with structured output (create_agent, ChatGroq.with_structured_output) #34155