
feat: add tool/function calling test scripts for all LLM providers except OpenAI via Javelin #182

Merged
dhruvj07 merged 3 commits into main from feat/function-tool-calling on Apr 17, 2025

Conversation

Contributor

@dhruvj07 dhruvj07 commented Apr 11, 2025

Add tool/function calling test scripts for all LLM providers except OpenAI via Javelin.

Contributor

@gemini-code-assist gemini-code-assist Bot left a comment


Hello @dhruvj07, I'm Gemini Code Assist[1]! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

Summary of Changes

This pull request introduces new test scripts for function and tool calling capabilities for several LLM providers (Anthropic, Azure OpenAI, Bedrock, Gemini) via the Javelin SDK. The scripts are designed to verify the integration and functionality of these providers with the Javelin platform, excluding OpenAI. Each script sets up the necessary environment, defines function/tool schemas, and sends requests to the respective LLM providers through Javelin, printing the raw responses or error messages.

Highlights

  • New Test Scripts: Adds new test scripts for Anthropic, Azure OpenAI, Bedrock, and Gemini to verify function and tool calling support via Javelin.
  • Javelin Integration: Demonstrates how to use the Javelin SDK to interact with different LLM providers for function and tool calling.
  • Environment Setup: Each script loads environment variables for API keys and Javelin configurations, ensuring proper setup for testing.
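The environment-setup step described above follows the same shape in every script. The sketch below is a minimal illustration of that pattern, not code from the PR; the variable names are hypothetical examples of the kind each script requires.

```python
import os

def missing_env_vars(required):
    """Return the names of required environment variables that are unset or empty."""
    return [name for name in required if not os.environ.get(name)]

# Hypothetical variable names; each script in the PR checks its own set.
missing = missing_env_vars(["JAVELIN_API_KEY", "ANTHROPIC_API_KEY"])
if missing:
    print(f"Missing environment variables: {', '.join(missing)}")
```

Reporting all missing names at once, rather than failing on the first, gives a tester one round trip to fix their shell configuration.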

Changelog

  • examples/anthropic/anthropic_function_call.py
    • Added a new script to test Anthropic function calling support via Javelin.
    • Defines messages and a dummy tool call to check for errors.
    • Uses client.query_unified_endpoint to send requests to Anthropic.
    • Includes headers for Javelin routing and model selection.
  • examples/azure-openai/azure_function_call.py
    • Added a new script to test Azure OpenAI function and tool calling support via Javelin.
    • Initializes the Azure OpenAI client and registers it with Javelin.
    • Includes tests for both function and tool calling, using chat.completions.create.
    • Defines function and tool schemas for weather information and motivational quotes.
  • examples/bedrock/bedrock_function_tool_call.py
    • Added a new script to test Bedrock function and tool calling support via Javelin.
    • Uses client.query_unified_endpoint to send requests to Bedrock.
    • Includes tests for both function and tool calling, defining schemas for weather information and motivational quotes.
    • Sets up headers for Javelin routing and model selection.
  • examples/gemini/gemini_function_tool_call.py
    • Added a new script to test Gemini function and tool calling support via Javelin.
    • Initializes the Gemini client and registers it with Javelin.
    • Includes tests for both function and tool calling, using chat.completions.create.
    • Defines function and tool schemas for weather information and motivational quotes.
  • examples/mistral/mistral_function_tool_call.py
    • Added a new script to test Mistral function and tool calling support via Javelin.
    • Uses Langchain's init_chat_model to initialize the Mistral model.
    • Includes tests for basic prompts, function calling, and tool calling.
    • Defines function and tool schemas for weather information and motivational quotes.
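The "headers for Javelin routing and model selection" that recur through this changelog follow one pattern, visible in the Mistral snippet quoted in the review comments. The helper below is a hypothetical sketch of that pattern (the scripts in the PR build the dict inline; the function name and signature are my own illustration):

```python
def javelin_headers(route, provider_key, gateway_key):
    """Build the extra headers that route a request through Javelin.

    Hypothetical helper: the header names match the snippet quoted in the
    review comments, but everything else here is illustrative.
    """
    return {
        "x-javelin-route": route,                    # Javelin route name, e.g. "mistral_univ"
        "x-api-key": gateway_key,                    # key checked by the gateway endpoint
        "Authorization": f"Bearer {provider_key}",   # the provider's own API key
    }
```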
Using Gemini Code Assist

The full guide for Gemini Code Assist can be found on our documentation page, here are some quick tips.

Invoking Gemini

You can request assistance from Gemini at any point in your pull request by creating an issue comment (i.e. a comment on the pull request page) using either /gemini <command> or @gemini-code-assist <command>. Below is a summary of the supported commands.

Feature              | Command             | Description
---------------------|---------------------|------------
Code Review          | /gemini review      | Performs a code review for the current pull request in its current state.
Pull Request Summary | /gemini summary     | Provides a summary of the current pull request in its current state.
Comment              | @gemini-code-assist | Responds in comments when explicitly tagged, both in issue comments and review comments.
Help                 | /gemini help        | Displays a list of available commands.

Customization

To customize the Gemini Code Assist for GitHub experience, repository maintainers can create a configuration file and/or provide a custom code review style guide (such as PEP-8 for Python) by adding files to a .gemini/ folder in the base of the repository. Detailed instructions can be found here.

Limitations & Feedback

Gemini Code Assist is currently in preview and may make mistakes. Please leave feedback on any instances where its feedback is incorrect or counterproductive. You can react with 👍 and 👎 on @gemini-code-assist comments to provide feedback.

You can also get AI-powered code generation, chat, as well as code reviews directly in the IDE at no cost with the Gemini Code Assist IDE Extension.


A function called,
A tool stands up tall,
LLMs converse,
Through Javelin's verse,
Testing makes sure they don't fall.

Footnotes

  1. Review the Privacy Notices, Generative AI Prohibited Use Policy, Terms of Service, and learn how to configure Gemini Code Assist in GitHub here. Gemini can make mistakes, so double check it and use code with caution.

Contributor

@gemini-code-assist gemini-code-assist Bot left a comment


Code Review

This pull request adds function/tool calling test scripts for various LLM providers (excluding OpenAI) via Javelin. The addition of these tests is a good step towards ensuring the reliability and compatibility of Javelin with different LLM providers. The code is generally well-structured and easy to follow.

Summary of Findings

  • Error Handling: The error handling in the test scripts is basic. Consider adding more robust error handling, such as logging the full exception traceback or providing more context-specific error messages.
  • Configuration: The configuration of API keys and base URLs relies heavily on environment variables. While this is acceptable for test scripts, ensure that the documentation clearly outlines the required environment variables and how to set them.
  • Provider-Specific Logic: Each test script has provider-specific logic. Ensure that any common patterns or functionalities are abstracted into reusable functions or classes to reduce code duplication.
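The first and third findings could be addressed together with a small shared runner. The sketch below is a hypothetical illustration, not code from the PR: it centralizes the try/except block each script currently repeats and logs the full traceback rather than only the exception message.

```python
import traceback

def run_provider_test(name, test_fn):
    """Run one provider's test, printing the full traceback on failure.

    Hypothetical helper addressing the findings above; returns True on
    success so callers can tally results across providers.
    """
    try:
        test_fn()
        print(f"{name}: OK")
        return True
    except Exception as e:
        print(f"Function/tool call failed for {name}: {e}")
        traceback.print_exc()
        return False
```

Each provider script would then pass its request logic as `test_fn`, keeping the provider-specific code while sharing the error handling.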

Merge Readiness

The pull request is a valuable addition to the project, providing essential test coverage for function/tool calling across multiple LLM providers. However, before merging, it would be beneficial to address the identified issues related to error handling and configuration. I am unable to directly approve this pull request, and recommend that others review and approve this code before merging. At a minimum, the high severity issues should be addressed before merging.

Comment on lines +13 to +17:

    extra_headers={
        "x-javelin-route": "mistral_univ",
        "x-api-key": os.environ.get("OPENAI_API_KEY"),
        "Authorization": f"Bearer {os.environ.get('MISTRAL_API_KEY')}"
    }
Contributor


Severity: high

It's good to see the use of environment variables for API keys. However, consider adding a check to ensure that these environment variables are set before initializing the model. If they are not set, raise an exception with a clear message indicating which variables are missing. This will help prevent runtime errors and provide better guidance to users.

    mistral_api_key = os.environ.get("MISTRAL_API_KEY")
    openai_api_key = os.environ.get("OPENAI_API_KEY")

    if not mistral_api_key or not openai_api_key:
        missing_vars = []
        if not mistral_api_key: missing_vars.append("MISTRAL_API_KEY")
        if not openai_api_key: missing_vars.append("OPENAI_API_KEY")
        raise ValueError(f"Missing environment variables: {', '.join(missing_vars)}")

    return init_chat_model(
        model_name="mistral-large-latest",
        model_provider="openai",
        base_url=f"{os.getenv('JAVELIN_BASE_URL')}/v1",
        extra_headers={
            "x-javelin-route": "mistral_univ",
            "x-api-key": openai_api_key,
            "Authorization": f"Bearer {mistral_api_key}"
        }
    )

Comment on lines +64 to +65:

    except Exception as e:
        print(f"Function/tool call failed for Anthropic: {str(e)}")
Contributor


Severity: medium

Consider logging the full exception traceback for better debugging. This will provide more context when debugging failures.

Suggested change:

    except Exception as e:
        print(f"Function/tool call failed for Anthropic: {str(e)}")
        import traceback
        traceback.print_exc()

Comment on lines +14 to +15:

    if not azure_api_key or not javelin_api_key:
        raise ValueError("Missing AZURE_OPENAI_API_KEY or JAVELIN_API_KEY")
Contributor


Severity: medium

Consider logging which environment variables are missing for easier debugging.

    if not azure_api_key or not javelin_api_key:
        missing_vars = []
        if not azure_api_key: missing_vars.append("AZURE_OPENAI_API_KEY")
        if not javelin_api_key: missing_vars.append("JAVELIN_API_KEY")
        raise ValueError(f"Missing environment variables: {', '.join(missing_vars)}")

Comment on lines +91 to +93:

    except Exception as e:
        print(f"Initialization failed: {e}")
        return
Contributor


Severity: medium

Consider logging the full exception traceback for better debugging.

Suggested change:

    except Exception as e:
        print(f"Initialization failed: {e}")
        import traceback
        traceback.print_exc()
        return

@dhruvj07 dhruvj07 merged commit 984343c into main on Apr 17, 2025
9 of 11 checks passed