This repository was archived by the owner on Nov 10, 2025. It is now read-only.

Conversation

@devin-ai-integration
Contributor

@devin-ai-integration devin-ai-integration bot commented Sep 29, 2025

Fix MDXSearchTool to work with local Ollama models without OpenAI API key

Summary

Fixes issue #3622, where MDXSearchTool required an OpenAI API key even when it was configured to use local Ollama models for both the LLM and the embedder. The tool now works entirely offline with local models.

Key Changes:

  • Modified _parse_config in RagTool to properly handle configurations with both llm and embedder specified as non-OpenAI providers
  • Added direct OllamaEmbeddingFunction instantiation for Ollama embeddings to bypass OpenAI validation
  • Added comprehensive test suite covering local provider configurations
  • Maintains full backward compatibility with existing OpenAI configurations
  • Added ollama>=0.6.0 dependency to support local embedding functions

Root Cause: The config-parsing logic did not extract the embedding configuration when both the LLM and the embedder were configured as local providers, so initialization fell back to the OpenAI validation path, which requires an API key.
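The failing shape can be sketched with plain dicts. Here, extract_embedder is an illustrative stand-in for _parse_config, and the "openai" default mirrors the old fallback behavior; none of this is the actual crewai_tools code:

```python
# Illustrative stand-in for the config parsing described above; the function
# name and the "openai" default are assumptions, not crewai_tools internals.
def extract_embedder(config: dict) -> tuple[str, dict]:
    """Return (provider, provider_config) for the embedder section."""
    embedder = config.get("embedder") or {}
    # The bug: initialization fell back to the OpenAI path even when a
    # local provider was present; the fix honors the configured provider.
    provider = embedder.get("provider", "openai")
    return provider, embedder.get("config", {})

# Config shaped like the issue reproduction: both LLM and embedder local.
config = {
    "llm": {"provider": "ollama", "config": {"model": "llama3"}},
    "embedder": {
        "provider": "ollama",
        "config": {"model": "nomic-embed-text", "url": "http://localhost:11434"},
    },
}

provider, embed_cfg = extract_embedder(config)
print(provider, embed_cfg["model"])  # ollama nomic-embed-text
```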

Review & Testing Checklist for Human (3 items - Yellow Risk)

  • End-to-end testing with actual Ollama instance - Verify the tool works with a real local Ollama server running both LLM and embedding models (tests use mocks extensively)
  • Backward compatibility verification - Test that existing OpenAI configurations still work correctly after these config parsing changes
  • Field mapping validation - Confirm that the field mappings (model → EMBEDDINGS_OLLAMA_MODEL_NAME, url → EMBEDDINGS_OLLAMA_URL) match what the CrewAI RAG system actually expects
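For concreteness, the mapping the last checklist item asks about can be written out as a tiny sketch. The target names come straight from the checklist; whether the RAG layer actually reads those names is exactly what the human reviewer is asked to confirm:

```python
# Field mapping under review: config keys -> names the RAG layer is assumed
# to expect (taken from the checklist above; unverified assumptions).
FIELD_MAP = {
    "model": "EMBEDDINGS_OLLAMA_MODEL_NAME",
    "url": "EMBEDDINGS_OLLAMA_URL",
}

def to_rag_fields(embed_cfg: dict) -> dict:
    # Drop keys the RAG layer does not know about rather than passing them on.
    return {FIELD_MAP[k]: v for k, v in embed_cfg.items() if k in FIELD_MAP}

print(to_rag_fields({"model": "nomic-embed-text",
                     "url": "http://localhost:11434"}))
```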

Test Plan Recommendation

  1. Set up local Ollama instance with both LLM and embedding models
  2. Create MDXSearchTool with the exact config from the issue reproduction script
  3. Verify it initializes without OpenAI API key errors
  4. Test actual search functionality works end-to-end
  5. Test existing OpenAI configurations still work
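Steps 3 and 5 can also be approximated with a mock-based unit check in the spirit of the PR's test suite. The helper names here are illustrative stand-ins, not the real crewai_tools call sites:

```python
# Mock-based sketch: an all-Ollama config must never reach the path that
# requires an OpenAI API key; the two embedder factories are stand-ins.
from unittest.mock import MagicMock

make_ollama_embedder = MagicMock(name="local_path")    # no API key needed
make_openai_embedder = MagicMock(name="openai_path")   # requires API key

def build_embedder(config: dict):
    embedder = config.get("embedder", {})
    if embedder.get("provider") == "ollama":
        return make_ollama_embedder(embedder.get("config", {}))
    return make_openai_embedder(embedder.get("config", {}))

build_embedder({"embedder": {"provider": "ollama",
                             "config": {"model": "nomic-embed-text"}}})
assert make_ollama_embedder.called and not make_openai_embedder.called
print("ollama config bypassed the OpenAI path")
```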

Note

Enable MDXSearchTool to run fully offline with a local Ollama LLM and embedder by updating config parsing, adding tests, and adding the ollama dependency.

  • RAG Tool:
    • Update crewai_tools/tools/rag/rag_tool.py config parsing to support non-OpenAI llm and embedder providers.
    • Instantiate OllamaEmbeddingFunction directly for local embeddings; preserve existing OpenAI flow.
  • Tests:
    • Add tests/tools/test_mdx_search_tool_local_config.py covering local Ollama configurations for LLM and embeddings.
  • Dependencies:
    • Add ollama>=0.6.0 in pyproject.toml.
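Putting the note together, an end-to-end sketch of the intended usage. This assumes crewai_tools is installed and an Ollama server is reachable at localhost:11434; the model names are examples, not requirements:

```python
# Config shaped like the issue reproduction: both LLM and embedder local,
# so no OPENAI_API_KEY should be needed.
LOCAL_CONFIG = {
    "llm": {"provider": "ollama", "config": {"model": "llama3"}},
    "embedder": {
        "provider": "ollama",
        "config": {"model": "nomic-embed-text", "url": "http://localhost:11434"},
    },
}

try:
    from crewai_tools import MDXSearchTool
    tool = MDXSearchTool(config=LOCAL_CONFIG)
except Exception:
    # crewai_tools not installed, or no local Ollama server in this environment
    tool = None

print("providers:", LOCAL_CONFIG["llm"]["provider"],
      LOCAL_CONFIG["embedder"]["provider"])
```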

Written by Cursor Bugbot for commit 36a2f46. This will update automatically on new commits.

… key

- Modified _parse_config in RagTool to properly handle llm+embedder config
- Added direct OllamaEmbeddingFunction instantiation for Ollama provider
- Added comprehensive tests for local provider configurations
- Maintains backward compatibility with existing OpenAI configurations
- Added ollama dependency to support OllamaEmbeddingFunction

Fixes #3622

Co-Authored-By: João <joao@crewai.com>
@devin-ai-integration
Copy link
Contributor Author

🤖 Devin AI Engineer

I'll be helping with this pull request! Here's what you should know:

✅ I will automatically:

  • Address comments on this PR. Add '(aside)' to your comment to have me ignore it.
  • Look at CI failures and help fix them

Note: I can only respond to comments from users who have write access to this repository.

⚙️ Control Options:

  • Disable automatic comment and CI monitoring

cursor[bot]

This comment was marked as outdated.

…g setup

- Fix config parsing to respect vectordb settings when llm+embedder are present
- Add proper validation for Ollama model_name requirement
- Improve error handling with ImportError instead of generic Exception
- Ensure fallback to get_embedding_function uses correct config keys
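A sketch of the validation and import guard these bullets describe, assuming a chromadb-style OllamaEmbeddingFunction; the helper name and messages are illustrative:

```python
# Illustrative version of the stricter error handling above: validate the
# Ollama model_name first, then fail with ImportError (not a generic
# Exception) when the optional dependency is missing.
def make_ollama_embedding_function(model_name: str,
                                   url: str = "http://localhost:11434"):
    if not model_name:
        raise ValueError("Ollama embedder config requires a model_name")
    try:
        # chromadb-style import; an assumption about the underlying library
        from chromadb.utils.embedding_functions import OllamaEmbeddingFunction
    except ImportError as exc:
        raise ImportError(
            "Ollama embeddings need the optional dependency: "
            "pip install 'ollama>=0.6.0'"
        ) from exc
    return OllamaEmbeddingFunction(url=url, model_name=model_name)
```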

Co-Authored-By: João <joao@crewai.com>
@devin-ai-integration
Copy link
Contributor Author

Closing due to inactivity for more than 7 days.

