[BUG] Remove hardcoded llama3.2 model, allow configurable Ollama model selection #383

@sujay-d07

Description

What happened?

The Ollama model is hardcoded to llama3.2 in multiple places (ask.py and the test suite), which prevents users from specifying alternative Ollama models such as llama3.1:8b, mistral, phi3, or any other compatible model.

How to reproduce

  1. Set the environment variable: export OLLAMA_MODEL=mistral
  2. Run: cortex ask "what is nginx?"
  3. Expected: Cortex uses the mistral model specified in OLLAMA_MODEL
  4. Actual: Cortex ignores the environment variable and falls back to the llama3.2 default hardcoded in ask.py

Error output

No error is thrown; the hardcoded model is silently used instead of the configured one.

Current hardcoded locations:

ask.py - _default_model() returns a hardcoded "llama3.2" (see the sketch below)
test_ask.py - test asserts the hardcoded model name
test_ollama_integration.py - multiple test cases use a hardcoded ollama_model="llama3.2"
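
For reference, a minimal sketch of what the hardcoded default presumably looks like in ask.py (the exact surrounding code is an assumption based on the locations listed above):

```python
# ask.py (current behavior, simplified sketch, not the verbatim source)
def _default_model() -> str:
    # Always returns the same model name; OLLAMA_MODEL is never consulted.
    return "llama3.2"
```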

Expected behavior

Cortex should read the model name from the OLLAMA_MODEL environment variable (or a config file), allowing users to specify any Ollama model.
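
A hedged sketch of one possible fix (the function name follows the issue text; the fallback default and the exact resolution order are assumptions):

```python
# ask.py (proposed, simplified sketch)
import os

def _default_model() -> str:
    """Resolve the Ollama model name from the environment, falling back to llama3.2."""
    # Respect OLLAMA_MODEL when it is set to a non-empty value.
    model = os.environ.get("OLLAMA_MODEL", "").strip()
    return model or "llama3.2"
```

With this in place, export OLLAMA_MODEL=mistral followed by cortex ask "what is nginx?" would resolve to mistral, while users who set nothing keep the current default. The hardcoded ollama_model="llama3.2" values in test_ask.py and test_ollama_integration.py would likewise need to be parameterized or read from the same helper.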

Labels

bug (Something isn't working)
