
Conversation

@jaysurse (Collaborator) commented Jan 19, 2026

Summary by CodeRabbit

  • New Features

    • Added API key detection and validation for Anthropic and OpenAI providers.
    • Enhanced first-run setup wizard with improved provider detection and interactive configuration flow.
    • Expanded environment file discovery across multiple locations for better configuration flexibility.
    • Implemented provider-specific API key fallback handling during installation.
  • Tests

    • Added comprehensive parallel LLM interaction test suite.

✏️ Tip: You can customize this high-level summary in your review settings.

Jay Surse added 30 commits December 29, 2025 11:16
- The wizard command was only showing a static message instead of running
- Now properly calls FirstRunWizard.run() for full interactive setup
- Wizard detects existing API keys and proceeds through all setup steps
- Always show menu letting user choose API provider
- Display '✓ (key found)' next to providers with existing keys
- User can now select provider even when keys already exist
- Only auto-select in non-interactive mode
…n

- Remove hardware, preferences, and other steps
- Always prompt for API provider and key, even if already set
- Allow reconfiguration on every run
… consistent dry-run

- Add provider auto-detection based on available API keys and tools
- Implement lazy validation: check env keys first, prompt only if invalid
- Ensure dry-run in wizard uses identical logic to normal install
- Add provider fallback in install command for robustness
- Update env_loader to check package directory for .env files
- Fix install_id initialization to prevent NameError in exception handling
- Add API key testing utilities
…ing wizard methods, fix hardware detection data handling, remove module-level env loading, update test expectations
- Update _get_api_key to return 'ollama-local' when no key found and provider is ollama or no cloud keys set
- Change default provider to 'ollama' when no API keys available
- Add skip option to wizard for subsequent runs
- Fix test mocking for install dry run test
- Format code with Black and ensure Ruff compliance

Resolves CI test failures for API key fallback and install commands.
- Add proper validation for existing OPENAI_API_KEY in wizard
- Test both format and API functionality before accepting keys
- Prevent prompting when valid key exists in environment
- Update test expectations for offline mode behavior
- Add missing Mock import in wizard tests

Fixes issue where wizard would ignore valid OpenAI API keys and repeatedly prompt for manual input.
- Add CommandInterpreter mock to test_install_no_api_key
- Ensures test passes when no API keys are available but Ollama fallback works
- Fixes test failure: 'No commands generated with any available provider'
- Remove @patch.object(CortexCLI, '_get_api_key', return_value=None)
- Test now relies on the fixed install() method that doesn't require API keys
- Ensures test passes when install works without API key validation
- Add load_env() call to ensure consistent API key loading from .env files
- Implement proper non-interactive mode that auto-configures without prompts
- Fix config file path handling to use instance paths instead of global paths
- Mark setup complete when user chooses to skip configuration
- All 47 tests pass with consistent behavior
- Remove auto-skip behavior when provider is already configured
- Always display provider selection menu with available options
- Add 'Keep current provider (skip setup)' option when re-running wizard
- Show current provider status clearly in menu
- Allow users to switch providers anytime via wizard
- Provider selection menu now always displays OpenAI, Anthropic, and Ollama options, regardless of API key presence.
- If a selected provider's API key is blank or missing in .env, the wizard prompts the user to enter and save a valid key.
- Ensures a consistent onboarding flow and allows users to set or update any provider at any time.
… collection

- Renamed cortex/utils/api_key_test.py to cortex/utils/api_key_validator.py
- Renamed test_anthropic_api_key to validate_anthropic_api_key
- Renamed test_openai_api_key to validate_openai_api_key
- Updated imports in first_run_wizard.py and test_first_run_wizard.py
- Fixes pytest fixture 'api_key' not found error
- Renamed cortex/utils/api_key_test.py to cortex/utils/api_key_validator.py
- Renamed test_* functions to validate_* to prevent pytest auto-discovery
- Updated all imports in first_run_wizard.py and tests
- All 47 tests now pass
- The wizard command was only showing a static message instead of running
- Now properly calls FirstRunWizard.run() for full interactive setup
- Wizard detects existing API keys and proceeds through all setup steps
Jay Surse and others added 19 commits December 29, 2025 12:54
Resolved conflicts in:
- cortex/cli.py: kept improved _get_api_key_for_provider method
- tests/test_cli_extended.py: kept simplified test matching new implementation
- Apply black formatting to cli.py
- Fix test_no_key_found to also patch Path.cwd() to prevent finding local .env file
- Add backward compatibility functions to api_key_detector
- Update wizard to allow reconfiguration after setup
- Fix wizard to continue flow when keys are found
- Update tests for proper mocking
- Rename test_install_no_api_key to test_install_with_openai_key
- Fix duplicate function name causing CI lint failure
- Fix duplicate try blocks and indentation errors in cortex/cli.py
- Fix misplaced print statements in cortex/first_run_wizard.py
- Clean up fish shell completion script formatting
- Add jay@cortex.local for Jay Surse (@jaysurse)
- Add shreemj0407@example.com for Shree Milind Jejurikar (@ShreeJejurikar)

Both contributors have already signed the CLA. These additions link their
alternate email addresses (used in some commits) to their existing CLA
signatures to resolve automated verification failures.
@coderabbitai bot (Contributor) commented Jan 19, 2026

Caution

Review failed

The pull request is closed.

📝 Walkthrough

The pull request introduces a comprehensive API key detection and provider-agnostic setup system. It expands the first-run wizard with modular step-based flows for multiple providers (Anthropic, OpenAI, Ollama), adds API key validation utilities, updates the CLI to use provider-specific key retrieval with fallback logic, extends environment file location resolution, and includes new parallel LLM tests. Minor updates include CLA signer email additions and example file corrections.

Changes

Cohort / File(s) Summary
Core API Key Detection & Validation
cortex/api_key_detector.py, cortex/utils/api_key_validator.py
Added backward-compatibility helper functions detect_api_key(provider) and get_detected_provider() plus SUPPORTED_PROVIDERS constant. Introduced new module with validate_anthropic_api_key() and validate_openai_api_key() functions for API key validation via HTTP requests.
CLI Provider-Specific Integration
cortex/cli.py
Introduced _get_api_key_for_provider(provider) method for provider-specific key retrieval with fallbacks (ollama-local, fake-key). Updated install() signature to accept forced_provider and implemented multi-provider resolution with fallback during install. Rewrote wizard() to instantiate and run FirstRunWizard(interactive=True). Added notify() public method for desktop notifications. Tightened error handling in env-related commands.
Environment File Location Resolution
cortex/env_loader.py
Extended get_env_file_locations() to search additional .env locations in new priority order: .env (current working directory), ../.env (parent directory), cortex/.env (package directory if importable), ~/.cortex/.env (user home), /etc/cortex/.env (system-wide for POSIX). (A sketch of this lookup order follows the table.)
First Run Wizard Refactor
cortex/first_run_wizard.py
Major expansion with provider-centric modular architecture: added ~30 new methods with comprehensive type hints including step-based flows (_step_welcome(), _step_api_setup(), _step_hardware_detection(), etc.), provider-specific setup paths (_setup_claude_api(), _setup_openai_api(), _setup_ollama_legacy()), API key utilities (_prompt_for_api_key(), _verify_api_key(), _save_env_var()), and shell integration helpers. Integrated centralized API key detector with fallback paths and encryption via env_manager. Added dry-run verification for API keys.
API Key Detection Tests
tests/test_api_key_detector.py, tests/test_first_run_wizard.py
Enhanced test_no_key_found to patch both home and current-working-directory lookups. Added extensive test suite for API key utilities: TestGetValidApiKey with Anthropic/OpenAI validation scenarios, .env file loading, shell-exported key priority, and fallback behavior. Reworked step tests to mock key resolution.
CLI Test Updates
tests/test_cli.py, tests/test_cli_extended.py
Renamed test_install_dry_run to test_install_with_openai_key and separated dry-run logic into new test_install_dry_run test. Replaced direct key patching with mocked _get_api_key_for_provider() across extended tests. Updated install flow test expectations to reflect provider-specific fallback behavior.
Parallel LLM Tests & Demos
test_parallel_llm.py, examples/parallel_llm_demo.py
Added comprehensive test script with five async test scenarios: async completion, batch processing, rate limiting, helper functions, and performance comparison. Updated demo file, which now contains a duplicated async call.
Infrastructure & Examples
tests/installer/test_parallel_install.py, tests/test_env_manager.py, examples/env_demo.py, .env.example, .github/cla-signers.json
Changed shell invocation from python to python3 across parallel install tests. Updated valid URL scheme from sftp:// to ftps:// in env manager tests. Fixed future imports ordering in env_demo.py. Added API key environment placeholders and merge conflict markers to .env.example. Appended emails to existing CLA signers: "shreemj0407@example.com" for Shree Milind Jejurikar and "jay@cortex.local" for Jay Surse.
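
For the env_loader change summarized in the "Environment File Location Resolution" row above, here is a minimal sketch of the described lookup order. The name get_env_file_locations and the five paths come from this PR's summary; everything else is an illustrative assumption, not the actual implementation.

    import os
    from pathlib import Path


    def get_env_file_locations() -> list[Path]:
        """Return candidate .env paths in descending priority."""
        locations = [
            Path.cwd() / ".env",          # current working directory
            Path.cwd().parent / ".env",   # parent directory
        ]
        try:
            import cortex                 # package directory, if importable
            locations.append(Path(cortex.__file__).parent / ".env")
        except ImportError:
            pass
        locations.append(Path.home() / ".cortex" / ".env")   # user home
        if os.name == "posix":
            locations.append(Path("/etc/cortex/.env"))       # system-wide
        return locations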

Estimated Code Review Effort

🎯 4 (Complex) | ⏱️ ~60 minutes

Suggested Labels

enhancement

Suggested Reviewers

  • mikejmorgan-ai
  • Suyashd999
  • Anshgrover23

Poem

🐰 A wizard hops through providers with flair,
Detecting API keys everywhere,
With Anthropic, OpenAI, Ollama in line,
The first-run flow now feels so fine—
Validation checked, fallbacks in place,
Configuration blooms with modular grace! 🌟


@github-actions

CLA Verification Failed

The following contributors have not signed the Contributor License Agreement:

  • Jay Surse (jay@cortex.local)
  • Shree (shreemj0407@example.com)

How to Sign

  1. Read the CLA document
  2. Open a CLA signature request
  3. A maintainer will add you to the signers list
  4. Comment recheck on this PR to re-run verification


This check runs automatically. Maintainers can update .github/cla-signers.json to add signers.

@gemini-code-assist (Contributor)

Summary of Changes

Hello @jaysurse, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request introduces a major overhaul to the first-run wizard, significantly improving the user onboarding experience by streamlining API key configuration and validation. It centralizes API key management, enhances the CLI's ability to select and fallback between LLM providers, and expands environment variable loading capabilities. These changes aim to make Cortex more user-friendly and robust in its interaction with various AI models.

Highlights

  • Enhanced First-Run Wizard: The initial setup wizard has been significantly refactored to provide a more interactive and robust experience for configuring API keys. It now guides users through selecting LLM providers, validating keys, and setting up Ollama models.
  • Improved API Key Management and Detection: API key detection logic has been centralized and improved, with new utility functions for reading, saving, and validating API keys from various locations (e.g., .env files, environment variables). The system now prioritizes key sources and includes validation checks for Anthropic and OpenAI keys (a small key-lookup sketch follows this list).
  • Flexible LLM Provider Selection in CLI: The cortex install command now intelligently attempts to use available LLM providers. If the primary provider fails or is not configured, it can try alternative providers for which API keys are detected, enhancing the reliability of AI-powered operations.
  • New API Key Validation Utilities: Dedicated functions have been introduced to perform live validation of Anthropic and OpenAI API keys by making minimal test requests to their respective endpoints, ensuring keys are functional.
  • Expanded Environment File Search Paths: The system now searches for .env files in additional locations, including the parent directory and the Cortex package directory, improving flexibility for environment variable configuration.
  • CLA Signer Updates: Additional email addresses have been added for existing contributors 'Shree Milind Jejurikar' and 'Jay Surse' in the CLA signers list.
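
To make the key-handling highlights concrete, here is a minimal, illustrative sketch of provider-specific key lookup with the "ollama-local" fallback mentioned in this PR. The function name, mapping, and exact precedence are assumptions rather than Cortex's actual code.

    import os
    from typing import Optional

    # Hypothetical mapping; the real project may resolve these differently.
    PROVIDER_ENV_VARS = {
        "openai": "OPENAI_API_KEY",
        "claude": "ANTHROPIC_API_KEY",
    }


    def get_api_key_for_provider(provider: str) -> Optional[str]:
        """Return a usable key for the provider, or None if nothing is configured."""
        if provider == "ollama":
            # Ollama runs locally and needs no cloud key; the PR uses a
            # placeholder value for this case.
            return "ollama-local"
        env_var = PROVIDER_ENV_VARS.get(provider)
        if env_var and os.environ.get(env_var):
            return os.environ[env_var]
        # A fuller implementation would also consult the .env files found by
        # the expanded get_env_file_locations() search order.
        return None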

@Anshgrover23 (Collaborator) commented

@jaysurse I think your branch is broken; please create a new PR.

@gemini-code-assist bot left a comment

Code Review

This pull request introduces a significantly refactored first-run wizard for Cortex, improving the onboarding experience with a more interactive and robust API key setup. Key changes include centralizing API key detection and validation logic into new utility functions and a dedicated api_key_validator.py file, which performs actual API calls for verification. cortex/cli.py was updated to integrate the new wizard, refactor API key retrieval, and implement a more sophisticated provider fallback mechanism for install commands.

Review comments highlighted several areas for improvement: a potential security risk from os.system() in the wizard (console.clear() from rich is suggested instead), broad Exception catches in the API key validation functions that should be narrowed to more specific exceptions, complex provider fallback logic in the install method that could be encapsulated, and a duplicate query_multiple_packages call in an example file.

Additionally, minor updates were made to .env.example (fixing merge conflicts and adding new API key examples), .github/cla-signers.json (adding email addresses), cortex/env_loader.py (expanding the .env file search locations), and various test files: test_parallel_llm.py was added, test_parallel_install.py switched python -c to python3 -c, and test_api_key_detector.py, test_cli.py, test_cli_extended.py, test_env_manager.py, and test_first_run_wizard.py were adjusted to reflect the new API key handling and wizard structure.

Comment on lines 897 to 901
     subprocess.run(
-        "curl -fsSL https://ollama.ai/install.sh | sh", shell=True, check=True
+        "curl -fsSL https://ollama.ai/install.sh | sh",
+        shell=True,  # noqa: S602
+        check=True,
     )
Severity: high

Using shell=True with a command constructed from an external source like curl poses a significant security risk. If the content from ollama.ai is compromised or a man-in-the-middle attack occurs, arbitrary code could be executed on the user's machine. While the URL is hardcoded, this pattern is dangerous and should be avoided. A safer approach would be to provide the user with the command and instructions to run it manually.
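
One way the reviewer's suggestion could look in practice, as a sketch only: print the install steps for the user instead of piping curl into sh. It uses rich for output since the project already depends on it; the function name and wording are assumptions.

    from rich.console import Console

    console = Console()


    def suggest_ollama_install() -> None:
        """Print manual install steps instead of piping curl straight into sh."""
        console.print("Ollama is not installed. To install it, download and inspect the script first:")
        console.print("  curl -fsSL https://ollama.ai/install.sh -o /tmp/ollama-install.sh")
        console.print("  less /tmp/ollama-install.sh   # review before running")
        console.print("  sh /tmp/ollama-install.sh")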

Comment on lines +861 to 897
# Try providers in order
initial_provider = forced_provider or self._get_provider()
providers_to_try = [initial_provider]
if initial_provider in ["claude", "openai"]:
other_provider = "openai" if initial_provider == "claude" else "claude"
if self._get_api_key_for_provider(other_provider):
providers_to_try.append(other_provider)

commands = None
provider = None # noqa: F841 - assigned in loop
api_key = None
history = InstallationHistory()
install_id = None
start_time = datetime.now()
for try_provider in providers_to_try:
try:
try_api_key = self._get_api_key_for_provider(try_provider) or "dummy-key"
self._debug(f"Trying provider: {try_provider}")
interpreter = CommandInterpreter(api_key=try_api_key, provider=try_provider)

try:
self._print_status("🧠", t("install.analyzing"))

interpreter = CommandInterpreter(api_key=api_key, provider=provider)
self._print_status("🧠", t("install.analyzing"))

self._print_status("📦", t("install.planning"))
for _ in range(10):
self._animate_spinner(t("progress.analyzing_requirements"))
self._clear_line()

for _ in range(10):
self._animate_spinner(t("progress.analyzing_requirements"))
self._clear_line()
commands = interpreter.parse(f"install {software}")

commands = interpreter.parse(f"install {software}")
if commands:
provider = try_provider
api_key = try_api_key
break
else:
self._debug(f"No commands generated with {try_provider}")
except (RuntimeError, Exception) as e:
self._debug(f"API call failed with {try_provider}: {e}")
continue

Severity: medium

The logic for trying different providers within the install method is quite complex, involving a loop, nested try/except blocks, and manual provider ordering. This functionality could be encapsulated into a dedicated helper function or class (e.g., a ProviderFallbackHandler) to simplify the install method, improve readability, and make the logic more reusable and easier to test independently.
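
A rough sketch of the encapsulation suggested here. The class name ProviderFallbackHandler comes from the comment above, but the method names and signatures are hypothetical, not part of the PR.

    from typing import Callable, Optional


    class ProviderFallbackHandler:
        """Try providers in order and return the first one that yields commands."""

        def __init__(
            self,
            providers: list[str],
            get_key: Callable[[str], Optional[str]],
        ) -> None:
            self.providers = providers
            self.get_key = get_key

        def first_successful(
            self, run: Callable[[str, str], Optional[list]]
        ) -> Optional[tuple]:
            for provider in self.providers:
                api_key = self.get_key(provider) or "dummy-key"
                try:
                    commands = run(provider, api_key)
                except RuntimeError:
                    continue  # this provider failed; fall through to the next one
                if commands:
                    return provider, api_key, commands
            return None

With something like this, install() would only build the provider list and pass a callback that constructs CommandInterpreter and parses the request, then branch on whether first_successful() returned anything.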

def _clear_screen(self) -> None:
    """Clear the terminal screen."""
    if self.interactive:
        os.system("clear" if os.name == "posix" else "cls")  # noqa: S605, S607
Severity: medium

Using os.system() can introduce a security risk (shell injection). Since your project already uses the rich library, you can achieve the same result more safely by importing console from cortex.branding and calling console.clear(). This avoids invoking a subshell and is the recommended practice.
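
The suggested replacement could be as small as the following sketch. It assumes cortex.branding exposes a rich Console named console, as the comment implies; a plain Console() is used here so the snippet stands alone.

    from rich.console import Console

    console = Console()  # in the project: from cortex.branding import console


    def clear_screen(interactive: bool) -> None:
        """Clear the terminal via rich instead of spawning a subshell."""
        if interactive:
            console.clear()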

Comment on lines +12 to +28
    try:
        headers = {
            "x-api-key": api_key,
            "anthropic-version": "2023-06-01",
            "content-type": "application/json",
        }
        data = {
            "model": "claude-3-opus-20240229",
            "max_tokens": 1,
            "messages": [{"role": "user", "content": "Hello"}],
        }
        resp = requests.post(
            "https://api.anthropic.com/v1/messages", headers=headers, json=data, timeout=10
        )
        return resp.status_code == 200
    except Exception:
        return False
Severity: medium

Catching a broad Exception can hide various issues, such as network errors, timeouts, or invalid responses. It's better to catch more specific exceptions from the requests library (e.g., requests.exceptions.RequestException) to provide more granular error handling and logging. This applies to both validate_anthropic_api_key and validate_openai_api_key functions.
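
Applied to the Anthropic validator shown above, the narrower handling might look like this sketch; the same change would apply to validate_openai_api_key.

    import requests


    def validate_anthropic_api_key(api_key: str) -> bool:
        """Return True only if a minimal request succeeds with this key."""
        headers = {
            "x-api-key": api_key,
            "anthropic-version": "2023-06-01",
            "content-type": "application/json",
        }
        data = {
            "model": "claude-3-opus-20240229",
            "max_tokens": 1,
            "messages": [{"role": "user", "content": "Hello"}],
        }
        try:
            resp = requests.post(
                "https://api.anthropic.com/v1/messages", headers=headers, json=data, timeout=10
            )
        except requests.exceptions.RequestException:
            # Covers connection errors, timeouts, and other transport failures.
            return False
        return resp.status_code == 200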

start_time = time.time()

responses = await query_multiple_packages(router, packages, max_concurrent=5)
responses = await query_multiple_packages(router, packages, max_concurrent=5)
Severity: medium

This line is a duplicate of the line above it, causing query_multiple_packages to be called twice unnecessarily. Please remove the redundant call.

@jaysurse jaysurse deleted the cla-add-alternate-emails branch January 19, 2026 07:29