
Conversation

@pavanimanchala53 (Contributor) commented Jan 8, 2026

Related Issue

Closes #

Summary

AI Disclosure

  • No AI used
  • AI/IDE/Agents used (please describe below)

Checklist

  • Tests pass (pytest tests/)
  • MVP label added if closing MVP issue
  • Update "Cortex -h" (if needed)

Summary by CodeRabbit

Release Notes

  • New Features

    • Interactive clarification for ambiguous install requests with AI-generated suggestions
    • Support for multiple AI backends (OpenAI, Claude, Ollama)
    • Natural-language understanding feedback for commands
    • API key and provider parameter overrides for install command
  • Improvements

    • Virtual environment command normalization for cross-platform compatibility
  • Documentation

    • Added Natural Language Installer guide
  • Removals

    • Removed legacy shell-environment-analyzer subcommands


coderabbitai bot (Contributor) commented Jan 8, 2026

📝 Walkthrough

This pull request adds a natural language installation feature enabling users to install software via conversational input. Changes introduce intent extraction across multiple LLM providers (OpenAI, Claude, Ollama), interactive clarification workflows for ambiguous requests, stdin-aware prompt construction, and virtual environment command normalization. The CommandInterpreter class gains intent extraction and natural language understanding methods; the CortexCLI class incorporates ambiguity detection and interactive decision flows into the install workflow. Legacy shell-environment-analyzer subcommands are removed.

Changes

  • CLI Intent Handling & Interactive Flows (cortex/cli.py): Added stdin-aware prompt construction (_build_prompt_with_stdin), ambiguity detection (_is_ambiguous_request), interactive clarification with LLM-based suggestions (_clarification_prompt, _generate_suggestions), and venv command normalization (_normalize_venv_commands). Updated install() signature to accept api_key and provider overrides. Expanded install workflow with interactive decision points, editing, and execution modes. Removed legacy shell-environment-analyzer subcommands (audit, check, path).
  • LLM Intent Extraction & Understanding (cortex/llm/interpreter.py): Implemented intent extraction (extract_intent) for OpenAI, Claude, and Ollama via provider-specific methods (_extract_intent_openai, _extract_intent_claude, _extract_intent_ollama). Added robust JSON parsing (_parse_intent_from_text). Enhanced _get_system_prompt with structured command-generation rules. Added natural language understanding generation (generate_understanding_message), clarifying questions (ask_clarifying_questions), and intent refinement (refine_intent). Ollama model resolution via environment/config/default. Improved JSON repair and command validation.
  • Documentation (docs/docs/nl-installer.md): New documentation describing the Natural Language Installer feature, usage examples (with/without dry-run), execution behavior, and test reference.
  • Test Coverage (tests/test_nl_parser_cases.py): New comprehensive test module with fake_interpreter fixture and tests for CommandInterpreter.parse and CommandInterpreter.extract_intent across various scenarios (installations, typos, ambiguous/unknown requests, sentence-style queries, explicit intent simulation).

Sequence Diagram(s)

sequenceDiagram
    participant User
    participant CLI as CortexCLI
    participant Interp as CommandInterpreter
    participant LLM as LLM API
    
    User->>CLI: install("install pytorch")
    CLI->>CLI: _build_prompt_with_stdin(user_input)
    CLI->>Interp: extract_intent(prompt)
    Interp->>LLM: POST intent extraction request
    LLM-->>Interp: intent JSON response
    Interp->>Interp: _parse_intent_from_text(response)
    Interp-->>CLI: intent dict
    
    alt is_ambiguous_request
        CLI->>CLI: _clarification_prompt(user_input, intent)
        CLI->>Interp: ask_clarifying_questions(user_input, intent)
        Interp->>LLM: POST clarifying questions request
        LLM-->>Interp: questions
        Interp-->>CLI: questions list
        CLI->>CLI: _generate_suggestions(interpreter, user_input, intent)
        CLI->>LLM: POST suggestion generation
        LLM-->>CLI: command suggestions
        CLI->>User: Display suggestions, prompt for selection/edit
        User->>CLI: User selection/confirmation
        CLI->>Interp: refine_intent(original_request, clarification)
        Interp-->>CLI: refined intent
    else not ambiguous
        CLI->>CLI: Proceed with direct command generation
    end
    
    CLI->>Interp: get_commands(refined_intent)
    Interp-->>CLI: command list
    CLI->>CLI: _normalize_venv_commands(commands)
    CLI->>User: Display commands (dry-run or execute)
    
    alt execute mode
        User->>CLI: Confirm execution
        CLI->>CLI: Execute commands
    else dry_run mode
        CLI->>User: Show commands without execution
    end
sequenceDiagram
    participant Interp as CommandInterpreter
    participant OpenAI as OpenAI API
    participant Claude as Claude API
    participant Ollama as Ollama API
    
    Interp->>Interp: extract_intent(user_input)
    
    alt provider == OPENAI
        Interp->>OpenAI: _call_openai(_get_intent_prompt())
        OpenAI-->>Interp: JSON response
        Interp->>Interp: _parse_intent_from_text(response)
    else provider == CLAUDE
        Interp->>Claude: _call_claude(_get_intent_prompt())
        Claude-->>Interp: text response
        Interp->>Interp: _parse_intent_from_text(response)
    else provider == OLLAMA
        Interp->>Interp: _get_ollama_model()
        Interp->>Ollama: POST request (base_url + model)
        Ollama-->>Interp: JSON response
        Interp->>Interp: _parse_intent_from_text(response)
    else provider == FAKE
        Interp->>Interp: Return configurable/default intent
    end
    
    Interp-->>Interp: return intent dict

Estimated code review effort

🎯 4 (Complex) | ⏱️ ~45 minutes

Possibly related PRs

Suggested labels

MVP

Suggested reviewers

  • mikejmorgan-ai
  • Suyashd999

Poem

🐰 A rabbit hops through natural language flowers,
Intent blooms bright with LLM powers,
Clarifications hop, suggestions arise,
Install with words, watch commands materialize!

🚥 Pre-merge checks | ❌ 3
❌ Failed checks (2 warnings, 1 inconclusive)

  • Description check (⚠️ Warning): The description template is used but all required and optional sections are left empty or unchecked; no actual summary, issue reference, or disclosure information is provided. Resolution: Fill in all required sections: add the issue number in 'Related Issue', provide a substantive summary of changes, disclose AI tool usage, and verify checklist items are complete before merging.
  • Docstring Coverage (⚠️ Warning): Docstring coverage is 71.93%, which is insufficient; the required threshold is 80.00%. Resolution: Write docstrings for the functions missing them to satisfy the coverage threshold.
  • Title check (❓ Inconclusive): The title 'Nl parser' is vague and does not clearly describe the main changes; it lacks specificity about what the parser does or its purpose. Resolution: Use a more descriptive title that clarifies the purpose, such as 'Add natural language installer with intent extraction and clarification flow' or 'Implement NL-based software installation with interactive clarification'.



github-actions bot commented Jan 8, 2026

CLA Verification Passed

All contributors have signed the CLA.

Contributor Signed As
@pavanimanchala53 @pavanimanchala53
@Anshgrover23 @Anshgrover23

sonarqubecloud bot commented Jan 8, 2026

coderabbitai bot (Contributor) left a comment

Actionable comments posted: 7

🤖 Fix all issues with AI agents
In @cortex/cli.py:
- Around line 75-78: The inline comment in cortex/cli.py describing the
confidence threshold is misleading: it says "lowered from 0.5 to be more
conservative" while the code uses 0.7 (which is a raise). Update the comment
near the ambiguous-check block (the lines describing domain unknown, confidence
< 0.7, and short requests) to correctly state that the confidence threshold was
raised/increased from 0.5 to 0.7 (or simply say "threshold increased to 0.7 for
conservatism") so the comment matches the actual logic in that function/block.
- Around line 130-179: The provider comparisons in the chat branch use
interpreter.provider.name (which yields enum member names like "OPENAI") causing
the wrong path to be taken; switch comparisons to use interpreter.provider.value
or compare against the APIProvider enum members directly (e.g.,
APIProvider.OPENAI) and update all occurrences in the function handling the
provider logic (the blocks referencing interpreter.provider.name,
interpreter.client, interpreter.model, and interpreter.ollama_url) so each
provider-specific branch (OpenAI/Claude/Ollama/Fake) is matched correctly.

In @cortex/llm/interpreter.py:
- Around line 575-604: Add a proper docstring to the public method
extract_intent describing its purpose (parse user input into an intent dict),
the parameters (user_input: str), the return type (dict with keys like action,
domain, install_mode, description, ambiguous, confidence and that it may vary by
provider), and raised exceptions (ValueError for empty input or unsupported
provider, JSONDecodeError handling for CORTEX_FAKE_INTENT). Mention supported
providers (APIProvider.OPENAI, APIProvider.CLAUDE, APIProvider.OLLAMA,
APIProvider.FAKE) and note that when using FAKE it may read CORTEX_FAKE_INTENT
from the environment as JSON fallback behavior.
- Around line 328-335: The fallback intent returned in the parsing-failure
branch is missing the install_mode field; update the returned dict in the
failure branch (the anonymous intent fallback returned in interpreter.py) to
include "install_mode" with the same sentinel value used by other fallbacks
(e.g., "unknown" or the project’s standard fallback value) so the structure
matches other intent fallback objects.
- Around line 233-242: The JSON example in the prompt block under the Format:
docstring in cortex/llm/interpreter.py has a missing comma after the
"install_mode": "..." entry; update that prompt/example (the triple-quoted
string starting with "Format:") to include a comma after "install_mode": "..."
so the example becomes valid JSON and won't confuse the LLM.
- Around line 194-202: The Ollama fallback returned in the except Exception
block in cortex/llm/interpreter.py is missing the install_mode key, which causes
downstream KeyError when code reads intent.get("install_mode"); update that
returned dict (the one currently returning
action/domain/description/ambiguous/confidence) to include "install_mode":
"unknown" (matching the shape used by the OpenAI/Claude fallbacks) so the intent
structure is consistent.

In @docs/docs/nl-installer.md:
- Around line 1-18: Update the incorrect test file reference in the NL Installer
docs: change the mention of tests/test_nl_installer.py to the correct
tests/test_nl_parser_cases.py in the docs/docs/nl-installer.md content (the
paragraph that currently states "The NL installer is validated using unit tests
in `tests/test_nl_installer.py`"). Ensure the updated line reads that the NL
installer is validated using unit tests in `tests/test_nl_parser_cases.py`.
🧹 Nitpick comments (5)
cortex/llm/interpreter.py (4)

72-94: Add explicit encoding and consider narrowing exception handling.

Two minor improvements:

  1. Line 85: Add encoding="utf-8" to open() for cross-platform consistency
  2. Line 90: The bare except Exception is broad; consider catching (OSError, json.JSONDecodeError, KeyError) for clarity
Suggested improvement
             config_file = Path.home() / ".cortex" / "config.json"
             if config_file.exists():
-                with open(config_file) as f:
+                with open(config_file, encoding="utf-8") as f:
                     config = json.load(f)
                     model = config.get("ollama_model")
                     if model:
                         return model
-        except Exception:
+        except (OSError, json.JSONDecodeError, KeyError):
             pass  # Ignore errors reading config

188-202: Narrow the exception handling to avoid catching system exceptions.

The bare except Exception on line 194 could catch KeyboardInterrupt or SystemExit. Consider catching specific network-related exceptions.

Suggested fix
-        except Exception:
+        except (urllib.error.URLError, json.JSONDecodeError, OSError, TimeoutError) as e:
             # True failure → unknown intent
             return {
                 "action": "unknown",
                 "domain": "unknown",
-                "description": "Failed to extract intent",
+                "description": f"Failed to extract intent: {e}",
                 "ambiguous": True,
                 "confidence": 0.0,
             }

678-680: Avoid bare except clause.

The bare except on line 678 will catch all exceptions including KeyboardInterrupt and SystemExit. Consider catching specific exceptions or at minimum use except Exception:.


752-755: Add docstring for public method.

The refine_intent method is a public API and should have a docstring per coding guidelines.

Add docstring
     def refine_intent(self, original_request: str, clarification: str) -> dict:
-        """Refine intent using a user clarification, delegating back to extraction."""
+        """Refine intent by combining original request with user clarification.
+
+        Args:
+            original_request: The initial user request
+            clarification: Additional context provided by the user
+
+        Returns:
+            Updated intent dict with refined understanding
+        """
         combined = f"{original_request}\nClarification: {clarification}"
         return self.extract_intent(combined)
cortex/cli.py (1)

194-196: Unused exception variable and missing debug logging.

The exception is caught but not logged despite the comment indicating debug logging was intended. Consider implementing the debug logging or removing the misleading comment.

Add debug logging
         except Exception as e:
-            # Log error in debug mode
-            pass
+            # Silent fallback - suggestions are non-critical
+            # Debug: print(f"Suggestion generation failed: {e}", file=sys.stderr)
+            pass
📜 Review details

Configuration used: defaults

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 012513c and 0f75fe6.

📒 Files selected for processing (4)
  • cortex/cli.py
  • cortex/llm/interpreter.py
  • docs/docs/nl-installer.md
  • tests/test_nl_parser_cases.py
🧰 Additional context used
📓 Path-based instructions (2)
**/*.py

📄 CodeRabbit inference engine (AGENTS.md)

**/*.py: Follow PEP 8 style guide
Type hints required in Python code
Docstrings required for all public APIs

Files:

  • tests/test_nl_parser_cases.py
  • cortex/llm/interpreter.py
  • cortex/cli.py
tests/**/*.py

📄 CodeRabbit inference engine (AGENTS.md)

Maintain >80% test coverage for pull requests

Files:

  • tests/test_nl_parser_cases.py
🧠 Learnings (4)
📓 Common learnings
Learnt from: pavanimanchala53
Repo: cortexlinux/cortex PR: 439
File: cortex/cli.py:62-73
Timestamp: 2026-01-05T20:26:18.626Z
Learning: In cortex/cli.py, the _is_ambiguous_request method intentionally only checks for domain=="unknown" and does not recheck the ambiguous field from the intent dict. The design assumes that ambiguity handling occurs at the interpreter layer through LLM prompting, and the domain check in CLI is sufficient as a safety mechanism.
📚 Learning: 2026-01-05T20:26:18.626Z
Learnt from: pavanimanchala53
Repo: cortexlinux/cortex PR: 439
File: cortex/cli.py:62-73
Timestamp: 2026-01-05T20:26:18.626Z
Learning: In cortex/cli.py, the _is_ambiguous_request method intentionally only checks for domain=="unknown" and does not recheck the ambiguous field from the intent dict. The design assumes that ambiguity handling occurs at the interpreter layer through LLM prompting, and the domain check in CLI is sufficient as a safety mechanism.

Applied to files:

  • cortex/llm/interpreter.py
  • cortex/cli.py
📚 Learning: 2025-12-11T12:03:24.071Z
Learnt from: CR
Repo: cortexlinux/cortex PR: 0
File: AGENTS.md:0-0
Timestamp: 2025-12-11T12:03:24.071Z
Learning: Applies to **/*install*.py : Dry-run by default for all installations in command execution

Applied to files:

  • cortex/cli.py
📚 Learning: 2025-12-11T12:03:24.071Z
Learnt from: CR
Repo: cortexlinux/cortex PR: 0
File: AGENTS.md:0-0
Timestamp: 2025-12-11T12:03:24.071Z
Learning: Applies to **/*install*.py : No silent sudo execution - require explicit user confirmation

Applied to files:

  • cortex/cli.py
🧬 Code graph analysis (1)
tests/test_nl_parser_cases.py (1)
cortex/llm/interpreter.py (2)
  • parse (505-563)
  • extract_intent (575-604)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (3)
  • GitHub Check: test (3.12)
  • GitHub Check: test (3.10)
  • GitHub Check: test (3.11)
🔇 Additional comments (11)
cortex/llm/interpreter.py (2)

42-44: Good defensive initialization, minor redundancy noted.

The Ollama base URL is correctly initialized here, but note that _initialize_client (line 116) also reads from the same environment variable. Consider storing self.ollama_url unconditionally in __init__ and reusing it in _initialize_client to avoid the duplicate os.environ.get call.


692-750: LGTM - good defensive parsing of LLM responses.

The filtering logic on lines 741-746 effectively removes meta-commentary from LLM responses. The silent exception handling on line 749-750 is acceptable for a non-critical feature, though consider adding debug logging if verbose mode is available.

tests/test_nl_parser_cases.py (4)

8-19: Well-designed test fixture.

The fixture properly isolates tests from external API calls by using the fake provider with predefined responses.


22-80: Tests validate parsing doesn't crash, but have limited verification depth.

These tests effectively verify that the parse method handles various input types without errors. However, since the fake provider returns identical commands regardless of input, consider adding tests with CORTEX_FAKE_INTENT to verify that different intents are correctly extracted for different request types.


127-159: Good coverage of fake intent injection mechanism.

These tests properly verify that the CORTEX_FAKE_INTENT environment variable correctly overrides the default fake intent, enabling controlled testing of downstream intent-handling logic.
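For readers unfamiliar with that mechanism, a test of this shape is what the comment describes (a minimal sketch; the fake_interpreter fixture name comes from the walkthrough above, and the exact intent fields are assumptions):

```python
import json

def test_fake_intent_env_override(monkeypatch, fake_interpreter):
    """Sketch: inject a specific intent through CORTEX_FAKE_INTENT (field names assumed)."""
    fake_intent = {
        "action": "install",
        "domain": "ml",
        "install_mode": "venv",
        "description": "install pytorch",
        "ambiguous": False,
        "confidence": 0.9,
    }
    monkeypatch.setenv("CORTEX_FAKE_INTENT", json.dumps(fake_intent))

    intent = fake_interpreter.extract_intent("install pytorch")

    # The fake provider should return exactly the injected intent, not the default one.
    assert intent["domain"] == "ml"
    assert intent["install_mode"] == "venv"
```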


82-88: Good test documenting default fake provider behavior.

This test ensures that the default fake intent is non-ambiguous, which is important for downstream test isolation.

cortex/cli.py (5)

38-52: Good stdin handling with defensive error catching.

The stdin consumption in __init__ correctly handles the case where stdin is not a tty and gracefully handles OSError.
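The pattern being praised here is roughly the following (an illustrative sketch, not the code under review; the helper names are assumptions):

```python
import sys

def _read_stdin_if_piped() -> str:
    """Return piped stdin content, or an empty string when running interactively."""
    try:
        if not sys.stdin.isatty():
            return sys.stdin.read().strip()
    except OSError:
        # stdin may be unavailable (detached process, closed descriptor); ignore
        pass
    return ""

def _build_prompt_with_stdin(user_input: str, stdin_context: str) -> str:
    """Append piped context so the LLM sees both the request and the extra input."""
    if stdin_context:
        return f"{user_input}\n\nContext from stdin:\n{stdin_context}"
    return user_input
```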


283-316: Good platform-aware venv command normalization.

The implementation correctly detects venv creation and normalizes activation commands for both Windows and Unix platforms.
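As a rough illustration of what such normalization involves (a sketch under the assumption that commands are plain shell strings; the real method may differ):

```python
import sys

def _normalize_venv_commands(commands: list[str]) -> list[str]:
    """Sketch: rewrite Unix-style venv activation when running on Windows."""
    normalized = []
    for cmd in commands:
        if sys.platform.startswith("win") and "bin/activate" in cmd:
            # "source .venv/bin/activate" becomes ".venv\Scripts\activate"
            cmd = (
                cmd.replace("source ", "")
                .replace("/bin/activate", r"\Scripts\activate")
                .replace("/", "\\")
            )
        normalized.append(cmd)
    return normalized
```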


800-842: Well-implemented interactive clarification loop.

The clarification flow correctly:

  • Limits rounds to prevent infinite loops (max 3)
  • Handles both question-based and free-form clarification
  • Fails fast in non-interactive mode with helpful suggestions

Based on learnings, the ambiguity check at the CLI layer using domain is intentional as a safety mechanism.
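Put together, the loop being described might be approximated like this (illustrative only; the method signatures are assumptions based on the walkthrough, and only the 3-round cap is taken from this comment):

```python
MAX_CLARIFICATION_ROUNDS = 3  # cap described in the comment above

def clarify_until_resolved(cli, interpreter, user_input: str, intent: dict) -> dict:
    """Sketch: refine an ambiguous intent interactively, giving up after a few rounds."""
    for _ in range(MAX_CLARIFICATION_ROUNDS):
        if not cli._is_ambiguous_request(intent, user_input):
            break
        # _clarification_prompt is assumed to show questions/suggestions and return
        # the user's free-form answer (empty string in non-interactive mode).
        answer = cli._clarification_prompt(user_input, intent)
        if not answer:
            break  # fail fast: ambiguity cannot be resolved without a user
        intent = interpreter.refine_intent(user_input, answer)
    return intent
```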


896-944: Good confirmation flow with edit capability.

The confirmation flow provides flexibility with yes/edit/no options. The auto-approve in non-interactive mode (line 900-901) is necessary for CI/testing and is appropriately flagged with a warning. This aligns with the learning that execution requires the explicit --execute flag.


856-875: Good install mode-aware prompt construction.

The prompt correctly adjusts based on install_mode - using pip/venv for Python packages and apt for system packages. The command normalization (line 875) ensures venv activation is platform-appropriate.

Comment on lines +75 to +78
# Consider ambiguous if:
# - domain unknown
# - confidence < 0.7 (lowered from 0.5 to be more conservative)
# - request is very short (< 5 words) suggesting underspecification

⚠️ Potential issue | 🟡 Minor

Comment says "lowered" but threshold was raised.

Line 77 states the confidence threshold was "lowered from 0.5 to be more conservative" but 0.7 > 0.5, meaning it was actually raised to be more conservative (requiring higher confidence). This is correct behavior but the comment is misleading.

Fix comment
         # Consider ambiguous if:
         # - domain unknown
-        # - confidence < 0.7 (lowered from 0.5 to be more conservative)
+        # - confidence < 0.7 (raised from 0.5 to be more conservative)
         # - request is very short (< 5 words) suggesting underspecification
📝 Committable suggestion


Suggested change
# Consider ambiguous if:
# - domain unknown
# - confidence < 0.7 (lowered from 0.5 to be more conservative)
# - request is very short (< 5 words) suggesting underspecification
# Consider ambiguous if:
# - domain unknown
# - confidence < 0.7 (raised from 0.5 to be more conservative)
# - request is very short (< 5 words) suggesting underspecification
🤖 Prompt for AI Agents
In @cortex/cli.py around lines 75 - 78, The inline comment in cortex/cli.py
describing the confidence threshold is misleading: it says "lowered from 0.5 to
be more conservative" while the code uses 0.7 (which is a raise). Update the
comment near the ambiguous-check block (the lines describing domain unknown,
confidence < 0.7, and short requests) to correctly state that the confidence
threshold was raised/increased from 0.5 to 0.7 (or simply say "threshold
increased to 0.7 for conservatism") so the comment matches the actual logic in
that function/block.
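For context, the heuristic this comment block documents would look roughly like the following (reconstructed from the comment text above; the 0.7 threshold and the 5-word cutoff are the only values taken from the PR, everything else is assumed):

```python
def _is_ambiguous_request(self, intent: dict, user_input: str) -> bool:
    """Sketch of the documented heuristic: weak or underspecified requests count as ambiguous."""
    if intent.get("domain") == "unknown":
        return True
    if intent.get("confidence", 0.0) < 0.7:  # threshold raised from 0.5 to 0.7
        return True
    if len(user_input.split()) < 5:  # very short requests tend to be underspecified
        return True
    return False
```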

Comment on lines +130 to +179
            if interpreter.provider.name == "openai":
                response = interpreter.client.chat.completions.create(
                    model=interpreter.model,
                    messages=[
                        {
                            "role": "system",
                            "content": "You are a helpful assistant that suggests installation requests. Be specific and relevant.",
                        },
                        {"role": "user", "content": prompt},
                    ],
                    temperature=0.3,
                    max_tokens=200,
                )
                content = response.choices[0].message.content.strip()
            elif interpreter.provider.name == "claude":
                response = interpreter.client.messages.create(
                    model=interpreter.model,
                    max_tokens=200,
                    temperature=0.3,
                    system="You are a helpful assistant that suggests installation requests. Be specific and relevant.",
                    messages=[{"role": "user", "content": prompt}],
                )
                content = response.content[0].text.strip()
            elif interpreter.provider.name == "ollama":
                full_prompt = f"System: You are a helpful assistant that suggests installation requests. Be specific and relevant.\n\nUser: {prompt}"
                data = json.dumps(
                    {
                        "model": interpreter.model,
                        "prompt": full_prompt,
                        "stream": False,
                        "options": {"temperature": 0.3},
                    }
                ).encode("utf-8")
                req = urllib.request.Request(
                    f"{interpreter.ollama_url}/api/generate",
                    data=data,
                    headers={"Content-Type": "application/json"},
                )
                with urllib.request.urlopen(req, timeout=30) as response:
                    result = json.loads(response.read().decode("utf-8"))
                    content = result.get("response", "").strip()
            elif interpreter.provider.name == "fake":
                # Return fake suggestions for testing
                return [
                    f"install {user_input} with more details",
                    f"set up {user_input} environment",
                    f"configure {user_input} tools",
                ]
            else:
                return []

⚠️ Potential issue | 🔴 Critical

Bug: Provider comparison uses wrong attribute.

The code compares interpreter.provider.name with lowercase strings like "openai", but APIProvider is an Enum where .name returns the uppercase member name (e.g., "OPENAI"), not the value. This causes all provider-specific paths to be skipped, always falling back to the default suggestions.

Use interpreter.provider.value or compare with the enum members directly.
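The difference is easy to demonstrate with a minimal enum (illustrative only; the real APIProvider presumably maps members to lowercase string values, which is what the fix below relies on):

```python
from enum import Enum

class APIProvider(Enum):
    OPENAI = "openai"
    CLAUDE = "claude"

provider = APIProvider.OPENAI
print(provider.name)                    # "OPENAI"  (member name, never lowercase)
print(provider.value)                   # "openai"  (the string the comparison should use)
print(provider is APIProvider.OPENAI)   # True      (comparing members also works)
```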

Fix provider comparisons
-            if interpreter.provider.name == "openai":
+            if interpreter.provider.value == "openai":
                 response = interpreter.client.chat.completions.create(
                     ...
                 )
                 content = response.choices[0].message.content.strip()
-            elif interpreter.provider.name == "claude":
+            elif interpreter.provider.value == "claude":
                 response = interpreter.client.messages.create(
                     ...
                 )
                 content = response.content[0].text.strip()
-            elif interpreter.provider.name == "ollama":
+            elif interpreter.provider.value == "ollama":
                 ...
-            elif interpreter.provider.name == "fake":
+            elif interpreter.provider.value == "fake":
                 # Return fake suggestions for testing
                 return [...]
📝 Committable suggestion


Suggested change
if interpreter.provider.name == "openai":
response = interpreter.client.chat.completions.create(
model=interpreter.model,
messages=[
{
"role": "system",
"content": "You are a helpful assistant that suggests installation requests. Be specific and relevant.",
},
{"role": "user", "content": prompt},
],
temperature=0.3,
max_tokens=200,
)
content = response.choices[0].message.content.strip()
elif interpreter.provider.name == "claude":
response = interpreter.client.messages.create(
model=interpreter.model,
max_tokens=200,
temperature=0.3,
system="You are a helpful assistant that suggests installation requests. Be specific and relevant.",
messages=[{"role": "user", "content": prompt}],
)
content = response.content[0].text.strip()
elif interpreter.provider.name == "ollama":
full_prompt = f"System: You are a helpful assistant that suggests installation requests. Be specific and relevant.\n\nUser: {prompt}"
data = json.dumps(
{
"model": interpreter.model,
"prompt": full_prompt,
"stream": False,
"options": {"temperature": 0.3},
}
).encode("utf-8")
req = urllib.request.Request(
f"{interpreter.ollama_url}/api/generate",
data=data,
headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(req, timeout=30) as response:
result = json.loads(response.read().decode("utf-8"))
content = result.get("response", "").strip()
elif interpreter.provider.name == "fake":
# Return fake suggestions for testing
return [
f"install {user_input} with more details",
f"set up {user_input} environment",
f"configure {user_input} tools",
]
else:
return []
if interpreter.provider.value == "openai":
response = interpreter.client.chat.completions.create(
model=interpreter.model,
messages=[
{
"role": "system",
"content": "You are a helpful assistant that suggests installation requests. Be specific and relevant.",
},
{"role": "user", "content": prompt},
],
temperature=0.3,
max_tokens=200,
)
content = response.choices[0].message.content.strip()
elif interpreter.provider.value == "claude":
response = interpreter.client.messages.create(
model=interpreter.model,
max_tokens=200,
temperature=0.3,
system="You are a helpful assistant that suggests installation requests. Be specific and relevant.",
messages=[{"role": "user", "content": prompt}],
)
content = response.content[0].text.strip()
elif interpreter.provider.value == "ollama":
full_prompt = f"System: You are a helpful assistant that suggests installation requests. Be specific and relevant.\n\nUser: {prompt}"
data = json.dumps(
{
"model": interpreter.model,
"prompt": full_prompt,
"stream": False,
"options": {"temperature": 0.3},
}
).encode("utf-8")
req = urllib.request.Request(
f"{interpreter.ollama_url}/api/generate",
data=data,
headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(req, timeout=30) as response:
result = json.loads(response.read().decode("utf-8"))
content = result.get("response", "").strip()
elif interpreter.provider.value == "fake":
# Return fake suggestions for testing
return [
f"install {user_input} with more details",
f"set up {user_input} environment",
f"configure {user_input} tools",
]
else:
return []
🤖 Prompt for AI Agents
In @cortex/cli.py around lines 130 - 179, The provider comparisons in the chat
branch use interpreter.provider.name (which yields enum member names like
"OPENAI") causing the wrong path to be taken; switch comparisons to use
interpreter.provider.value or compare against the APIProvider enum members
directly (e.g., APIProvider.OPENAI) and update all occurrences in the function
handling the provider logic (the blocks referencing interpreter.provider.name,
interpreter.client, interpreter.model, and interpreter.ollama_url) so each
provider-specific branch (OpenAI/Claude/Ollama/Fake) is matched correctly.

Comment on lines +194 to +202
        except Exception:
            # True failure → unknown intent
            return {
                "action": "unknown",
                "domain": "unknown",
                "description": "Failed to extract intent",
                "ambiguous": True,
                "confidence": 0.0,
            }

⚠️ Potential issue | 🟡 Minor

Inconsistent fallback intent structure - missing install_mode field.

The Ollama fallback intent (lines 196-202) is missing the install_mode field that's present in the OpenAI and Claude fallbacks (lines 276-283, 298-305). Downstream code that expects the key gets an inconsistent structure: intent.get("install_mode") silently returns None instead of a real mode.

Add missing field
         except Exception:
             # True failure → unknown intent
             return {
                 "action": "unknown",
                 "domain": "unknown",
                 "description": "Failed to extract intent",
                 "ambiguous": True,
                 "confidence": 0.0,
+                "install_mode": "system",
             }
🤖 Prompt for AI Agents
In @cortex/llm/interpreter.py around lines 194 - 202, The Ollama fallback
returned in the except Exception block in cortex/llm/interpreter.py is missing
the install_mode key, which causes downstream KeyError when code reads
intent.get("install_mode"); update that returned dict (the one currently
returning action/domain/description/ambiguous/confidence) to include
"install_mode": "unknown" (matching the shape used by the OpenAI/Claude
fallbacks) so the intent structure is consistent.
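A simple way to keep every failure path consistent is a shared helper that all provider branches reuse (a sketch, assuming the field set shown elsewhere in this PR is the canonical one):

```python
def _fallback_intent(description: str, install_mode: str = "unknown") -> dict:
    """Return a uniform 'unknown intent' structure for any provider failure path."""
    return {
        "action": "unknown",
        "domain": "unknown",
        "install_mode": install_mode,
        "description": description,
        "ambiguous": True,
        "confidence": 0.0,
    }

# e.g. in the Ollama except-block:
#     return _fallback_intent("Failed to extract intent")
```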

Comment on lines +233 to +242
Format:
{
"action": "...",
"domain": "...",
"install_mode": "..."
"description": "...",
"ambiguous": true/false,
"confidence": 0.0
}
"""

⚠️ Potential issue | 🟡 Minor

Fix JSON syntax error in the prompt example.

The example JSON in the prompt is missing a comma after "install_mode": "..." on line 237, which could confuse the LLM about the expected output format.

Fix
         Format:
         {
         "action": "...",
         "domain": "...",
-        "install_mode": "..."
+        "install_mode": "...",
         "description": "...",
         "ambiguous": true/false,
         "confidence": 0.0
         }
📝 Committable suggestion


Suggested change
Format:
{
"action": "...",
"domain": "...",
"install_mode": "..."
"description": "...",
"ambiguous": true/false,
"confidence": 0.0
}
"""
Format:
{
"action": "...",
"domain": "...",
"install_mode": "...",
"description": "...",
"ambiguous": true/false,
"confidence": 0.0
}
"""
🤖 Prompt for AI Agents
In @cortex/llm/interpreter.py around lines 233 - 242, The JSON example in the
prompt block under the Format: docstring in cortex/llm/interpreter.py has a
missing comma after the "install_mode": "..." entry; update that prompt/example
(the triple-quoted string starting with "Format:") to include a comma after
"install_mode": "..." so the example becomes valid JSON and won't confuse the
LLM.

Comment on lines +328 to +335
        # If parsing fails, do NOT guess meaning
        return {
            "action": "unknown",
            "domain": "unknown",
            "description": "Unstructured intent output",
            "ambiguous": True,
            "confidence": 0.0,
        }

⚠️ Potential issue | 🟡 Minor

Fallback intent missing install_mode field.

The fallback intent returned when parsing fails is also missing install_mode, which should be consistent with other fallback structures.

Fix
         # If parsing fails, do NOT guess meaning
         return {
             "action": "unknown",
             "domain": "unknown",
             "description": "Unstructured intent output",
             "ambiguous": True,
             "confidence": 0.0,
+            "install_mode": "system",
         }
📝 Committable suggestion


Suggested change
# If parsing fails, do NOT guess meaning
return {
"action": "unknown",
"domain": "unknown",
"description": "Unstructured intent output",
"ambiguous": True,
"confidence": 0.0,
}
# If parsing fails, do NOT guess meaning
return {
"action": "unknown",
"domain": "unknown",
"description": "Unstructured intent output",
"ambiguous": True,
"confidence": 0.0,
"install_mode": "system",
}
🤖 Prompt for AI Agents
In @cortex/llm/interpreter.py around lines 328 - 335, The fallback intent
returned in the parsing-failure branch is missing the install_mode field; update
the returned dict in the failure branch (the anonymous intent fallback returned
in interpreter.py) to include "install_mode" with the same sentinel value used
by other fallbacks (e.g., "unknown" or the project’s standard fallback value) so
the structure matches other intent fallback objects.

Comment on lines +575 to +604
    def extract_intent(self, user_input: str) -> dict:
        if not user_input or not user_input.strip():
            raise ValueError("User input cannot be empty")

        if self.provider == APIProvider.OPENAI:
            return self._extract_intent_openai(user_input)
        elif self.provider == APIProvider.CLAUDE:
            return self._extract_intent_claude(user_input)
        elif self.provider == APIProvider.OLLAMA:
            return self._extract_intent_ollama(user_input)
        elif self.provider == APIProvider.FAKE:
            # Check for configurable fake intent from environment
            fake_intent_env = os.environ.get("CORTEX_FAKE_INTENT")
            if fake_intent_env:
                try:
                    return json.loads(fake_intent_env)
                except json.JSONDecodeError:
                    pass  # Fall back to default

            # Return realistic intent for testing (not ambiguous)
            return {
                "action": "install",
                "domain": "general",
                "install_mode": "system",
                "description": user_input,
                "ambiguous": False,
                "confidence": 0.8,
            }
        else:
            raise ValueError(f"Unsupported provider: {self.provider}")

🛠️ Refactor suggestion | 🟠 Major

Add docstring for public API method.

Per coding guidelines, docstrings are required for all public APIs. The extract_intent method is a public API that should document its purpose, parameters, and return value.

Add docstring
     def extract_intent(self, user_input: str) -> dict:
+        """Extract intent from natural language input.
+
+        Args:
+            user_input: Natural language description of desired action
+
+        Returns:
+            Dict containing action, domain, install_mode, description,
+            ambiguous flag, and confidence score
+
+        Raises:
+            ValueError: If input is empty
+        """
         if not user_input or not user_input.strip():
             raise ValueError("User input cannot be empty")
📝 Committable suggestion


Suggested change
def extract_intent(self, user_input: str) -> dict:
if not user_input or not user_input.strip():
raise ValueError("User input cannot be empty")
if self.provider == APIProvider.OPENAI:
return self._extract_intent_openai(user_input)
elif self.provider == APIProvider.CLAUDE:
return self._extract_intent_claude(user_input)
elif self.provider == APIProvider.OLLAMA:
return self._extract_intent_ollama(user_input)
elif self.provider == APIProvider.FAKE:
# Check for configurable fake intent from environment
fake_intent_env = os.environ.get("CORTEX_FAKE_INTENT")
if fake_intent_env:
try:
return json.loads(fake_intent_env)
except json.JSONDecodeError:
pass # Fall back to default
# Return realistic intent for testing (not ambiguous)
return {
"action": "install",
"domain": "general",
"install_mode": "system",
"description": user_input,
"ambiguous": False,
"confidence": 0.8,
}
else:
raise ValueError(f"Unsupported provider: {self.provider}")
def extract_intent(self, user_input: str) -> dict:
"""Extract intent from natural language input.
Args:
user_input: Natural language description of desired action
Returns:
Dict containing action, domain, install_mode, description,
ambiguous flag, and confidence score
Raises:
ValueError: If input is empty
"""
if not user_input or not user_input.strip():
raise ValueError("User input cannot be empty")
if self.provider == APIProvider.OPENAI:
return self._extract_intent_openai(user_input)
elif self.provider == APIProvider.CLAUDE:
return self._extract_intent_claude(user_input)
elif self.provider == APIProvider.OLLAMA:
return self._extract_intent_ollama(user_input)
elif self.provider == APIProvider.FAKE:
# Check for configurable fake intent from environment
fake_intent_env = os.environ.get("CORTEX_FAKE_INTENT")
if fake_intent_env:
try:
return json.loads(fake_intent_env)
except json.JSONDecodeError:
pass # Fall back to default
# Return realistic intent for testing (not ambiguous)
return {
"action": "install",
"domain": "general",
"install_mode": "system",
"description": user_input,
"ambiguous": False,
"confidence": 0.8,
}
else:
raise ValueError(f"Unsupported provider: {self.provider}")
🤖 Prompt for AI Agents
In @cortex/llm/interpreter.py around lines 575 - 604, Add a proper docstring to
the public method extract_intent describing its purpose (parse user input into
an intent dict), the parameters (user_input: str), the return type (dict with
keys like action, domain, install_mode, description, ambiguous, confidence and
that it may vary by provider), and raised exceptions (ValueError for empty input
or unsupported provider, JSONDecodeError handling for CORTEX_FAKE_INTENT).
Mention supported providers (APIProvider.OPENAI, APIProvider.CLAUDE,
APIProvider.OLLAMA, APIProvider.FAKE) and note that when using FAKE it may read
CORTEX_FAKE_INTENT from the environment as JSON fallback behavior.

Comment on lines +1 to +18
# Natural Language Installer (NL Installer)

Cortex supports installing software using natural language instead of
explicit package names.

Example:
```bash
cortex install "something for machine learning"
```
The request is converted into shell commands using the CommandInterpreter
By default, commands are generated and printed (dry-run).
Execution only happens when `--execute` is explicitly provided.

```bash
cortex install "something for machine learning" --execute
```

The NL installer is validated using unit tests in `tests/test_nl_installer.py`.

⚠️ Potential issue | 🟠 Major

🧩 Analysis chain

🏁 Script executed:

#!/bin/bash
# Check if the referenced test files exist
echo "=== Checking for test_nl_installer.py ==="
fd -t f "test_nl_installer.py" tests/ 2>/dev/null || echo "Not found"

echo "=== Checking for test_nl_parser files ==="
fd -t f "test_nl_parser" tests/ 2>/dev/null || echo "Not found"

echo "=== Listing all test files for context ==="
fd -t f "test_" tests/ | head -20

Repository: cortexlinux/cortex

Length of output: 859


🏁 Script executed:

#!/bin/bash
# Get complete list of test files (remove head limit)
echo "=== All test files in tests directory ==="
fd -t f "test_" tests/ | grep -i "nl\|parser\|install" || echo "No matches found"

# Check if there are any files referencing NL installer
echo ""
echo "=== Files referencing NL installer or parser ==="
rg -l "nl.*installer|NL.*Installer" --type md --type py 2>/dev/null | head -10 || echo "No matches"

Repository: cortexlinux/cortex

Length of output: 372


Update incorrect test file reference.

Line 17 references tests/test_nl_installer.py, but this file does not exist. The correct test file is tests/test_nl_parser_cases.py. Update the reference accordingly.

🤖 Prompt for AI Agents
In @docs/docs/nl-installer.md around lines 1 - 18, Update the incorrect test
file reference in the NL Installer docs: change the mention of
tests/test_nl_installer.py to the correct tests/test_nl_parser_cases.py in the
docs/docs/nl-installer.md content (the paragraph that currently states "The NL
installer is validated using unit tests in `tests/test_nl_installer.py`").
Ensure the updated line reads that the NL installer is validated using unit
tests in `tests/test_nl_parser_cases.py`.
