feat: add Parallel Task Execution for Multi-Step Installs #288
Conversation
> **Note:** CodeRabbit has detected other AI code review bot(s) in this pull request and will avoid duplicating their findings in the review comments. This may lead to a less comprehensive review.

**Walkthrough**

This PR introduces parallel task execution for multi-step installations, adding a new async execution module (`cortex/install_parallel.py`), a `--parallel` flag on the `install` command, and a dedicated test suite.
**Sequence Diagram**

```mermaid
sequenceDiagram
    participant User as User/CLI
    participant CLI as CortexCLI.install()
    participant ParallelRunner as run_parallel_install()
    participant TaskExec as run_single_task()
    participant Subprocess as Subprocess
    participant Callback as log_callback
    User->>CLI: install(--parallel)
    CLI->>CLI: Validate inputs & parse config
    alt parallel=True
        CLI->>ParallelRunner: run_parallel_install(commands, deps, timeout, log_callback)
        ParallelRunner->>ParallelRunner: Build task graph, validate dependencies
        loop For each task (respecting dependencies)
            ParallelRunner->>TaskExec: Schedule run_single_task()
            TaskExec->>Callback: emit(Starting)
            TaskExec->>Subprocess: Execute command with timeout
            Subprocess-->>TaskExec: stdout/stderr, exit code
            TaskExec->>Callback: emit(Finished/Error)
            TaskExec-->>ParallelRunner: status, output, duration
        end
        ParallelRunner-->>CLI: (all_success, tasks)
        CLI->>CLI: Compute total duration, update history
        alt all_success
            CLI->>CLI: Log SUCCESS, print time, update history
            CLI-->>User: exit(0) + rollback hint
        else failure
            CLI->>CLI: Extract error, update history FAILED
            CLI-->>User: exit(1) + error details + history link
        end
    else parallel=False
        CLI->>CLI: Execute sequential install path (existing flow)
        CLI-->>User: Sequential result
    end
```
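To make the flow concrete, here is a minimal usage sketch of the runner shown in the diagram. The signature follows the review details below (`commands`, `dependencies`, `timeout`, `log_callback`); the command list and callback body are hypothetical:

```python
import asyncio

from cortex.install_parallel import run_parallel_install


def log_callback(message: str, level: str) -> None:
    # The CLI wires this to cx_print; here we simply echo progress.
    print(f"[{level}] {message}")


async def main() -> None:
    # Hypothetical two-step install: task 1 depends on task 0 (index-based).
    commands = ["apt-get update", "apt-get install -y curl"]
    all_success, tasks = await run_parallel_install(
        commands,
        dependencies={1: [0]},
        timeout=300,
        log_callback=log_callback,
    )
    for task in tasks:
        print(task.name, task.status, task.duration())


asyncio.run(main())
```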
**Estimated code review effort:** 🎯 4 (Complex) | ⏱️ ~45 minutes
**Pre-merge checks and finishing touches**

- ❌ Failed checks (1 warning)
- ✅ Passed checks (4 passed)
Actionable comments posted: 1
🧹 Nitpick comments (5)
tests/installer/test_parallel_install.py (2)
**16-24**: Consider using `sys.executable` instead of hard-coded `python`

These tests shell out to `"python -c ..."`, which can be brittle on systems where the interpreter is only exposed as `python3` or via a venv path. Using `sys.executable` (quoted appropriately) would make the tests more robust across environments. Same applies to other commands in this file.
**236-257**: Unused `tasks` variable in log callback test

In `test_log_callback_called`, the unpacked `tasks` result is never used:

```python
success, tasks = await run_parallel_install(...)
```

This triggers Ruff/Sonar warnings. Rename to `_tasks` or `_` to signal intentional discard:

```diff
- success, tasks = await run_parallel_install(
+ success, _tasks = await run_parallel_install(
      commands, timeout=10, log_callback=log_callback
  )
```

cortex/cli.py (1)
**115-301**: `CortexCLI.install` is very large; consider extracting helpers

Static analysis flags very high cognitive complexity here. The method now handles:

- Validation, API key selection, and interpreter setup
- History bookkeeping
- Dry-run vs execute
- Sequential and parallel execution flows, each with their own logging and error handling

Longer term, consider splitting into helpers (e.g. `_plan_install`, `_run_parallel`, `_run_sequential`) to make future changes around dependency-aware parallelism less painful.

cortex/install_parallel.py (2)
**73-81**: Minor: static f-string for error message

```python
task.error = f"Command blocked: matches dangerous pattern"
```

Since there are no placeholders, this doesn't need to be an f-string:

```diff
- task.error = f"Command blocked: matches dangerous pattern"
+ task.error = "Command blocked: matches dangerous pattern"
```

This also silences Ruff's F541 warning.
**82-97**: Use `get_running_loop()` instead of `get_event_loop()` in async context

Inside an async function, `asyncio.get_running_loop()` is the preferred API:

```diff
- loop = asyncio.get_event_loop()
+ loop = asyncio.get_running_loop()
```

This is more future-proof against changes in event-loop policy and matches current asyncio best practices.
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (4)
- `cortex/cli.py` (8 hunks)
- `cortex/install_parallel.py` (1 hunks)
- `tests/installer/__init__.py` (1 hunks)
- `tests/installer/test_parallel_install.py` (1 hunks)
🧰 Additional context used
🧬 Code graph analysis (3)
tests/installer/test_parallel_install.py (1)
- cortex/install_parallel.py (4): `run_parallel_install` (131-253), `ParallelTask` (20-35), `TaskStatus` (11-16), `duration` (32-35)

cortex/install_parallel.py (1)
- tests/installer/test_parallel_install.py (1): `log_callback` (242-243)

cortex/cli.py (3)
- cortex/validators.py (1): `validate_install_request` (72-99)
- cortex/coordinator.py (2): `execute` (252-299), `InstallationCoordinator` (65-350)
- cortex/install_parallel.py (1): `run_parallel_install` (131-253)
🪛 GitHub Check: SonarCloud Code Analysis
tests/installer/test_parallel_install.py
[warning] 245-245: Replace the unused local variable "tasks" with "_".
cortex/install_parallel.py
[warning] 220-220: Remove this unnecessary list() call on an already iterable object.
[failure] 88-88: Use an async subprocess call in this async function instead of a synchronous one.
[failure] 38-38: Refactor this function to reduce its Cognitive Complexity from 19 to the 15 allowed.
[failure] 131-131: Refactor this function to reduce its Cognitive Complexity from 69 to the 15 allowed.
[warning] 38-38: Remove this "timeout" parameter and use a timeout context manager instead.
[warning] 193-193: Remove this unnecessary list() call on an already iterable object.
[warning] 135-135: Remove this "timeout" parameter and use a timeout context manager instead.
[warning] 76-76: Add replacement fields or use a normal string instead of an f-string.
cortex/cli.py
[failure] 115-115: Refactor this function to reduce its Cognitive Complexity from 103 to the 15 allowed.
[failure] 235-235: Define a constant instead of duplicating this literal "Installation failed" 4 times.
🪛 Ruff (0.14.8)
tests/installer/test_parallel_install.py
245-245: Unpacked variable tasks is never used
Prefix it with an underscore or any other dummy variable pattern
(RUF059)
cortex/install_parallel.py
76-76: f-string without any placeholders
Remove extraneous f prefix
(F541)
88-88: subprocess call with shell=True identified, security issue
(S602)
122-122: Do not catch blind exception: Exception
(BLE001)
157-157: Avoid specifying long messages outside the exception class
(TRY003)
cortex/cli.py
200-200: String contains ambiguous ℹ (INFORMATION SOURCE). Did you mean i (LATIN SMALL LETTER I)?
(RUF001)
252-252: Do not catch blind exception: Exception
(BLE001)
255-255: Use explicit conversion flag
Replace with conversion flag
(RUF010)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (1)
- GitHub Check: Agent
🔇 Additional comments (6)
tests/installer/__init__.py (1)
**1-1**: Package marker is fine

Simple docstring-only `__init__` cleanly declares the tests package; no issues here.

tests/installer/test_parallel_install.py (1)
**13-322**: Parallel install tests give strong behavioral coverage

The suite exercises timing, dependency ordering (including diamond graphs), failure propagation, timeouts, empty inputs, per-task metadata, and log callbacks against the public `run_parallel_install` API. This is a solid baseline for the new parallel executor.

cortex/cli.py (2)
**188-257**: Parallel branch error handling and history updates look consistent with sequential path

The new parallel execution branch mirrors the sequential flow by:

- Printing a clear success message and total duration on success.
- Updating `InstallationHistory` with `SUCCESS`/`FAILED` and including an error message where available.
- Emitting rollback/history hints for the user.

This keeps user-visible behavior and history records consistent between modes. The inline `parallel_log_callback` integration with `cx_print` also looks good.
**115-176**: Parallel install path lacks dependency tracking for sequenced operations

The `install` method calls `run_parallel_install` without a dependency graph or ordering constraints. This risks incorrect execution order for multi-step installations where later steps depend on earlier ones completing first (e.g., `apt update` before `apt install`, or service configuration before service start).

Unless `CommandInterpreter.parse` filters commands to return only truly independent steps, the `--parallel` flag should either:

- Pass an explicit dependency graph via the `dependencies` parameter to preserve required ordering, or
- Implement conservative defaults that only parallelize clearly safe groupings (see the sketch below)

Current behavior treats all commands as independent, which is unsafe for typical system installations.
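For illustration, here is a minimal sketch of a deliberately conservative default: chain every command to its predecessor via the index-based `dependencies` mapping. This degenerates to sequential ordering but keeps the parallel runner's logging and reporting:

```python
import asyncio

from cortex.install_parallel import run_parallel_install


async def chained_install(commands: list[str]) -> bool:
    # Conservative default: each step depends on its predecessor.
    dependencies = {i: [i - 1] for i in range(1, len(commands))}  # {1: [0], 2: [1], ...}
    success, _tasks = await run_parallel_install(
        commands, dependencies=dependencies, timeout=300
    )
    return success


ok = asyncio.run(chained_install(["apt-get update", "apt-get install -y nginx"]))
```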
cortex/install_parallel.py (2)
**19-35**: Task model and duration tracking are well structured

`ParallelTask` cleanly encapsulates per-task metadata (status, output, timing), and `duration()` provides a simple derived metric for higher-level reporting. This structure aligns well with how the CLI and tests consume task data.
**58-71**: Dangerous-command patterns duplicated from sequential path

`run_single_task` defines its own `DANGEROUS_PATTERNS` list. The sequential `InstallationCoordinator` already uses a global `DANGEROUS_PATTERNS` set for exactly this purpose. Duplicating the patterns here risks the two paths drifting over time, so parallel installs might allow commands that sequential installs would block (or vice versa), weakening your safety guarantees.

It would be better to centralize these patterns, for example by importing the existing constant or moving them to a shared module used by both coordinator and parallel executor.
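Later revisions in this PR do centralize the patterns in `cortex/validators.py` (see the follow-up reviews below). A minimal sketch of that shape, with the pattern list abbreviated:

```python
# cortex/validators.py -- single source of truth (abbreviated)
DANGEROUS_PATTERNS = [
    r"rm\s+-rf\s+[/\*]",
    r"rm\s+--no-preserve-root",
    # ... remaining patterns elided ...
]

# Both execution paths then share the same list:
# cortex/coordinator.py and cortex/install_parallel.py
# from cortex.validators import DANGEROUS_PATTERNS
```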
**Pull request overview**

This PR adds a parallel task execution system for Cortex installations, enabling concurrent execution of independent installation steps. The implementation introduces a new async-based parallel execution engine alongside the existing sequential coordinator, with CLI integration via a `--parallel` flag.
Key Changes
- New async parallel execution engine (`install_parallel.py`) with dependency graph resolution, concurrent task execution (max 4 workers), and security validation matching the sequential coordinator
- CLI integration adds a `--parallel` flag to the install command while maintaining backward compatibility with sequential execution as the default
- Comprehensive test suite with 13 test cases covering parallel execution, dependency handling, failure scenarios, and edge cases
Reviewed changes
Copilot reviewed 4 out of 4 changed files in this pull request and generated 16 comments.
| File | Description |
|---|---|
| `cortex/install_parallel.py` | New parallel execution engine with async task scheduling, dependency resolution, security validation, and timeout handling |
| `cortex/cli.py` | Added `--parallel` flag support, integrated parallel execution path, cleaned up unused imports and improved None-checking in error handling |
| `tests/installer/test_parallel_install.py` | Comprehensive test suite covering parallel execution, dependency ordering, failure handling, timeouts, and integration scenarios |
| `tests/installer/__init__.py` | Package initialization file for installer tests |
Actionable comments posted: 1
🧹 Nitpick comments (7)
cortex/validators.py (1)
**12-26**: Good consolidation of dangerous patterns; consider strengthening the `rm -rf` pattern.

Centralizing `DANGEROUS_PATTERNS` in validators.py is a good approach for maintainability. However, the `rm -rf` pattern at line 14 might miss some dangerous variants:

- `rm -rf /*` (the pattern expects a space before `/` or `*`, but `/*` is a single token)
- `rm -rf ~` or `rm -rf $HOME`

Consider a more comprehensive pattern:

```diff
 DANGEROUS_PATTERNS = [
-    r'rm\s+-rf\s+[/\*]',
+    r'rm\s+-rf\s+(/|~|\*|\$)',
     r'rm\s+--no-preserve-root',
```

Also consider adding `>>\s*/etc/` for append redirects if that's a concern.

tests/installer/test_parallel_install.py (1)
**15-38**: Consider using pytest-asyncio for cleaner async tests.

The current pattern of wrapping async code in `asyncio.run()` works, but using `pytest-asyncio` with the `@pytest.mark.asyncio` decorator would be more idiomatic and reduce boilerplate:

```python
import pytest

@pytest.mark.asyncio
async def test_parallel_runs_faster_than_sequential(self):
    """Verify that parallel execution is faster for independent tasks."""
    commands = [...]
    success, tasks = await run_parallel_install(commands, timeout=10)
    # assertions...
```

This would require adding `pytest-asyncio` to dev dependencies.

cortex/cli.py (2)
**188-191**: Move parallel execution imports to module level.

The `asyncio` and `run_parallel_install` imports are done inline within the conditional block. While this works, it's generally better practice to have imports at the module level for clarity and to avoid repeated import overhead if the function is called multiple times.

Add these imports at the top of the file with other imports:

```python
import asyncio

from cortex.install_parallel import run_parallel_install
```

Then remove lines 190-191.
**252-256**: Consider catching more specific exceptions.

The broad `Exception` catch is flagged by static analysis. While it's acceptable for a top-level error boundary, you could improve error reporting by catching more specific exceptions first.

```diff
+ except asyncio.CancelledError:
+     if install_id:
+         history.update_installation(install_id, InstallationStatus.FAILED, "Operation cancelled")
+     self._print_error("Parallel execution cancelled")
+     return 130
  except Exception as e:
      if install_id:
          history.update_installation(install_id, InstallationStatus.FAILED, str(e))
-     self._print_error(f"Parallel execution failed: {str(e)}")
+     self._print_error(f"Parallel execution failed: {e!s}")
      return 1
```

cortex/install_parallel.py (3)
**103-117**: Consider catching `subprocess.TimeoutExpired` explicitly.

The `subprocess.run()` call can raise `subprocess.TimeoutExpired` if the internal timeout fires before the asyncio timeout. While the generic `Exception` handler will catch it, explicit handling would provide clearer error messages consistent with `coordinator.py`.

```diff
  except asyncio.TimeoutError:
      task.status = TaskStatus.FAILED
      task.error = f"Command timed out after {timeout} seconds"
      task.end_time = time.time()
      if log_callback:
          log_callback(f"Finished {task.name} (failed)", "error")
      return False
+ except subprocess.TimeoutExpired:
+     task.status = TaskStatus.FAILED
+     task.error = f"Command timed out after {timeout} seconds"
+     task.end_time = time.time()
+     if log_callback:
+         log_callback(f"Finished {task.name} (failed)", "error")
+     return False
  except Exception as e:
```
**176-176**: Consider making `max_workers` configurable.

The thread pool is hardcoded to 4 workers. For systems with more cores or when running many independent tasks, this could be a bottleneck. Consider exposing this as a parameter with a sensible default.

```diff
 async def run_parallel_install(
     commands: List[str],
     descriptions: Optional[List[str]] = None,
     dependencies: Optional[Dict[int, List[int]]] = None,
     timeout: int = 300,
     stop_on_error: bool = True,
-    log_callback: Optional[Callable] = None
+    log_callback: Optional[Callable] = None,
+    max_workers: int = 4
 ) -> tuple:
     ...
-    executor = concurrent.futures.ThreadPoolExecutor(max_workers=4)
+    executor = concurrent.futures.ThreadPoolExecutor(max_workers=max_workers)
```
**120-141**: Improve docstring return type annotation.

The docstring states the return type as `tuple` but should be more specific to match best practices and aid IDE tooling. A previous review also flagged this.

```diff
     Returns:
-        tuple[bool, List[ParallelTask]]: Success status and list of all tasks
+        Tuple[bool, List[ParallelTask]]: (all_success, task_list) where all_success
+            is True if no tasks failed, and task_list contains all ParallelTask objects.
     """
```

And update the return type hint:

```diff
-) -> tuple:
+) -> Tuple[bool, List[ParallelTask]]:
```
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (7)
- `.github/workflows/codeql.yml` (0 hunks)
- `cortex/cli.py` (8 hunks)
- `cortex/coordinator.py` (1 hunks)
- `cortex/install_parallel.py` (1 hunks)
- `cortex/validators.py` (1 hunks)
- `requirements.txt` (1 hunks)
- `tests/installer/test_parallel_install.py` (1 hunks)
💤 Files with no reviewable changes (1)
- .github/workflows/codeql.yml
🧰 Additional context used
🧬 Code graph analysis (2)
tests/installer/test_parallel_install.py (3)
- src/sandbox_executor.py (1): `success` (52-54)
- cortex/logging_system.py (1): `error` (219-221)
- cortex/coordinator.py (1): `duration` (36-39)

cortex/install_parallel.py (6)
- cortex/cli.py (1): `status` (620-675)
- cortex/logging_system.py (1): `error` (219-221)
- cortex/coordinator.py (1): `duration` (36-39)
- tests/installer/test_parallel_install.py (1): `log_callback` (237-238)
- src/sandbox_executor.py (2): `failed` (57-59), `success` (52-54)
- src/progress_tracker.py (1): `cancel` (464-500)
🪛 Ruff (0.14.8)
cortex/cli.py
200-200: String contains ambiguous ℹ (INFORMATION SOURCE). Did you mean i (LATIN SMALL LETTER I)?
(RUF001)
252-252: Do not catch blind exception: Exception
(BLE001)
255-255: Use explicit conversion flag
Replace with conversion flag
(RUF010)
cortex/install_parallel.py
77-77: subprocess call with shell=True identified, security issue
(S602)
111-111: Do not catch blind exception: Exception
(BLE001)
146-146: Avoid specifying long messages outside the exception class
(TRY003)
🔇 Additional comments (9)
cortex/coordinator.py (1)
**12-14**: LGTM!

Clean refactor to import `DANGEROUS_PATTERNS` from the shared validators module instead of defining it locally. This ensures consistent security validation across both sequential and parallel execution paths.

tests/installer/test_parallel_install.py (1)

**1-9**: LGTM! Comprehensive test coverage for parallel execution.

The test suite covers key scenarios including dependency ordering, failure propagation, timeout handling, and diamond dependency graphs. The use of `python -c` commands makes tests cross-platform compatible.

cortex/cli.py (3)

**213-219**: Good fix for duration calculation.

The duration calculation now properly handles `None` values using generator expressions with `default=None` and explicit `is not None` checks. This addresses the previous review feedback about incorrect handling when `end_time` could be 0 or `None`.
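A minimal sketch of the None-safe pattern being described (variable names are illustrative, not the exact CLI code):

```python
def total_duration(tasks) -> float | None:
    # Skip tasks that never started or finished; default=None avoids
    # ValueError from max()/min() on an empty sequence.
    max_end = max((t.end_time for t in tasks if t.end_time is not None), default=None)
    min_start = min((t.start_time for t in tasks if t.start_time is not None), default=None)
    if max_end is None or min_start is None:
        return None
    return max_end - min_start
```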
**892-898**: Underscore prefix addresses unused variable warnings.

The parser variables are now prefixed with `_` to indicate they're intentionally unused. This is a common Python convention and addresses the static analysis warnings from previous reviews.

**905-905**: LGTM! `--parallel` flag properly integrated.

The `--parallel` flag is correctly added to the install subparser and properly passed through to the `install()` method.

cortex/install_parallel.py (3)

**10-10**: Good use of shared DANGEROUS_PATTERNS.

Importing `DANGEROUS_PATTERNS` from the centralized validators module ensures consistent security validation and addresses the previous review feedback about code duplication.

**69-86**: Good timeout handling with dual timeout approach.

The implementation correctly uses both `subprocess.run(..., timeout=timeout)` for the process itself and `asyncio.wait_for(..., timeout=timeout + 5)` for the async wrapper. The comment about `shell=True` addresses the previous review feedback. The 5-second buffer for asyncio overhead is reasonable.
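A minimal sketch of the dual-timeout shape described above, assuming the blocking `subprocess.run` call is dispatched to an executor (names are illustrative):

```python
import asyncio
import subprocess


async def run_with_dual_timeout(command: str, timeout: int) -> subprocess.CompletedProcess:
    loop = asyncio.get_running_loop()
    # Inner timeout kills the subprocess itself; the outer timeout (+5s buffer)
    # bounds the executor call in case of thread/event-loop overhead.
    return await asyncio.wait_for(
        loop.run_in_executor(
            None,
            lambda: subprocess.run(
                command, shell=True, capture_output=True, text=True, timeout=timeout
            ),
        ),
        timeout=timeout + 5,
    )
```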
**194-205**: Good deadlock/cycle detection implementation.

This addresses the critical issue from previous reviews about potential infinite loops when tasks have unresolvable dependencies. The implementation correctly detects when no progress can be made and marks remaining tasks as skipped.
requirements.txt (1)
**12-12**: No action needed: PyYAML is already used in the codebase.

PyYAML is imported and used in `src/config_manager.py` and `cortex/user_preferences.py`, making it a legitimate dependency. Adding `PyYAML>=6.0.0` to requirements.txt is appropriate even though it's not imported directly by the changed files in this PR.
Actionable comments posted: 0
🧹 Nitpick comments (5)
cortex/cli.py (5)
**181-188**: `install` method complexity grows; consider extracting helpers

Adding the `parallel: bool` flag and upfront `validate_install_request` call is correct, but `CortexCLI.install` is now very large (as Sonar notes). Consider extracting helpers (e.g. `_run_install_sequential`, `_run_install_parallel`, `_record_install_success/failure`) to reduce cognitive complexity and make future changes safer.

**242-247**: Sequential vs parallel execution share duplicated failure-handling logic

The sequential and parallel branches both construct error messages, update `InstallationHistory`, and print very similar "Installation failed" output. This duplication is what Sonar is flagging and makes changes error-prone. Consider centralizing this into a small helper that takes `install_id` and an optional `error_msg`, updates history, and prints the common messaging.

```diff
+    def _handle_install_failure(self, history, install_id, error_msg: str | None = None):
+        message = error_msg or "Installation failed"
+        if install_id:
+            history.update_installation(install_id, InstallationStatus.FAILED, message)
+            print(f"\n📝 Installation recorded (ID: {install_id})")
+            print(f"   View details: cortex history show {install_id}")
+        self._print_error("Installation failed")
+        if error_msg:
+            print(f"   Error: {error_msg}", file=sys.stderr)
```

Then call `_handle_install_failure(...)` from both the parallel and sequential failure paths.

Also applies to: 324-366

**545-546**: Avoid unconditional tracebacks in CLI error path

Printing a full traceback for `edit_pref` errors can be noisy for users and may leak internal details. Since you already have a verbose mode, consider gating the traceback on `self.verbose` (or logging it instead).

```diff
     except Exception as e:
         self._print_error(f"Failed to edit preferences: {str(e)}")
-        traceback.print_exc()
+        if self.verbose:
+            traceback.print_exc()
```

**576-601**: Tighten Ollama/config/history status checks (unused var and f-string nits)

The Ollama and config/history status additions are useful; a few small cleanups will also satisfy the static analysis hints:

- Drop the unused `resp` binding from `urlopen`.
- Remove f-strings where no interpolation occurs.

```diff
     ollama_host = os.environ.get('OLLAMA_HOST', 'http://localhost:11434')
     try:
-        req = urllib.request.Request(f"{ollama_host}/api/tags", method='GET')
-        with urllib.request.urlopen(req, timeout=2) as resp:
+        req = urllib.request.Request(f"{ollama_host}/api/tags", method="GET")
+        with urllib.request.urlopen(req, timeout=2):
             cx_print(f"Ollama: [bold]Running[/bold] ({ollama_host})", "success")
     except Exception:
-        cx_print(f"Ollama: [bold]Not running[/bold]", "info")
+        cx_print("Ollama: [bold]Not running[/bold]", "info")
         cx_print("  Start: ollama serve", "info")
@@
-    config_path = os.path.expanduser('~/.cortex/config.json')
+    config_path = os.path.expanduser("~/.cortex/config.json")
     if os.path.exists(config_path):
         cx_print(f"Config: {config_path}", "info")
     else:
-        cx_print(f"Config: Not created yet", "info")
+        cx_print("Config: Not created yet", "info")
@@
-    history_path = os.path.expanduser('~/.cortex/history.db')
+    history_path = os.path.expanduser("~/.cortex/history.db")
     if os.path.exists(history_path):
         cx_print(f"History: {history_path}", "info")
     else:
-        cx_print(f"History: Not created yet", "info")
+        cx_print("History: Not created yet", "info")
```

**680-681**: `--parallel` flag wiring is correct; optional UX tweak with `--execute`

The CLI correctly plumbs `--parallel` into `CortexCLI.install(..., parallel=args.parallel)`, and the flag only has effect when `--execute` is set (since the parallel/sequential branch is under `if execute:`). Behavior is safe; if you want to make it clearer to users, you could either mention in the help text that `--parallel` only applies with `--execute`, or enforce that combination via argparse (e.g., documenting or validating in `install`).

Also applies to: 737-737
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (1)
- `cortex/cli.py` (10 hunks)
🧰 Additional context used
🧬 Code graph analysis (1)
cortex/cli.py (2)
cortex/coordinator.py (2)
- `execute` (238-285)
- `InstallationCoordinator` (51-336)

cortex/install_parallel.py (1)
- `run_parallel_install` (120-269)
🪛 GitHub Check: SonarCloud Code Analysis
cortex/cli.py
[failure] 181-181: Refactor this function to reduce its Cognitive Complexity from 101 to the 15 allowed.
[failure] 301-301: Define a constant instead of duplicating this literal "Installation failed" 4 times.
[warning] 583-583: Add replacement fields or use a normal string instead of an f-string.
[warning] 580-580: Remove the unused local variable "resp".
[warning] 594-594: Add replacement fields or use a normal string instead of an f-string.
[warning] 600-600: Add replacement fields or use a normal string instead of an f-string.
🔇 Additional comments (3)
cortex/cli.py (3)
**65-71**: API key error handling now avoids blank error lines

Conditionally printing the validation error only when `error` is non-empty prevents empty "Error:" lines while preserving existing behavior. This looks good.

**254-287**: Parallel execution integration and duration calculation look correct

The `asyncio.run(run_parallel_install(...))` integration, logging callback, and None-safe duration calculation via `max_end`/`min_start` are coherent and handle the earlier edge cases (no `end_time`/`start_time`) without producing bogus durations. The branch correctly only uses `total_duration` on success.

**694-701**: Preferences subcommands and parser wiring line up with implementation

The `check-pref` and `edit-pref` subparsers, including the `validate` action, align with the methods implemented on `CortexCLI` and make those flows reachable from the CLI. Argument shapes look consistent with how `check_pref` and `edit_pref` use them.
**CI Failing - Action Required**

Hey @Sahilbhatane! Thanks for the parallel task execution feature. Before we can merge, please fix the failing CI checks.

Please push fixes and let us know when ready for re-review!
Force-pushed from ed89532 to d96994b
Actionable comments posted: 6
♻️ Duplicate comments (1)
cortex/install_parallel.py (1)
**223-260**: Fix dictionary mutation during iteration (RuntimeError risk).

Lines 226-260 iterate over `running.items()` while deleting entries from `running` (lines 238, 259). This can raise a `RuntimeError: dictionary changed size during iteration` when the dict is mutated during iteration.

🔎 Apply this diff to fix the issue:

```diff
  # Process completed tasks
+ to_remove = []
  for task_coro in done:
      # Find which task this is
-     for task_name, running_coro in running.items():
+     for task_name, running_coro in list(running.items()):
          if running_coro is task_coro:
              task = tasks[task_name]
              # Handle cancelled tasks
              try:
                  success = task_coro.result()
              except asyncio.CancelledError:
                  # Task was cancelled due to stop_on_error
                  task.status = TaskStatus.SKIPPED
                  task.error = "Task cancelled due to dependency failure"
                  failed.add(task_name)
-                 del running[task_name]
+                 to_remove.append(task_name)
                  break
              if success:
                  completed.add(task_name)
              else:
                  failed.add(task_name)
                  # If stop_on_error, skip dependent tasks
                  if stop_on_error:
                      dependent_tasks = [
                          name for name, t in tasks.items()
                          if task_name in t.dependencies
                      ]
                      for dep_task_name in dependent_tasks:
                          if dep_task_name in pending:
                              pending.remove(dep_task_name)
                              tasks[dep_task_name].status = TaskStatus.SKIPPED
                          elif dep_task_name in running:
                              running[dep_task_name].cancel()
-             del running[task_name]
+             to_remove.append(task_name)
              break
+
+ # Remove completed tasks from running dict
+ for task_name in to_remove:
+     del running[task_name]
```
🧹 Nitpick comments (6)
cortex/hardware_detection.py (1)
**196-201**: Consider adding return type hint.

While the helper correctly abstracts platform differences, adding a return type hint would improve type safety.

🔎 Suggested improvement:

```diff
-    def _uname(self):
+    def _uname(self) -> Any:
         """Return uname-like info with nodename/release/machine attributes."""
         uname_fn = getattr(os, "uname", None)
         if callable(uname_fn):
             return uname_fn()
         return platform.uname()
```

Note: Import `Any` from `typing` if not already imported.

tests/test_hardware_detection.py (1)

**6-7**: Remove unused imports on lines 6-7.

The `json` and `os` imports are not used anywhere in the test file.

tests/conftest.py (1)

**7-15**: Consider mentioning both supported layouts in the docstring.

The docstring mentions only `cortex/` but the code supports both `src/` and `cortex/` layouts. For clarity, consider updating the docstring to reflect this.

🔎 Suggested enhancement:

```diff
 """Pytest configuration.

 Some tests in this repository import implementation modules as if they were
 top-level (e.g. `import config_manager`). In the current layout, those modules
-live under the `cortex/` directory.
+live under the `cortex/` directory (or `src/` in legacy branches).

-Insert `cortex/` into `sys.path` early (at conftest import time) so imports resolve
-during test collection.
+Insert the appropriate directory into `sys.path` early (at conftest import time) so
+imports resolve during test collection.
 """
```

tests/unit/test_sandbox_executor.py (1)

**303-317**: Consider improving pattern-to-command conversion robustness.

The data-driven approach using `DANGEROUS_PATTERNS` is excellent for maintainability. However, the regex-to-command conversion (lines 307-314) uses heuristic string replacements that may not produce valid test commands for all patterns.

Consider creating a mapping of patterns to representative test commands to ensure each pattern is tested with a realistic, known-malicious command. Example approach:

```python
# Map patterns to representative dangerous commands
PATTERN_TEST_CASES = {
    r'rm\s+-rf\s+[/\*]': 'rm -rf /',
    r'rm\s+--no-preserve-root': 'rm --no-preserve-root /',
    r'dd\s+if=.*of=/dev/': 'dd if=/dev/zero of=/dev/sda',
    r'curl\s+.*\|\s*sh': 'curl http://example.com/script.sh | sh',
    # ... etc
}

for pattern, test_cmd in PATTERN_TEST_CASES.items():
    is_valid, violation = self.executor.validate_command(test_cmd)
    self.assertFalse(is_valid, f"Pattern should be blocked: {pattern}")
```

tests/installer/test_parallel_install.py (1)

**36-36**: Consider making timing assertion more resilient for CI environments.

The hard assertion `assert parallel_time < 1.0` may be flaky on slow CI systems or under heavy load. The parallel speedup test is already validated by the success of all tasks.

🔎 Consider a more lenient assertion:

```diff
-        assert parallel_time < 1.0, f"Parallel execution took {parallel_time}s, expected < 1.0s"
+        # On CI systems, just verify completion without strict timing
+        # The key validation is that all tasks succeeded in parallel
+        assert parallel_time < 5.0, f"Parallel execution took {parallel_time}s, expected < 5.0s (reasonable timeout)"
```

Or remove the timing assertion entirely and rely on functional correctness:

```diff
-        # Parallel execution should be faster than sequential (0.3s + overhead)
-        # On Windows, Python subprocess startup adds significant overhead
-        # We just verify it completes and doesn't take more than 1 second
-        assert parallel_time < 1.0, f"Parallel execution took {parallel_time}s, expected < 1.0s"
+        # Verify parallel execution completes successfully
+        # Note: Timing validation removed to avoid flakiness on slow CI systems
```

cortex/cli.py (1)

**321-328**: Consider simplifying the status value access.

Line 322 uses `getattr(t.status, "value", "")` to defensively access the enum value. Since `t.status` is typed as `TaskStatus` (an Enum), the `.value` attribute should always exist. Consider using `t.status.value` directly or filtering with a type check instead.

🔎 Alternative approach:

```diff
-                failed_tasks = [
-                    t for t in parallel_tasks if getattr(t.status, "value", "") == "failed"
-                ]
+                failed_tasks = [
+                    t for t in parallel_tasks if t.status == TaskStatus.FAILED
+                ]
```

This requires importing `TaskStatus` at the top of the parallel execution block.
📜 Review details
Configuration used: defaults
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (15)
- `cortex/cli.py` (4 hunks)
- `cortex/config_manager.py` (1 hunks)
- `cortex/coordinator.py` (1 hunks)
- `cortex/hardware_detection.py` (7 hunks)
- `cortex/install_parallel.py` (1 hunks)
- `cortex/progress_tracker.py` (1 hunks)
- `cortex/sandbox/sandbox_executor.py` (4 hunks)
- `cortex/validators.py` (1 hunks)
- `requirements.txt` (1 hunks)
- `tests/conftest.py` (1 hunks)
- `tests/installer/__init__.py` (1 hunks)
- `tests/installer/test_parallel_install.py` (1 hunks)
- `tests/test_cli.py` (1 hunks)
- `tests/test_hardware_detection.py` (4 hunks)
- `tests/unit/test_sandbox_executor.py` (1 hunks)
🚧 Files skipped from review as they are similar to previous changes (1)
- `tests/installer/__init__.py`
🧰 Additional context used
📓 Path-based instructions (4)
**/*.py
📄 CodeRabbit inference engine (AGENTS.md)
**/*.py: Follow PEP 8 style guide
Type hints required in Python code
Docstrings required for all public APIs
Files:
- `cortex/hardware_detection.py`
- `tests/unit/test_sandbox_executor.py`
- `tests/installer/test_parallel_install.py`
- `tests/conftest.py`
- `tests/test_cli.py`
- `cortex/progress_tracker.py`
- `cortex/sandbox/sandbox_executor.py`
- `tests/test_hardware_detection.py`
- `cortex/config_manager.py`
- `cortex/cli.py`
- `cortex/coordinator.py`
- `cortex/validators.py`
- `cortex/install_parallel.py`
tests/**/*.py
📄 CodeRabbit inference engine (AGENTS.md)
Maintain >80% test coverage for pull requests
Files:
- `tests/unit/test_sandbox_executor.py`
- `tests/installer/test_parallel_install.py`
- `tests/conftest.py`
- `tests/test_cli.py`
- `tests/test_hardware_detection.py`
**/*sandbox*.py
📄 CodeRabbit inference engine (AGENTS.md)
Implement Firejail sandboxing for package operations
Files:
- `tests/unit/test_sandbox_executor.py`
- `cortex/sandbox/sandbox_executor.py`
**/*install*.py
📄 CodeRabbit inference engine (AGENTS.md)
**/*install*.py: Dry-run by default for all installations in command execution
No silent sudo execution - require explicit user confirmation
Implement audit logging to ~/.cortex/history.db for all package operations
Files:
- `tests/installer/test_parallel_install.py`
- `cortex/install_parallel.py`
🧠 Learnings (2)
📚 Learning: 2025-12-11T12:03:24.071Z
Learnt from: CR
Repo: cortexlinux/cortex PR: 0
File: AGENTS.md:0-0
Timestamp: 2025-12-11T12:03:24.071Z
Learning: Applies to **/*sandbox*.py : Implement Firejail sandboxing for package operations
Applied to files:
cortex/sandbox/sandbox_executor.py
📚 Learning: 2025-12-11T12:03:24.071Z
Learnt from: CR
Repo: cortexlinux/cortex PR: 0
File: AGENTS.md:0-0
Timestamp: 2025-12-11T12:03:24.071Z
Learning: Applies to **/*.py : Type hints required in Python code
Applied to files:
requirements.txt
🧬 Code graph analysis (5)
tests/unit/test_sandbox_executor.py (2)
- cortex/sandbox/sandbox_executor.py (1): `validate_command` (294-342)
- cortex/utils/commands.py (1): `validate_command` (134-196)

tests/installer/test_parallel_install.py (4)
- cortex/install_parallel.py (2): `TaskStatus` (13-18), `duration` (34-37)
- cortex/sandbox/sandbox_executor.py (1): `success` (64-66)
- cortex/logging_system.py (1): `error` (206-208)
- cortex/coordinator.py (1): `duration` (36-39)

tests/test_cli.py (1)
- cortex/cli.py (1): `main` (693-825)

cortex/cli.py (3)
- cortex/coordinator.py (1): `execute` (230-279)
- cortex/install_parallel.py (1): `run_parallel_install` (120-269)
- cortex/installation_history.py (2): `update_installation` (311-362), `InstallationStatus` (34-40)

cortex/install_parallel.py (3)
- cortex/logging_system.py (1): `error` (206-208)
- cortex/coordinator.py (1): `duration` (36-39)
- tests/installer/test_parallel_install.py (1): `log_callback` (237-238)
🪛 GitHub Actions: CI
cortex/install_parallel.py
[error] 1-1: cortex/install_parallel.py:1:1: I001 Import block is un-sorted or un-formatted (Ruff).
🪛 GitHub Check: Lint
cortex/install_parallel.py
[failure] 40-40: Ruff (UP045)
cortex/install_parallel.py:40:85: UP045 Use X | None for type annotations
[failure] 34-34: Ruff (UP045)
cortex/install_parallel.py:34:27: UP045 Use X | None for type annotations
[failure] 33-33: Ruff (W293)
cortex/install_parallel.py:33:1: W293 Blank line contains whitespace
[failure] 32-32: Ruff (UP045)
cortex/install_parallel.py:32:15: UP045 Use X | None for type annotations
[failure] 31-31: Ruff (UP045)
cortex/install_parallel.py:31:17: UP045 Use X | None for type annotations
[failure] 27-27: Ruff (UP006)
cortex/install_parallel.py:27:19: UP006 Use list instead of List for type annotation
[failure] 6-6: Ruff (UP035)
cortex/install_parallel.py:6:1: UP035 typing.Dict is deprecated, use dict instead
[failure] 6-6: Ruff (UP035)
cortex/install_parallel.py:6:1: UP035 typing.List is deprecated, use list instead
[failure] 6-6: Ruff (UP035)
cortex/install_parallel.py:6:1: UP035 Import from collections.abc instead: Callable
[failure] 1-10: Ruff (I001)
cortex/install_parallel.py:1:1: I001 Import block is un-sorted or un-formatted
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (1)
- GitHub Check: security
🔇 Additional comments (21)
cortex/progress_tracker.py (1)
**45-51**: LGTM! Proper handling of optional dependency.

This follows Python best practices for optional imports by ensuring `plyer_notification` is always defined as a module-level name, even when the import fails. The defensive approach prevents potential `NameError` exceptions and aligns well with the existing `PLYER_AVAILABLE` guard at line 514 that protects the usage at line 518.
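A minimal sketch of the optional-import pattern being praised here (assuming the `plyer` package; the guard names mirror the review's description):

```python
try:
    from plyer import notification as plyer_notification
    PLYER_AVAILABLE = True
except ImportError:
    plyer_notification = None  # name is always defined, so later references cannot NameError
    PLYER_AVAILABLE = False


def notify(title: str, message: str) -> None:
    # Usage is guarded, so a missing plyer degrades gracefully.
    if PLYER_AVAILABLE:
        plyer_notification.notify(title=title, message=message)
```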
**15-15**: LGTM: Cross-platform imports added.

The `platform` and `shutil` imports support Windows compatibility where `os.uname` and `os.statvfs` are unavailable.

Also applies to: 17-17

**310-310**: LGTM: System detection now cross-platform.

The switch from direct `os.uname()` calls to `self._uname()` enables Windows compatibility.

Also applies to: 316-316

**339-339**: LGTM: Efficient CPU detection with cached uname.

Storing the uname result once and reusing it for ARM detection and architecture is efficient and correct.

Also applies to: 353-353, 373-373

**618-630**: LGTM: Disk space detection now cross-platform.

The fallback from `os.statvfs` to `shutil.disk_usage` enables Windows support. The logic correctly calculates free space in GB for both paths.

tests/test_hardware_detection.py (1)

**288-288**: LGTM: Test patches now cross-platform compatible.

The `create=True` parameter allows these tests to run on Windows where `os.statvfs` and `os.uname` are unavailable, preventing `AttributeError` during patch setup.

Also applies to: 321-321

tests/conftest.py (3)

**1-5**: LGTM!

Clean imports with future annotations for forward compatibility.

**17-17**: LGTM!

Correct calculation to obtain the repository root.

**19-25**: Verify the intended behavior when both directories exist.

The current implementation adds both `src/` and `cortex/` to `sys.path` if both exist, with `cortex/` taking precedence (inserted last at position 0). This could lead to import ambiguity if the same module names exist in both directories.

Consider one of these approaches:

- If only one layout should be active: add a `break` statement after the first successful insertion
- If both should be supported with explicit precedence: document the precedence order in a comment
- If current behavior is correct: confirm this is intentional for supporting migration/transition periods

🔎 Option 1: Break after first match (if only one layout should be active)

```diff
 for path in (repo_root / "src", repo_root / "cortex"):
     path_str = str(path)
     if path.exists() and path_str not in sys.path:
         sys.path.insert(0, path_str)
+        break
```

🔎 Option 2: Document precedence (if both should coexist)

```diff
 # Support both layouts used across branches/history:
 # - `cortex/` (package modules)
 # - `src/` (legacy flat modules)
+# If both exist, `cortex/` takes precedence over `src/`.
 for path in (repo_root / "src", repo_root / "cortex"):
```

cortex/config_manager.py (1)

**75-78**: LGTM! Cross-platform compatibility handled correctly.

The early return for Windows and non-POSIX environments is appropriate, as ownership and permission enforcement using `os.getuid()`, `os.getgid()`, and `chmod(0o700)` are POSIX-specific operations. The checks are defensive and correct.

cortex/coordinator.py (1)

**12-12**: LGTM! Centralized dangerous patterns improve maintainability.

Importing `DANGEROUS_PATTERNS` from `cortex.validators` eliminates code duplication and provides a single source of truth for security patterns across the codebase.

cortex/sandbox/sandbox_executor.py (2)

**24-28**: LGTM! Safe import pattern for POSIX-only module.

The try/except block correctly handles the `resource` module being unavailable on Windows, setting it to `None` with appropriate pragma coverage exclusion.

**602-629**: LGTM! Cross-platform subprocess execution handled correctly.

The conditional logic properly guards POSIX-specific resource limits and `preexec_fn` usage:

- Resource limits only set on non-Windows systems when the `resource` module is available
- `preexec_fn` only passed to `Popen` when defined, avoiding Windows incompatibility
tests/test_cli.py (1)

**187-189**: LGTM! Test assertions updated for new parallel parameter.

The test assertions correctly verify that the new `parallel` parameter defaults to `False` when not specified, aligning with the expanded `CortexCLI.install` signature.

Also applies to: 197-197, 205-205
cortex/validators.py (1)
**31-34**: LGTM! ValidationError enhancement improves error reporting.

Adding the optional `suggestion` parameter allows validators to provide actionable guidance alongside error messages.

tests/installer/test_parallel_install.py (2)
**176-176**: LGTM! Timeout assertion correctly handles multiple error formats.

The assertion properly checks for both "timed out" and "timeout" variations in the error message, addressing the previous review feedback.

**92-95**: The test logic is sound and deterministic; no race conditions exist.

This test correctly verifies that with `stop_on_error=True`, independent tasks run regardless of other failures. The dependency logic (line 192) is deterministic: Task 3 has no dependencies, so `all(dep in completed for dep in [])` evaluates to `True` immediately, guaranteeing Task 3 starts in the first loop iteration alongside Task 1. When Task 1 fails, only tasks directly dependent on it (Task 2) are blocked, not independent tasks. Task 2 remains pending with unmet dependencies and is marked `SKIPPED` by deadlock detection (lines 201-213). The assertions are correct without timing concerns.

Running the test multiple times is unnecessary since the dependency-checking logic contains no timing assumptions; the existing test validates the behavior sufficiently on a single run.
cortex/cli.py (3)
**181-187**: LGTM: Clean signature extension for parallel execution.

The `parallel` parameter is properly typed and defaults to `False`, maintaining backward compatibility with existing callers.

**726-730**: LGTM: CLI flag properly integrated.

The `--parallel` flag is well-documented and follows the existing CLI patterns.

**793-798**: LGTM: Argument properly forwarded to install method.

The parallel flag is correctly passed from CLI args to the install method.
cortex/install_parallel.py (1)
**194-205**: LGTM: Deadlock detection properly implemented.

This block correctly addresses the potential infinite loop issue by detecting when no tasks can make progress (no tasks ready, none running, but some still pending) and marking remaining tasks as skipped.
Actionable comments posted: 1
🧹 Nitpick comments (8)
cortex/hardware_detection.py (2)
**196-201**: Add return type hint for clarity.

The method is missing a return type hint. While both `os.uname()` and `platform.uname()` return compatible structures, explicitly annotating the return type improves code clarity and aligns with the project's coding guidelines requiring type hints.

🔎 Proposed fix

```diff
-    def _uname(self):
+    def _uname(self) -> os.uname_result | platform.uname_result:
         """Return uname-like info with nodename/release/machine attributes."""
```

Alternatively, if you want a simpler annotation that works across all platforms:

```diff
+from typing import Any
+
-    def _uname(self):
+    def _uname(self) -> Any:
         """Return uname-like info with nodename/release/machine attributes."""
```

As per coding guidelines.

**307-336**: Consider caching the uname result.

`self._uname()` is called twice (lines 311, 317). For consistency with `_detect_cpu()` (line 340) and to avoid redundant calls, consider caching the result in a local variable.

🔎 Proposed optimization

```diff
 def _detect_system(self, info: SystemInfo):
     """Detect basic system information."""
+    uname = self._uname()
     # Hostname
     try:
-        info.hostname = self._uname().nodename
+        info.hostname = uname.nodename
     except:
         info.hostname = "unknown"
     # Kernel
     with contextlib.suppress(builtins.BaseException):
-        info.kernel_version = self._uname().release
+        info.kernel_version = uname.release
```

tests/unit/test_sandbox_executor.py (1)

**304-327**: Data-driven test is good, but pattern-to-command conversion is fragile.

The approach of testing all patterns from `DANGEROUS_PATTERNS` is excellent for ensuring complete coverage. However, the naive regex-to-command conversion on lines 317-324 may not produce valid commands for all patterns, potentially causing false negatives (pattern not matching its own test command).

For example:

- Patterns with complex regex like `["\'].*exec` won't convert to valid shell syntax with simple `replace()` calls
- The fork bomb pattern `:\s*\(\)\s*\{\s*:\s*\|\s*:\s*&\s*\}` won't convert properly

Consider adding explicit test command mappings for patterns that can't be reliably auto-generated:

🔎 Suggested approach

```python
# Add a mapping for patterns that need explicit test commands
PATTERN_TEST_COMMANDS = {
    r":\s*\(\)\s*\{\s*:\s*\|\s*:\s*&\s*\}": ":() { : | : & }; :",
    # Add other complex patterns as needed
}

for pattern in DANGEROUS_PATTERNS:
    if pattern in PATTERN_TEST_COMMANDS:
        test_cmd = PATTERN_TEST_COMMANDS[pattern]
    elif "python\\s+-c" in pattern and "exec" in pattern:
        # ... existing special cases
```

cortex/cli.py (2)

**261-264**: Inline imports are acceptable but could be moved to module level.

The `asyncio` and `run_parallel_install` imports inside the conditional block work correctly and avoid loading unnecessary modules when parallel mode isn't used. However, for consistency with PEP 8 recommendations, consider moving these to the module level since they're standard library and internal modules.

This is a minor style preference; the current approach has the advantage of lazy loading.

**309-312**: Consider using enum comparison instead of string value extraction.

The pattern `getattr(t.status, "value", "") == "failed"` is defensive but indirect. If `ParallelTask.status` is a proper enum, direct comparison would be cleaner and type-safe.

🔎 Suggested improvement

```python
from cortex.install_parallel import TaskStatus  # if available

failed_tasks = [t for t in parallel_tasks if t.status == TaskStatus.FAILED]
```

Or, if TaskStatus isn't easily importable, at least document why `getattr` is needed:

```python
# Handle both enum and string status values
failed_tasks = [
    t for t in parallel_tasks
    if getattr(t.status, "value", t.status) == "failed"
]
```

cortex/install_parallel.py (3)

**75-123**: Command execution and error handling are robust.

The dual-timeout approach (subprocess-level + asyncio-level) and dangerous pattern validation provide good safety. The explicit handling of `asyncio.TimeoutError` and the generic exception catch ensure tasks fail gracefully.

For slightly better clarity, you could explicitly catch `subprocess.TimeoutExpired` between lines 115 and 117, though the current generic handler is functionally correct:

Optional: Explicit subprocess timeout handling

```diff
  except asyncio.TimeoutError:
      task.status = TaskStatus.FAILED
      task.error = f"Command timed out after {timeout} seconds"
      task.end_time = time.time()
      if log_callback:
          log_callback(f"Finished {task.name} (failed)", "error")
      return False
+ except subprocess.TimeoutExpired:
+     task.status = TaskStatus.FAILED
+     task.error = f"Command timed out after {timeout} seconds"
+     task.end_time = time.time()
+     if log_callback:
+         log_callback(f"Finished {task.name} (failed)", "error")
+     return False
  except Exception as e:
      task.status = TaskStatus.FAILED
      task.error = str(e)
      task.end_time = time.time()
      if log_callback:
          log_callback(f"Finished {task.name} (failed)", "error")
      return False
```

**185-213**: Execution loop and deadlock detection are well-implemented.

The loop correctly handles:

- Dependency resolution with `stop_on_error` awareness (lines 191-194, addressing the previous hanging issue)
- Using `pending.copy()` to avoid mutation during iteration (line 188)
- Deadlock/cycle detection when no progress can be made (lines 200-213, addressing the previous infinite-loop issue)

The error message at line 207 is generic. For better debugging, you could distinguish between dependency cycles and dependencies blocked by failures:

Optional: More specific error messages

```diff
  # If no tasks can be started and none are running, we're stuck (deadlock/cycle detection)
  if not ready_to_start and not running and pending:
      # Mark remaining tasks as skipped - they have unresolvable dependencies
      for task_name in pending:
          task = tasks[task_name]
          if task.status == TaskStatus.PENDING:
              task.status = TaskStatus.SKIPPED
-             task.error = "Task could not run because dependencies never completed"
+             # Check if dependencies failed or form a cycle
+             failed_deps = [dep for dep in task.dependencies if dep in failed]
+             if failed_deps:
+                 task.error = f"Skipped due to failed dependencies: {', '.join(failed_deps)}"
+             else:
+                 task.error = "Skipped due to circular dependencies or unresolvable dependency graph"
              if log_callback:
                  log_callback(
-                     f"{task_name} skipped due to unresolved dependencies", "error"
+                     f"{task_name} skipped: {task.error}", "error"
                  )
      failed.update(pending)
      break
```

**224-278**: Task completion and cancellation handling are correct.

The completion loop properly:

- Catches and handles `CancelledError` (lines 241-247, addressing the previous concern)
- Tracks completed vs failed tasks for dependency resolution
- Cancels dependent tasks when `stop_on_error=True` (lines 254-267)
- Cleans up the executor in the finally block (line 272)

The dict iteration pattern (line 234 within the outer loop at 232) is safe because each outer iteration creates a fresh iterator after the previous deletion, but the safety is not immediately obvious.

Consider adding a clarifying comment about the dict iteration pattern:

Optional: Add comment explaining iteration safety

```diff
  # Process completed tasks
+ # Note: We iterate running.items() inside the outer loop; each outer iteration
+ # creates a new iterator, so deleting from running between iterations is safe.
  for task_coro in done:
      # Find which task this is
      for task_name, running_coro in running.items():
```
📜 Review details
Configuration used: defaults
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (12)
- `cortex/cli.py` (4 hunks)
- `cortex/config_manager.py` (1 hunks)
- `cortex/hardware_detection.py` (7 hunks)
- `cortex/install_parallel.py` (1 hunks)
- `cortex/sandbox/sandbox_executor.py` (5 hunks)
- `cortex/validators.py` (1 hunks)
- `tests/conftest.py` (1 hunks)
- `tests/installer/test_parallel_install.py` (1 hunks)
- `tests/test_cli.py` (1 hunks)
- `tests/test_hardware_detection.py` (4 hunks)
- `tests/test_interpreter.py` (2 hunks)
- `tests/unit/test_sandbox_executor.py` (2 hunks)
🚧 Files skipped from review as they are similar to previous changes (5)
- cortex/config_manager.py
- tests/conftest.py
- tests/test_cli.py
- tests/installer/test_parallel_install.py
- tests/test_hardware_detection.py
🧰 Additional context used
📓 Path-based instructions (4)
**/*.py
📄 CodeRabbit inference engine (AGENTS.md)
**/*.py: Follow PEP 8 style guide
Type hints required in Python code
Docstrings required for all public APIs
Files:
- `cortex/hardware_detection.py`
- `tests/unit/test_sandbox_executor.py`
- `tests/test_interpreter.py`
- `cortex/sandbox/sandbox_executor.py`
- `cortex/validators.py`
- `cortex/install_parallel.py`
- `cortex/cli.py`
tests/**/*.py
📄 CodeRabbit inference engine (AGENTS.md)
Maintain >80% test coverage for pull requests
Files:
- `tests/unit/test_sandbox_executor.py`
- `tests/test_interpreter.py`
**/*sandbox*.py
📄 CodeRabbit inference engine (AGENTS.md)
Implement Firejail sandboxing for package operations
Files:
- `tests/unit/test_sandbox_executor.py`
- `cortex/sandbox/sandbox_executor.py`
**/*install*.py
📄 CodeRabbit inference engine (AGENTS.md)
**/*install*.py: Dry-run by default for all installations in command execution
No silent sudo execution - require explicit user confirmation
Implement audit logging to ~/.cortex/history.db for all package operations
Files:
cortex/install_parallel.py
🧠 Learnings (1)
📚 Learning: 2025-12-11T12:03:24.071Z
Learnt from: CR
Repo: cortexlinux/cortex PR: 0
File: AGENTS.md:0-0
Timestamp: 2025-12-11T12:03:24.071Z
Learning: Applies to **/*.py : Type hints required in Python code
Applied to files:
cortex/install_parallel.py
🧬 Code graph analysis (4)
tests/unit/test_sandbox_executor.py (2)
- cortex/sandbox/sandbox_executor.py (1): `validate_command` (258-306)
- cortex/utils/commands.py (1): `validate_command` (134-196)

tests/test_interpreter.py (1)
- cortex/llm/interpreter.py (2): `parse` (206-265), `parse_with_context` (267-275)

cortex/install_parallel.py (2)
- cortex/coordinator.py (1): `duration` (36-39)
- tests/installer/test_parallel_install.py (1): `log_callback` (224-225)

cortex/cli.py (5)
- cortex/coordinator.py (1): `execute` (230-279)
- cortex/install_parallel.py (1): `run_parallel_install` (126-278)
- tests/installer/test_parallel_install.py (1): `log_callback` (224-225)
- cortex/installation_history.py (2): `update_installation` (311-362), `InstallationStatus` (34-40)
- cortex/kernel_features/llm_device.py (1): `getattr` (81-105)
🔇 Additional comments (20)
cortex/hardware_detection.py (3)
15-15: LGTM: Cross-platform imports added.The
platformandshutilimports enable fallback mechanisms for Windows and other platforms that lackos.uname()oros.statvfs().Also applies to: 17-17
340-340: LGTM: Efficient use of cached uname result.The code correctly caches the uname result once and reuses it for ARM detection and architecture extraction, avoiding redundant calls.
Also applies to: 354-354, 374-374
622-629: LGTM: Robust cross-platform disk usage detection.The implementation correctly checks for
os.statvfsavailability and provides a clean fallback toshutil.disk_usagefor platforms where it's unavailable (e.g., Windows). The conversion from bytes to GB is accurate.tests/test_interpreter.py (2)
94-99: LGTM - Good test isolation by disabling cache.Setting
interpreter.cache = Noneensures this test exercises the actual_call_openaipath without cache interference, which is appropriate for testing the OpenAI call behavior directly.
166-177: LGTM - Improved test approach forparse_with_context.Mocking the public
parsemethod rather than internal OpenAI client details is a better testing pattern. The test correctly verifies that:
parse_with_contextdelegates toparse- The enriched input includes the system info ("ubuntu")
This is more resilient to internal refactoring while still validating the core behavior.
tests/unit/test_sandbox_executor.py (1)
24-24: LGTM - Centralizing dangerous patterns import.Importing
DANGEROUS_PATTERNSfromcortex.validatorsaligns the test with the new single source of truth for security patterns.cortex/validators.py (2)
10-46: LGTM - Comprehensive dangerous patterns list now centralized.The patterns list now appears complete with critical security checks including:
- File system destruction (
rm -rf,mkfs,fdisk,wipefs)- Remote code execution (
curl|bash,wget|sh,base64 -d|)- Privilege escalation (
sudo su,sudo -i,LD_PRELOAD)- Code injection (
eval,python -c exec/__import__)- Fork bomb detection
The negative lookahead for
/dev/nullon line 21 is a nice touch to allow legitimate redirections.Previous review concern about incomplete patterns has been addressed.
15-15: Consider:dd if=pattern may be overly broad.The pattern
r"dd\s+if="will block allddcommands with an input file, including legitimate uses like:
dd if=ubuntu.iso of=/dev/sdb(creating bootable USB)dd if=/dev/zero of=testfile bs=1M count=100(creating test files)The dangerous case is specifically writing to block devices (
of=/dev/sd*). The current pattern blocks the input side rather than the dangerous output side.Consider whether this is intentional. If
ddshould be allowed for safe operations, you might want:- r"dd\s+if=", # dd command + r"dd\s+.*of=/dev/(?!null\b)", # dd writing to block devices (excluding /dev/null)cortex/sandbox/sandbox_executor.py (5)
25-28: LGTM - Proper cross-platform handling for resource module.The try/except import pattern correctly handles the POSIX-only
resourcemodule, with# pragma: no coverappropriately excluding the ImportError branch from coverage on Linux.
32-33: LGTM - Centralizing dangerous patterns import.Importing
DANGEROUS_PATTERNSfromcortex.validatorsestablishes a single source of truth for security validation across the codebase.
566-566: LGTM - Correct multi-condition check for resource limits.The condition properly ensures resource limits are only applied when:
- Not on Windows (
os.name != "nt")- Firejail is not handling limits
- The
resourcemodule is available
585-594: LGTM - Clean Windows compatibility fix for subprocess.Building
popen_kwargsdynamically and only includingpreexec_fnwhen set avoids the Windows incompatibility issue wherepreexec_fnis not supported.
662-665: LGTM - Simplified audit log retrieval.The logic correctly returns:
- Full audit log copy when
limitis None- Last N entries when
limitis providedcortex/cli.py (4)
181-187: LGTM - Clean signature extension for parallel mode.
The new `parallel: bool = False` parameter follows the existing pattern of optional flags with sensible defaults.
285-296: LGTM - Duration calculation properly handles None values.
The implementation correctly addresses the previous Copilot feedback:
- Uses a generator with an `if t.end_time is not None` filter
- Uses `default=None` for `max()` when no valid times exist
- Properly checks that both `max_end` and `min_start` are not None before calculating duration
710-714: LGTM - CLI argument wiring for parallel flag.
The `--parallel` flag is properly added to the install subcommand with a clear help message.
777-782: LGTM - Parallel argument passed through to install method.
The CLI correctly passes `args.parallel` to the `install()` method, completing the wiring from CLI to implementation.
cortex/install_parallel.py (3)
1-10: LGTM: Imports are clean and properly organized.
The imports follow best practices with proper sorting and use modern type hint syntax (`collections.abc.Callable`). Integration with the shared `cortex.validators` module for `DANGEROUS_PATTERNS` eliminates code duplication.
13-38: LGTM: Data structures use modern type hints correctly.
The `TaskStatus` enum and `ParallelTask` dataclass follow Python best practices. Modern type annotations (`list[str]`, `float | None`) and the `duration()` method implementation align with the existing codebase patterns.
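To make the shape concrete, here is a hypothetical reconstruction of what such a pair could look like. Fields beyond those mentioned in this PR (`status`, `error`, `description`, `start_time`, `end_time`, `duration()`) and the exact enum members are assumptions, not the module's actual definition:

```python
from dataclasses import dataclass, field
from enum import Enum

class TaskStatus(Enum):
    PENDING = "pending"
    RUNNING = "running"
    COMPLETED = "completed"
    FAILED = "failed"  # "failed" is the value checked in cli.py

@dataclass
class ParallelTask:
    name: str
    command: str
    description: str = ""
    dependencies: list[str] = field(default_factory=list)
    status: TaskStatus = TaskStatus.PENDING
    error: str | None = None
    start_time: float | None = None
    end_time: float | None = None

    def duration(self) -> float | None:
        """Wall-clock run time, or None if the task has not finished."""
        if self.start_time is None or self.end_time is None:
            return None
        return self.end_time - self.start_time
```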
126-183: Function signature and task initialization are well-structured.
Modern type annotations are used correctly throughout. The validation for matching descriptions and commands (lines 151-152) prevents configuration errors. Dependency mapping logic correctly translates index-based dependencies to task name references.
Actionable comments posted: 2
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
cortex/sandbox/sandbox_executor.py (1)
24-39: Fix duplicate resource imports and import order violations.
The code has critical issues:
- Duplicate resource imports: The `resource` module is imported twice (lines 25-28 and lines 33-39), creating redundancy and confusion.
- Import order violation: The import block violates the PEP 8 style guide and is flagged by Ruff (I001), causing pipeline failures.
Per PEP 8, imports should be grouped in this order: standard library, related third party, local application/library imports, each group separated by a blank line.
🔎 Proposed fix for duplicate imports and import order
```diff
-
-try:
-    import resource  # POSIX-only
-except ImportError:  # pragma: no cover
-    resource = None
 from datetime import datetime
 from typing import Any

-from cortex.validators import DANGEROUS_PATTERNS
 try:
     import resource  # type: ignore

     HAS_RESOURCE = True
 except ImportError:  # pragma: no cover
     resource = None  # type: ignore
     HAS_RESOURCE = False
+
+from cortex.validators import DANGEROUS_PATTERNS
```

This removes the duplicate import, maintains the HAS_RESOURCE flag, and groups imports correctly: standard library first, then local imports after a blank line.
♻️ Duplicate comments (1)
requirements.txt (1)
19-19: Remove duplicate PyYAML dependency.
Line 9 specifies `PyYAML>=6.0.0` and line 15 specifies `pyyaml>=6.0.0`. Package names in requirements.txt are case-insensitive, making line 19 a third duplicate entry.
🔎 Remove the duplicate

```diff
-PyYAML>=6.0.0
```
🧹 Nitpick comments (2)
cortex/cli.py (2)
382-384: Consider moving imports to module level.
The `asyncio` and `run_parallel_install` imports are currently inline. While this may be intentional for lazy loading, the general best practice (and PEP 8) recommends placing imports at the module level for better readability and consistency.
🔎 Suggested refactor
Move these imports to the top of the file (after line 7):

```python
import asyncio

from cortex.install_parallel import run_parallel_install
```

Then remove lines 382-384 from the function body.
429-447: Enhance error reporting with step information.
The parallel execution path reports "Installation failed" without specifying which step failed, whereas the sequential path (line 487) provides "Installation failed at step X". Consider adding similar detail to improve user experience.
🔎 Suggested enhancement

```diff
 failed_tasks = [
     t for t in parallel_tasks if getattr(t.status, "value", "") == "failed"
 ]
 error_msg = failed_tasks[0].error if failed_tasks else "Installation failed"
+failed_step_desc = failed_tasks[0].description if failed_tasks else None

 if install_id:
     history.update_installation(
         install_id,
         InstallationStatus.FAILED,
         error_msg,
     )

-self._print_error("Installation failed")
+if failed_step_desc:
+    self._print_error(f"Installation failed: {failed_step_desc}")
+else:
+    self._print_error("Installation failed")
 if error_msg:
     print(f"  Error: {error_msg}", file=sys.stderr)
```
📜 Review details
Configuration used: defaults
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (5)
- cortex/cli.py (4 hunks)
- cortex/sandbox/sandbox_executor.py (5 hunks)
- requirements.txt (1 hunks)
- tests/test_cli.py (1 hunks)
- tests/test_interpreter.py (2 hunks)
🚧 Files skipped from review as they are similar to previous changes (1)
- tests/test_cli.py
🧰 Additional context used
📓 Path-based instructions (3)
**/*.py
📄 CodeRabbit inference engine (AGENTS.md)
**/*.py: Follow PEP 8 style guide
Type hints required in Python code
Docstrings required for all public APIs
Files:
- cortex/cli.py
- cortex/sandbox/sandbox_executor.py
- tests/test_interpreter.py
**/*sandbox*.py
📄 CodeRabbit inference engine (AGENTS.md)
Implement Firejail sandboxing for package operations
Files:
cortex/sandbox/sandbox_executor.py
tests/**/*.py
📄 CodeRabbit inference engine (AGENTS.md)
Maintain >80% test coverage for pull requests
Files:
tests/test_interpreter.py
🧠 Learnings (2)
📚 Learning: 2025-12-11T12:03:24.071Z
Learnt from: CR
Repo: cortexlinux/cortex PR: 0
File: AGENTS.md:0-0
Timestamp: 2025-12-11T12:03:24.071Z
Learning: Applies to **/*install*.py : Implement audit logging to ~/.cortex/history.db for all package operations
Applied to files:
cortex/cli.py
📚 Learning: 2025-12-11T12:03:24.071Z
Learnt from: CR
Repo: cortexlinux/cortex PR: 0
File: AGENTS.md:0-0
Timestamp: 2025-12-11T12:03:24.071Z
Learning: Applies to **/*.py : Type hints required in Python code
Applied to files:
requirements.txt
🧬 Code graph analysis (2)
cortex/cli.py (2)
- cortex/install_parallel.py (1): run_parallel_install (126-278)
- tests/installer/test_parallel_install.py (1): log_callback (224-225)
tests/test_interpreter.py (2)
- tests/test_graceful_degradation.py (1): cache (28-31)
- cortex/llm/interpreter.py (1): parse (206-265)
🪛 GitHub Actions: CI
cortex/sandbox/sandbox_executor.py
[error] 29-29: I001 Import block is un-sorted or un-formatted. (ruff)
🪛 GitHub Actions: Cortex Automation
tests/test_interpreter.py
[error] 189-189: AttributeError: 'function' object has no attribute 'assert_called_once'. The failing test TestCommandInterpreter.test_parse_with_context indicates a mock object was not set up correctly (interpreter.parse is a function, not a Mock), causing the assertion to fail.
🪛 GitHub Check: Lint
cortex/sandbox/sandbox_executor.py
[failure] 29-31: Ruff (I001)
cortex/sandbox/sandbox_executor.py:29:1: I001 Import block is un-sorted or un-formatted
🔇 Additional comments (7)
tests/test_interpreter.py (1)
96-96: LGTM!
Explicitly disabling the cache ensures the test focuses on the OpenAI call path without cache interference.
cortex/sandbox/sandbox_executor.py (4)
573-592: LGTM! Good cross-platform compatibility improvement.
The addition of `os.name != "nt"` and `resource is not None` checks ensures that POSIX-specific resource limits are only applied on compatible platforms, preventing errors on Windows where the `resource` module and `preexec_fn` are not supported.
594-603: LGTM! Improved Windows compatibility.
The refactoring to build a `popen_kwargs` dictionary and conditionally add `preexec_fn` only when it's not `None` is a good improvement for Windows compatibility, as `preexec_fn` is not supported on Windows and would cause errors if passed unconditionally.
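As a reference, a minimal sketch of this conditional-kwargs pattern; the function and parameter names are illustrative, not the executor's actual API:

```python
import os
import subprocess
from collections.abc import Callable

def launch(cmd: list[str], limits_fn: Callable[[], None] | None = None) -> subprocess.Popen:
    """Start a subprocess, passing preexec_fn only where it is supported."""
    popen_kwargs: dict = {
        "stdout": subprocess.PIPE,
        "stderr": subprocess.PIPE,
        "text": True,
    }
    # preexec_fn is POSIX-only; passing it on Windows raises an error.
    if os.name != "nt" and limits_fn is not None:
        popen_kwargs["preexec_fn"] = limits_fn
    return subprocess.Popen(cmd, **popen_kwargs)

proc = launch(["echo", "hello"])
print(proc.communicate()[0])  # hello
```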
671-674: LGTM! Correct audit log retrieval logic.
The `get_audit_log` method correctly returns a copy of all audit log entries when no limit is specified, or the last N entries when a limit is provided using negative list slicing.
276-276: Remove unguarded import concern: internal package dependency.
The import of `DANGEROUS_PATTERNS` from `cortex.validators` at line 31 is correctly unguarded. This is an internal intra-package import within the cortex module structure, and cortex/validators.py properly defines the DANGEROUS_PATTERNS constant. For internal package dependencies, direct imports without fallback handling are standard Python practice and appropriate. No action needed.
cortex/cli.py (2)
831-835: LGTM!
The `--parallel` flag is properly integrated into the install subcommand with clear help text, consistent with other boolean flags.
909-914: LGTM!
The parallel argument is correctly forwarded from the parsed CLI arguments to the install method, maintaining consistency with the existing parameter-passing pattern.
Actionable comments posted: 0
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
cortex/sandbox/sandbox_executor.py (1)
24-40: Duplicate `resource` module import.
The `resource` module import with try/except appears twice: lines 25-28 and lines 34-40. This is redundant, and the second block shadows the first.
🔎 Suggested fix - remove the first duplicate
```diff
 import time
-
-try:
-    import resource  # POSIX-only
-except ImportError:  # pragma: no cover
-    resource = None
 from datetime import datetime
 from typing import Any

 from cortex.validators import DANGEROUS_PATTERNS
-
 try:
     import resource  # type: ignore

     HAS_RESOURCE = True
 except ImportError:  # pragma: no cover
     resource = None  # type: ignore
     HAS_RESOURCE = False
```
♻️ Duplicate comments (2)
cortex/install_parallel.py (2)
233-270: Dict mutation during iteration could cause RuntimeError.
When iterating over `running.items()` at line 235 and deleting from `running` at lines 247 and 269, you're modifying the dict while iterating. Although the `break` statements exit the inner loop, if multiple tasks complete simultaneously in the same `done` set, the outer `for task_coro in done` loop continues with a mutated `running` dict. This was flagged in a past review but the fix was not fully applied.
Use `list(running.items())` to iterate over a snapshot:
🔎 Suggested fix
```diff
 # Process completed tasks
 for task_coro in done:
     # Find which task this is
-    for task_name, running_coro in running.items():
+    for task_name, running_coro in list(running.items()):
         if running_coro is task_coro:
             task = tasks[task_name]
```
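For context, a standalone illustration of why the snapshot matters; this is illustrative, not project code:

```python
# Deleting from a dict while iterating over its live view raises RuntimeError.
running = {"a": 1, "b": 2, "c": 3}
try:
    for name, _ in running.items():
        del running[name]
except RuntimeError as exc:
    print(exc)  # dictionary changed size during iteration

# Iterating over a snapshot is safe.
running = {"a": 1, "b": 2, "c": 3}
for name, _ in list(running.items()):
    del running[name]
print(running)  # {}
```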
76-93: Subprocess timeout is applied twice, potentially causing issues.
The `subprocess.run` call has its own `timeout` parameter (line 89), and `asyncio.wait_for` also has a timeout (line 92). If `subprocess.run` times out, it raises `subprocess.TimeoutExpired`, which won't be caught by the `asyncio.TimeoutError` handler at line 110. This could cause the generic `Exception` handler to catch it with a less informative error message.
Consider catching `subprocess.TimeoutExpired` explicitly or removing the inner timeout, since `asyncio.wait_for` already handles it.
🔎 Suggested fix
```diff
 try:
     # Run command in executor (thread pool) to avoid blocking the event loop
     loop = asyncio.get_running_loop()
     result = await asyncio.wait_for(
         loop.run_in_executor(
             executor,
             # Use shell=True carefully - commands are validated against dangerous patterns above.
             # shell=True is required to support complex shell commands (e.g., pipes, redirects).
             lambda: subprocess.run(
                 task.command,
                 shell=True,
                 capture_output=True,
                 text=True,
-                timeout=timeout,
             ),
         ),
         timeout=timeout + 5,  # Slight buffer for asyncio overhead
     )
```
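The other option mentioned above, catching the inner timeout explicitly, could look roughly like this. This is a self-contained sketch assuming a POSIX shell, not the module's actual code:

```python
import asyncio
import subprocess

async def run_with_timeout(command: str, timeout: float) -> str:
    loop = asyncio.get_running_loop()
    try:
        result = await asyncio.wait_for(
            loop.run_in_executor(
                None,  # default thread pool
                lambda: subprocess.run(
                    command, shell=True, capture_output=True,
                    text=True, timeout=timeout,
                ),
            ),
            timeout=timeout + 5,
        )
        return result.stdout
    except subprocess.TimeoutExpired:
        # The inner subprocess.run timeout fired first
        return f"timed out after {timeout}s (subprocess)"
    except asyncio.TimeoutError:
        # The outer asyncio.wait_for deadline fired
        return f"timed out after {timeout + 5}s (wait_for)"

print(asyncio.run(run_with_timeout("sleep 10", 1.0)))
```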
🧹 Nitpick comments (3)
test/test_cli.py (1)
271-293: Missing test coverage for `parallel=True` case.
The tests correctly add `parallel=False` to match the updated signature, but there's no test for the `--parallel` flag. Consider adding a test case like:

```python
@patch("sys.argv", ["cortex", "install", "docker", "--parallel"])
@patch("cortex.cli.CortexCLI.install")
def test_main_install_with_parallel(self, mock_install) -> None:
    mock_install.return_value = 0
    result = main()
    self.assertEqual(result, 0)
    mock_install.assert_called_once_with(
        "docker", execute=False, dry_run=False, parallel=True
    )
```
307-330: Fragile pattern-to-command conversion may miss edge cases.The test attempts to convert regex patterns to concrete commands using string replacements (lines 320-327). This approach is brittle because:
- Complex regex like
(?!...)negative lookaheads can't be simply replaced- The replacements may not produce valid commands that actually match the pattern
- New patterns added to
DANGEROUS_PATTERNSmay not be covered by these replacementsConsider maintaining a parallel list of concrete test commands that are known to match each pattern, or using a more robust approach.
🔎 Suggested approach
```python
# Map patterns to known matching commands for reliable testing
PATTERN_TEST_COMMANDS = {
    r"rm\s+-rf\s+[/\*]": "rm -rf /",
    r"rm\s+-rf\s+\$HOME": "rm -rf $HOME",
    # ... add explicit mappings for each pattern
}

def test_dangerous_patterns_blocked(self):
    """Test that all dangerous patterns are blocked."""
    for pattern in DANGEROUS_PATTERNS:
        test_cmd = PATTERN_TEST_COMMANDS.get(pattern)
        if test_cmd is None:
            self.skipTest(f"No test command defined for pattern: {pattern}")
        is_valid, _ = self.executor.validate_command(test_cmd)
        self.assertFalse(is_valid, f"Pattern should be blocked: {pattern}")
```

cortex/sandbox/sandbox_executor.py (1)
659-662: Minor logic issue in `get_audit_log`.
When `limit` is `None`, the function returns `list(self.audit_log)`. When `limit` is provided, it also creates a full list first, then slices. The logic is correct, but lines 660-661 are redundant since line 662 handles both cases.
🔎 Simplified implementation
```diff
 def get_audit_log(self, limit: int | None = None) -> list[dict[str, Any]]:
-
-    if limit is None:
-        return list(self.audit_log)
-    return list(self.audit_log)[-limit:]
+    log_copy = list(self.audit_log)
+    return log_copy if limit is None else log_copy[-limit:]
```
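For reference, the negative-slice behavior in the simplified version, shown with a plain list standing in for the audit log:

```python
audit_log = [{"cmd": "a"}, {"cmd": "b"}, {"cmd": "c"}]
print(list(audit_log)[-2:])  # last two entries: [{'cmd': 'b'}, {'cmd': 'c'}]
print(list(audit_log))       # a full copy when no limit is given
```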
📜 Review details
Configuration used: defaults
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (5)
- cortex/install_parallel.py (1 hunks)
- cortex/sandbox/sandbox_executor.py (6 hunks)
- test/test_cli.py (2 hunks)
- tests/test_interpreter.py (2 hunks)
- tests/unit/test_sandbox_executor.py (9 hunks)
🚧 Files skipped from review as they are similar to previous changes (1)
- tests/test_interpreter.py
🧰 Additional context used
📓 Path-based instructions (4)
**/*.py
📄 CodeRabbit inference engine (AGENTS.md)
**/*.py: Follow PEP 8 style guide
Type hints required in Python code
Docstrings required for all public APIs
Files:
- test/test_cli.py
- cortex/sandbox/sandbox_executor.py
- cortex/install_parallel.py
- tests/unit/test_sandbox_executor.py
**/*sandbox*.py
📄 CodeRabbit inference engine (AGENTS.md)
Implement Firejail sandboxing for package operations
Files:
- cortex/sandbox/sandbox_executor.py
- tests/unit/test_sandbox_executor.py
**/*install*.py
📄 CodeRabbit inference engine (AGENTS.md)
**/*install*.py: Dry-run by default for all installations in command execution
No silent sudo execution - require explicit user confirmation
Implement audit logging to ~/.cortex/history.db for all package operations
Files:
cortex/install_parallel.py
tests/**/*.py
📄 CodeRabbit inference engine (AGENTS.md)
Maintain >80% test coverage for pull requests
Files:
tests/unit/test_sandbox_executor.py
🧬 Code graph analysis (3)
test/test_cli.py (1)
cortex/cli.py (1)
_get_provider(61-74)
cortex/install_parallel.py (4)
- cortex/logging_system.py (1): error (206-208)
- cortex/coordinator.py (1): duration (36-39)
- tests/installer/test_parallel_install.py (1): log_callback (224-225)
- cortex/sandbox/sandbox_executor.py (2): failed (79-81), success (74-76)
tests/unit/test_sandbox_executor.py (2)
- cortex/sandbox/sandbox_executor.py (1): validate_command (266-314)
- cortex/utils/commands.py (1): validate_command (134-196)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (1)
- GitHub Check: Build Package
🔇 Additional comments (11)
test/test_cli.py (1)
44-54: LGTM!
The test correctly verifies provider override behavior using a context manager with `clear=True`, then tests fallback by deleting `CORTEX_PROVIDER` within the same context. This is a clean way to test both the override and fallback scenarios.
cortex/install_parallel.py (4)
1-11: LGTM!
Imports are correctly sorted and use modern Python 3.10+ annotations. The `Executor` type from `concurrent.futures` is properly imported for the executor parameter type hint.
22-39: LGTM!
The `ParallelTask` dataclass uses modern type annotations (`list[str]`, `float | None`) as previously requested in reviews.
127-148: This module is a low-level utility for parallel task execution, not a package operation orchestrator. Audit logging to `~/.cortex/history.db` is already implemented at the proper architectural level, in `cortex/cli.py`, which calls `history.record_installation()` before execution and `history.update_installation()` after execution with status/error details. The `run_parallel_install` function correctly remains decoupled from history tracking, following proper separation of concerns.
Likely an incorrect or invalid review comment.
262-267: The review comment contains incorrect assertions about the code behavior.
The code at lines 242-248 demonstrates that cancelled tasks ARE properly added to the `failed` set (line 246) and ARE removed from the `running` dictionary (line 247). Additionally, the `elif dep_task_name in running:` branch at line 266 is effectively unreachable: when `stop_on_error=True`, dependent tasks cannot be in the `running` set when a parent task fails, because the dependency check at line 193 only allows tasks to start when all dependencies are in the `completed` set, and failed tasks are added to the `failed` set, not `completed`.
The code correctly handles task cancellation and state management. No changes are needed.
tests/unit/test_sandbox_executor.py (3)
24-24: LGTM!
Good practice to import `DANGEROUS_PATTERNS` from the centralized `cortex.validators` module, ensuring tests validate against the same patterns used in production code.
83-84: LGTM!
Good assertion pattern - checking both that the command is blocked (`assertFalse`) and that the violation message contains "not whitelisted" provides clear test semantics.
132-136: LGTM!
Good improvement to explicitly check the `preview` attribute and verify it contains the expected command. The `assert preview is not None` guard enables type-safe access.
32-32: LGTM!Good refactoring to import
DANGEROUS_PATTERNSfrom the centralizedcortex.validatorsmodule instead of maintaining a duplicate list. This ensures consistency across all modules that need these security patterns.
581-590: LGTM!Good cross-platform handling by building
popen_kwargsdynamically and only includingpreexec_fnwhen it's set (non-Windows). This prevents errors on Windows wherepreexec_fnis not supported.
614-615: LGTM!Good null check before calling
process.kill()to handle the case wherePopenfailed before assignment.
|
sonarcloud error is duplication error and can be ignored. if still wanted to fix it then i will do so later. |
|
|
|
Actionable comments posted: 1
♻️ Duplicate comments (1)
cortex/cli.py (1)
288-294: Add docstring for public API method.
The `install` method is a public API but still lacks a docstring. Per coding guidelines, docstrings are required for all public APIs.
🔎 Suggested docstring
```diff
 def install(
     self,
     software: str,
     execute: bool = False,
     dry_run: bool = False,
     parallel: bool = False,
 ):
+    """
+    Install software packages using AI-generated commands.
+
+    Args:
+        software: Name or description of software to install
+        execute: If True, execute the generated commands
+        dry_run: If True, show commands without executing
+        parallel: If True, execute independent tasks concurrently
+
+    Returns:
+        int: Exit code (0 for success, 1 for failure)
+    """
     # Validate input first
```

As per coding guidelines, docstrings are required for all public APIs.
🧹 Nitpick comments (2)
cortex/cli.py (2)
395-403: Consider adding dependency analysis for parallel execution.
The parallel execution currently runs all commands without a dependency graph (the `dependencies` parameter is not provided). For complex multi-step installs, some steps may depend on others (e.g., adding a repository before installing from it).
Consider analyzing the command sequence to detect dependencies based on:
- Package manager operations (e.g., `apt-get update` before `apt-get install`)
- File system operations (e.g., creating directories before writing files)
- Service dependencies (e.g., installing dependencies before main package)
This would enable smarter parallel scheduling while ensuring correctness; a rough sketch follows below.
429-432: Simplify failed task detection logic.
The current logic uses `getattr(t.status, "value", "") == "failed"`, which is indirect and relies on string comparison. Since `TaskStatus` is an enum (from the relevant code snippets), a direct comparison would be clearer.
🔎 Cleaner approach
```diff
+from cortex.install_parallel import TaskStatus
+
 failed_tasks = [
-    t for t in parallel_tasks if getattr(t.status, "value", "") == "failed"
+    t for t in parallel_tasks if t.status == TaskStatus.FAILED
 ]
```
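Beyond readability, comparing against the enum member keeps working even if the enum's underlying string values are later renamed, whereas the `getattr`-based string match would silently stop matching.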
📜 Review details
Configuration used: defaults
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (1)
- cortex/cli.py (4 hunks)
🧰 Additional context used
📓 Path-based instructions (1)
**/*.py
📄 CodeRabbit inference engine (AGENTS.md)
**/*.py: Follow PEP 8 style guide
Type hints required in Python code
Docstrings required for all public APIs
Files:
cortex/cli.py
🧠 Learnings (1)
📚 Learning: 2025-12-11T12:03:24.071Z
Learnt from: CR
Repo: cortexlinux/cortex PR: 0
File: AGENTS.md:0-0
Timestamp: 2025-12-11T12:03:24.071Z
Learning: Applies to **/*install*.py : Implement audit logging to ~/.cortex/history.db for all package operations
Applied to files:
cortex/cli.py
🧬 Code graph analysis (1)
cortex/cli.py (3)
- cortex/install_parallel.py (1): run_parallel_install (127-279)
- tests/installer/test_parallel_install.py (1): log_callback (224-225)
- cortex/installation_history.py (1): InstallationStatus (34-40)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (1)
- GitHub Check: Build Package
🔇 Additional comments (2)
cortex/cli.py (2)
837-841: LGTM!
The `--parallel` flag is properly integrated into the install subcommand with clear help text and follows the established pattern for boolean flags.
915-920: LGTM!
The argument forwarding correctly passes all parameters, including the new `parallel` flag, to the `install` method.
```python
def parallel_log_callback(message: str, level: str = "info"):
    if level == "success":
        cx_print(f"   ✅ {message}", "success")
    elif level == "error":
        cx_print(f"   ❌ {message}", "error")
    else:
        cx_print(f"   ℹ {message}", "info")
```
🛠️ Refactor suggestion | 🟠 Major
Add type hints to inline callback function.
The `parallel_log_callback` function lacks type hints. Per coding guidelines, type hints are required in Python code.
🔎 Add type hints
```diff
-def parallel_log_callback(message: str, level: str = "info"):
+def parallel_log_callback(message: str, level: str = "info") -> None:
     if level == "success":
         cx_print(f"   ✅ {message}", "success")
     elif level == "error":
         cx_print(f"   ❌ {message}", "error")
     else:
         cx_print(f"   ℹ {message}", "info")
```
🤖 Prompt for AI Agents
In cortex/cli.py around lines 386 to 392, the inline callback function
parallel_log_callback is missing type hints; update its signature to include
parameter and return type annotations (e.g., message: str, level: str or a more
specific typing.Literal['info','success','error']) and add an explicit return
type of None; if you use Literal import it from typing (or typing_extensions for
older Python), leaving the body unchanged.
@mikejmorgan-ai did you test this locally? |





Related Issue
Closes #269
Summary
Implemented parallel task execution system for Cortex that enables concurrent execution of independent installation steps, providing 2-3x speedup for complex multi-step installations.
Key Features Implemented
1. Core Parallel Execution Engine (install_parallel.py)
2. CLI Integration (cli.py)
- Added `--parallel` flag to the `cortex install` command
- Import cleanup (`subprocess`, `validate_installation_id`, `ValidationError`; `shutil`, `traceback`, `urllib.request`)

3. Comprehensive Test Suite (test_parallel_install.py)
test as user
Files Changed: 4 files
Checklist
- Tests pass (`pytest tests/`)
- PR title format: [#XX] Description

#video
https://drive.google.com/file/d/1GIzYu3dzfB0w2S_LU3lH7SgXsZAUCx2s/view?usp=sharing
Summary by CodeRabbit
Release Notes
New Features
`--parallel` flag to install command for faster parallel task execution with dependency management.
Tests
✏️ Tip: You can customize this high-level summary in your review settings.