fix: [FEATURE] AI-powered dependency conflict prediction #554
Conversation
Implements solution for cortexlinux#428

Changes:
- cortex/conflict_predictor.py
- cortex/llm_router.py
- tests/test_cli.py
- tests/test_conflict_predictor.py
- cortex/cli.py

Generated by Freelance Agent using Claude Code
📝 Walkthrough

This PR introduces an AI-powered dependency conflict predictor for Cortex.
Sequence Diagram(s)

sequenceDiagram
participant User as User/CLI
participant CortexCLI as CortexCLI
participant Predictor as DependencyConflictPredictor
participant System as System (apt/pip)
participant LLMRouter as LLM Router
User->>CortexCLI: install(package, predict=True)
CortexCLI->>CortexCLI: _run_conflict_prediction(package)
CortexCLI->>Predictor: predict_conflicts(package)
Predictor->>System: refresh_caches() (apt/pip)
System-->>Predictor: installed packages
Predictor->>Predictor: _detect_ecosystem(package)
Predictor->>Predictor: _analyze_known_conflicts()
Predictor->>Predictor: _analyze_transitive_conflicts()
alt LLM Available
Predictor->>LLMRouter: route(CONFLICT_PREDICTION)
LLMRouter-->>Predictor: analysis_details
end
Predictor->>Predictor: _generate_resolutions()
Predictor->>Predictor: _determine_overall_risk()
Predictor-->>CortexCLI: ConflictPrediction (risk level, conflicts)
alt Critical Risk
CortexCLI->>User: ❌ Block installation (exit code)
else High Risk
CortexCLI->>User: ⚠️ Prompt to proceed/cancel
else No Risk
CortexCLI->>User: ✓ Proceed with installation
end
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~45 minutes
🚥 Pre-merge checks: ❌ Failed checks (3 warnings)
CLA Verification Failed

The following contributors have not signed the Contributor License Agreement. This check runs automatically.
Summary of Changes

Hello @jeffmarcilliat, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed. This pull request delivers a crucial new feature: an AI-powered dependency conflict prediction system for the cortex CLI. Its primary goal is to proactively identify and mitigate potential package conflicts before they disrupt installations or system stability. By analyzing the current system state and package requirements, it offers detailed conflict reports, confidence scores, and actionable resolution strategies, significantly improving the reliability of package management operations.
Actionable comments posted: 6
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
cortex/llm_router.py (1)
32-44: Use `TaskType.CONFLICT_PREDICTION` in the predictor's LLM call (currently unused).
You added the routing, but `cortex/conflict_predictor.py:predict_conflicts_async()` still sends the LLM request as `TaskType.DEPENDENCY_RESOLUTION`, so this new rule may never be exercised. Consider switching that call to `TaskType.CONFLICT_PREDICTION`.
Also applies to: 106-117
🤖 Fix all issues with AI agents
In @cortex/cli.py:
- Around line 709-713: The CLI is double-printing " [RECOMMENDED]" because
_run_conflict_prediction() appends rec_tag when printing but the predictor also
mutates ResolutionSuggestion.description to include " [RECOMMENDED]"; remove the
mutation inside the predictor (stop modifying ResolutionSuggestion.description)
and let the CLI add rec_tag based on res.recommended (in
_run_conflict_prediction()), ensuring the predictor sets only the boolean flag
(ResolutionSuggestion.recommended) and not the description text.
In @cortex/conflict_predictor.py:
- Around line 255-320: _parse_version_constraint currently only handles a single
specifier despite the docstring example, and _check_version_satisfies implements
"~=" incompletely; fix by either adjusting the docstring to state
single-specifier support or (preferred) implement proper PEP 440 compatible
handling: update _parse_version_constraint to accept only one comma-separated
specifier (or return multiple constraints if you choose multi-specifier
support), and modify _check_version_satisfies to implement "~=" correctly by
parsing constraint.version into numeric parts, computing the upper bound per PEP
440 (if constraint has one component bump the major; if multiple components
increment the penultimate component and zero following parts), then return True
only if installed_version >= lower_bound and installed_version < upper_bound
using the existing _compare_versions helper.
- Around line 179-190: The constructor __init__ incorrectly types and routes LLM
tasks: change the llm_router parameter to have a proper type hint (llm_router:
Optional[LLMRouter] = None) and update any routing usage so conflict analysis
uses TaskType.CONFLICT_PREDICTION instead of TaskType.DEPENDENCY_RESOLUTION
(ensure calls that pass the task enum to the llm_router or to its route method
reference TaskType.CONFLICT_PREDICTION); keep the rest of initialization
(_installed_apt_cache, _installed_pip_cache, _refresh_caches) intact.
In @tests/test_conflict_predictor.py:
- Around line 229-282: Tests are flaky because
DependencyConflictPredictor.predict_conflicts calls _refresh_caches which
overwrites injected _installed_pip_cache/_installed_apt_cache; update the
failing tests (test_predict_with_conflicts and test_predict_multiple_packages)
to prevent cache refresh by patching or stubbing
DependencyConflictPredictor._refresh_caches to a no-op before calling
predict_conflicts/predict_multiple (e.g., use
patch.object(DependencyConflictPredictor, "_refresh_caches", return_value=None)
or assign predictor._refresh_caches = lambda: None) so the injected cache state
remains intact for assertions.
🧹 Nitpick comments (3)
tests/test_cli.py (1)
215-278: Add a test for `--predict` + `--no-predict` conflict (and enforce mutual exclusion).
Right now tests cover pass-through, but don't lock in behavior for contradictory flags. Once the CLI enforces mutual exclusivity (recommended), add a test asserting argparse rejects the combination.
cortex/cli.py (1)
635-762: Avoid N× cache refresh in prediction loops (install/predict can get very slow).
`DependencyConflictPredictor.__init__()` refreshes caches, and `predict_conflicts()` refreshes again; looping over packages multiplies that cost. Consider refreshing once per command and reusing the snapshot (e.g., `predict_conflicts(..., refresh=False)` or a `predict_multiple()` path that refreshes once).
Also applies to: 763-912
cortex/conflict_predictor.py (1)
739-820: Don’t refresh caches per package (predict_multiple becomes O(N) external scans).
`predict_conflicts()` refreshes caches every call; `predict_multiple()` should refresh once then analyze many. Consider a `refresh: bool = True` flag on `predict_conflicts()` and call it with `False` from `predict_multiple()` after one refresh.
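The refresh-once pattern suggested above can be sketched as follows. This is a minimal stand-in, not the PR's code: the real predictor's method bodies and return types differ, and `refresh_count` is instrumentation added purely for illustration.

```python
class DependencyConflictPredictor:
    """Sketch only: demonstrates refreshing caches once per batch."""

    def __init__(self) -> None:
        self.refresh_count = 0  # stand-in counter for external apt/pip scans

    def _refresh_caches(self) -> None:
        self.refresh_count += 1  # represents one expensive system scan

    def predict_conflicts(self, package: str, refresh: bool = True) -> str:
        if refresh:
            self._refresh_caches()  # skipped when the caller already refreshed
        return f"prediction:{package}"  # placeholder result

    def predict_multiple(self, packages: list[str]) -> list[str]:
        self._refresh_caches()  # one refresh for the whole batch
        return [self.predict_conflicts(p, refresh=False) for p in packages]
```

With this shape, a three-package batch triggers a single scan instead of three (or six, counting the constructor's refresh in the current PR).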
📜 Review details
Configuration used: defaults
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (5)
cortex/cli.py, cortex/conflict_predictor.py, cortex/llm_router.py, tests/test_cli.py, tests/test_conflict_predictor.py
🧰 Additional context used
📓 Path-based instructions (2)
**/*.py
📄 CodeRabbit inference engine (AGENTS.md)
**/*.py: Follow PEP 8 style guide
Type hints required in Python code
Docstrings required for all public APIs
Files:
tests/test_cli.py, cortex/llm_router.py, tests/test_conflict_predictor.py, cortex/conflict_predictor.py, cortex/cli.py
tests/**/*.py
📄 CodeRabbit inference engine (AGENTS.md)
Maintain >80% test coverage for pull requests
Files:
tests/test_cli.py, tests/test_conflict_predictor.py
🧠 Learnings (1)
📚 Learning: 2025-12-11T12:03:24.071Z
Learnt from: CR
Repo: cortexlinux/cortex PR: 0
File: AGENTS.md:0-0
Timestamp: 2025-12-11T12:03:24.071Z
Learning: Applies to **/*install*.py : Dry-run by default for all installations in command execution
Applied to files:
cortex/cli.py
🧬 Code graph analysis (3)
tests/test_conflict_predictor.py (1)
cortex/conflict_predictor.py (19)
ConflictPrediction (113-123), ConflictSeverity (25-31), InstalledPackage (56-62), PackageEcosystem (46-52), PredictedConflict (85-97), ResolutionStrategy (34-43), ResolutionSuggestion (101-109), VersionConstraint (66-71), _parse_version_constraint (255-269), _compare_versions (271-299), _check_version_satisfies (301-320), _analyze_known_conflicts (372-465), predict_conflicts (739-803), _generate_resolutions (510-620), _determine_overall_risk (622-635), _detect_ecosystem (637-681), format_prediction (821-868), export_prediction_json (870-903), _analyze_transitive_conflicts (467-508)
cortex/conflict_predictor.py (1)
cortex/llm_router.py (2)
TaskType (32-43), acomplete (545-611)
cortex/cli.py (1)
cortex/conflict_predictor.py (4)
ConflictSeverity (25-31), DependencyConflictPredictor (126-903), predict_conflicts (739-803), export_prediction_json (870-903)
```python
def _run_conflict_prediction(self, software: str) -> int | None:
    """
    Run AI-powered conflict prediction before installation.

    Returns:
        None if safe to proceed
        int (exit code) if should stop
    """
    from cortex.conflict_predictor import (
        ConflictSeverity,
        DependencyConflictPredictor,
    )

    cx_print("Analyzing dependencies for potential conflicts...", "info")

    try:
        predictor = DependencyConflictPredictor()

        # Split software into individual packages
        packages = software.split()

        # Run prediction for each package
        all_conflicts = []
        for pkg in packages:
            # Skip commands like "pip install" or "apt-get"
            if pkg in ("pip", "pip3", "apt", "apt-get", "install", "-y", "&&"):
                continue

            prediction = predictor.predict_conflicts(pkg)

            if prediction.conflicts:
                all_conflicts.append(prediction)

        if not all_conflicts:
            cx_print("No conflicts predicted. Safe to proceed.", "success")
            console.print()
            return None  # Safe to proceed

        # Display conflicts
        console.print()
        for prediction in all_conflicts:
            # Format the prediction nicely
            risk_emoji = {
                ConflictSeverity.LOW: "[yellow]",
                ConflictSeverity.MEDIUM: "[orange1]",
                ConflictSeverity.HIGH: "[red]",
                ConflictSeverity.CRITICAL: "[bold red]",
            }
            risk_color = risk_emoji.get(prediction.overall_risk, "")

            console.print(
                f"{risk_color}Conflict predicted[/]: {prediction.package_name}"
            )
            console.print(
                f"  Risk Level: {risk_color}{prediction.overall_risk.value.upper()}[/]"
            )
            console.print(
                f"  Confidence: {prediction.prediction_confidence:.0%}"
            )
            console.print()

            # Show each conflict
            for i, conflict in enumerate(prediction.conflicts, 1):
                console.print(f"  {i}. {conflict.description}")
                console.print(
                    f"     {conflict.conflicting_package} "
                    f"{conflict.conflicting_version} (installed)"
                )
            console.print()

            # Show top suggestions
            if prediction.resolutions:
                console.print("  [bold cyan]Suggestions (ranked by safety):[/bold cyan]")
                for i, res in enumerate(prediction.resolutions[:3], 1):
                    rec_tag = " [RECOMMENDED]" if res.recommended else ""
                    console.print(f"  {i}. {res.description}{rec_tag}")
                    if res.command:
                        console.print(f"     [dim]$ {res.command}[/dim]")
                console.print()

        # Check if we should stop for critical conflicts
        critical_predictions = [
            p for p in all_conflicts if p.overall_risk == ConflictSeverity.CRITICAL
        ]

        if critical_predictions:
            cx_print(
                "Critical conflicts detected. Installation blocked.",
                "error",
            )
            cx_print(
                "Resolve conflicts above or use --no-predict to skip this check.",
                "warning",
            )
            return 1  # Stop with error

        # For non-critical conflicts, ask user
        high_predictions = [
            p for p in all_conflicts if p.overall_risk == ConflictSeverity.HIGH
        ]

        if high_predictions:
            try:
                response = console.input(
                    "[bold yellow]High-risk conflicts detected. "
                    "Proceed anyway? (y/N): [/bold yellow]"
                )
                if response.lower() not in ("y", "yes"):
                    cx_print("Installation cancelled", "info")
                    return 0  # User cancelled
            except (EOFError, KeyboardInterrupt):
                console.print()
                cx_print("Installation cancelled", "info")
                return 0

        # Medium/Low conflicts - just warn and proceed
        return None  # Safe to proceed

    except ImportError as e:
        # Conflict predictor module not available - just warn and continue
        self._debug(f"Conflict prediction unavailable: {e}")
        return None
    except Exception as e:
        # Don't let prediction errors block installation
        cx_print(f"Conflict prediction failed: {e}", "warning")
        cx_print("Proceeding with installation...", "info")
        return None
```
Make --predict and --no-predict mutually exclusive (and define precedence).
Right now users can pass both; argparse will accept it and your should_predict logic will silently prefer --predict. Use a mutually-exclusive group for install flags.
Also applies to: 913-935, 2426-2459
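The mutually-exclusive group could look like the sketch below. The flag names come from this review; the surrounding parser setup in cortex/cli.py is an assumption.

```python
import argparse

# Sketch: enforce that --predict and --no-predict cannot both be passed.
# Both flags write to the same dest, so downstream code reads one value:
# True, False, or None (neither flag given, fall back to the default).
parser = argparse.ArgumentParser(prog="cortex")
group = parser.add_mutually_exclusive_group()
group.add_argument("--predict", dest="predict", action="store_true", default=None)
group.add_argument("--no-predict", dest="predict", action="store_false", default=None)

args = parser.parse_args(["--predict"])
# Passing both flags now makes argparse error out (SystemExit) instead of
# silently preferring --predict.
```

This also removes the need for ad-hoc precedence logic in `should_predict`.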
```python
rec_tag = " [RECOMMENDED]" if res.recommended else ""
console.print(f"  {i}. {res.description}{rec_tag}")
if res.command:
    console.print(f"     [dim]$ {res.command}[/dim]")
console.print()
```
Don’t double-print “[RECOMMENDED]”.
_run_conflict_prediction() adds rec_tag, but the predictor also mutates ResolutionSuggestion.description to include [RECOMMENDED] for the “safest” option, so you can end up with duplicated tags in output. Prefer leaving description unmodified and only styling in the CLI.
🤖 Prompt for AI Agents
In @cortex/cli.py around lines 709 - 713, The CLI is double-printing "
[RECOMMENDED]" because _run_conflict_prediction() appends rec_tag when printing
but the predictor also mutates ResolutionSuggestion.description to include "
[RECOMMENDED]"; remove the mutation inside the predictor (stop modifying
ResolutionSuggestion.description) and let the CLI add rec_tag based on
res.recommended (in _run_conflict_prediction()), ensuring the predictor sets
only the boolean flag (ResolutionSuggestion.recommended) and not the description
text.
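A minimal sketch of the separation this prompt asks for: the predictor keeps only the boolean, and the CLI renders the tag at display time. `render_suggestion` is a hypothetical helper for illustration, not code from the PR, and the dataclass is trimmed to the two relevant fields.

```python
from dataclasses import dataclass


@dataclass
class ResolutionSuggestion:
    # Data layer: a flag only, never presentation text in description.
    description: str
    recommended: bool


def render_suggestion(res: ResolutionSuggestion) -> str:
    # Presentation layer: the tag is added exactly once, here.
    rec_tag = " [RECOMMENDED]" if res.recommended else ""
    return f"{res.description}{rec_tag}"
```

Because the tag exists only in the renderer, JSON export and any future UI see clean descriptions, and double-printing becomes impossible.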
```python
def __init__(self, llm_router=None):
    """
    Initialize the conflict predictor.

    Args:
        llm_router: Optional LLMRouter instance for AI-powered analysis
    """
    self.llm_router = llm_router
    self._installed_apt_cache: dict[str, InstalledPackage] = {}
    self._installed_pip_cache: dict[str, InstalledPackage] = {}
    self._refresh_caches()
```
🧩 Analysis chain
🏁 Script executed:

```shell
find . -name "conflict_predictor.py" -type f
```

Repository: cortexlinux/cortex
Length of output: 92

🏁 Script executed:

```shell
head -50 cortex/conflict_predictor.py
```

Repository: cortexlinux/cortex
Length of output: 1674

🏁 Script executed:

```shell
sed -n '175,195p' cortex/conflict_predictor.py
```

Repository: cortexlinux/cortex
Length of output: 912

🏁 Script executed:

```shell
# Check for TaskType imports and usage
rg "TaskType\.(CONFLICT_PREDICTION|DEPENDENCY_RESOLUTION)" cortex/conflict_predictor.py -A 2 -B 2
```

Repository: cortexlinux/cortex
Length of output: 281

🏁 Script executed:

```shell
# Check all TaskType definitions
fd -e py | xargs rg "class TaskType|TaskType\s*=" | head -20
```

Repository: cortexlinux/cortex
Length of output: 381

🏁 Script executed:

```shell
# Get the complete TaskType enum definition
rg "class TaskType" -A 30 cortex/llm_router.py
```

Repository: cortexlinux/cortex
Length of output: 1041

🏁 Script executed:

```shell
# Verify the exact line numbers and context of predict_conflicts_async usage
sed -n '700,740p' cortex/conflict_predictor.py
```

Repository: cortexlinux/cortex
Length of output: 1518

🏁 Script executed:

```shell
# Check if CONFLICT_PREDICTION exists anywhere in the codebase
rg "CONFLICT_PREDICTION" --type py
```

Repository: cortexlinux/cortex
Length of output: 266

🏁 Script executed:

```shell
# Check the type hint on llm_router parameter in __init__
sed -n '179,180p' cortex/conflict_predictor.py
```

Repository: cortexlinux/cortex
Length of output: 114

🏁 Script executed:

```shell
# Look for LLMRouter class and its type
rg "class LLMRouter" -A 5 cortex/
```

Repository: cortexlinux/cortex
Length of output: 358

🏁 Script executed:

```shell
# Check imports at top of conflict_predictor.py
sed -n '1,30p' cortex/conflict_predictor.py | grep -E "^import|^from"
```

Repository: cortexlinux/cortex
Length of output: 210

🏁 Script executed:

```shell
# Verify the line numbers for the "Also applies to: 683-737" claim
sed -n '683,737p' cortex/conflict_predictor.py | grep -n "task_type\|TaskType"
```

Repository: cortexlinux/cortex
Length of output: 184

🏁 Script executed:

```shell
# Check for all instances of task_type=TaskType.DEPENDENCY_RESOLUTION in the range 683-737
sed -n '683,737p' cortex/conflict_predictor.py | cat -n
```

Repository: cortexlinux/cortex
Length of output: 2497

🏁 Script executed:

```shell
# Get more context around line 725 to understand the method
sed -n '683,745p' cortex/conflict_predictor.py
```

Repository: cortexlinux/cortex
Length of output: 2321

🏁 Script executed:

```shell
# Check if there are any other instances of TaskType.DEPENDENCY_RESOLUTION in conflict_predictor.py besides the one already found
rg "task_type=TaskType.DEPENDENCY_RESOLUTION" cortex/conflict_predictor.py -n
```

Repository: cortexlinux/cortex
Length of output: 127
Use TaskType.CONFLICT_PREDICTION (not DEPENDENCY_RESOLUTION) for LLM routing.
The current code routes conflict analysis to the wrong LLM provider. TaskType.CONFLICT_PREDICTION exists in the enum and is configured to use KIMI_K2, while DEPENDENCY_RESOLUTION routes elsewhere. This mismatch defeats the purpose of the router integration.
Additionally, the llm_router parameter in __init__ (line 179) lacks a type hint; should be llm_router: Optional[LLMRouter] = None per the type hints requirement.
Proposed fix

```diff
 response = await self.llm_router.acomplete(
     messages=[
         {"role": "system", "content": "You are a Linux package dependency expert."},
         {"role": "user", "content": prompt},
     ],
-    task_type=TaskType.DEPENDENCY_RESOLUTION,
+    task_type=TaskType.CONFLICT_PREDICTION,
     temperature=0.3,
     max_tokens=1000,
 )
```

🤖 Prompt for AI Agents
In @cortex/conflict_predictor.py around lines 179 - 190, The constructor
__init__ incorrectly types and routes LLM tasks: change the llm_router parameter
to have a proper type hint (llm_router: Optional[LLMRouter] = None) and update
any routing usage so conflict analysis uses TaskType.CONFLICT_PREDICTION instead
of TaskType.DEPENDENCY_RESOLUTION (ensure calls that pass the task enum to the
llm_router or to its route method reference TaskType.CONFLICT_PREDICTION); keep
the rest of initialization (_installed_apt_cache, _installed_pip_cache,
_refresh_caches) intact.
```python
def _parse_version_constraint(self, constraint: str) -> VersionConstraint | None:
    """Parse a version constraint string like '>=1.0,<2.0'"""
    constraint = constraint.strip()
    if not constraint:
        return None

    # Handle operators
    operators = ["<=", ">=", "==", "!=", "~=", "<", ">"]
    for op in operators:
        if constraint.startswith(op):
            version = constraint[len(op):].strip()
            return VersionConstraint(operator=op, version=version, original=constraint)

    # No operator means exact match
    return VersionConstraint(operator="==", version=constraint, original=constraint)

def _compare_versions(self, v1: str, v2: str) -> int:
    """
    Compare two version strings.

    Returns: -1 if v1 < v2, 0 if equal, 1 if v1 > v2
    """

    def normalize(v: str) -> list[int]:
        parts = []
        for part in re.split(r"[.\-+]", v):
            # Extract numeric prefix
            match = re.match(r"(\d+)", part)
            if match:
                parts.append(int(match.group(1)))
        return parts

    v1_parts = normalize(v1)
    v2_parts = normalize(v2)

    # Pad shorter version with zeros
    max_len = max(len(v1_parts), len(v2_parts))
    v1_parts.extend([0] * (max_len - len(v1_parts)))
    v2_parts.extend([0] * (max_len - len(v2_parts)))

    for p1, p2 in zip(v1_parts, v2_parts):
        if p1 < p2:
            return -1
        if p1 > p2:
            return 1
    return 0

def _check_version_satisfies(self, installed_version: str, constraint: VersionConstraint) -> bool:
    """Check if installed version satisfies a constraint"""
    cmp = self._compare_versions(installed_version, constraint.version)

    if constraint.operator == "==":
        return cmp == 0
    elif constraint.operator == "!=":
        return cmp != 0
    elif constraint.operator == "<":
        return cmp < 0
    elif constraint.operator == "<=":
        return cmp <= 0
    elif constraint.operator == ">":
        return cmp > 0
    elif constraint.operator == ">=":
        return cmp >= 0
    elif constraint.operator == "~=":
        # Compatible release (e.g., ~=1.4 means >=1.4, <2.0)
        return cmp >= 0  # Simplified
    return True
```
🧩 Analysis chain
🏁 Script executed:

```shell
# First, let's understand the context - find where _parse_version_constraint is called
rg "_parse_version_constraint" --type py -B 2 -A 2
```

Repository: cortexlinux/cortex
Length of output: 180

🏁 Script executed:

```shell
# Check if there are any test files for conflict_predictor
fd "test.*conflict" --type f
```

Repository: cortexlinux/cortex
Length of output: 94

🏁 Script executed:

```shell
# Search for usage patterns of version constraints in the codebase
rg ">=.*,<|~=" --type py -B 1 -A 1 | head -50
```

Repository: cortexlinux/cortex
Length of output: 2377

🏁 Script executed:

```shell
# Look at the full context of the conflict_predictor.py file around the methods in question
wc -l cortex/conflict_predictor.py
```

Repository: cortexlinux/cortex
Length of output: 94

🏁 Script executed:

```shell
# Check if there are imports or dependencies that suggest version parsing libraries
grep -n "import\|from" cortex/conflict_predictor.py | head -20
```

Repository: cortexlinux/cortex
Length of output: 426

🏁 Script executed:

```shell
# Read the full test file to understand expected behavior
cat -n tests/test_conflict_predictor.py
```

Repository: cortexlinux/cortex
Length of output: 27984

🏁 Script executed:

```shell
# Search for all calls to _parse_version_constraint to understand how it's used
rg "parse_version_constraint" --type py -B 3 -A 3
```

Repository: cortexlinux/cortex
Length of output: 2950

🏁 Script executed:

```shell
# Check how dependency_importer.py handles version constraints
sed -n '1,100p' cortex/dependency_importer.py
```

Repository: cortexlinux/cortex
Length of output: 2941

🏁 Script executed:

```shell
# Look at the section where version specs are extracted in dependency_importer.py
grep -n "version_spec\|version_match" cortex/dependency_importer.py -B 2 -A 2
```

Repository: cortexlinux/cortex
Length of output: 1262
Fix misleading docstring and incomplete ~= operator implementation.
_parse_version_constraint() cannot handle multi-specifier constraints like ">=1.0,<2.0" (it would parse version as "1.0,<2.0"), contradicting the docstring example. Additionally, ~= operator is incomplete—it only checks >= but should enforce compatible release semantics per PEP 440 (e.g., ~=1.4.2 requires >=1.4.2, <1.5.0). Either implement proper multi-specifier parsing and complete ~= semantics, or update the docstring to accurately describe single-specifier support and the simplified ~= behavior.
🤖 Prompt for AI Agents
In @cortex/conflict_predictor.py around lines 255 - 320,
_parse_version_constraint currently only handles a single specifier despite the
docstring example, and _check_version_satisfies implements "~=" incompletely;
fix by either adjusting the docstring to state single-specifier support or
(preferred) implement proper PEP 440 compatible handling: update
_parse_version_constraint to accept only one comma-separated specifier (or
return multiple constraints if you choose multi-specifier support), and modify
_check_version_satisfies to implement "~=" correctly by parsing
constraint.version into numeric parts, computing the upper bound per PEP 440 (if
constraint has one component bump the major; if multiple components increment
the penultimate component and zero following parts), then return True only if
installed_version >= lower_bound and installed_version < upper_bound using the
existing _compare_versions helper.
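The compatible-release rule described above can be sketched as standalone helpers like the following. These are illustrative assumptions, not the PR's methods; production code would more robustly reach for `packaging.specifiers.SpecifierSet`, which implements PEP 440 in full.

```python
def _parts(v: str) -> list[int]:
    # Simplified: assumes purely numeric dotted versions (no pre/post tags).
    return [int(p) for p in v.split(".")]


def satisfies_compatible_release(installed: str, constraint: str) -> bool:
    """Check installed against '~=constraint' per the PEP 440 rule:
    ~=X.Y.Z means >=X.Y.Z and <X.(Y+1).0 -- the upper bound drops the
    last component and bumps the new last (penultimate) one."""
    lower = _parts(constraint)
    if len(lower) < 2:
        return False  # PEP 440 requires at least two components for ~=
    upper = lower[:-1]
    upper[-1] += 1  # e.g. ~=1.4.2 -> upper bound 1.5; ~=1.4 -> upper bound 2
    inst = _parts(installed)

    # Zero-pad so list comparison behaves like version comparison.
    width = max(len(inst), len(lower), len(upper))
    pad = lambda xs: xs + [0] * (width - len(xs))
    return pad(lower) <= pad(inst) < pad(upper)
```

For example, `~=1.4.2` admits 1.4.5 but rejects 1.5.0, while `~=1.4` admits anything from 1.4 up to (excluding) 2.0.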
```python
def _generate_resolutions(
    self, conflicts: list[PredictedConflict], package_name: str
) -> list[ResolutionSuggestion]:
    """Generate resolution suggestions for conflicts"""
    resolutions: list[ResolutionSuggestion] = []

    if not conflicts:
        return resolutions

    # Group by severity
    critical_conflicts = [c for c in conflicts if c.severity == ConflictSeverity.CRITICAL]
    high_conflicts = [c for c in conflicts if c.severity == ConflictSeverity.HIGH]
    medium_conflicts = [c for c in conflicts if c.severity == ConflictSeverity.MEDIUM]

    # Handle critical conflicts first
    for conflict in critical_conflicts:
        if conflict.conflict_type == "mutual_exclusion":
            resolutions.append(
                ResolutionSuggestion(
                    strategy=ResolutionStrategy.REMOVE_CONFLICTING,
                    description=f"Remove {conflict.conflicting_package} before installing {package_name}",
                    command=f"sudo apt-get remove {conflict.conflicting_package}",
                    safety_score=0.4,
                    side_effects=[
                        f"Will remove {conflict.conflicting_package} and dependent packages",
                        "May affect running services",
                    ],
                    recommended=False,
                )
            )
            resolutions.append(
                ResolutionSuggestion(
                    strategy=ResolutionStrategy.SKIP_INSTALL,
                    description=f"Skip installing {package_name}, keep {conflict.conflicting_package}",
                    command=None,
                    safety_score=0.9,
                    side_effects=["Target package will not be installed"],
                    recommended=True,
                )
            )

    # Handle version conflicts
    for conflict in high_conflicts + medium_conflicts:
        if conflict.conflict_type in ["version_too_high", "version_too_low"]:
            # Suggest virtual environment for pip packages
            if conflict.ecosystem == PackageEcosystem.PIP:
                resolutions.append(
                    ResolutionSuggestion(
                        strategy=ResolutionStrategy.USE_VIRTUALENV,
                        description=f"Create virtual environment to isolate {package_name}",
                        command=f"python3 -m venv .venv && source .venv/bin/activate && pip install {package_name}",
                        safety_score=0.95,
                        side_effects=["Package installed in isolated environment only"],
                        recommended=True,
                    )
                )

            # Suggest upgrading/downgrading
            if conflict.conflict_type == "version_too_high":
                resolutions.append(
                    ResolutionSuggestion(
                        strategy=ResolutionStrategy.UPGRADE_PACKAGE,
                        description=f"Install newer version of {package_name} that supports {conflict.conflicting_package} {conflict.conflicting_version}",
                        command=f"pip install --upgrade {package_name}",
                        safety_score=0.8,
                        side_effects=["May get different version than expected"],
                        recommended=True,
                    )
                )
                resolutions.append(
                    ResolutionSuggestion(
                        strategy=ResolutionStrategy.DOWNGRADE_DEPENDENCY,
                        description=f"Downgrade {conflict.conflicting_package} to compatible version",
                        command=f"pip install {conflict.conflicting_package}<{conflict.conflicting_version}",
                        safety_score=0.5,
                        side_effects=[
                            f"May break packages depending on {conflict.conflicting_package}"
                        ],
                        recommended=False,
                    )
                )

        if conflict.conflict_type == "port_conflict":
            resolutions.append(
                ResolutionSuggestion(
                    strategy=ResolutionStrategy.PIN_VERSION,
                    description=f"Configure {package_name} to use a different port",
                    command=None,
                    safety_score=0.85,
                    side_effects=["Requires manual configuration"],
                    recommended=True,
                )
            )

    # Sort by safety score (highest first)
    resolutions.sort(key=lambda r: (-r.safety_score, not r.recommended))

    # Mark the safest recommended option
    for i, res in enumerate(resolutions):
        if res.recommended:
            resolutions[i] = ResolutionSuggestion(
                strategy=res.strategy,
                description=res.description + " [RECOMMENDED]",
                command=res.command,
                safety_score=res.safety_score,
                side_effects=res.side_effects,
                recommended=True,
            )
            break

    return resolutions
```
Resolution suggestions: avoid data/presentation mixing + fix "downgrade" command correctness.
- Appending `[RECOMMENDED]` into `ResolutionSuggestion.description` couples model data to a specific UI.
- `pip install X<{conflicting_version}` uses the installed version as the upper bound, which is often not the real compatible max.

At minimum, keep recommended as a boolean only, and consider carrying an explicit required constraint/range on PredictedConflict so commands can be accurate.
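One way to carry the real constraint is sketched below. The field names (`required_constraint` in particular) are assumptions for illustration; the PR's actual `PredictedConflict` has a different shape.

```python
from dataclasses import dataclass


@dataclass
class PredictedConflict:
    conflicting_package: str
    conflicting_version: str   # what is currently installed
    required_constraint: str   # e.g. "<2.0", taken from dependency metadata


def downgrade_command(conflict: PredictedConflict) -> str:
    # Build the bound from the declared requirement, not from whatever
    # version happens to be installed right now.
    return f'pip install "{conflict.conflicting_package}{conflict.required_constraint}"'
```

With the requirement stored explicitly, a conflict where numpy 2.1.0 is installed but `<2.0` is required produces `pip install "numpy<2.0"` rather than the misleading `numpy<2.1.0`.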
```python
class TestConflictPrediction(unittest.TestCase):
    """Test the main prediction functionality"""

    def setUp(self):
        self.predictor = DependencyConflictPredictor()

    @patch.object(DependencyConflictPredictor, "_run_command")
    def test_predict_no_conflicts(self, mock_run):
        """Test prediction when no conflicts exist"""
        # Mock empty caches
        mock_run.return_value = (True, "", "")
        self.predictor._installed_pip_cache = {}
        self.predictor._installed_apt_cache = {}

        prediction = self.predictor.predict_conflicts("flask")

        self.assertIsInstance(prediction, ConflictPrediction)
        self.assertEqual(len(prediction.conflicts), 0)
        self.assertEqual(prediction.overall_risk, ConflictSeverity.LOW)
        self.assertTrue(prediction.can_proceed)

    def test_predict_with_conflicts(self):
        """Test prediction when conflicts exist"""
        # Set up conflicting state
        self.predictor._installed_pip_cache = {
            "numpy": InstalledPackage(
                name="numpy",
                version="2.1.0",
                ecosystem=PackageEcosystem.PIP,
                source="pip3",
            ),
        }

        prediction = self.predictor.predict_conflicts("tensorflow")

        self.assertIsInstance(prediction, ConflictPrediction)
        self.assertGreater(len(prediction.conflicts), 0)
        self.assertIn(
            prediction.overall_risk,
            [ConflictSeverity.HIGH, ConflictSeverity.CRITICAL],
        )

    def test_predict_multiple_packages(self):
        """Test predicting conflicts for multiple packages"""
        self.predictor._installed_pip_cache = {}
        self.predictor._installed_apt_cache = {}

        predictions = self.predictor.predict_multiple(["flask", "django", "numpy"])

        self.assertEqual(len(predictions), 3)
        for pred in predictions:
            self.assertIsInstance(pred, ConflictPrediction)
```
Fix flaky tests: `predict_conflicts()` refreshes caches and will overwrite injected state.

`test_predict_with_conflicts` and `test_predict_multiple_packages` set `_installed_pip_cache` but don't prevent `_refresh_caches()` from running inside `predict_conflicts()`, so expectations can fail depending on the runner environment.
Proposed fix (patch refresh to no-op for these tests):

```diff
 class TestConflictPrediction(unittest.TestCase):
 @@
     def test_predict_with_conflicts(self):
         """Test prediction when conflicts exist"""
         # Set up conflicting state
         self.predictor._installed_pip_cache = {
 @@
         }
-
-        prediction = self.predictor.predict_conflicts("tensorflow")
+        with patch.object(self.predictor, "_refresh_caches", return_value=None):
+            prediction = self.predictor.predict_conflicts("tensorflow")
 @@
     def test_predict_multiple_packages(self):
         """Test predicting conflicts for multiple packages"""
         self.predictor._installed_pip_cache = {}
         self.predictor._installed_apt_cache = {}
-
-        predictions = self.predictor.predict_multiple(["flask", "django", "numpy"])
+        with patch.object(self.predictor, "_refresh_caches", return_value=None):
+            predictions = self.predictor.predict_multiple(["flask", "django", "numpy"])
```

🤖 Prompt for AI Agents
In `tests/test_conflict_predictor.py` around lines 229-282, tests are flaky
because DependencyConflictPredictor.predict_conflicts calls _refresh_caches,
which overwrites the injected _installed_pip_cache/_installed_apt_cache; update
the failing tests (test_predict_with_conflicts and test_predict_multiple_packages)
to prevent cache refresh by patching or stubbing
DependencyConflictPredictor._refresh_caches to a no-op before calling
predict_conflicts/predict_multiple (e.g., use
patch.object(DependencyConflictPredictor, "_refresh_caches", return_value=None)
or assign predictor._refresh_caches = lambda: None) so the injected cache state
remains intact for assertions.
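The patching pattern can be demonstrated in isolation with a stand-in class (the class and its behavior are illustrative, not the PR's actual predictor):

```python
from unittest.mock import patch

class FakePredictor:
    """Stand-in mimicking DependencyConflictPredictor's cache behavior."""

    def __init__(self):
        self._installed_pip_cache = {}

    def _refresh_caches(self):
        # In the real class this shells out to pip/apt and rebuilds the dict,
        # clobbering any state a test injected.
        self._installed_pip_cache = {"real": "state"}

    def predict_conflicts(self, package):
        self._refresh_caches()
        return sorted(self._installed_pip_cache)

p = FakePredictor()
p._installed_pip_cache = {"numpy": "2.1.0"}

# Without patching, _refresh_caches() would overwrite the injected cache.
# With patch.object, the injected state survives the call:
with patch.object(p, "_refresh_caches", return_value=None):
    result = p.predict_conflicts("tensorflow")

print(result)  # → ['numpy']
```

`patch.object` swaps the attribute for a `MagicMock` only for the duration of the `with` block, so other tests that rely on the real refresh behavior are unaffected.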
There was a problem hiding this comment.
Choose a reason for hiding this comment
The reason will be displayed to describe this comment to others. Learn more.
Code Review
This pull request introduces a significant new feature: AI-powered dependency conflict prediction. The changes are extensive, adding a new cortex/conflict_predictor.py module, updating the CLI in cortex/cli.py with new commands and flags, and adding corresponding tests. Overall, the implementation is well-structured. My main feedback focuses on improving the correctness and robustness of the dependency analysis, which is currently based on some simplifications that could lead to inaccurate predictions. I've also included suggestions to improve code maintainability by removing dead code and refactoring.
```python
    def _compare_versions(self, v1: str, v2: str) -> int:
        """
        Compare two version strings.
        Returns: -1 if v1 < v2, 0 if equal, 1 if v1 > v2
        """

        def normalize(v: str) -> list[int]:
            parts = []
            for part in re.split(r"[.\-+]", v):
                # Extract numeric prefix
                match = re.match(r"(\d+)", part)
                if match:
                    parts.append(int(match.group(1)))
            return parts

        v1_parts = normalize(v1)
        v2_parts = normalize(v2)

        # Pad shorter version with zeros
        max_len = max(len(v1_parts), len(v2_parts))
        v1_parts.extend([0] * (max_len - len(v1_parts)))
        v2_parts.extend([0] * (max_len - len(v2_parts)))

        for p1, p2 in zip(v1_parts, v2_parts):
            if p1 < p2:
                return -1
            if p1 > p2:
                return 1
        return 0
```
The custom version comparison logic in `_compare_versions` does not fully adhere to PEP 440 standards, particularly for pre-release versions (e.g., `1.0.0-alpha`, `2.0.0rc1`). This can lead to incorrect conflict detection where pre-releases are not ordered correctly against final releases.

For robust and accurate version comparison, it's highly recommended to use the `packaging` library, which is the standard for this in the Python ecosystem. This will ensure that all version comparisons are PEP 440 compliant.

You'll need to add `packaging` as a project dependency.
Suggested change:

```python
    def _compare_versions(self, v1: str, v2: str) -> int:
        """
        Compare two version strings using PEP 440 semantics.
        Returns: -1 if v1 < v2, 0 if equal, 1 if v1 > v2
        """
        from packaging.version import parse as parse_version

        v1_parsed = parse_version(v1)
        v2_parsed = parse_version(v2)
        if v1_parsed < v2_parsed:
            return -1
        if v1_parsed > v2_parsed:
            return 1
        return 0
```
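The pre-release problem is easy to reproduce with the PR's own `normalize` helper (copied from `_compare_versions` above): an `rc` or `alpha` suffix is stripped entirely, so the comparator cannot distinguish a release candidate from the final release.

```python
import re

def normalize(v):
    # Copied from the PR's _compare_versions helper: keeps only the
    # numeric prefix of each dot/dash/plus-separated part.
    parts = []
    for part in re.split(r"[.\-+]", v):
        match = re.match(r"(\d+)", part)
        if match:
            parts.append(int(match.group(1)))
    return parts

# The release candidate and the final release normalize identically,
# so the comparator reports them as equal:
print(normalize("2.0.0rc1"))    # → [2, 0, 0]
print(normalize("2.0.0"))       # → [2, 0, 0]
# "-alpha" carries no numeric prefix, so it vanishes entirely:
print(normalize("1.0.0-alpha")) # → [1, 0, 0]
```

Under PEP 440, `2.0.0rc1` must sort strictly before `2.0.0`, which is exactly what `packaging.version.parse` provides.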
```python
    def _check_version_satisfies(self, installed_version: str, constraint: VersionConstraint) -> bool:
        """Check if installed version satisfies a constraint"""
        cmp = self._compare_versions(installed_version, constraint.version)

        if constraint.operator == "==":
            return cmp == 0
        elif constraint.operator == "!=":
            return cmp != 0
        elif constraint.operator == "<":
            return cmp < 0
        elif constraint.operator == "<=":
            return cmp <= 0
        elif constraint.operator == ">":
            return cmp > 0
        elif constraint.operator == ">=":
            return cmp >= 0
        elif constraint.operator == "~=":
            # Compatible release (e.g., ~=1.4 means >=1.4, <2.0)
            return cmp >= 0  # Simplified
        return True
```
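The `# Simplified` note on the `~=` branch hides a real gap: a compatible-release clause also imposes an upper bound, which the code drops. A stdlib-only sketch of the full rule, using hypothetical names and plain integer-list versions:

```python
def compatible_release(installed: list[int], floor: list[int]) -> bool:
    """PEP 440 ~= semantics on normalized version parts.

    ~=1.4 means >=1.4 AND <2.0: only the last component of the
    constraint may grow; everything before it must match exactly.
    """
    if installed < floor:
        return False
    # All components before the last constraint component must be equal.
    return installed[: len(floor) - 1] == floor[:-1]

print(compatible_release([1, 5], [1, 4]))     # → True  (1.5 satisfies ~=1.4)
print(compatible_release([2, 0], [1, 4]))     # → False (2.0 violates <2.0)
print(compatible_release([1, 4, 9], [1, 4]))  # → True
```

The PR's `cmp >= 0` shortcut would incorrectly accept the `2.0` case, so a conflict with an upper-bounded dependency would go undetected.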
The `_check_version_satisfies` method appears to be unused within the application code. Maintaining and testing dead code increases the maintenance burden and can lead to confusion. If it's not needed, it should be removed. The corresponding tests in `tests/test_conflict_predictor.py` should also be removed.
```python
        prompt = f"""Analyze these potential dependency conflicts for installing {package_name}:
{chr(10).join(conflict_descriptions)}
Installed packages context:
- Pip packages: {len(self._installed_pip_cache)}
- Apt packages: {len(self._installed_apt_cache)}
Provide:
1. Risk assessment (low/medium/high/critical)
2. Most likely cause of conflicts
3. Best resolution approach
4. Any additional conflicts I might have missed
Be concise and actionable."""
```
Using `chr(10)` to represent a newline character is less common and less readable than using the `\n` string literal. For better code clarity and maintainability, it's recommended to use `\n`.
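As a quick illustration (the variable values here are made up): `chr(10)` and `"\n"` are the same character, and on Python versions before 3.12, where f-string expressions cannot contain backslashes, the join can simply happen outside the f-string.

```python
# chr(10) is just the ASCII linefeed; "\n" says the same thing more plainly.
assert chr(10) == "\n"

conflict_descriptions = ["numpy conflict", "scipy conflict"]

# Join outside the f-string for pre-3.12 compatibility, then interpolate.
joined = "\n".join(conflict_descriptions)
prompt = f"Conflicts:\n{joined}"
print(prompt)
```

This keeps the prompt template readable without resorting to `chr(10)` inside the expression.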
```python
        prompt = f"""Analyze these potential dependency conflicts for installing {package_name}:
{"\n".join(conflict_descriptions)}
Installed packages context:
- Pip packages: {len(self._installed_pip_cache)}
- Apt packages: {len(self._installed_apt_cache)}
Provide:
1. Risk assessment (low/medium/high/critical)
2. Most likely cause of conflicts
3. Best resolution approach
4. Any additional conflicts I might have missed
Be concise and actionable."""
```

```python
        for prediction in all_conflicts:
            # Format the prediction nicely
            risk_emoji = {
                ConflictSeverity.LOW: "[yellow]",
                ConflictSeverity.MEDIUM: "[orange1]",
                ConflictSeverity.HIGH: "[red]",
                ConflictSeverity.CRITICAL: "[bold red]",
            }
            risk_color = risk_emoji.get(prediction.overall_risk, "")

            console.print(
                f"{risk_color}Conflict predicted[/]: {prediction.package_name}"
            )
            console.print(
                f"  Risk Level: {risk_color}{prediction.overall_risk.value.upper()}[/]"
            )
            console.print(
                f"  Confidence: {prediction.prediction_confidence:.0%}"
            )
            console.print()

            # Show each conflict
            for i, conflict in enumerate(prediction.conflicts, 1):
                console.print(f"  {i}. {conflict.description}")
                console.print(
                    f"     {conflict.conflicting_package} "
                    f"{conflict.conflicting_version} (installed)"
                )
                console.print()

            # Show top suggestions
            if prediction.resolutions:
                console.print("  [bold cyan]Suggestions (ranked by safety):[/bold cyan]")
                for i, res in enumerate(prediction.resolutions[:3], 1):
                    rec_tag = " [RECOMMENDED]" if res.recommended else ""
                    console.print(f"  {i}. {res.description}{rec_tag}")
                    if res.command:
                        console.print(f"     [dim]$ {res.command}[/dim]")
                console.print()
```
The code for displaying conflict predictions here is very similar to the code in the predict method (lines 810-874). This duplication can make future changes more difficult, as they would need to be applied in two places.
Consider refactoring this display logic into a separate helper function that takes a prediction object and handles the formatting and printing. This would improve maintainability and reduce code duplication.
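One shape such a helper could take, sketched with plain strings so it stays testable in isolation (the function name and the choice to return lines rather than print are assumptions; the real code would keep rich markup and `console.print`):

```python
def format_prediction(package_name, risk, confidence, resolutions):
    """Build the display lines for one conflict prediction.

    `resolutions` is a list of (description, recommended, command) tuples;
    both CLI paths could call this instead of duplicating the formatting.
    """
    lines = [
        f"Conflict predicted: {package_name}",
        f"  Risk Level: {risk.upper()}",
        f"  Confidence: {confidence:.0%}",
    ]
    for i, (description, recommended, command) in enumerate(resolutions[:3], 1):
        tag = " [RECOMMENDED]" if recommended else ""
        lines.append(f"  {i}. {description}{tag}")
        if command:
            lines.append(f"     $ {command}")
    return lines

out = format_prediction(
    "tensorflow", "high", 0.85,
    [("Pin numpy below 2.0", True, "pip install 'numpy<2'")],
)
print("\n".join(out))
```

Returning lines instead of printing also makes the formatting unit-testable without capturing console output.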
```python
class TestVersionConstraintSatisfaction(unittest.TestCase):
    """Test version constraint satisfaction checking"""

    def setUp(self):
        self.predictor = DependencyConflictPredictor()

    def test_equal_constraint_satisfied(self):
        """Test == constraint satisfaction"""
        constraint = VersionConstraint(operator="==", version="1.0.0", original="==1.0.0")
        self.assertTrue(self.predictor._check_version_satisfies("1.0.0", constraint))
        self.assertFalse(self.predictor._check_version_satisfies("1.0.1", constraint))

    def test_greater_equal_constraint_satisfied(self):
        """Test >= constraint satisfaction"""
        constraint = VersionConstraint(operator=">=", version="2.0.0", original=">=2.0.0")
        self.assertTrue(self.predictor._check_version_satisfies("2.0.0", constraint))
        self.assertTrue(self.predictor._check_version_satisfies("3.0.0", constraint))
        self.assertFalse(self.predictor._check_version_satisfies("1.9.9", constraint))

    def test_less_than_constraint_satisfied(self):
        """Test < constraint satisfaction"""
        constraint = VersionConstraint(operator="<", version="2.0.0", original="<2.0.0")
        self.assertTrue(self.predictor._check_version_satisfies("1.9.9", constraint))
        self.assertFalse(self.predictor._check_version_satisfies("2.0.0", constraint))
        self.assertFalse(self.predictor._check_version_satisfies("2.0.1", constraint))

    def test_not_equal_constraint_satisfied(self):
        """Test != constraint satisfaction"""
        constraint = VersionConstraint(operator="!=", version="1.5.0", original="!=1.5.0")
        self.assertTrue(self.predictor._check_version_satisfies("1.4.0", constraint))
        self.assertTrue(self.predictor._check_version_satisfies("1.6.0", constraint))
        self.assertFalse(self.predictor._check_version_satisfies("1.5.0", constraint))
```
This test class, `TestVersionConstraintSatisfaction`, is testing the `_check_version_satisfies` method. However, this method appears to be dead code, as it's not called from anywhere in the application logic. Tests for dead code should be removed along with the unused method to keep the codebase clean and focused.
@jeffmarcilliat Someone is already assigned to this issue; feel free to choose any other issue, thanks.


Summary
This PR implements the solution for #428.
Changes
Files Modified:

- `cortex/llm_router.py`: `CONFLICT_PREDICTION` added to the `TaskType` enum
- `cortex/cli.py`: `--predict` and `--no-predict` flags added to the install command; `_run_conflict_prediction()` method added to the `CortexCLI` class; `predict()` method for the standalone `cortex predict` command; `predict` subcommand parser for the `predict` command
- `tests/test_cli.py`: tests for the `--predict` flag, the `--no-predict` flag, and the `predict` command

Feature Implementation:
The feature meets all acceptance criteria from the bounty:
Dependency graph analysis before install - The system analyzes installed packages from dpkg/pip before installation starts
Conflict prediction with confidence scores - Each predicted conflict includes a confidence score (0.0-1.0)
Resolution suggestions ranked by safety - Suggestions are generated and sorted by safety score, with recommended options clearly marked
Integration with apt/dpkg dependency data - Parses `/var/lib/dpkg/status` and uses `apt-cache depends` for APT packages
Works with pip packages too - Full support for Python pip packages with known conflict patterns for popular packages like TensorFlow, PyTorch, Pandas, etc.
CLI output shows prediction and suggestions - Rich CLI output with color-coded severity levels and formatted suggestions
Example Usage:
Files Modified

- `cortex/conflict_predictor.py`
- `cortex/llm_router.py`
- `tests/test_cli.py`
- `tests/test_conflict_predictor.py`
- `cortex/cli.py`

This PR was generated by Freelance Agent using Claude Code
Summary by CodeRabbit
New Features
- `predict` command detects package conflicts with JSON and verbose output options.
- `install` command with `--predict` and `--no-predict` flags to control conflict checking behavior.

Tests
✏️ Tip: You can customize this high-level summary in your review settings.