[tutor] Add AI-Powered Installation Tutor #49
Conversation
Continuation of cortexlinux/cortex#566

Features:
- Interactive tutoring using TutorAgent orchestration
- LLM-powered lessons, code examples, and Q&A (Claude API)
- SQLite-based progress tracking and student profiling
- CLI integration via `cortex tutor <package>`
- 167 tests with comprehensive coverage

Closes cortexlinux#30
📝 Walkthrough

This PR introduces a comprehensive AI-powered installation tutor for Cortex Linux, featuring LLM-driven lessons via Claude AI, interactive CLI-based learning, lesson caching with fallback templates, SQLite-backed progress tracking, input validation, and rich terminal UI components with extensive test coverage.
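To make the caching design concrete, here is a minimal sketch of how a TTL-based SQLite lesson cache like the one described could work. The `lesson_cache` table and its columns are illustrative assumptions, not the PR's actual schema.

```python
import json
import sqlite3
import time

# Assumes a table: lesson_cache(package TEXT PRIMARY KEY, lesson_json TEXT, expires_at REAL)

def get_cached_lesson(conn: sqlite3.Connection, package: str) -> dict | None:
    """Return the cached lesson for a package, or None on a miss or expired entry."""
    row = conn.execute(
        "SELECT lesson_json, expires_at FROM lesson_cache WHERE package = ?",
        (package,),
    ).fetchone()
    if row is None or row[1] < time.time():
        return None  # cache miss or TTL elapsed
    return json.loads(row[0])

def cache_lesson(conn: sqlite3.Connection, package: str, lesson: dict, ttl_hours: int) -> None:
    """Store a lesson with an absolute expiry timestamp."""
    conn.execute(
        "INSERT OR REPLACE INTO lesson_cache (package, lesson_json, expires_at) VALUES (?, ?, ?)",
        (package, json.dumps(lesson), time.time() + ttl_hours * 3600),
    )
    conn.commit()
```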
Sequence Diagram(s)

```mermaid
sequenceDiagram
actor User
participant CLI as CLI Handler
participant TutorAgent as TutorAgent
participant LessonLoader as LessonLoaderTool
participant Cache as SQLiteStore<br/>(Cache)
participant LLM as LLM<br/>(Claude)
participant Progress as ProgressTrackerTool
participant Storage as SQLiteStore<br/>(Progress)
User->>CLI: cortex tutor docker
CLI->>TutorAgent: teach("docker")
TutorAgent->>LessonLoader: load lesson
LessonLoader->>Cache: check cached lesson
alt Lesson in cache
Cache-->>LessonLoader: return cached lesson
LessonLoader-->>TutorAgent: return cached_source
else Cache miss
LessonLoader->>LLM: generate_lesson("docker")
LLM-->>LessonLoader: LessonResponse + cost
LessonLoader->>Cache: cache new lesson
LessonLoader-->>TutorAgent: return fresh_source
end
TutorAgent->>Progress: get_profile()
Progress->>Storage: fetch student profile
Storage-->>Progress: profile data
Progress-->>TutorAgent: profile with learning_style
TutorAgent-->>CLI: lesson + metadata
CLI->>User: display lesson & menu
User->>CLI: select option (e.g., ask question)
CLI->>TutorAgent: ask("docker", "What is a container?")
TutorAgent->>LLM: answer_question(...)
LLM-->>TutorAgent: QAResponse
TutorAgent-->>CLI: answer + code_example
CLI->>User: display answer
User->>CLI: mark complete
CLI->>Progress: mark_completed("docker", "containers")
Progress->>Storage: upsert progress
Storage-->>Progress: ✓ updated
```
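As a rough illustration of the flow above, a session might look like this (the output and menu wording are hypothetical, not taken from the implementation):

```bash
# Start an interactive lesson; the first run generates and caches it
cortex tutor docker

# Subsequent runs for the same package are served from the SQLite cache,
# and the CLI menu then offers the tutorial, examples, best practices, and Q&A.
```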
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~60 minutes
🚥 Pre-merge checks: ✅ 5 of 5 checks passed
Summary of Changes

Hello @srikrishnavansi, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed. This pull request integrates an intelligent, interactive AI tutor into Cortex Linux, designed to give users comprehensive education on software packages and best practices. The system dynamically generates lessons and answers questions using the Claude API, while also tracking user progress and preferences. This feature aims to make learning about Linux packages more engaging and accessible directly from the command line.
Code Review
This pull request introduces a new AI-Powered Installation Tutor for Cortex Linux, designed to teach users about packages and best practices through interactive, LLM-powered sessions. The changes include a comprehensive AI_TUTOR.md documentation file detailing the tutor's features, architecture, usage, and API. The core implementation comprises the TutorAgent and InteractiveTutor classes in agent.py, which orchestrate lessons and Q&A; branding.py for the Rich-based terminal UI; cli.py for command-line argument parsing and dispatch; config.py for managing settings and API keys; contracts.py, which defines the Pydantic data models for lessons and progress; llm.py for Anthropic API interactions with structured outputs; sqlite_store.py for SQLite-based persistence of learning progress and cache; and tools.py for deterministic operations such as caching and progress tracking.

The review comments highlight several areas for improvement:
- correcting a typo from cost_gbp to cost_usd in contracts.py and the documentation,
- updating an incorrect default LLM model name (claude-sonnet-4-20250514) to a valid one (claude-3-sonnet-20240229) in config.py and the documentation,
- moving local imports of branding functions to the top of agent.py for better readability,
- removing an unused import (Text) from branding.py, and
- simplifying the logic for retrieving concept arguments in the _add_mastered_concept and _add_weak_concept methods to prevent a potential KeyError and improve clarity.
```python
        if step.get("code"):
            console.print(f"\n[cyan]Code:[/cyan] {step['code']}")
```
Using dictionary-style access step['code'] could raise a KeyError if the 'code' key is not present in the step dictionary, as the TutorialStep contract defines code as optional. To prevent a potential crash, it's safer to use step.get('code'), which is used elsewhere in this file and will gracefully return None if the key is missing.
```diff
-            console.print(f"\n[cyan]Code:[/cyan] {step['code']}")
+            console.print(f"\n[cyan]Code:[/cyan] {step.get('code')}")
```
```python
# Default settings (named constants with clear purpose)
DEFAULT_TUTOR_TOPICS_COUNT = 5  # Default topic count when actual count unavailable
DEFAULT_MODEL_NAME = "claude-sonnet-4-20250514"
```
The default model name claude-sonnet-4-20250514 appears to be incorrect and will likely cause runtime errors when making API calls. Please update it to a valid and current model name, for example, claude-3-sonnet-20240229.
```diff
-DEFAULT_MODEL_NAME = "claude-sonnet-4-20250514"
+DEFAULT_MODEL_NAME = "claude-3-sonnet-20240229"
```
```python
        le=1.0,
    )
    cached: bool = Field(default=False, description="Whether result came from cache")
    cost_gbp: float = Field(default=0.0, description="Cost for LLM calls", ge=0.0)
```
The field cost_gbp seems to be a typo and is inconsistent with the rest of the codebase, which uses cost_usd (for example, in llm.py and agent.py). To ensure consistency and avoid confusion, this should be renamed to cost_usd.
```diff
-    cost_gbp: float = Field(default=0.0, description="Cost for LLM calls", ge=0.0)
+    cost_usd: float = Field(default=0.0, description="Cost for LLM calls in USD", ge=0.0)
```
| # "best_practices": [...] | ||
| # }, | ||
| # "cache_hit": True, | ||
| # "cost_usd": 0.0 |
The cost_usd in this example is inconsistent with cost_gbp defined in the LessonContext data contract (packages/cortex-tutor/contracts.py). The rest of the codebase appears to calculate and use USD. To ensure consistency, this documentation should align with the implementation. I've suggested a fix in contracts.py to use cost_usd.
```bash
# Optional: LLM model name (default: claude-sonnet-4-20250514)
export TUTOR_MODEL_NAME=claude-sonnet-4-20250514

# Optional: Cache TTL in hours (default: 24)
export TUTOR_CACHE_TTL_HOURS=24
```
The default model name claude-sonnet-4-20250514 mentioned here and in the config.yaml example (line 621) appears to be incorrect, as '4' and the future date are likely typos. This could cause confusion for users copying the configuration. It should be updated to a valid Anthropic model name, such as claude-3-sonnet-20240229.
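With the reviewer's suggestion applied, the snippet would read:

```bash
# Optional: LLM model name (reviewer-suggested value)
export TUTOR_MODEL_NAME=claude-3-sonnet-20240229

# Optional: Cache TTL in hours (default: 24)
export TUTOR_CACHE_TTL_HOURS=24
```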
```python
from cortex.tutor.branding import (
    get_user_input,
    print_best_practice,
    print_code_example,
    print_lesson_header,
    print_markdown,
    print_menu,
    print_progress_summary,
    print_tutorial_step,
)
```
These branding functions are imported locally within start() and several other methods in this class. Since there doesn't appear to be a circular dependency between agent.py and branding.py, these imports can be moved to the top of the file. This would improve readability and avoid repeated import overhead.
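A sketch of the suggested change, with the same imports hoisted to module scope in agent.py (assuming, as the review notes, there is no circular dependency with branding.py):

```python
# Top of packages/cortex-tutor/agent.py: import once at module level instead
# of re-importing inside start() and the other methods.
from cortex.tutor.branding import (
    get_user_input,
    print_best_practice,
    print_code_example,
    print_lesson_header,
    print_markdown,
    print_menu,
    print_progress_summary,
    print_tutorial_step,
)
```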
```python
from rich.progress import BarColumn, Progress, SpinnerColumn, TaskProgressColumn, TextColumn
from rich.syntax import Syntax
from rich.table import Table
from rich.text import Text
```
The `Text` import appears to be unused in branding.py and can be removed.
```python
def _add_mastered_concept(
    self,
    concept: str | None = None,
    **kwargs: Any,
) -> dict[str, Any]:
    """Add a mastered concept to student profile."""
    concept = kwargs.get("concept") or concept
    if not concept:
        return {"success": False, "error": "concept required"}
    self.store.add_mastered_concept(concept)
    return {"success": True, "message": f"Added mastered concept: {concept}"}
```
The logic to retrieve the concept value is a bit confusing as it seems to try getting it from both the named argument and kwargs. This can be simplified for better readability and maintainability. Consider refactoring to only get the value from kwargs.
```python
def _add_mastered_concept(self, **kwargs: Any) -> dict[str, Any]:
    """Add a mastered concept to student profile."""
    concept = kwargs.get("concept")
    if not concept:
        return {"success": False, "error": "concept required"}
    self.store.add_mastered_concept(concept)
    return {"success": True, "message": f"Added mastered concept: {concept}"}
```

```python
def _add_weak_concept(
    self,
    concept: str | None = None,
    **kwargs: Any,
) -> dict[str, Any]:
    """Add a weak concept to student profile."""
    concept = kwargs.get("concept") or concept
    if not concept:
        return {"success": False, "error": "concept required"}
    self.store.add_weak_concept(concept)
    return {"success": True, "message": f"Added weak concept: {concept}"}
```
Similar to _add_mastered_concept, the logic here for retrieving the concept is a bit confusing. It can be simplified for better readability and to follow a more standard pattern of handling keyword arguments.
```python
def _add_weak_concept(self, **kwargs: Any) -> dict[str, Any]:
    """Add a weak concept to student profile."""
    concept = kwargs.get("concept")
    if not concept:
        return {"success": False, "error": "concept required"}
    self.store.add_weak_concept(concept)
    return {"success": True, "message": f"Added weak concept: {concept}"}
```
Hey @Anshgrover23, I opened the new PR here as requested. Let me know the next steps. Once CodeRabbit completes its review, I will make the changes accordingly.
@srikrishnavansi As I already mentioned, we are still setting up this repository while the Distro work is in progress, so we will update you if there are any issues for you. Thanks, closing this one.
Actionable comments posted: 10
🤖 Fix all issues with AI agents
In `@docs/AI_TUTOR.md`:
- Line 91: Several fenced code blocks in AI_TUTOR.md are missing language
identifiers (markdownlint MD040); update the empty triple-backtick fences at the
flagged locations (lines ~91, 249, 309, 348, 390, 432) to include an appropriate
language tag (use "text" for plain output or "bash" for shell snippets) so each
fence becomes ```text or ```bash as appropriate; ensure you apply the same
change to every matching empty fence in the file.
In `@packages/cortex-tutor/agent.py`:
- Around line 308-335: The tutorial is currently marked complete unconditionally
at the end of _run_tutorial; change this so
agent.mark_completed(self.package_name, "tutorial", 0.9) is only called when the
user finishes all steps (i.e., does not quit early). Implement this by tracking
completion (e.g., a flag like tutorial_completed = True, set to False if
response.lower() == "q" and break) or by using the for/else pattern: put
agent.mark_completed(...) in the for-else else block so it only runs when the
loop wasn't exited via break. Ensure you reference _run_tutorial,
get_user_input/response, and agent.mark_completed in the change.
- Around line 354-380: The except block in _ask_question duplicates the "Invalid
question" prefix when ValueError already contains that text; update the handler
for the exception raised by self.agent.ask to print only the exception message
(e.g., tutor_print(str(e), "error")) instead of tutor_print(f"Invalid question:
{e}", "error") so you don't repeat the prefix. Ensure the change is applied
inside the _ask_question method where the ValueError is caught.
In `@packages/cortex-tutor/config.py`:
- Around line 103-112: The current ensure_data_dir uses a race-prone exists()
check and doesn't guard against the path being an existing file; replace the
exists()-then-mkdir pattern in ensure_data_dir with an atomic attempt to create
the directory using self.data_dir.mkdir(parents=True, mode=0o700, exist_ok=True)
inside a try/except that catches FileExistsError and raises a clear error if the
path exists but is not a directory, and after creation (or if it already existed
as a directory) ensure the permissions are set explicitly
(os.chmod(self.data_dir, 0o700)) so misconfigured files or TOCTOU races are
handled safely.
In `@packages/cortex-tutor/contracts.py`:
- Around line 17-131: The model field cost_gbp is misnamed for USD values;
update LessonContext to consistently use USD by renaming cost_gbp to cost_usd
(update the Field name and description, keep ge=0.0) and adjust any code that
sets or reads it (e.g., places in llm.py that assign response cost and any
serialization/deserialization using model_dump_json/model_validate_json and
to_display_dict consumers); alternatively, if you prefer to keep the field name,
convert incoming cost_usd values to GBP before assignment—ensure the unique
symbol LessonContext.cost_gbp (or the new LessonContext.cost_usd), the
to_json/from_json methods, and any call sites in llm.py are updated to match.
In `@packages/cortex-tutor/tools.py`:
- Around line 32-108: Normalize package_name at the start of each public cache
method to avoid case-sensitive cache misses: in _run, cache_lesson, and
clear_cache, set a local normalized_name = package_name.lower() (or use a chosen
canonicalizer) and use normalized_name for calls to
self.store.get_cached_lesson, self.store.cache_lesson, and for clearing logic
instead of the original package_name; ensure logs and returned reasons still
reference the original package_name if desired, but all store interactions must
use the normalized value so entries and lookups are consistent.
In `@packages/cortex-tutor/validators.py`:
- Around line 165-199: The validate_score function currently treats booleans as
valid because bool is a subclass of int; update validate_score to explicitly
reject booleans before the numeric isinstance check (e.g., check
isinstance(score, bool) and return False with an appropriate message), so
True/False no longer pass as 1/0; keep the rest of the numeric range checks
(score < 0.0 or score > 1.0) unchanged and return the same tuple structure.
In `@tests/tutor/test_integration.py`:
- Around line 172-178: The tests test_tutor_print_success and
test_tutor_print_error declare the unused capsys fixture causing Ruff ARG002;
rename the fixture parameter from capsys to _capsys in both functions (or add a
noqa comment) so the linter recognizes it as intentionally unused—update the
function signatures for test_tutor_print_success and test_tutor_print_error
accordingly.
In `@tests/tutor/test_interactive_tutor.py`:
- Around line 38-40: The test function test_start_loads_lesson (and the other
test functions referenced) declare several injected pytest fixtures that are
unused and trigger Ruff ARG002; rename the unused parameters by prefixing them
with an underscore (e.g., mock_console -> _mock_console, mock_tutor_print ->
_mock_tutor_print, mock_header -> _mock_header, mock_menu -> _mock_menu,
mock_input -> _mock_input, mock_agent_class -> _mock_agent_class) in the
function signature for test_start_loads_lesson and apply the same
underscore-prefix change to the other affected test functions mentioned so
linting no longer reports ARG002 (alternatively add a per-parameter noqa if you
prefer).
In `@tests/tutor/test_llm.py`:
- Around line 8-52: Reset the global config and llm._client around the failing
test to avoid cross-test state bleed: call reset_config() and set llm._client =
None before invoking llm.get_client() in test_get_client_raises_without_api_key
(and restore/reset after the assertion); ensure you import reset_config from
cortex.tutor.config and reference llm._client and get_client so the test is
isolated regardless of test ordering.
🧹 Nitpick comments (3)
packages/cortex-tutor/llm.py (1)

51-112: Configure an explicit timeout for Anthropic API calls. The Anthropic SDK defaults to a 10-minute timeout with 2 automatic retries, which may cause CLI requests to hang when users expect faster feedback. Set an explicit timeout (e.g., 20–30 seconds) and configure max_retries in get_client() via Anthropic(timeout=..., max_retries=...). For long-running requests, consider streaming instead.

packages/cortex-tutor/sqlite_store.py (1)

16-49: Use Field(default_factory=list) for list defaults in StudentProfile. Mutable list defaults should use default_factory so each instance gets a fresh list, following Pydantic best practices.

🛠️ Proposed fix

```diff
-from pydantic import BaseModel
+from pydantic import BaseModel, Field
@@
-    mastered_concepts: list[str] = []
-    weak_concepts: list[str] = []
+    mastered_concepts: list[str] = Field(default_factory=list)
+    weak_concepts: list[str] = Field(default_factory=list)
```

packages/cortex-tutor/agent.py (1)

171-177: Reuse the validator for learning style to avoid drift.

♻️ Proposed refactor

```diff
-from cortex.tutor.validators import validate_package_name, validate_question
+from cortex.tutor.validators import (
+    validate_learning_style,
+    validate_package_name,
+    validate_question,
+)
@@
-        valid_styles = {"visual", "reading", "hands-on"}
-        if style not in valid_styles:
-            return False
+        is_valid, _ = validate_learning_style(style)
+        if not is_valid:
+            return False
```
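A minimal sketch of the timeout suggestion; the 25-second timeout is an illustrative value, not a project setting, and the real get_client() would also handle API-key lookup and the singleton:

```python
import anthropic

def get_client() -> anthropic.Anthropic:
    """Create an Anthropic client that fails fast instead of hanging the CLI."""
    return anthropic.Anthropic(
        timeout=25.0,   # seconds; overrides the SDK's 10-minute default
        max_retries=2,  # the SDK default, made explicit
    )
```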
    ### Output

    ```
Add language identifiers to fenced code blocks.
markdownlint MD040 flags these fences without a language. Using text (or bash where appropriate) will keep lint clean and improve rendering.
🛠️ Suggested fix (apply to each flagged fence)
    -```
    +```text

Also applies to: 249-249, 309-309, 348-348, 390-390, 432-432
🧰 Tools
🪛 markdownlint-cli2 (0.18.1)
91-91: Fenced code blocks should have a language specified
(MD040, fenced-code-language)
```python
def _run_tutorial(self) -> None:
    """Run step-by-step tutorial."""
    from cortex.tutor.branding import get_user_input, print_tutorial_step

    if not self.lesson:
        return

    steps = self.lesson.get("tutorial_steps", [])
    if not steps:
        tutor_print("No tutorial available", "info")
        return

    for step in steps:
        print_tutorial_step(
            step.get("content", ""),
            step.get("step_number", 1),
            len(steps),
        )

        if step.get("code"):
            console.print(f"\n[cyan]Code:[/cyan] {step['code']}")

        response = get_user_input("Press Enter to continue (or 'q' to quit)")
        if response.lower() == "q":
            break

    self.agent.mark_completed(self.package_name, "tutorial", 0.9)
```
Don’t mark the tutorial complete if the user quits early.
Currently, progress is marked complete even when the user exits mid-tutorial, which skews completion stats.
✅ Proposed fix
```diff
-        for step in steps:
+        completed_all = True
+        for step in steps:
             print_tutorial_step(
                 step.get("content", ""),
                 step.get("step_number", 1),
                 len(steps),
             )
@@
             response = get_user_input("Press Enter to continue (or 'q' to quit)")
             if response.lower() == "q":
+                completed_all = False
                 break

-        self.agent.mark_completed(self.package_name, "tutorial", 0.9)
+        if completed_all:
+            self.agent.mark_completed(self.package_name, "tutorial", 0.9)
```
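The AI-agent prompt also mentions Python's for/else as an equivalent alternative; a sketch using the names from the snippet above:

```python
# for/else: the else clause runs only when the loop finishes without a break,
# i.e., when the user stepped through every tutorial step.
for step in steps:
    print_tutorial_step(step.get("content", ""), step.get("step_number", 1), len(steps))
    response = get_user_input("Press Enter to continue (or 'q' to quit)")
    if response.lower() == "q":
        break  # quitting early skips the else clause
else:
    self.agent.mark_completed(self.package_name, "tutorial", 0.9)
```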
```python
def _ask_question(self) -> None:
    """Handle Q&A."""
    from cortex.tutor.branding import get_user_input, print_markdown

    question = get_user_input("Your question")
    if not question:
        return

    tutor_print("Thinking...", "info")
    try:
        result = self.agent.ask(self.package_name, question)
    except ValueError as e:
        tutor_print(f"Invalid question: {e}", "error")
        return

    if result.get("validation_passed"):
        content = result.get("content", {})
        answer = content.get("answer", "I couldn't find an answer.")
        print_markdown(f"\n**Answer:** {answer}")

        if content.get("code_example"):
            from cortex.tutor.branding import print_code_example

            ex = content["code_example"]
            print_code_example(ex.get("code", ""), ex.get("language", "bash"))
    else:
        tutor_print("Sorry, I couldn't answer that question.", "error")
```
Avoid duplicating the “Invalid question” prefix.
✂️ Proposed fix
```diff
-    except ValueError as e:
-        tutor_print(f"Invalid question: {e}", "error")
+    except ValueError as e:
+        tutor_print(str(e), "error")
```
```python
def ensure_data_dir(self) -> None:
    """
    Ensure the data directory exists with proper permissions.

    Creates the directory if it doesn't exist with 0o700 permissions
    for security (owner read/write/execute only).
    """
    if not self.data_dir.exists():
        self.data_dir.mkdir(parents=True, mode=0o700)
```
Guard against TOCTOU and non-directory data_dir.
Concurrent runs can race the exists() check, and a misconfigured path that is a file will currently pass silently until later failures.
🛡️ Proposed fix
```diff
-        if not self.data_dir.exists():
-            self.data_dir.mkdir(parents=True, mode=0o700)
+        self.data_dir.mkdir(parents=True, mode=0o700, exist_ok=True)
+        if not self.data_dir.is_dir():
+            raise ValueError(f"Data directory path is not a directory: {self.data_dir}")
```
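The AI-agent prompt sketches a slightly fuller variant with an explicit chmod; a possible shape (a sketch, not the committed code):

```python
import os

def ensure_data_dir(self) -> None:
    """Ensure the data directory exists as a directory with 0o700 permissions."""
    try:
        # Atomic create: no exists() pre-check, so concurrent runs cannot race it.
        self.data_dir.mkdir(parents=True, mode=0o700, exist_ok=True)
    except FileExistsError as exc:
        # exist_ok=True still raises when the path exists but is not a directory.
        raise ValueError(f"Data directory path is not a directory: {self.data_dir}") from exc
    # Enforce permissions even if the directory already existed with a looser mode.
    os.chmod(self.data_dir, 0o700)
```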
```python
class CodeExample(BaseModel):
    """A code example with description."""

    title: str = Field(..., description="Title of the code example")
    code: str = Field(..., description="The actual code snippet")
    language: str = Field(
        default="bash", description="Programming language for syntax highlighting"
    )
    description: str = Field(..., description="Explanation of what the code does")


class TutorialStep(BaseModel):
    """A step in a tutorial sequence."""

    step_number: int = Field(..., ge=1, description="Step number in sequence")
    title: str = Field(..., description="Brief title for this step")
    content: str = Field(..., description="Detailed instruction for this step")
    code: str | None = Field(default=None, description="Optional code for this step")
    expected_output: str | None = Field(
        default=None, description="Expected output if code is executed"
    )


class LessonContext(BaseModel):
    """
    Output contract for lesson generation.

    Contains all the content generated for a package lesson including
    explanations, best practices, code examples, and tutorials.
    """

    # Core content
    package_name: str = Field(..., description="Name of the package being taught")
    summary: str = Field(
        ...,
        description="Brief 1-2 sentence summary of what the package does",
        max_length=500,
    )
    explanation: str = Field(
        ...,
        description="Detailed explanation of the package functionality",
        max_length=5000,
    )
    use_cases: list[str] = Field(
        default_factory=list,
        description="Common use cases for this package",
    )
    best_practices: list[str] = Field(
        default_factory=list,
        description="Best practices when using this package",
    )
    code_examples: list[CodeExample] = Field(
        default_factory=list,
        description="Code examples demonstrating package usage",
    )
    tutorial_steps: list[TutorialStep] = Field(
        default_factory=list,
        description="Step-by-step tutorial for hands-on learning",
    )

    # Package metadata
    installation_command: str = Field(
        ..., description="Command to install the package (apt, pip, etc.)"
    )
    official_docs_url: str | None = Field(default=None, description="URL to official documentation")
    related_packages: list[str] = Field(
        default_factory=list,
        description="Related packages the user might want to learn",
    )

    # Metadata
    confidence: float = Field(
        ...,
        description="Confidence score (0-1) based on knowledge quality",
        ge=0.0,
        le=1.0,
    )
    cached: bool = Field(default=False, description="Whether result came from cache")
    cost_gbp: float = Field(default=0.0, description="Cost for LLM calls", ge=0.0)
    generated_at: datetime = Field(
        default_factory=lambda: datetime.now(timezone.utc),
        description="Timestamp of generation (UTC)",
    )

    def to_json(self) -> str:
        """Serialize to JSON for caching."""
        return self.model_dump_json()

    @classmethod
    def from_json(cls, json_str: str) -> "LessonContext":
        """Deserialize from JSON cache."""
        return cls.model_validate_json(json_str)

    def get_total_steps(self) -> int:
        """Get total number of tutorial steps."""
        return len(self.tutorial_steps)

    def get_practice_count(self) -> int:
        """Get count of best practices."""
        return len(self.best_practices)

    def to_display_dict(self) -> dict[str, Any]:
        """Convert to dictionary for display purposes."""
        return {
            "package": self.package_name,
            "summary": self.summary,
            "explanation": self.explanation,
            "use_cases": self.use_cases,
            "best_practices": self.best_practices,
            "examples_count": len(self.code_examples),
            "tutorial_steps_count": len(self.tutorial_steps),
            "installation": self.installation_command,
            "confidence": f"{self.confidence:.0%}",
        }
```
Align lesson cost currency naming.
LessonContext.cost_gbp suggests GBP, but llm.py returns cost_usd. To avoid mislabeling, either rename to USD or convert before storing/displaying.
```python
def _run(
    self,
    package_name: str,
    force_fresh: bool = False,
) -> dict[str, Any]:
    """Load cached lesson content."""
    if force_fresh:
        return {
            "success": True,
            "cache_hit": False,
            "lesson": None,
            "reason": "Force fresh requested",
        }

    try:
        cached = self.store.get_cached_lesson(package_name)

        if cached:
            return {
                "success": True,
                "cache_hit": True,
                "lesson": cached,
                "cost_saved_gbp": 0.02,
            }

        return {
            "success": True,
            "cache_hit": False,
            "lesson": None,
            "reason": "No valid cache found",
        }

    except Exception as e:
        logger.exception("Lesson loader failed for package '%s'", package_name)
        return {
            "success": False,
            "cache_hit": False,
            "lesson": None,
            "error": str(e),
        }

def cache_lesson(
    self,
    package_name: str,
    lesson: dict[str, Any],
    ttl_hours: int | None = None,
) -> bool:
    """Cache a lesson for future retrieval."""
    try:
        if ttl_hours is None:
            config = get_config()
            ttl_hours = config.cache_ttl_hours
        self.store.cache_lesson(package_name, lesson, ttl_hours)
        return True
    except Exception:
        logger.exception("Failed to cache lesson for package '%s'", package_name)
        return False

def clear_cache(self, package_name: str | None = None) -> int:
    """Clear cached lessons.

    Args:
        package_name: If provided, clears cache for specific package.
            If None, clears only expired cache entries.

    Returns:
        Number of cache entries cleared.
    """
    if package_name:
        try:
            self.store.cache_lesson(package_name, {}, ttl_hours=0)
            return 1
        except Exception:
            logger.exception("Failed to clear cache for package '%s'", package_name)
            return 0
    else:
        return self.store.clear_expired_cache()
```
Normalize package_name to avoid cache misses and duplicate keys.
Fallback uses lowercase, but cache lookups don’t. Mixed-case calls can bypass cache and inflate LLM usage.
♻️ Proposed fix (normalize once per method)
```diff
 def _run(
     self,
     package_name: str,
     force_fresh: bool = False,
 ) -> dict[str, Any]:
     """Load cached lesson content."""
+    normalized = package_name.strip().lower()
     if force_fresh:
         return {
             "success": True,
             "cache_hit": False,
             "lesson": None,
             "reason": "Force fresh requested",
         }

     try:
-        cached = self.store.get_cached_lesson(package_name)
+        cached = self.store.get_cached_lesson(normalized)
         if cached:
             return {
                 "success": True,
                 "cache_hit": True,
                 "lesson": cached,
                 "cost_saved_gbp": 0.02,
             }
@@
 def cache_lesson(
     self,
     package_name: str,
     lesson: dict[str, Any],
     ttl_hours: int | None = None,
 ) -> bool:
     """Cache a lesson for future retrieval."""
+    normalized = package_name.strip().lower()
     try:
         if ttl_hours is None:
             config = get_config()
             ttl_hours = config.cache_ttl_hours
-        self.store.cache_lesson(package_name, lesson, ttl_hours)
+        self.store.cache_lesson(normalized, lesson, ttl_hours)
         return True
     except Exception:
         logger.exception("Failed to cache lesson for package '%s'", package_name)
         return False
@@
 def clear_cache(self, package_name: str | None = None) -> int:
@@
     if package_name:
+        normalized = package_name.strip().lower()
         try:
-            self.store.cache_lesson(package_name, {}, ttl_hours=0)
+            self.store.cache_lesson(normalized, {}, ttl_hours=0)
             return 1
         except Exception:
             logger.exception("Failed to clear cache for package '%s'", package_name)
             return 0
```

🧰 Tools
🪛 Ruff (0.14.13)
57-62: Consider moving this statement to an else block
(TRY300)
85-85: Consider moving this statement to an else block
(TRY300)
103-103: Consider moving this statement to an else block
(TRY300)
```python
def validate_score(score: float) -> tuple[bool, str | None]:
    """
    Validate a score value.

    Args:
        score: The score to validate (should be 0.0 to 1.0).

    Returns:
        Tuple of (is_valid, error_message).
    """
    if not isinstance(score, (int, float)):
        return False, "Score must be a number"

    if score < 0.0 or score > 1.0:
        return False, "Score must be between 0.0 and 1.0"

    return True, None


def validate_learning_style(style: str) -> tuple[bool, str | None]:
    """
    Validate a learning style preference.

    Args:
        style: The learning style to validate.

    Returns:
        Tuple of (is_valid, error_message).
    """
    valid_styles = ["visual", "reading", "hands-on"]

    if not style or style.lower() not in valid_styles:
        return False, f"Learning style must be one of: {', '.join(valid_styles)}"

    return True, None
```
🧩 Analysis chain
🏁 Script executed:

```bash
# First, verify the file exists and examine the context around the function
wc -l packages/cortex-tutor/validators.py
```

Repository: cortexlinux/cortex-distro
Length of output: 194

🏁 Script executed:

```bash
# Examine the validate_score function and surrounding context
sed -n '160,205p' packages/cortex-tutor/validators.py
```

Repository: cortexlinux/cortex-distro
Length of output: 1181

🏁 Script executed:

```bash
# Search for any tests related to validate_score
fd -e py -e txt | xargs grep -l "validate_score" 2>/dev/null | head -20
```

Repository: cortexlinux/cortex-distro
Length of output: 135

🏁 Script executed:

```bash
# Search for usage of validate_score in the codebase
rg "validate_score" -B 2 -A 2
```

Repository: cortexlinux/cortex-distro
Length of output: 1823

🏁 Script executed:

```bash
# Examine the test file to see complete test cases for validate_score
cat -n tests/tutor/test_validators.py | grep -A 30 "class TestValidateScore"
```

Repository: cortexlinux/cortex-distro
Length of output: 1309

🏁 Script executed:

```bash
# Check if there are any tests explicitly for boolean inputs
rg "validate_score.*True|validate_score.*False" tests/
```

Repository: cortexlinux/cortex-distro
Length of output: 51

🏁 Script executed:

```bash
# Verify Python's behavior with booleans to confirm the claim
python3 << 'EOF'
# Verify that bool is a subclass of int
print("bool is subclass of int:", issubclass(bool, int))
print("isinstance(True, int):", isinstance(True, int))
print("isinstance(False, int):", isinstance(False, int))
print("isinstance(True, (int, float)):", isinstance(True, (int, float)))
print("isinstance(False, (int, float)):", isinstance(False, (int, float)))

# Test the current validation logic
def validate_score_current(score: float) -> tuple[bool, str | None]:
    if not isinstance(score, (int, float)):
        return False, "Score must be a number"
    if score < 0.0 or score > 1.0:
        return False, "Score must be between 0.0 and 1.0"
    return True, None

# Test with booleans
print("\nCurrent validation results:")
print("validate_score(True):", validate_score_current(True))
print("validate_score(False):", validate_score_current(False))
print("validate_score(0.5):", validate_score_current(0.5))
EOF
```

Repository: cortexlinux/cortex-distro
Length of output: 155
Exclude booleans from score validation.
bool is a subclass of int in Python, so isinstance(True, (int, float)) returns True. This causes True and False to pass validation as scores (coerced to 1 and 0 respectively). Since the function signature specifies score: float and no tests cover boolean inputs, this appears unintended. Add explicit boolean rejection.
Proposed fix
```diff
-    if not isinstance(score, (int, float)):
+    if isinstance(score, bool) or not isinstance(score, (int, float)):
         return False, "Score must be a number"
```
```python
def test_tutor_print_success(self, capsys):
    """Test tutor_print with success status."""
    tutor_print("Test message", "success")

def test_tutor_print_error(self, capsys):
    """Test tutor_print with error status."""
    tutor_print("Error message", "error")
```
Silence Ruff ARG002 for unused capsys fixtures.
The capsys fixture is unused in both tests; rename to _capsys (or add a noqa) to avoid lint failures.
🔧 Suggested fix
```diff
-    def test_tutor_print_success(self, capsys):
+    def test_tutor_print_success(self, _capsys):
@@
-    def test_tutor_print_error(self, capsys):
+    def test_tutor_print_error(self, _capsys):
```
🧰 Tools
🪛 Ruff (0.14.13)
172-172: Unused method argument: capsys
(ARG002)
176-176: Unused method argument: capsys
(ARG002)
```python
def test_start_loads_lesson(
    self, mock_console, mock_tutor_print, mock_header, mock_menu, mock_input, mock_agent_class
):
```
Silence Ruff ARG002 for unused patched mocks.
Several injected mocks are unused and will trigger ARG002. Prefix them with _ (or add a noqa) to keep lint green.
🔧 Suggested fix
```diff
-    def test_start_loads_lesson(
-        self, mock_console, mock_tutor_print, mock_header, mock_menu, mock_input, mock_agent_class
-    ):
+    def test_start_loads_lesson(
+        self, _mock_console, _mock_tutor_print, mock_header, _mock_menu, mock_input, mock_agent_class
+    ):
@@
-    def test_show_examples(self, mock_console, mock_code_example, mock_tutor):
+    def test_show_examples(self, _mock_console, mock_code_example, mock_tutor):
@@
-    def test_run_tutorial(self, mock_console, mock_input, mock_step, mock_tutor):
+    def test_run_tutorial(self, _mock_console, mock_input, mock_step, mock_tutor):
@@
-    def test_run_tutorial_quit(self, mock_input, mock_step, mock_tutor):
+    def test_run_tutorial_quit(self, mock_input, _mock_step, mock_tutor):
@@
-    def test_show_best_practices(self, mock_console, mock_practice, mock_tutor):
+    def test_show_best_practices(self, _mock_console, mock_practice, mock_tutor):
@@
-    def test_ask_question(self, mock_print, mock_markdown, mock_input, mock_tutor):
+    def test_ask_question(self, _mock_print, mock_markdown, mock_input, mock_tutor):
@@
-    def test_methods_with_no_lesson(self, mock_agent_class):
+    def test_methods_with_no_lesson(self, _mock_agent_class):
```

Also applies to: 134-134, 152-152, 163-163, 182-182, 200-200, 250-250
🧰 Tools
🪛 Ruff (0.14.13)
39-39: Unused method argument: mock_console
(ARG002)
39-39: Unused method argument: mock_tutor_print
(ARG002)
39-39: Unused method argument: mock_menu
(ARG002)
```python
class TestGetClient:
    """Tests for get_client function."""

    def test_get_client_creates_singleton(self):
        """Test that get_client creates a singleton instance."""
        from cortex.tutor import llm

        # Reset the global client
        llm._client = None

        with patch.dict("os.environ", {"ANTHROPIC_API_KEY": "test-key"}):
            with patch.object(llm.anthropic, "Anthropic") as mock_anthropic:
                mock_client = MagicMock()
                mock_anthropic.return_value = mock_client

                client1 = llm.get_client()
                client2 = llm.get_client()

                # Should only create one instance
                assert mock_anthropic.call_count == 1
                assert client1 is client2

        # Clean up
        llm._client = None

    def test_get_client_raises_without_api_key(self):
        """Test that get_client raises error without API key."""
        from cortex.tutor import llm
        from cortex.tutor.config import reset_config

        llm._client = None
        reset_config()

        with patch.dict("os.environ", {}, clear=True):
            # Remove ANTHROPIC_API_KEY if it exists
            import os

            os.environ.pop("ANTHROPIC_API_KEY", None)

            with pytest.raises(ValueError, match="ANTHROPIC_API_KEY"):
                llm.get_client()

        llm._client = None
        reset_config()
```
Reset config to avoid cross-test state bleed.
get_client() depends on the global config singleton; if another test initializes it without an API key, this test can become order-dependent. Reset before/after to keep it isolated.
🧪 Proposed fix
```diff
 def test_get_client_creates_singleton(self):
     """Test that get_client creates a singleton instance."""
     from cortex.tutor import llm
+    from cortex.tutor.config import reset_config

     # Reset the global client
     llm._client = None
+    reset_config()

     with patch.dict("os.environ", {"ANTHROPIC_API_KEY": "test-key"}):
         with patch.object(llm.anthropic, "Anthropic") as mock_anthropic:
             mock_client = MagicMock()
             mock_anthropic.return_value = mock_client
@@
     # Clean up
     llm._client = None
+    reset_config()
```
+ reset_config()🤖 Prompt for AI Agents
In `@tests/tutor/test_llm.py` around lines 8 - 52, Reset the global config and
llm._client around the failing test to avoid cross-test state bleed: call
reset_config() and set llm._client = None before invoking llm.get_client() in
test_get_client_raises_without_api_key (and restore/reset after the assertion);
ensure you import reset_config from cortex.tutor.config and reference
llm._client and get_client so the test is isolated regardless of test ordering.
Summary

Add comprehensive AI-powered tutor for package education. This is a continuation of cortexlinux/cortex#566.

Features:
- Interactive tutoring using TutorAgent orchestration
- LLM-powered lessons, code examples, and Q&A (Claude API)
- SQLite-based progress tracking and student profiling
- CLI integration via `cortex tutor <package>`
- 167 tests with comprehensive coverage

Demo

Demo.mov

Related Issue

Closes #30

Type of Change

AI Disclosure

Testing

pytest tests/tutor/ -v

Checklist