
Conversation


@srikrishnavansi srikrishnavansi commented Jan 21, 2026

Summary

Add comprehensive AI-powered tutor for package education. This is a continuation of cortexlinux/cortex#566.

Features:

  • Interactive tutoring using TutorAgent orchestration
  • LLM-powered lessons, code examples, and Q&A (Claude API)
  • SQLite-based progress tracking and student profiling
  • CLI integration via cortex tutor <package>
  • 167 tests with comprehensive coverage

Demo

Demo.mov

Related Issue

Closes #30

Type of Change

  • New feature

AI Disclosure

  • AI/IDE/Agents used: Claude Code CLI for implementation and review fixes

Testing

  • Run: pytest tests/tutor/ -v
  • 167 tests passing

Checklist

  • Tests pass locally
  • Code follows style guide
  • Documentation updated (AI_TUTOR.md)

Summary by CodeRabbit

Release Notes

  • New Features

    • AI-powered interactive tutor delivering personalized lessons with code examples and tutorials
    • Q&A system for quick package learning and clarification
    • Comprehensive progress tracking with learning history and milestones
    • Built-in caching for performance with offline fallback lessons for common packages
    • CLI interface for seamless tutorial access and management
  • Documentation

    • Complete AI Tutor feature documentation with setup, usage, and examples



coderabbitai bot commented Jan 21, 2026

📝 Walkthrough

This PR introduces a comprehensive AI-powered installation tutor for Cortex Linux, featuring LLM-driven lessons via Claude AI, interactive CLI-based learning, lesson caching with fallback templates, SQLite-backed progress tracking, input validation, and rich terminal UI components with extensive test coverage.

Changes

**Documentation** (`docs/AI_TUTOR.md`)
Comprehensive markdown guide covering AI Tutor overview, features, quick-start, usage examples, architecture, technical design, API references, configuration, testing, troubleshooting, and contribution guidelines.

**Package Initialization** (`packages/cortex-tutor/__init__.py`)
Module initializer exposing the public API: Config, agent, console, tutor_print.

**Core Agent & Interactive Tutor** (`packages/cortex-tutor/agent.py`)
Introduces TutorAgent for lesson generation, Q&A, progress tracking, and profile management; InteractiveTutor for menu-driven interactive learning sessions with concept/code/tutorial/practice workflows. A usage sketch follows below.

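For orientation, a minimal sketch of how these classes might be driven; the constructor arguments and return shapes here are inferred from the sequence diagram below and may not match the actual API:

```python
# Hypothetical driver code; TutorAgent.teach()/ask() names come from this PR,
# but the exact signatures and return payloads are assumptions.
from cortex.tutor.agent import InteractiveTutor, TutorAgent

agent = TutorAgent()

# Generate (or fetch from cache) a lesson for a package.
lesson = agent.teach("docker")

# Quick Q&A against the same package context.
result = agent.ask("docker", "What is a container?")
print(result.get("content", {}).get("answer"))

# Or hand control to the menu-driven session.
InteractiveTutor("docker").start()
```
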
**LLM Integration** (`packages/cortex-tutor/llm.py`)
Anthropic client wrapper providing generate_lesson() and answer_question() functions with structured output, cost calculation, and error handling (sketched below).

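A rough sketch of what such a wrapper can look like; the per-token prices and the prompt below are placeholders, not the project's actual values:

```python
# Illustrative Anthropic wrapper with cost accounting. The prices here
# are assumed placeholders, not the project's real rates.
import anthropic

PRICE_PER_INPUT_TOKEN = 3.00 / 1_000_000    # assumed USD rate
PRICE_PER_OUTPUT_TOKEN = 15.00 / 1_000_000  # assumed USD rate

def generate_lesson(client: anthropic.Anthropic, package: str) -> tuple[str, float]:
    """Return raw lesson text plus the estimated call cost in USD."""
    response = client.messages.create(
        model="claude-sonnet-4-20250514",
        max_tokens=2048,
        messages=[{"role": "user", "content": f"Teach me the basics of {package}."}],
    )
    cost = (
        response.usage.input_tokens * PRICE_PER_INPUT_TOKEN
        + response.usage.output_tokens * PRICE_PER_OUTPUT_TOKEN
    )
    return response.content[0].text, cost
```
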
**Deterministic Tools** (`packages/cortex-tutor/tools.py`)
LessonLoaderTool for cache management and fallback lessons; ProgressTrackerTool for tracking and updating learning progress; includes hardcoded fallback lesson templates for docker, git, and nginx.

**Configuration Management** (`packages/cortex-tutor/config.py`)
Config Pydantic model with environment variable loading, API key validation, data directory management, and singleton accessors for centralized app configuration (sketched below).

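The pattern, reduced to a minimal sketch; the env var names mirror those documented elsewhere in this PR, while the defaults and the exact field set are assumptions:

```python
# Sketch of an env-driven Pydantic config with singleton accessors.
# Defaults and the field list are assumptions, not the actual module.
import os
from pathlib import Path

from pydantic import BaseModel

class Config(BaseModel):
    anthropic_api_key: str = ""
    model_name: str = "claude-sonnet-4-20250514"
    cache_ttl_hours: int = 24
    data_dir: Path = Path.home() / ".cortex" / "tutor"

    @classmethod
    def from_env(cls) -> "Config":
        return cls(
            anthropic_api_key=os.environ.get("ANTHROPIC_API_KEY", ""),
            model_name=os.environ.get("TUTOR_MODEL_NAME", "claude-sonnet-4-20250514"),
            cache_ttl_hours=int(os.environ.get("TUTOR_CACHE_TTL_HOURS", "24")),
        )

_config: Config | None = None

def get_config() -> Config:
    """Lazily build and reuse one Config instance."""
    global _config
    if _config is None:
        _config = Config.from_env()
    return _config

def reset_config() -> None:
    """Drop the cached instance (used by tests)."""
    global _config
    _config = None
```
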
**Data Models & Contracts** (`packages/cortex-tutor/contracts.py`)
Comprehensive Pydantic models: LessonContext, TutorialStep, and CodeExample for lesson data; ProgressContext, PackageProgress, and TopicProgress for tracking; LessonResponse and QAResponse for LLM outputs; serialization/display helpers.

**SQLite Storage Backend** (`packages/cortex-tutor/sqlite_store.py`)
SQLiteStore with thread-safe persistence: manages LearningProgress, QuizResult, and StudentProfile tables; provides lesson caching with TTL, progress queries, and profile management (TTL caching sketched below).

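The TTL caching half of that store, as a standalone sketch over `sqlite3`; the schema and column names are assumptions, and the real store also handles progress and profiles:

```python
# Minimal TTL lesson cache over sqlite3; table/column names are assumed.
import json
import sqlite3
import time

class LessonCache:
    def __init__(self, path: str) -> None:
        self.conn = sqlite3.connect(path)
        self.conn.execute(
            "CREATE TABLE IF NOT EXISTS lesson_cache ("
            "package TEXT PRIMARY KEY, lesson TEXT, expires_at REAL)"
        )

    def cache_lesson(self, package: str, lesson: dict, ttl_hours: int) -> None:
        expires_at = time.time() + ttl_hours * 3600
        self.conn.execute(
            "INSERT OR REPLACE INTO lesson_cache VALUES (?, ?, ?)",
            (package, json.dumps(lesson), expires_at),
        )
        self.conn.commit()

    def get_cached_lesson(self, package: str) -> dict | None:
        row = self.conn.execute(
            "SELECT lesson, expires_at FROM lesson_cache WHERE package = ?",
            (package,),
        ).fetchone()
        if row is None or row[1] < time.time():
            return None  # miss, or entry past its TTL
        return json.loads(row[0])
```
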
**Terminal UI Branding** (`packages/cortex-tutor/branding.py`)
Rich-based terminal utilities: tutor_print(), print_banner(), print_menu(), print_code_example(), print_progress_summary(), get_user_input(), and various display helpers for a cohesive CLI experience.

**Input Validation** (`packages/cortex-tutor/validators.py`)
Deterministic validators: validate_package_name(), validate_question(), validate_learning_style(); sanitization, blocked-pattern checking, a ValidationResult container, and batch validation support.

**CLI Interface** (`packages/cortex-tutor/cli.py`)
Command-line parser and handlers: cmd_teach() (interactive lessons), cmd_question() (quick Q&A), cmd_progress(), cmd_list_packages(), and cmd_reset(), with structured error handling and lazy imports for API-key-dependent components (subcommand layout sketched below).

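The handler names above imply a subcommand layout roughly like this sketch; the flags and help strings are assumptions:

```python
# Assumed subcommand layout matching the cmd_* handlers named above.
import argparse

def build_parser() -> argparse.ArgumentParser:
    parser = argparse.ArgumentParser(prog="cortex tutor")
    sub = parser.add_subparsers(dest="command", required=True)

    teach = sub.add_parser("teach", help="Start an interactive lesson")
    teach.add_argument("package")

    question = sub.add_parser("question", help="Quick Q&A about a package")
    question.add_argument("package")
    question.add_argument("question")

    sub.add_parser("progress", help="Show learning progress")
    sub.add_parser("list-packages", help="List packages with lessons")
    sub.add_parser("reset", help="Reset stored progress")
    return parser
```
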
**Unit Tests** (`tests/tutor/test_*.py`, 10 files)
Comprehensive test suite: CLI parsing/commands, agent/tutor workflows, LLM integration, tools (caching, fallbacks, progress), SQLite storage, validators, configuration, contracts, branding, and integration tests with extensive mocking.

Sequence Diagram(s)

sequenceDiagram
    actor User
    participant CLI as CLI Handler
    participant TutorAgent as TutorAgent
    participant LessonLoader as LessonLoaderTool
    participant Cache as SQLiteStore<br/>(Cache)
    participant LLM as LLM<br/>(Claude)
    participant Progress as ProgressTrackerTool
    participant Storage as SQLiteStore<br/>(Progress)

    User->>CLI: cortex tutor docker
    CLI->>TutorAgent: teach("docker")
    
    TutorAgent->>LessonLoader: load lesson
    LessonLoader->>Cache: check cached lesson
    
    alt Lesson in cache
        Cache-->>LessonLoader: return cached lesson
        LessonLoader-->>TutorAgent: return cached_source
    else Cache miss
        LessonLoader->>LLM: generate_lesson("docker")
        LLM-->>LessonLoader: LessonResponse + cost
        LessonLoader->>Cache: cache new lesson
        LessonLoader-->>TutorAgent: return fresh_source
    end
    
    TutorAgent->>Progress: get_profile()
    Progress->>Storage: fetch student profile
    Storage-->>Progress: profile data
    Progress-->>TutorAgent: profile with learning_style
    
    TutorAgent-->>CLI: lesson + metadata
    CLI->>User: display lesson & menu
    
    User->>CLI: select option (e.g., ask question)
    CLI->>TutorAgent: ask("docker", "What is a container?")
    
    TutorAgent->>LLM: answer_question(...)
    LLM-->>TutorAgent: QAResponse
    TutorAgent-->>CLI: answer + code_example
    
    CLI->>User: display answer
    User->>CLI: mark complete
    CLI->>Progress: mark_completed("docker", "containers")
    Progress->>Storage: upsert progress
    Storage-->>Progress: ✓ updated
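The cache-or-generate branch at the heart of the diagram, reduced to a single function; the helper names mirror the diagram, but the signatures are illustrative only:

```python
# Cache-or-generate flow from the diagram; store/llm are duck-typed stand-ins.
def load_lesson(package: str, store, llm) -> dict:
    cached = store.get_cached_lesson(package)    # SQLiteStore lookup
    if cached is not None:
        return {"lesson": cached, "cache_hit": True, "cost_usd": 0.0}
    lesson, cost = llm.generate_lesson(package)  # Claude call on cache miss
    store.cache_lesson(package, lesson, ttl_hours=24)
    return {"lesson": lesson, "cache_hit": False, "cost_usd": cost}
```
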

Estimated code review effort

🎯 4 (Complex) | ⏱️ ~60 minutes

Possibly related issues

  • Issue #30: AI-Powered Installation Tutor — This PR comprehensively implements all acceptance criteria: LLM-powered explanations via generate_lesson(), interactive Q&A through TutorAgent.ask(), best practice recommendations in lesson context, code examples and step-by-step tutorials, learning progress tracking via SQLiteStore/ProgressTrackerTool, extensive unit tests (>80% coverage across 10 test modules), and full documentation in docs/AI_TUTOR.md with example usage matching the issue's example format.

Poem

🐰 A tutor hops through CLI calls,
Claude whispers wisdom through digital halls,
Cache holds lessons, progress tracked true,
Rich colors dance as you learn something new,
One command—docker!—and off we go,
Interactive learning with each keystroke flow. 🎓✨

🚥 Pre-merge checks: ✅ 5 passed

- **Description Check**: ✅ Passed. Check skipped because CodeRabbit's high-level summary is enabled.
- **Title Check**: ✅ Passed. The title "Add AI-Powered Installation Tutor" directly and clearly summarizes the main change: the introduction of an AI-powered tutor feature for package education.
- **Linked Issues Check**: ✅ Passed. The pull request implements all coding requirements from issue #30: LLM-powered explanations via the Claude API, interactive Q&A, best practice recommendations, code examples, step-by-step tutorials, learning progress tracking with SQLite storage, comprehensive testing (167 tests), and CLI integration with documentation.
- **Out of Scope Changes Check**: ✅ Passed. All changes are directly related to implementing the AI-Powered Installation Tutor feature from issue #30. The PR adds core modules (agent, CLI, LLM integration, storage, validation), supporting utilities (config, branding, contracts), comprehensive test coverage, and documentation, all aligned with the feature scope.
- **Docstring Coverage**: ✅ Passed. Docstring coverage is 100.00%, which is sufficient; the required threshold is 80.00%.


@gemini-code-assist

Summary of Changes

Hello @srikrishnavansi, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request integrates an intelligent, interactive AI tutor into Cortex Linux, designed to provide users with comprehensive education on various software packages and best practices. The system dynamically generates lessons and answers questions using the Claude API, while also tracking user progress and preferences. This feature aims to make learning about Linux packages more engaging and accessible directly from the command line.

Highlights

  • AI-Powered Tutor: Introduces a new AI-powered interactive tutor for package education within Cortex Linux, leveraging the Claude API for dynamic content generation.
  • Interactive Learning: Features interactive tutoring sessions, LLM-powered lessons, code examples, and Q&A capabilities to enhance user learning.
  • Progress Tracking: Implements SQLite-based progress tracking and student profiling to monitor learning journeys and personalize content.
  • CLI Integration: Provides seamless command-line interface integration via cortex tutor <package> for easy access to tutoring features.
  • Comprehensive Testing & Documentation: Includes 167 passing tests for robust functionality and new documentation (AI_TUTOR.md) detailing its usage and architecture.


@gemini-code-assist gemini-code-assist bot left a comment


Code Review

This pull request introduces a new AI-Powered Installation Tutor for Cortex Linux, designed to teach users about packages and best practices through interactive, LLM-powered sessions. The changes include a comprehensive AI_TUTOR.md documentation file detailing the feature's capabilities, architecture, usage, and API. The core implementation spans TutorAgent and InteractiveTutor classes in agent.py for orchestrating lessons and Q&A, branding.py for Rich-based terminal UI, cli.py for command-line argument parsing and dispatching, config.py for managing settings and API keys, contracts.py defining Pydantic data models for lessons and progress, llm.py for Anthropic API interactions with structured outputs, sqlite_store.py for SQLite-based persistence of learning progress and cache, and tools.py for deterministic operations like caching and progress tracking.

Review comments highlight several areas for improvement: correcting a typo from cost_gbp to cost_usd in contracts.py and the documentation; updating an incorrect default LLM model name (claude-sonnet-4-20250514) to a valid one (claude-3-sonnet-20240229) in config.py and the documentation; moving local imports of branding functions to the top of agent.py for better readability; removing an unused import (Text) from branding.py; and simplifying the logic for retrieving concept arguments in the _add_mastered_concept and _add_weak_concept methods within tools.py to prevent a potential KeyError and improve clarity.

            )

            if step.get("code"):
                console.print(f"\n[cyan]Code:[/cyan] {step['code']}")


high

Using dictionary-style access step['code'] could raise a KeyError if the 'code' key is not present in the step dictionary, as the TutorialStep contract defines code as optional. To prevent a potential crash, it's safer to use step.get('code'), which is used elsewhere in this file and will gracefully return None if the key is missing.

Suggested change
-                console.print(f"\n[cyan]Code:[/cyan] {step['code']}")
+                console.print(f"\n[cyan]Code:[/cyan] {step.get('code')}")


# Default settings (named constants with clear purpose)
DEFAULT_TUTOR_TOPICS_COUNT = 5 # Default topic count when actual count unavailable
DEFAULT_MODEL_NAME = "claude-sonnet-4-20250514"


high

The default model name claude-sonnet-4-20250514 appears to be incorrect and will likely cause runtime errors when making API calls. Please update it to a valid and current model name, for example, claude-3-sonnet-20240229.

Suggested change
-DEFAULT_MODEL_NAME = "claude-sonnet-4-20250514"
+DEFAULT_MODEL_NAME = "claude-3-sonnet-20240229"

        le=1.0,
    )
    cached: bool = Field(default=False, description="Whether result came from cache")
    cost_gbp: float = Field(default=0.0, description="Cost for LLM calls", ge=0.0)


high

The field cost_gbp seems to be a typo and is inconsistent with the rest of the codebase, which uses cost_usd (for example, in llm.py and agent.py). To ensure consistency and avoid confusion, this should be renamed to cost_usd.

Suggested change
-    cost_gbp: float = Field(default=0.0, description="Cost for LLM calls", ge=0.0)
+    cost_usd: float = Field(default=0.0, description="Cost for LLM calls in USD", ge=0.0)

# "best_practices": [...]
# },
# "cache_hit": True,
# "cost_usd": 0.0


medium

The cost_usd in this example is inconsistent with cost_gbp defined in the LessonContext data contract (packages/cortex-tutor/contracts.py). The rest of the codebase appears to calculate and use USD. To ensure consistency, this documentation should align with the implementation. I've suggested a fix in contracts.py to use cost_usd.

Comment on lines +607 to +611
# Optional: LLM model name (default: claude-sonnet-4-20250514)
export TUTOR_MODEL_NAME=claude-sonnet-4-20250514

# Optional: Cache TTL in hours (default: 24)
export TUTOR_CACHE_TTL_HOURS=24


medium

The default model name claude-sonnet-4-20250514 mentioned here and in the config.yaml example (line 621) appears to be incorrect, as '4' and the future date are likely typos. This could cause confusion for users copying the configuration. It should be updated to a valid Anthropic model name, such as claude-3-sonnet-20240229.

Comment on lines +214 to +223
        from cortex.tutor.branding import (
            get_user_input,
            print_best_practice,
            print_code_example,
            print_lesson_header,
            print_markdown,
            print_menu,
            print_progress_summary,
            print_tutorial_step,
        )


medium

These branding functions are imported locally within start() and several other methods in this class. Since there doesn't appear to be a circular dependency between agent.py and branding.py, these imports can be moved to the top of the file. This would improve readability and avoid repeated import overhead.

from rich.progress import BarColumn, Progress, SpinnerColumn, TaskProgressColumn, TextColumn
from rich.syntax import Syntax
from rich.table import Table
from rich.text import Text


medium

The Text class is imported from rich.text but is not used anywhere in this file. This import can be removed to keep the code clean.

Comment on lines +430 to +438
        concept: str | None = None,
        **kwargs: Any,
    ) -> dict[str, Any]:
        """Add a mastered concept to student profile."""
        concept = kwargs.get("concept") or concept
        if not concept:
            return {"success": False, "error": "concept required"}
        self.store.add_mastered_concept(concept)
        return {"success": True, "message": f"Added mastered concept: {concept}"}


medium

The logic to retrieve the concept value is a bit confusing as it seems to try getting it from both the named argument and kwargs. This can be simplified for better readability and maintainability. Consider refactoring to only get the value from kwargs.

    def _add_mastered_concept(self, **kwargs: Any) -> dict[str, Any]:
        """Add a mastered concept to student profile."""
        concept = kwargs.get("concept")
        if not concept:
            return {"success": False, "error": "concept required"}
        self.store.add_mastered_concept(concept)
        return {"success": True, "message": f"Added mastered concept: {concept}"}

Comment on lines +440 to +451
    def _add_weak_concept(
        self,
        concept: str | None = None,
        **kwargs: Any,
    ) -> dict[str, Any]:
        """Add a weak concept to student profile."""
        concept = kwargs.get("concept") or concept
        if not concept:
            return {"success": False, "error": "concept required"}
        self.store.add_weak_concept(concept)
        return {"success": True, "message": f"Added weak concept: {concept}"}


medium

Similar to _add_mastered_concept, the logic here for retrieving the concept is a bit confusing. It can be simplified for better readability and to follow a more standard pattern of handling keyword arguments.

    def _add_weak_concept(self, **kwargs: Any) -> dict[str, Any]:
        """Add a weak concept to student profile."""
        concept = kwargs.get("concept")
        if not concept:
            return {"success": False, "error": "concept required"}
        self.store.add_weak_concept(concept)
        return {"success": True, "message": f"Added weak concept: {concept}"}

@srikrishnavansi
Author

srikrishnavansi commented Jan 21, 2026

Hey @Anshgrover23, I opened the new PR here as requested. Let me know the further steps; once CodeRabbit completes its review, I will modify the changes accordingly.

@Anshgrover23

@srikrishnavansi As already mentioned, we are still setting up this repository since the distro is currently in progress, so we will update you if we have any issues for you. Thanks, closing this one.


@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 10

🤖 Fix all issues with AI agents
In `@docs/AI_TUTOR.md`:
- Line 91: Several fenced code blocks in AI_TUTOR.md are missing language
identifiers (markdownlint MD040); update the empty triple-backtick fences at the
flagged locations (lines ~91, 249, 309, 348, 390, 432) to include an appropriate
language tag (use "text" for plain output or "bash" for shell snippets) so each
fence becomes ```text or ```bash as appropriate; ensure you apply the same
change to every matching empty fence in the file.

In `@packages/cortex-tutor/agent.py`:
- Around line 308-335: The tutorial is currently marked complete unconditionally
at the end of _run_tutorial; change this so
agent.mark_completed(self.package_name, "tutorial", 0.9) is only called when the
user finishes all steps (i.e., does not quit early). Implement this by tracking
completion (e.g., a flag like tutorial_completed = True, set to False if
response.lower() == "q" and break) or by using the for/else pattern: put
agent.mark_completed(...) in the for-else else block so it only runs when the
loop wasn't exited via break. Ensure you reference _run_tutorial,
get_user_input/response, and agent.mark_completed in the change.
- Around line 354-380: The except block in _ask_question duplicates the "Invalid
question" prefix when ValueError already contains that text; update the handler
for the exception raised by self.agent.ask to print only the exception message
(e.g., tutor_print(str(e), "error")) instead of tutor_print(f"Invalid question:
{e}", "error") so you don't repeat the prefix. Ensure the change is applied
inside the _ask_question method where the ValueError is caught.

In `@packages/cortex-tutor/config.py`:
- Around line 103-112: The current ensure_data_dir uses a race-prone exists()
check and doesn't guard against the path being an existing file; replace the
exists()-then-mkdir pattern in ensure_data_dir with an atomic attempt to create
the directory using self.data_dir.mkdir(parents=True, mode=0o700, exist_ok=True)
inside a try/except that catches FileExistsError and raises a clear error if the
path exists but is not a directory, and after creation (or if it already existed
as a directory) ensure the permissions are set explicitly
(os.chmod(self.data_dir, 0o700)) so misconfigured files or TOCTOU races are
handled safely.

In `@packages/cortex-tutor/contracts.py`:
- Around line 17-131: The model field cost_gbp is misnamed for USD values;
update LessonContext to consistently use USD by renaming cost_gbp to cost_usd
(update the Field name and description, keep ge=0.0) and adjust any code that
sets or reads it (e.g., places in llm.py that assign response cost and any
serialization/deserialization using model_dump_json/model_validate_json and
to_display_dict consumers); alternatively, if you prefer to keep the field name,
convert incoming cost_usd values to GBP before assignment—ensure the unique
symbol LessonContext.cost_gbp (or the new LessonContext.cost_usd), the
to_json/from_json methods, and any call sites in llm.py are updated to match.

In `@packages/cortex-tutor/tools.py`:
- Around line 32-108: Normalize package_name at the start of each public cache
method to avoid case-sensitive cache misses: in _run, cache_lesson, and
clear_cache, set a local normalized_name = package_name.lower() (or use a chosen
canonicalizer) and use normalized_name for calls to
self.store.get_cached_lesson, self.store.cache_lesson, and for clearing logic
instead of the original package_name; ensure logs and returned reasons still
reference the original package_name if desired, but all store interactions must
use the normalized value so entries and lookups are consistent.

In `@packages/cortex-tutor/validators.py`:
- Around line 165-199: The validate_score function currently treats booleans as
valid because bool is a subclass of int; update validate_score to explicitly
reject booleans before the numeric isinstance check (e.g., check
isinstance(score, bool) and return False with an appropriate message), so
True/False no longer pass as 1/0; keep the rest of the numeric range checks
(score < 0.0 or score > 1.0) unchanged and return the same tuple structure.

In `@tests/tutor/test_integration.py`:
- Around line 172-178: The tests test_tutor_print_success and
test_tutor_print_error declare the unused capsys fixture causing Ruff ARG002;
rename the fixture parameter from capsys to _capsys in both functions (or add a
noqa comment) so the linter recognizes it as intentionally unused—update the
function signatures for test_tutor_print_success and test_tutor_print_error
accordingly.

In `@tests/tutor/test_interactive_tutor.py`:
- Around line 38-40: The test function test_start_loads_lesson (and the other
test functions referenced) declare several injected pytest fixtures that are
unused and trigger Ruff ARG002; rename the unused parameters by prefixing them
with an underscore (e.g., mock_console -> _mock_console, mock_tutor_print ->
_mock_tutor_print, mock_header -> _mock_header, mock_menu -> _mock_menu,
mock_input -> _mock_input, mock_agent_class -> _mock_agent_class) in the
function signature for test_start_loads_lesson and apply the same
underscore-prefix change to the other affected test functions mentioned so
linting no longer reports ARG002 (alternatively add a per-parameter noqa if you
prefer).

In `@tests/tutor/test_llm.py`:
- Around line 8-52: Reset the global config and llm._client around the failing
test to avoid cross-test state bleed: call reset_config() and set llm._client =
None before invoking llm.get_client() in test_get_client_raises_without_api_key
(and restore/reset after the assertion); ensure you import reset_config from
cortex.tutor.config and reference llm._client and get_client so the test is
isolated regardless of test ordering.
🧹 Nitpick comments (3)
packages/cortex-tutor/llm.py (1)

51-112: Configure explicit timeout for Anthropic API calls.

The Anthropic SDK defaults to a 10-minute timeout with 2 automatic retries, which may cause CLI requests to hang when users expect faster feedback. Set an explicit timeout (e.g., 20–30 seconds) and configure max_retries in the get_client() function via Anthropic(timeout=..., max_retries=...). For long-running requests, consider streaming instead.
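Concretely, the suggestion amounts to something like the following; the 30-second timeout and retry budget are illustrative values, not recommendations from the PR:

```python
# Applying the reviewer's suggestion: explicit timeout and retry budget on
# the shared client. The specific values here are illustrative.
import os

import anthropic

_client: anthropic.Anthropic | None = None

def get_client() -> anthropic.Anthropic:
    global _client
    if _client is None:
        api_key = os.environ.get("ANTHROPIC_API_KEY")
        if not api_key:
            raise ValueError("ANTHROPIC_API_KEY is not set")
        _client = anthropic.Anthropic(api_key=api_key, timeout=30.0, max_retries=2)
    return _client
```
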

packages/cortex-tutor/sqlite_store.py (1)

16-49: Use Field(default_factory=list) for list defaults in StudentProfile.

Mutable list defaults should use default_factory to ensure each instance gets a fresh list, following Pydantic best practices.

🛠️ Proposed fix
-from pydantic import BaseModel
+from pydantic import BaseModel, Field
@@
-    mastered_concepts: list[str] = []
-    weak_concepts: list[str] = []
+    mastered_concepts: list[str] = Field(default_factory=list)
+    weak_concepts: list[str] = Field(default_factory=list)
packages/cortex-tutor/agent.py (1)

171-177: Reuse validator for learning style to avoid drift.

♻️ Proposed refactor
-from cortex.tutor.validators import validate_package_name, validate_question
+from cortex.tutor.validators import (
+    validate_learning_style,
+    validate_package_name,
+    validate_question,
+)
-        valid_styles = {"visual", "reading", "hands-on"}
-        if style not in valid_styles:
-            return False
+        is_valid, _ = validate_learning_style(style)
+        if not is_valid:
+            return False




⚠️ Potential issue | 🟡 Minor

Add language identifiers to fenced code blocks.

markdownlint MD040 flags these fences without a language. Using text (or bash where appropriate) will keep lint clean and improve rendering.

🛠️ Suggested fix (apply to each flagged fence)
-```
+```text

Also applies to: 249-249, 309-309, 348-348, 390-390, 432-432

🧰 Tools
🪛 markdownlint-cli2 (0.18.1)

91-91: Fenced code blocks should have a language specified

(MD040, fenced-code-language)


Comment on lines +308 to +335
    def _run_tutorial(self) -> None:
        """Run step-by-step tutorial."""
        from cortex.tutor.branding import get_user_input, print_tutorial_step

        if not self.lesson:
            return

        steps = self.lesson.get("tutorial_steps", [])
        if not steps:
            tutor_print("No tutorial available", "info")
            return

        for step in steps:
            print_tutorial_step(
                step.get("content", ""),
                step.get("step_number", 1),
                len(steps),
            )

            if step.get("code"):
                console.print(f"\n[cyan]Code:[/cyan] {step['code']}")

            response = get_user_input("Press Enter to continue (or 'q' to quit)")
            if response.lower() == "q":
                break

        self.agent.mark_completed(self.package_name, "tutorial", 0.9)


⚠️ Potential issue | 🟠 Major

Don’t mark the tutorial complete if the user quits early.

Currently, progress is marked complete even when the user exits mid-tutorial, which skews completion stats.

✅ Proposed fix
-        for step in steps:
+        completed_all = True
+        for step in steps:
             print_tutorial_step(
                 step.get("content", ""),
                 step.get("step_number", 1),
                 len(steps),
             )
@@
             response = get_user_input("Press Enter to continue (or 'q' to quit)")
             if response.lower() == "q":
+                completed_all = False
                 break
 
-        self.agent.mark_completed(self.package_name, "tutorial", 0.9)
+        if completed_all:
+            self.agent.mark_completed(self.package_name, "tutorial", 0.9)
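The for/else variant mentioned in the aggregated prompt above achieves the same thing without a flag; a sketch, with the rendering calls collapsed into a hypothetical helper:

```python
# for/else alternative: the else block runs only when the loop was not
# exited via break, i.e. when every tutorial step was completed.
# show_step is a hypothetical stand-in for the print/render calls.
def run_tutorial(steps, agent, package_name, get_user_input, show_step) -> None:
    for step in steps:
        show_step(step)
        response = get_user_input("Press Enter to continue (or 'q' to quit)")
        if response.lower() == "q":
            break
    else:
        agent.mark_completed(package_name, "tutorial", 0.9)
```
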

Comment on lines +354 to +380
    def _ask_question(self) -> None:
        """Handle Q&A."""
        from cortex.tutor.branding import get_user_input, print_markdown

        question = get_user_input("Your question")
        if not question:
            return

        tutor_print("Thinking...", "info")
        try:
            result = self.agent.ask(self.package_name, question)
        except ValueError as e:
            tutor_print(f"Invalid question: {e}", "error")
            return

        if result.get("validation_passed"):
            content = result.get("content", {})
            answer = content.get("answer", "I couldn't find an answer.")
            print_markdown(f"\n**Answer:** {answer}")

            if content.get("code_example"):
                from cortex.tutor.branding import print_code_example

                ex = content["code_example"]
                print_code_example(ex.get("code", ""), ex.get("language", "bash"))
        else:
            tutor_print("Sorry, I couldn't answer that question.", "error")

⚠️ Potential issue | 🟡 Minor

Avoid duplicating the “Invalid question” prefix.

✂️ Proposed fix
-        except ValueError as e:
-            tutor_print(f"Invalid question: {e}", "error")
+        except ValueError as e:
+            tutor_print(str(e), "error")

Comment on lines +103 to +112
    def ensure_data_dir(self) -> None:
        """
        Ensure the data directory exists with proper permissions.

        Creates the directory if it doesn't exist with 0o700 permissions
        for security (owner read/write/execute only).
        """
        if not self.data_dir.exists():
            self.data_dir.mkdir(parents=True, mode=0o700)


⚠️ Potential issue | 🟡 Minor

Guard against TOCTOU and non-directory data_dir.

Concurrent runs can race the exists() check, and a misconfigured path that is a file will currently pass silently until later failures.

🛡️ Proposed fix
-        if not self.data_dir.exists():
-            self.data_dir.mkdir(parents=True, mode=0o700)
+        self.data_dir.mkdir(parents=True, mode=0o700, exist_ok=True)
+        if not self.data_dir.is_dir():
+            raise ValueError(f"Data directory path is not a directory: {self.data_dir}")
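A fuller sketch combining this fix with the extra handling described in the aggregated prompt above (atomic mkdir, a type guard, and an explicit chmod); the exception message wording is an assumption:

```python
# Atomic directory creation with a type guard and explicit permissions.
import os
from pathlib import Path

def ensure_data_dir(data_dir: Path) -> None:
    try:
        # exist_ok=True makes the call race-safe; pathlib raises
        # FileExistsError if the path exists but is not a directory.
        data_dir.mkdir(parents=True, mode=0o700, exist_ok=True)
    except FileExistsError as exc:
        raise ValueError(f"Data directory path is not a directory: {data_dir}") from exc
    os.chmod(data_dir, 0o700)  # enforce 0o700 even if the dir pre-existed
```
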

Comment on lines +17 to +131
class CodeExample(BaseModel):
    """A code example with description."""

    title: str = Field(..., description="Title of the code example")
    code: str = Field(..., description="The actual code snippet")
    language: str = Field(
        default="bash", description="Programming language for syntax highlighting"
    )
    description: str = Field(..., description="Explanation of what the code does")


class TutorialStep(BaseModel):
    """A step in a tutorial sequence."""

    step_number: int = Field(..., ge=1, description="Step number in sequence")
    title: str = Field(..., description="Brief title for this step")
    content: str = Field(..., description="Detailed instruction for this step")
    code: str | None = Field(default=None, description="Optional code for this step")
    expected_output: str | None = Field(
        default=None, description="Expected output if code is executed"
    )


class LessonContext(BaseModel):
    """
    Output contract for lesson generation.

    Contains all the content generated for a package lesson including
    explanations, best practices, code examples, and tutorials.
    """

    # Core content
    package_name: str = Field(..., description="Name of the package being taught")
    summary: str = Field(
        ...,
        description="Brief 1-2 sentence summary of what the package does",
        max_length=500,
    )
    explanation: str = Field(
        ...,
        description="Detailed explanation of the package functionality",
        max_length=5000,
    )
    use_cases: list[str] = Field(
        default_factory=list,
        description="Common use cases for this package",
    )
    best_practices: list[str] = Field(
        default_factory=list,
        description="Best practices when using this package",
    )
    code_examples: list[CodeExample] = Field(
        default_factory=list,
        description="Code examples demonstrating package usage",
    )
    tutorial_steps: list[TutorialStep] = Field(
        default_factory=list,
        description="Step-by-step tutorial for hands-on learning",
    )

    # Package metadata
    installation_command: str = Field(
        ..., description="Command to install the package (apt, pip, etc.)"
    )
    official_docs_url: str | None = Field(default=None, description="URL to official documentation")
    related_packages: list[str] = Field(
        default_factory=list,
        description="Related packages the user might want to learn",
    )

    # Metadata
    confidence: float = Field(
        ...,
        description="Confidence score (0-1) based on knowledge quality",
        ge=0.0,
        le=1.0,
    )
    cached: bool = Field(default=False, description="Whether result came from cache")
    cost_gbp: float = Field(default=0.0, description="Cost for LLM calls", ge=0.0)
    generated_at: datetime = Field(
        default_factory=lambda: datetime.now(timezone.utc),
        description="Timestamp of generation (UTC)",
    )

    def to_json(self) -> str:
        """Serialize to JSON for caching."""
        return self.model_dump_json()

    @classmethod
    def from_json(cls, json_str: str) -> "LessonContext":
        """Deserialize from JSON cache."""
        return cls.model_validate_json(json_str)

    def get_total_steps(self) -> int:
        """Get total number of tutorial steps."""
        return len(self.tutorial_steps)

    def get_practice_count(self) -> int:
        """Get count of best practices."""
        return len(self.best_practices)

    def to_display_dict(self) -> dict[str, Any]:
        """Convert to dictionary for display purposes."""
        return {
            "package": self.package_name,
            "summary": self.summary,
            "explanation": self.explanation,
            "use_cases": self.use_cases,
            "best_practices": self.best_practices,
            "examples_count": len(self.code_examples),
            "tutorial_steps_count": len(self.tutorial_steps),
            "installation": self.installation_command,
            "confidence": f"{self.confidence:.0%}",
        }


⚠️ Potential issue | 🟡 Minor

Align lesson cost currency naming.

LessonContext.cost_gbp suggests GBP, but llm.py returns cost_usd. To avoid mislabeling, either rename to USD or convert before storing/displaying.


Comment on lines +32 to +108
    def _run(
        self,
        package_name: str,
        force_fresh: bool = False,
    ) -> dict[str, Any]:
        """Load cached lesson content."""
        if force_fresh:
            return {
                "success": True,
                "cache_hit": False,
                "lesson": None,
                "reason": "Force fresh requested",
            }

        try:
            cached = self.store.get_cached_lesson(package_name)

            if cached:
                return {
                    "success": True,
                    "cache_hit": True,
                    "lesson": cached,
                    "cost_saved_gbp": 0.02,
                }

            return {
                "success": True,
                "cache_hit": False,
                "lesson": None,
                "reason": "No valid cache found",
            }

        except Exception as e:
            logger.exception("Lesson loader failed for package '%s'", package_name)
            return {
                "success": False,
                "cache_hit": False,
                "lesson": None,
                "error": str(e),
            }

    def cache_lesson(
        self,
        package_name: str,
        lesson: dict[str, Any],
        ttl_hours: int | None = None,
    ) -> bool:
        """Cache a lesson for future retrieval."""
        try:
            if ttl_hours is None:
                config = get_config()
                ttl_hours = config.cache_ttl_hours
            self.store.cache_lesson(package_name, lesson, ttl_hours)
            return True
        except Exception:
            logger.exception("Failed to cache lesson for package '%s'", package_name)
            return False

    def clear_cache(self, package_name: str | None = None) -> int:
        """Clear cached lessons.

        Args:
            package_name: If provided, clears cache for specific package.
                If None, clears only expired cache entries.

        Returns:
            Number of cache entries cleared.
        """
        if package_name:
            try:
                self.store.cache_lesson(package_name, {}, ttl_hours=0)
                return 1
            except Exception:
                logger.exception("Failed to clear cache for package '%s'", package_name)
                return 0
        else:
            return self.store.clear_expired_cache()

⚠️ Potential issue | 🟡 Minor

Normalize package_name to avoid cache misses and duplicate keys.

Fallback uses lowercase, but cache lookups don’t. Mixed-case calls can bypass cache and inflate LLM usage.

♻️ Proposed fix (normalize once per method)
     def _run(
         self,
         package_name: str,
         force_fresh: bool = False,
     ) -> dict[str, Any]:
         """Load cached lesson content."""
+        normalized = package_name.strip().lower()
         if force_fresh:
             return {
                 "success": True,
                 "cache_hit": False,
                 "lesson": None,
                 "reason": "Force fresh requested",
             }

         try:
-            cached = self.store.get_cached_lesson(package_name)
+            cached = self.store.get_cached_lesson(normalized)

             if cached:
                 return {
                     "success": True,
                     "cache_hit": True,
                     "lesson": cached,
                     "cost_saved_gbp": 0.02,
                 }
@@
     def cache_lesson(
         self,
         package_name: str,
         lesson: dict[str, Any],
         ttl_hours: int | None = None,
     ) -> bool:
         """Cache a lesson for future retrieval."""
+        normalized = package_name.strip().lower()
         try:
             if ttl_hours is None:
                 config = get_config()
                 ttl_hours = config.cache_ttl_hours
-            self.store.cache_lesson(package_name, lesson, ttl_hours)
+            self.store.cache_lesson(normalized, lesson, ttl_hours)
             return True
         except Exception:
             logger.exception("Failed to cache lesson for package '%s'", package_name)
             return False
@@
     def clear_cache(self, package_name: str | None = None) -> int:
@@
-        if package_name:
+        if package_name:
+            normalized = package_name.strip().lower()
             try:
-                self.store.cache_lesson(package_name, {}, ttl_hours=0)
+                self.store.cache_lesson(normalized, {}, ttl_hours=0)
                 return 1
             except Exception:
                 logger.exception("Failed to clear cache for package '%s'", package_name)
                 return 0
🧰 Tools
🪛 Ruff (0.14.13)

57-62: Consider moving this statement to an else block

(TRY300)


85-85: Consider moving this statement to an else block

(TRY300)


103-103: Consider moving this statement to an else block

(TRY300)


Comment on lines +165 to +199
def validate_score(score: float) -> tuple[bool, str | None]:
    """
    Validate a score value.

    Args:
        score: The score to validate (should be 0.0 to 1.0).

    Returns:
        Tuple of (is_valid, error_message).
    """
    if not isinstance(score, (int, float)):
        return False, "Score must be a number"

    if score < 0.0 or score > 1.0:
        return False, "Score must be between 0.0 and 1.0"

    return True, None


def validate_learning_style(style: str) -> tuple[bool, str | None]:
    """
    Validate a learning style preference.

    Args:
        style: The learning style to validate.

    Returns:
        Tuple of (is_valid, error_message).
    """
    valid_styles = ["visual", "reading", "hands-on"]

    if not style or style.lower() not in valid_styles:
        return False, f"Learning style must be one of: {', '.join(valid_styles)}"

    return True, None

⚠️ Potential issue | 🟡 Minor

🧩 Analysis chain

🏁 Script executed:

# First, let's verify the file exists and examine the context around the function
wc -l packages/cortex-tutor/validators.py

Repository: cortexlinux/cortex-distro

Length of output: 194


🏁 Script executed:

# Examine the validate_score function and surrounding context
sed -n '160,205p' packages/cortex-tutor/validators.py

Repository: cortexlinux/cortex-distro

Length of output: 1181


🏁 Script executed:

# Search for any tests related to validate_score
fd -e py -e txt | xargs grep -l "validate_score" 2>/dev/null | head -20

Repository: cortexlinux/cortex-distro

Length of output: 135


🏁 Script executed:

# Search for usage of validate_score in the codebase
rg "validate_score" -B 2 -A 2

Repository: cortexlinux/cortex-distro

Length of output: 1823


🏁 Script executed:

# Examine the test file to see complete test cases for validate_score
cat -n tests/tutor/test_validators.py | grep -A 30 "class TestValidateScore"

Repository: cortexlinux/cortex-distro

Length of output: 1309


🏁 Script executed:

# Check if there are any tests explicitly for boolean inputs
rg "validate_score.*True|validate_score.*False" tests/

Repository: cortexlinux/cortex-distro

Length of output: 51


🏁 Script executed:

# Let's also verify Python's behavior with booleans to confirm the claim
python3 << 'EOF'
# Verify that bool is a subclass of int
print("bool is subclass of int:", issubclass(bool, int))
print("isinstance(True, int):", isinstance(True, int))
print("isinstance(False, int):", isinstance(False, int))
print("isinstance(True, (int, float)):", isinstance(True, (int, float)))
print("isinstance(False, (int, float)):", isinstance(False, (int, float)))

# Test the current validation logic
def validate_score_current(score: float) -> tuple[bool, str | None]:
    if not isinstance(score, (int, float)):
        return False, "Score must be a number"
    if score < 0.0 or score > 1.0:
        return False, "Score must be between 0.0 and 1.0"
    return True, None

# Test with booleans
print("\nCurrent validation results:")
print("validate_score(True):", validate_score_current(True))
print("validate_score(False):", validate_score_current(False))
print("validate_score(0.5):", validate_score_current(0.5))
EOF

Repository: cortexlinux/cortex-distro

Length of output: 155


Exclude booleans from score validation.

bool is a subclass of int in Python, so isinstance(True, (int, float)) returns True. This causes True and False to pass validation as scores (coerced to 1 and 0 respectively). Since the function signature specifies score: float and no tests cover boolean inputs, this appears unintended. Add explicit boolean rejection.

Proposed fix
-    if not isinstance(score, (int, float)):
+    if isinstance(score, bool) or not isinstance(score, (int, float)):
         return False, "Score must be a number"
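With that guard in place, a quick check behaves as intended:

```python
# Behavior after the fix: booleans are rejected, real numbers still pass.
def validate_score(score) -> tuple[bool, str | None]:
    if isinstance(score, bool) or not isinstance(score, (int, float)):
        return False, "Score must be a number"
    if score < 0.0 or score > 1.0:
        return False, "Score must be between 0.0 and 1.0"
    return True, None

assert validate_score(True) == (False, "Score must be a number")
assert validate_score(0.5) == (True, None)
assert validate_score(1.5) == (False, "Score must be between 0.0 and 1.0")
```
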

Comment on lines +172 to +178
    def test_tutor_print_success(self, capsys):
        """Test tutor_print with success status."""
        tutor_print("Test message", "success")

    def test_tutor_print_error(self, capsys):
        """Test tutor_print with error status."""
        tutor_print("Error message", "error")

⚠️ Potential issue | 🟡 Minor

Silence Ruff ARG002 for unused capsys fixtures.

The capsys fixture is unused in both tests; rename to _capsys (or add a noqa) to avoid lint failures.

🔧 Suggested fix
-    def test_tutor_print_success(self, capsys):
+    def test_tutor_print_success(self, _capsys):
@@
-    def test_tutor_print_error(self, capsys):
+    def test_tutor_print_error(self, _capsys):
🧰 Tools
🪛 Ruff (0.14.13)

172-172: Unused method argument: capsys

(ARG002)


176-176: Unused method argument: capsys

(ARG002)


Comment on lines +38 to +40
    def test_start_loads_lesson(
        self, mock_console, mock_tutor_print, mock_header, mock_menu, mock_input, mock_agent_class
    ):

⚠️ Potential issue | 🟡 Minor

Silence Ruff ARG002 for unused patched mocks.

Several injected mocks are unused and will trigger ARG002. Prefix them with _ (or add a noqa) to keep lint green.

🔧 Suggested fix
-    def test_start_loads_lesson(
-        self, mock_console, mock_tutor_print, mock_header, mock_menu, mock_input, mock_agent_class
-    ):
+    def test_start_loads_lesson(
+        self, _mock_console, _mock_tutor_print, mock_header, _mock_menu, mock_input, mock_agent_class
+    ):
@@
-    def test_show_examples(self, mock_console, mock_code_example, mock_tutor):
+    def test_show_examples(self, _mock_console, mock_code_example, mock_tutor):
@@
-    def test_run_tutorial(self, mock_console, mock_input, mock_step, mock_tutor):
+    def test_run_tutorial(self, _mock_console, mock_input, mock_step, mock_tutor):
@@
-    def test_run_tutorial_quit(self, mock_input, mock_step, mock_tutor):
+    def test_run_tutorial_quit(self, mock_input, _mock_step, mock_tutor):
@@
-    def test_show_best_practices(self, mock_console, mock_practice, mock_tutor):
+    def test_show_best_practices(self, _mock_console, mock_practice, mock_tutor):
@@
-    def test_ask_question(self, mock_print, mock_markdown, mock_input, mock_tutor):
+    def test_ask_question(self, _mock_print, mock_markdown, mock_input, mock_tutor):
@@
-    def test_methods_with_no_lesson(self, mock_agent_class):
+    def test_methods_with_no_lesson(self, _mock_agent_class):

Also applies to: 134-134, 152-152, 163-163, 182-182, 200-200, 250-250

🧰 Tools
🪛 Ruff (0.14.13)

39-39: Unused method argument: mock_console

(ARG002)


39-39: Unused method argument: mock_tutor_print

(ARG002)


39-39: Unused method argument: mock_menu

(ARG002)


Comment on lines +8 to +52
class TestGetClient:
    """Tests for get_client function."""

    def test_get_client_creates_singleton(self):
        """Test that get_client creates a singleton instance."""
        from cortex.tutor import llm

        # Reset the global client
        llm._client = None

        with patch.dict("os.environ", {"ANTHROPIC_API_KEY": "test-key"}):
            with patch.object(llm.anthropic, "Anthropic") as mock_anthropic:
                mock_client = MagicMock()
                mock_anthropic.return_value = mock_client

                client1 = llm.get_client()
                client2 = llm.get_client()

                # Should only create one instance
                assert mock_anthropic.call_count == 1
                assert client1 is client2

        # Clean up
        llm._client = None

    def test_get_client_raises_without_api_key(self):
        """Test that get_client raises error without API key."""
        from cortex.tutor import llm
        from cortex.tutor.config import reset_config

        llm._client = None
        reset_config()

        with patch.dict("os.environ", {}, clear=True):
            # Remove ANTHROPIC_API_KEY if it exists
            import os

            os.environ.pop("ANTHROPIC_API_KEY", None)

            with pytest.raises(ValueError, match="ANTHROPIC_API_KEY"):
                llm.get_client()

        llm._client = None
        reset_config()


⚠️ Potential issue | 🟡 Minor

Reset config to avoid cross-test state bleed.

get_client() depends on the global config singleton; if another test initializes it without an API key, this test can become order-dependent. Reset before/after to keep it isolated.

🧪 Proposed fix
     def test_get_client_creates_singleton(self):
         """Test that get_client creates a singleton instance."""
         from cortex.tutor import llm
+        from cortex.tutor.config import reset_config
 
         # Reset the global client
         llm._client = None
+        reset_config()
 
         with patch.dict("os.environ", {"ANTHROPIC_API_KEY": "test-key"}):
             with patch.object(llm.anthropic, "Anthropic") as mock_anthropic:
                 mock_client = MagicMock()
                 mock_anthropic.return_value = mock_client
@@
         # Clean up
         llm._client = None
+        reset_config()
