diff --git a/CHANGELOG.md b/CHANGELOG.md index 46f29081..ea11105a 100644 --- a/CHANGELOG.md +++ b/CHANGELOG.md @@ -5,6 +5,38 @@ All notable changes to this project will be documented in this file. The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.0.0/), and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0.html). +## [0.5.0] - 2025-10-30 + +### Added - Virtual Packages +- **Virtual Package Support**: Install individual files directly from any repository without requiring full APM package structure + - Individual file packages: `apm install owner/repo/path/to/file.prompt.md` +- **Collection Support**: Install curated collections of primitives from [Awesome Copilot](https://github.com/github/awesome-copilot): `apm install github/awesome-copilot/collections/collection-name` + - Collection manifest parser for `.collection.yml` format + - Batch download of collection items into organized `.apm/` structure + - Integration with github/awesome-copilot collections + +### Added - Runnable Prompts +- **Auto-Discovery of Prompts**: Run installed prompts without manual script configuration + - `apm run <prompt-name>` automatically discovers and executes prompts without having to wire a script in `apm.yml` + - Search priority: local root → .apm/prompts → .github/prompts → dependencies + - Qualified path support: `apm run owner/repo/prompt-name` for disambiguation + - Collision detection with helpful error messages when multiple prompts are found + - Explicit scripts in apm.yml always take precedence over auto-discovery +- **Automatic Runtime Detection**: Detects the installed runtime (copilot > codex) and generates the proper command +- **Zero-Configuration Execution**: Install and run prompts immediately without an apm.yml scripts section + +### Changed +- Enhanced dependency resolution to support virtual package unique keys +- Improved GitHub downloader with virtual file and collection package support +- Extended `DependencyReference.parse()` to detect
and validate virtual packages (3+ path segments) +- Script runner now falls back to prompt discovery when a script is not found in apm.yml + +### Developer Experience +- Streamlined workflow: `apm install <package>` → `apm run <prompt>` works immediately +- No manual script configuration needed for simple use cases +- Power users retain full control via explicit scripts in apm.yml +- Better error messages for ambiguous prompt names with disambiguation guidance + ## [0.4.3] - 2025-10-29 ### Added diff --git a/README.md b/README.md index f9bc614c..ce1f6031 100644 --- a/README.md +++ b/README.md @@ -11,8 +11,6 @@ ✅ **2-minute setup** - zero config ✅ **Team collaboration** - composable context, without wheel reinvention -**Compound innovation** - reuse [packages built with APM by the community](#built-with-apm) - ## What Goes in Packages 📦 **Mix and match what your team needs**: @@ -22,7 +20,7 @@ ![APM Demo](docs/apm-demo.gif) -## Quick Start (2 minutes) +## Quick Start > [!NOTE] > **📋 Prerequisites**: Get tokens at [github.com/settings/personal-access-tokens/new](https://github.com/settings/personal-access-tokens/new) > > 📖 **Complete Setup Guide**: [Getting Started](docs/getting-started.md) -```bash -# 1. Set your GitHub token (minimal setup) -export GITHUB_COPILOT_PAT=your_fine_grained_token_here +### 30 Seconds: Zero-Config Prompt Execution -# 2. Install APM CLI +```bash +# Set up APM (one-time) +export GITHUB_COPILOT_PAT=your_token_here curl -sSL "https://raw.githubusercontent.com/danielmeppiel/apm/main/install.sh" | sh -# 3. Set up runtime (GitHub Copilot CLI with native MCP support) +# Set up GitHub Copilot CLI apm runtime setup copilot -# 3.
Create your first AI package +# Run any prompt from GitHub - zero config needed +apm run github/awesome-copilot/prompts/architecture-blueprint-generator +``` + +### 2 Minutes: Guardrailing with packaged context + +```bash +# Create project with layered context from multiple APM packages apm init my-project && cd my-project -# 4. Install APM and MCP dependencies -apm install +# Install context + workflows from packages +apm install danielmeppiel/design-guidelines +apm install danielmeppiel/compliance-rules + +# Compile into single AGENTS.md guardrails +# Now all agents respect design + compliance rules automatically +apm compile -# 5. Run your first workflow -apm compile && apm run start --param name="" +# Run a prompt from the installed packages above +apm run design-review ``` **That's it!** Your project now has reliable AI workflows that work with any coding agent. -**GitHub Enterprise Support:** Works with Enterprise Server and Data Residency Cloud. [Configuration →](docs/getting-started.md#github-enterprise-support) +**GitHub Enterprise**: Works with GitHub Enterprise Server and Data Residency Cloud. [Configuration →](docs/getting-started.md#github-enterprise-support) ### Example `apm.yml` - Like package.json for AI Native projects diff --git a/docs/getting-started.md b/docs/getting-started.md index fe661e8a..6e4198d1 100644 --- a/docs/getting-started.md +++ b/docs/getting-started.md @@ -443,22 +443,62 @@ apm install myorg/templates/chatmodes/assistant.chatmode.md@v2.1.0 # 2. Install specific file apm install awesome-org/best-practices/prompts/security-scan.prompt.md -# 3. Use immediately -apm compile -apm run security-scan +# 3. Use immediately - no apm.yml configuration needed! +apm run security-scan --param target="./src" -# 4. Add to apm.yml for team -echo " - awesome-org/best-practices/prompts/security-scan.prompt.md" >> apm.yml +# 4. 
Or add explicit script to apm.yml for custom flags +# scripts: +# security: "copilot --full-auto -p security-scan.prompt.md" ``` **Benefits:** - ✅ **Zero overhead** - No package creation required - ✅ **Instant reuse** - Install any file from any repository +- ✅ **Auto-discovery** - Run installed prompts without script configuration - ✅ **Automatic structure** - APM creates package layout for you - ✅ **Full compatibility** - Works with `apm compile` and all commands - ✅ **Version control** - Support for branches and tags +### Runnable Prompts (Auto-Discovery) + +Starting with v0.5.0, installed prompts are **immediately runnable** without manual configuration: + +```bash +# Install a prompt +apm install github/awesome-copilot/prompts/architecture-blueprint-generator.prompt.md + +# Run immediately - APM auto-discovers it! +apm run architecture-blueprint-generator --param project_name="my-app" + +# Auto-discovery works for: +# - Installed virtual packages +# - Local prompts (./my-prompt.prompt.md) +# - Prompts in .apm/prompts/ or .github/prompts/ +# - All prompts from installed regular packages +``` + +**How auto-discovery works:** + +1. **No script found in apm.yml?** APM searches for matching prompt files +2. **Runtime detection:** Automatically uses GitHub Copilot CLI (preferred) or Codex +3. **Smart defaults:** Applies recommended flags for chosen runtime +4. **Collision handling:** If multiple prompts found, use qualified path: `owner/repo/prompt-name` + +**Priority:** +- Explicit scripts in `apm.yml` **always win** (power user control) +- Auto-discovery provides zero-config convenience for simple cases + +**Disambiguation with qualified paths:** + +```bash +# If you have prompts from multiple sources +apm run github/awesome-copilot/code-review +apm run acme/standards/code-review +``` + +See [Prompts Guide](prompts.md#running-prompts) for complete auto-discovery documentation. + ### 5. 
Run Your First Workflow Execute the default "start" workflow: diff --git a/docs/prompts.md b/docs/prompts.md index dce3e23a..d4640396 100644 --- a/docs/prompts.md +++ b/docs/prompts.md @@ -226,24 +226,89 @@ Verify the successful deployment of ${input:service_name} version ${input:deploy ## Running Prompts -Prompts are executed through scripts defined in your `apm.yml`. When a script references a `.prompt.md` file, APM compiles it with parameter substitution before execution: +APM provides two ways to run prompts: **explicit scripts** (configured in `apm.yml`) and **auto-discovery** (zero configuration). + +### Auto-Discovery (Zero Configuration) + +Starting with v0.5.0, APM can automatically discover and run prompts without manual script configuration: ```bash -# Run scripts that reference .prompt.md files -apm run start --param service_name=api-gateway --param time_window="1h" -apm run llm --param service_name=api-gateway --param time_window="1h" -apm run debug --param service_name=api-gateway --param time_window="1h" +# Install a prompt from any repository +apm install github/awesome-copilot/prompts/code-review.prompt.md -# Preview compiled prompts before execution -apm preview start --param service_name=api-gateway --param time_window="1h" +# Run it immediately - no apm.yml configuration needed! +apm run code-review ``` -**Script Configuration (apm.yml):** +**How it works:** + +1. APM searches for prompts with matching names in this priority order: + - Local root: `./prompt-name.prompt.md` + - APM prompts directory: `.apm/prompts/prompt-name.prompt.md` + - GitHub convention: `.github/prompts/prompt-name.prompt.md` + - Dependencies: `apm_modules/**/.apm/prompts/prompt-name.prompt.md` + +2. 
When found, APM automatically: + - Detects installed runtime (GitHub Copilot CLI or Codex) + - Generates appropriate command with recommended flags + - Compiles prompt with parameters + - Executes through the runtime + +**Qualified paths for disambiguation:** + +If you have multiple prompts with the same name from different sources: + +```bash +# Collision detected - APM shows all matches with guidance +apm run code-review +# Error: Multiple prompts found for 'code-review': +# - github/awesome-copilot (apm_modules/github/awesome-copilot-code-review/...) +# - acme/standards (apm_modules/acme/standards/...) +# +# Use qualified path: +# apm run github/awesome-copilot/code-review +# apm run acme/standards/code-review + +# Run specific version using qualified path +apm run github/awesome-copilot/code-review --param pr_url=... +``` + +**Local prompts always take precedence** over dependency prompts with the same name. + +### Explicit Scripts (Power Users) + +For advanced use cases, define scripts explicitly in `apm.yml`: + ```yaml scripts: - start: "codex analyze-logs.prompt.md" + # Custom runtime flags + start: "copilot --full-auto -p analyze-logs.prompt.md" + + # Specific model selection llm: "llm analyze-logs.prompt.md -m github/gpt-4o-mini" + + # Environment variables debug: "RUST_LOG=debug codex analyze-logs.prompt.md" + + # Friendly aliases + review: "copilot -p code-review.prompt.md" +``` + +**Explicit scripts always take precedence** over auto-discovery. This gives power users full control while maintaining zero-config convenience for simple cases. 
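+The resolution order above can be sketched in a few lines of Python. This is an illustrative model only, not APM's actual implementation: `discover_prompt` and the exact directory constants are assumptions made for the example, following the documented priority list.
+
+```python
+from pathlib import Path
+
+# Search locations in the documented priority order: local root first,
+# then .apm/prompts, then .github/prompts; dependencies come last.
+SEARCH_DIRS = [Path("."), Path(".apm/prompts"), Path(".github/prompts")]
+
+def discover_prompt(name: str, root: Path = Path(".")) -> list[Path]:
+    """Collect every match for `name`; earlier entries win on collision."""
+    filename = f"{name}.prompt.md"
+    matches = []
+    for d in SEARCH_DIRS:
+        candidate = root / d / filename
+        if candidate.is_file():
+            matches.append(candidate)
+    # Prompts shipped by installed packages are searched last, so a local
+    # prompt always shadows a dependency prompt with the same name.
+    matches.extend(sorted((root / "apm_modules").glob(f"**/{filename}")))
+    return matches
+```
+
+A real resolver would additionally raise a collision error when more than one match survives and accept qualified `owner/repo/prompt-name` paths for disambiguation, as described above.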
+ +### Running Scripts + +```bash +# With auto-discovery (no apm.yml scripts needed) +apm run code-review --param pull_request_url="https://github.com/org/repo/pull/123" + +# With explicit scripts +apm run start --param service_name=api-gateway --param time_window="1h" +apm run llm --param service_name=api-gateway --param time_window="1h" +apm run debug --param service_name=api-gateway --param time_window="1h" + +# Preview compiled prompts before execution +apm preview start --param service_name=api-gateway --param time_window="1h" ``` ### Example Project Structure diff --git a/pyproject.toml b/pyproject.toml index 64fcd5b1..ba9da8c6 100644 --- a/pyproject.toml +++ b/pyproject.toml @@ -4,7 +4,7 @@ build-backend = "setuptools.build_meta" [project] name = "apm-cli" -version = "0.4.3" +version = "0.5.0" description = "MCP configuration tool" readme = "README.md" requires-python = ">=3.9" @@ -72,4 +72,5 @@ warn_unused_configs = true [tool.pytest.ini_options] markers = [ "integration: marks tests as integration tests that may require network access", + "slow: marks tests as slow running tests", ] diff --git a/scripts/github-token-helper.sh b/scripts/github-token-helper.sh index ff6eb0af..73dccf86 100755 --- a/scripts/github-token-helper.sh +++ b/scripts/github-token-helper.sh @@ -25,16 +25,6 @@ setup_github_tokens() { echo -e "${BLUE}Setting up GitHub tokens...${NC}" fi - # DEBUG: Show all GitHub-related environment variables - echo -e "${YELLOW}🔍 DEBUG: GitHub token environment analysis:${NC}" - echo " GITHUB_TOKEN: ${GITHUB_TOKEN:+SET(${#GITHUB_TOKEN} chars)}${GITHUB_TOKEN:-UNSET}" - echo " GITHUB_APM_PAT: ${GITHUB_APM_PAT:+SET(${#GITHUB_APM_PAT} chars)}${GITHUB_APM_PAT:-UNSET}" - echo " GH_MODELS_PAT: ${GH_MODELS_PAT:+SET(${#GH_MODELS_PAT} chars)}${GH_MODELS_PAT:-UNSET}" - echo " GH_CLI_PAT: ${GH_CLI_PAT:+SET(${#GH_CLI_PAT} chars)}${GH_CLI_PAT:-UNSET}" - echo " GH_PKG_PAT: ${GH_PKG_PAT:+SET(${#GH_PKG_PAT} chars)}${GH_PKG_PAT:-UNSET}" - echo " CI environment: 
${CI:-UNSET}" - echo " GITHUB_ACTIONS: ${GITHUB_ACTIONS:-UNSET}" - # CRITICAL: Preserve existing GITHUB_TOKEN if set (for Models access) local preserve_github_token="" if [[ -n "${GITHUB_TOKEN:-}" ]]; then @@ -72,13 +62,7 @@ setup_github_tokens() { if [[ -n "${GITHUB_TOKEN:-}" ]] && [[ -z "${GITHUB_MODELS_KEY:-}" ]]; then export GITHUB_MODELS_KEY="${GITHUB_TOKEN}" fi - - # DEBUG: Show final token state - echo -e "${YELLOW}🔍 DEBUG: Final token configuration:${NC}" - echo " GITHUB_TOKEN: ${GITHUB_TOKEN:+SET(${#GITHUB_TOKEN} chars)}${GITHUB_TOKEN:-UNSET}" - echo " GITHUB_APM_PAT: ${GITHUB_APM_PAT:+SET(${#GITHUB_APM_PAT} chars)}${GITHUB_APM_PAT:-UNSET}" - echo " GITHUB_MODELS_KEY: ${GITHUB_MODELS_KEY:+SET(${#GITHUB_MODELS_KEY} chars)}${GITHUB_MODELS_KEY:-UNSET}" - + if [[ "$quiet_mode" != "true" ]]; then echo -e "${GREEN}✅ GitHub token environment configured${NC}" fi diff --git a/scripts/runtime/setup-codex.sh b/scripts/runtime/setup-codex.sh index 043cb481..83ac8829 100755 --- a/scripts/runtime/setup-codex.sh +++ b/scripts/runtime/setup-codex.sh @@ -91,11 +91,6 @@ setup_codex() { local latest_release_url="https://api.github.com/repos/$CODEX_REPO/releases/latest" local latest_tag - # DEBUG: Show token state right before API call - log_info "🔍 DEBUG: Token state before GitHub API call:" - log_info " GITHUB_TOKEN: ${GITHUB_TOKEN:+SET(${#GITHUB_TOKEN} chars)}${GITHUB_TOKEN:-UNSET}" - log_info " GITHUB_APM_PAT: ${GITHUB_APM_PAT:+SET(${#GITHUB_APM_PAT} chars)}${GITHUB_APM_PAT:-UNSET}" - # Try to get the latest release tag using curl if command -v curl >/dev/null 2>&1; then # Use authenticated request if GITHUB_TOKEN or GITHUB_APM_PAT is available diff --git a/scripts/test-integration.sh b/scripts/test-integration.sh index d6971d81..61f7a4af 100755 --- a/scripts/test-integration.sh +++ b/scripts/test-integration.sh @@ -185,10 +185,17 @@ setup_binary_for_testing() { log_success "APM binary ready for testing: $version" } -# Set up runtimes (codex/llm) - Integration Testing 
Coverage! +# Set up runtimes (codex/llm/copilot) - Integration Testing Coverage! setup_runtimes() { log_info "=== Setting up runtimes for integration tests ===" + # Set up GitHub Copilot CLI runtime (recommended default) + log_info "Setting up GitHub Copilot CLI runtime..." + if ! ./apm runtime setup copilot; then + log_error "Failed to set up GitHub Copilot CLI runtime" + exit 1 + fi + # Set up codex runtime log_info "Setting up Codex runtime..." if ! ./apm runtime setup codex; then @@ -211,6 +218,15 @@ setup_runtimes() { # Verify runtimes are available log_info "Verifying runtime installations..." + # Check GitHub Copilot CLI + if command -v copilot >/dev/null 2>&1; then + local copilot_version=$(copilot --version 2>&1 || echo "unknown") + log_success "GitHub Copilot CLI ready: $copilot_version" + else + log_error "GitHub Copilot CLI not found in PATH after setup" + exit 1 + fi + # Check codex if command -v codex >/dev/null 2>&1; then local codex_version=$(codex --version 2>&1 || echo "unknown") @@ -232,15 +248,7 @@ setup_runtimes() { exit 1 fi - # Check Codex CLI (if available) - if command -v codex >/dev/null 2>&1; then - local codex_version=$(codex --version 2>&1 || echo "unknown") - log_success "Codex runtime ready: $codex_version" - else - log_info "Codex not found in PATH (optional)" - fi - - log_success "All runtimes configured successfully" + log_success "All runtimes configured successfully (Copilot, Codex, LLM)" } # Install test dependencies (like CI does) @@ -265,14 +273,12 @@ install_test_dependencies() { run_e2e_tests() { log_info "=== Running integration tests (mirroring CI) ===" log_info "Testing comprehensive runtime scenarios:" - log_info " - Codex runtime integration" - log_info " - LLM runtime integration" - log_info " - Dual runtime interoperability" - log_info " - Template bundling verification" - log_info " - Authentication edge cases" - log_info " - MCP registry integration (NEW)" - log_info " - Environment variable handling (NEW)" - 
log_info " - Docker args processing (NEW)" + log_info " - Zero-config auto-install (NEW HERO SCENARIO 1)" + log_info " - 2-minute guardrailing (NEW HERO SCENARIO 2)" + log_info " - MCP registry integration" + log_info " - APM Dependencies with real repositories" + log_info " - Environment variable handling" + log_info " - Docker args processing" # Set environment variables (like CI does) export APM_E2E_TESTS="1" @@ -300,17 +306,31 @@ run_e2e_tests() { source .venv/bin/activate fi - # Run golden scenario tests (existing) - log_info "Running golden scenario E2E tests..." - echo "Command: pytest tests/integration/test_golden_scenario_e2e.py -v -s --tb=short" + # Run NEW hero scenario test (zero-config auto-install) + log_info "Running NEW HERO SCENARIO 1: Zero-config auto-install test..." + echo "Command: pytest tests/integration/test_auto_install_e2e.py -v -s --tb=short" - if pytest tests/integration/test_golden_scenario_e2e.py -v -s --tb=short; then - log_success "Golden scenario tests passed!" + if pytest tests/integration/test_auto_install_e2e.py -v -s --tb=short; then + log_success "Zero-config auto-install tests passed!" else - log_error "Golden scenario tests failed!" + log_error "Zero-config auto-install tests failed!" exit 1 fi + # Run NEW hero scenario test (2-minute guardrailing) + log_info "Running NEW HERO SCENARIO 2: 2-minute guardrailing test..." + echo "Command: pytest tests/integration/test_guardrailing_hero_e2e.py -v -s --tb=short" + + if pytest tests/integration/test_guardrailing_hero_e2e.py -v -s --tb=short; then + log_success "2-minute guardrailing tests passed!" + else + log_error "2-minute guardrailing tests failed!" + exit 1 + fi + + # NOTE: Legacy golden scenario tests removed - replaced by faster auto-install tests above + # The auto-install tests cover the same hero scenario but with early termination for speed + # Run MCP registry E2E tests (new - covers our implemented functionality) log_info "Running MCP registry E2E tests..." 
echo "Command: pytest tests/integration/test_mcp_registry_e2e.py -v -s --tb=short" @@ -364,20 +384,28 @@ main() { echo "✅ Local mode: Built binary and validated full integration process" fi echo "" - echo "Integration validation complete - COMPREHENSIVE RUNTIME TESTING:" + echo "Integration validation complete - COMPREHENSIVE TESTING:" echo " 1. Prerequisites (GITHUB_TOKEN) ✅" - echo " 2. Codex runtime integration ✅" - echo " 3. LLM runtime integration ✅" - echo " 4. Dual runtime interoperability ✅" - echo " 5. Template bundling verification ✅" - echo " 6. Authentication edge cases ✅" - echo " 7. MCP registry search & show ✅" - echo " 8. Registry-based installation ✅" - echo " 9. Environment variable handling ✅" - echo " 10. Docker args with -e flags ✅" - echo " 11. Empty string & defaults logic ✅" - echo " 12. Cross-adapter consistency ✅" - echo " 13. Duplication prevention ✅" + echo "" + echo " HERO SCENARIO 1: 30-Second Zero-Config ✨" + echo " - Run virtual package directly ✅" + echo " - Auto-install on first run ✅" + echo " - Use cached package on second run ✅" + echo "" + echo " HERO SCENARIO 2: 2-Minute Guardrailing ✨" + echo " - Project initialization ✅" + echo " - Install multiple APM packages ✅" + echo " - Compile to AGENTS.md with combined guardrails ✅" + echo " - Run prompts from installed packages ✅" + echo "" + echo " 3. MCP registry search & show ✅" + echo " 4. Registry-based installation ✅" + echo " 5. APM Dependencies integration ✅" + echo " 6. Environment variable handling ✅" + echo " 7. Docker args with -e flags ✅" + echo " 8. Empty string & defaults logic ✅" + echo " 9. Cross-adapter consistency ✅" + echo " 10. Duplication prevention ✅" echo "" log_success "Ready for release validation!" 
} diff --git a/scripts/test-release-validation.sh b/scripts/test-release-validation.sh index 5f6718ae..2ed9678a 100755 --- a/scripts/test-release-validation.sh +++ b/scripts/test-release-validation.sh @@ -108,15 +108,30 @@ check_prerequisites() { fi } -# Test Step 2: apm runtime setup codex +# Test Step 2: apm runtime setup (both copilot and codex for full coverage) test_runtime_setup() { - log_test "README Step 2: apm runtime setup codex" + log_test "README Step 2: apm runtime setup" - # Test runtime setup (this may take a moment) + # Install GitHub Copilot CLI (recommended default, used by guardrailing hero scenario) + echo "Running: $BINARY_PATH runtime setup copilot" + echo "--- Command Output Start ---" + "$BINARY_PATH" runtime setup copilot 2>&1 + local exit_code=$? + echo "--- Command Output End ---" + echo "Exit code: $exit_code" + + if [[ $exit_code -ne 0 ]]; then + log_error "apm runtime setup copilot failed with exit code $exit_code" + return 1 + fi + + log_success "Copilot CLI runtime setup completed" + + # Also install Codex CLI (for zero-config scenario and fallback) echo "Running: $BINARY_PATH runtime setup codex" echo "--- Command Output Start ---" "$BINARY_PATH" runtime setup codex 2>&1 - local exit_code=$? + exit_code=$? echo "--- Command Output End ---" echo "Exit code: $exit_code" @@ -125,169 +140,186 @@ test_runtime_setup() { return 1 fi - log_success "Runtime setup completed" + log_success "Codex CLI runtime setup completed" + log_success "Both runtimes (Copilot, Codex) configured successfully" } -# Test Step 3: apm init my-ai-native-project -test_init_project() { - log_test "README Step 3: apm init my-ai-native-project" +# Helper function for cross-platform timeout +run_with_timeout() { + local timeout_duration=$1 + shift + local cmd="$@" + + # Use perl for cross-platform timeout support + perl -e "alarm $timeout_duration; exec @ARGV" -- sh -c "$cmd" 2>&1 & + local pid=$! 
+ + # Wait for the command to complete or timeout + wait $pid 2>/dev/null + local exit_code=$? + + # Exit code 142 (SIGALRM) means timeout + if [[ $exit_code -eq 142 ]]; then + return 124 # Return timeout code like GNU timeout + fi + + return $exit_code +} + +# HERO SCENARIO 1: 30-Second Zero-Config +# Test the exact README flow: runtime setup → run virtual package +test_hero_zero_config() { + log_test "HERO SCENARIO 1: 30-Second Zero-Config (README lines 35-44)" - # Test init with the exact project name from README - echo "Running: $BINARY_PATH init my-ai-native-project --yes" + # Create temporary directory for this test + mkdir -p zero-config-test && cd zero-config-test + + # Runtime setup is already done in test_runtime_setup() + # Just test the virtual package run + + echo "Running: $BINARY_PATH run github/awesome-copilot/prompts/architecture-blueprint-generator (with 15s timeout)" echo "--- Command Output Start ---" - "$BINARY_PATH" init my-ai-native-project --yes 2>&1 + run_with_timeout 15 "$BINARY_PATH run github/awesome-copilot/prompts/architecture-blueprint-generator" local exit_code=$? echo "--- Command Output End ---" echo "Exit code: $exit_code" - if [[ $exit_code -ne 0 ]]; then - log_error "apm init my-ai-native-project failed with exit code $exit_code" - return 1 - fi - - # Check that the project directory was created - if [[ ! -d "my-ai-native-project" ]]; then - log_error "my-ai-native-project directory not created" + if [[ $exit_code -eq 124 ]]; then + # Exit code 124 is timeout, which is expected and OK (prompt execution started) + log_success "Zero-config auto-install worked! Package installed and prompt started." + elif [[ $exit_code -eq 0 ]]; then + # Command completed successfully within timeout + log_success "Zero-config auto-install completed successfully" + else + log_error "Zero-config auto-install failed immediately with exit code $exit_code" + cd .. return 1 fi - # Check that apm.yml was created (minimal mode - only apm.yml) - if [[ ! 
-f "my-ai-native-project/apm.yml" ]]; then - log_error "apm.yml not created in project" + # Verify package was actually installed + if [[ ! -d "apm_modules/github/awesome-copilot-architecture-blueprint-generator" ]]; then + log_error "Package was not installed by auto-install" + cd .. return 1 fi - log_success "Project initialization completed (minimal mode)" + log_success "Package auto-installed to apm_modules/" - # NEW: Create minimal project structure for testing (simulating user workflow) - log_info "Creating minimal project structure for validation testing..." + # Test second run (should use cached package, no re-download) + echo "Testing second run (should use cache)..." + run_with_timeout 10 "$BINARY_PATH run github/awesome-copilot/prompts/architecture-blueprint-generator" | head -20 + local second_exit_code=${PIPESTATUS[0]} - cd my-ai-native-project + if [[ $second_exit_code -eq 124 || $second_exit_code -eq 0 ]]; then + log_success "Second run used cached package (fast, no re-download)" + fi - # Create .apm directory with minimal instruction - mkdir -p .apm/instructions - cat > .apm/instructions/test.instructions.md << 'EOF' ---- -applyTo: "**" -description: Test instructions for release validation ---- - -# Test Instructions - -Basic instructions for release validation testing. -EOF - - # Create a simple prompt file for testing - cat > hello-world.prompt.md << 'EOF' ---- -description: Hello World prompt for validation ---- - -# Hello World - -This is a test prompt for {{name}}. + cd .. + log_success "HERO SCENARIO 1: 30-second zero-config PASSED ✨" +} -Say hello to {{name}}! 
-EOF +# HERO SCENARIO 2: 2-Minute Guardrailing +# Test the exact README flow: init → install packages → compile → run +test_hero_guardrailing() { + log_test "HERO SCENARIO 2: 2-Minute Guardrailing (README lines 46-60)" - # Update apm.yml to add start script - # Note: Using simple append to avoid Python/YAML dependency issues in isolated test - cat >> apm.yml << 'EOF' - -# Scripts added for release validation testing -scripts: - start: "codex hello-world.prompt.md" -EOF + # Step 1: apm init my-project + echo "Running: $BINARY_PATH init my-project --yes" + echo "--- Command Output Start ---" + "$BINARY_PATH" init my-project --yes 2>&1 + local exit_code=$? + echo "--- Command Output End ---" + echo "Exit code: $exit_code" - cd .. - log_info "Project structure created for testing" + if [[ $exit_code -ne 0 ]]; then + log_error "apm init my-project failed with exit code $exit_code" + return 1 + fi - log_success "Project initialization and setup completed" -} - -# Test Step 4: cd my-ai-native-project && apm compile -test_compile() { - log_test "README Step 4: cd my-ai-native-project && apm compile" + if [[ ! -d "my-project" || ! -f "my-project/apm.yml" ]]; then + log_error "my-project directory or apm.yml not created" + return 1 + fi - cd my-ai-native-project + log_success "Project initialized" - # Test compile (the critical step that was failing) - show actual error - echo "Running: $BINARY_PATH compile" + cd my-project + + # Step 2: apm install danielmeppiel/design-guidelines + echo "Running: $BINARY_PATH install danielmeppiel/design-guidelines" echo "--- Command Output Start ---" - "$BINARY_PATH" compile 2>&1 - local exit_code=$? + APM_E2E_TESTS="${APM_E2E_TESTS:-}" "$BINARY_PATH" install danielmeppiel/design-guidelines 2>&1 + exit_code=$? echo "--- Command Output End ---" echo "Exit code: $exit_code" if [[ $exit_code -ne 0 ]]; then - log_error "apm compile failed with exit code $exit_code" + log_error "apm install danielmeppiel/design-guidelines failed" cd .. 
return 1 fi - # Check that agents.md was created - if [[ ! -f "AGENTS.md" ]]; then - log_error "AGENTS.md not created by compile" + log_success "design-guidelines installed" + + # Step 3: apm install danielmeppiel/compliance-rules + echo "Running: $BINARY_PATH install danielmeppiel/compliance-rules" + echo "--- Command Output Start ---" + APM_E2E_TESTS="${APM_E2E_TESTS:-}" "$BINARY_PATH" install danielmeppiel/compliance-rules 2>&1 + exit_code=$? + echo "--- Command Output End ---" + echo "Exit code: $exit_code" + + if [[ $exit_code -ne 0 ]]; then + log_error "apm install danielmeppiel/compliance-rules failed" cd .. return 1 fi - cd .. - log_success "Compilation completed" -} - -# Test Step 5 Part 1: apm install -test_install() { - log_test "README Step 5: apm install" - - cd my-ai-native-project + log_success "compliance-rules installed" - # Test install - preserve APM_E2E_TESTS for automated testing - echo "Running: $BINARY_PATH install" + # Step 4: apm compile + echo "Running: $BINARY_PATH compile" echo "--- Command Output Start ---" - APM_E2E_TESTS="${APM_E2E_TESTS:-}" "$BINARY_PATH" install 2>&1 - local exit_code=$? + "$BINARY_PATH" compile 2>&1 + exit_code=$? echo "--- Command Output End ---" echo "Exit code: $exit_code" if [[ $exit_code -ne 0 ]]; then - log_error "apm install failed with exit code $exit_code" + log_error "apm compile failed" cd .. return 1 fi - cd .. - log_success "Install completed" -} - -# Test Step 5 Part 2: apm run start --param name="" -test_run_command() { - log_test "README Step 5: apm run start --param name=\"Developer\"" + if [[ ! -f "AGENTS.md" ]]; then + log_error "AGENTS.md not created by compile" + cd .. 
+ return 1 + fi - cd my-ai-native-project + log_success "Compiled to AGENTS.md (guardrails active)" - # Test run (this may not fully complete but should at least start) - # We'll check that it doesn't fail immediately due to binary issues - echo "Running: $BINARY_PATH run start --param name=\"danielmeppiel\" (with 10s timeout)" + # Step 5: apm run design-review (from installed package) + echo "Running: $BINARY_PATH run design-review (with 10s timeout)" echo "--- Command Output Start ---" - timeout 10s "$BINARY_PATH" run start --param name="danielmeppiel" 2>&1 - local exit_code=$? + run_with_timeout 10 "$BINARY_PATH run design-review" + exit_code=$? echo "--- Command Output End ---" echo "Exit code: $exit_code" if [[ $exit_code -eq 124 ]]; then - # Exit code 124 is timeout, which is expected and OK - log_success "Run command started successfully (timed out as expected)" + # Timeout is expected and OK - prompt started executing + log_success "design-review prompt executed with compiled guardrails" elif [[ $exit_code -eq 0 ]]; then - # Command completed successfully within timeout - log_success "Run command completed successfully" + log_success "design-review completed successfully" else - log_error "apm run command failed immediately with exit code $exit_code" + log_error "apm run design-review failed immediately" cd .. return 1 fi cd .. 
+ log_success "HERO SCENARIO 2: 2-minute guardrailing PASSED ✨" } # Test basic commands (sanity check) @@ -350,7 +382,7 @@ echo "" echo "Binary found and executable: $BINARY_PATH" local tests_passed=0 - local tests_total=6 + local tests_total=5 # Prerequisites, basic commands, runtime setup, 2 hero scenarios local dependency_tests_run=false # Add dependency tests to total if available and GITHUB token is present @@ -368,7 +400,7 @@ echo "" test_dir="binary-golden-scenario-$$" # Make it global for cleanup mkdir "$test_dir" && cd "$test_dir" - # Run the exact README golden scenario + # Run prerequisites and basic tests if check_prerequisites; then ((tests_passed++)) else @@ -387,29 +419,20 @@ echo "" log_error "Runtime setup test failed" fi - if test_init_project; then - ((tests_passed++)) - else - log_error "Project init test failed" - fi - - if test_compile; then + # HERO SCENARIO 1: 30-second zero-config + if test_hero_zero_config; then ((tests_passed++)) else - log_error "Compile test failed" + log_error "Hero scenario 1 (30-sec zero-config) failed" fi - if test_install; then + # HERO SCENARIO 2: 2-minute guardrailing + if test_hero_guardrailing; then ((tests_passed++)) else - log_error "Install test failed" + log_error "Hero scenario 2 (2-min guardrailing) failed" fi - # Note: Skipping the run test for now as it requires more complex setup - # if test_run_command; then - # ((tests_passed++)) - # fi - # Run dependency integration tests if available and GitHub token is set if [[ "$dependency_tests_run" == "true" ]]; then log_info "Running dependency integration tests with real GitHub repositories" @@ -424,32 +447,41 @@ echo "" cd .. echo "" - echo "Results: $tests_passed/$tests_total golden scenario steps passed" + echo "Results: $tests_passed/$tests_total tests passed" if [[ $tests_passed -eq $tests_total ]]; then echo "✅ RELEASE VALIDATION PASSED!" 
echo "" echo "🚀 Binary is ready for production release" echo "📦 End-user experience validated successfully" - echo "🎯 Hero flow works exactly as documented" + echo "🎯 Both README hero scenarios work perfectly" echo "" - echo "Validated user journey:" + echo "Validated user journeys:" echo " 1. Prerequisites (GITHUB_TOKEN) ✅" echo " 2. Binary accessibility ✅" - echo " 3. Runtime setup ✅" - echo " 4. Project initialization ✅" - echo " 5. Agent compilation ✅" - echo " 6. Dependency installation ✅" + echo " 3. Runtime setup (copilot) ✅" + echo "" + echo " HERO SCENARIO 1: 30-Second Zero-Config ✨" + echo " - Run virtual package directly ✅" + echo " - Auto-install on first run ✅" + echo " - Use cached package on second run ✅" + echo "" + echo " HERO SCENARIO 2: 2-Minute Guardrailing ✨" + echo " - Project initialization ✅" + echo " - Install APM packages ✅" + echo " - Compile to AGENTS.md guardrails ✅" + echo " - Run prompts with guardrails ✅" if [[ "$dependency_tests_run" == "true" ]]; then - echo " 7. Real dependency integration ✅" + echo "" + echo " BONUS: Real dependency integration ✅" fi echo "" - log_success "README Golden Scenario works perfectly! ✨" + log_success "README Hero Scenarios work perfectly! ✨" echo "" echo "🎉 The binary delivers the exact README experience - real users will love it!" 
exit 0 else - log_error "Some golden scenario steps failed" + log_error "Some tests failed" echo "" echo "⚠️ The binary doesn't match the README promise" exit 1 diff --git a/src/apm_cli/cli.py b/src/apm_cli/cli.py index b6f1e95b..23e8404d 100644 --- a/src/apm_cli/cli.py +++ b/src/apm_cli/cli.py @@ -138,13 +138,23 @@ def _check_orphaned_packages(): try: apm_package = APMPackage.from_apm_yml(Path("apm.yml")) declared_deps = apm_package.get_apm_dependencies() - declared_repos = set(dep.repo_url for dep in declared_deps) - declared_names = set() + + # Build set of expected installed package paths + # For virtual packages, use the sanitized package name from get_virtual_package_name() + # For regular packages, use repo_url as-is + expected_installed = set() for dep in declared_deps: - if "/" in dep.repo_url: - declared_names.add(dep.repo_url.split("/")[-1]) - else: - declared_names.add(dep.repo_url) + repo_parts = dep.repo_url.split('/') + if len(repo_parts) >= 2: + org_name = repo_parts[0] + if dep.is_virtual: + # Virtual package: org/repo-name-package-name + package_name = dep.get_virtual_package_name() + expected_installed.add(f"{org_name}/{package_name}") + else: + # Regular package: org/repo-name + repo_name = repo_parts[1] + expected_installed.add(f"{org_name}/{repo_name}") except Exception: return [] # If can't parse apm.yml, assume no orphans @@ -157,7 +167,7 @@ def _check_orphaned_packages(): org_repo_name = f"{org_dir.name}/{repo_dir.name}" # Check if orphaned - if org_repo_name not in declared_repos: + if org_repo_name not in expected_installed: orphaned_packages.append(org_repo_name) return orphaned_packages @@ -401,43 +411,13 @@ def _validate_package_exists(package): from apm_cli.models.apm_package import DependencyReference dep_ref = DependencyReference.parse(package) - # For virtual packages, validate the file exists via GitHub API or raw URL + # For virtual packages, use the downloader's validation method if dep_ref.is_virtual: - import requests - host = 
dep_ref.host or "github.com" - - # Build raw file URL - if host == "github.com": - base_url = "https://raw.githubusercontent.com" - else: - base_url = f"https://{host}/raw" - - ref = dep_ref.reference or "main" - file_url = f"{base_url}/{dep_ref.repo_url}/{ref}/{dep_ref.virtual_path}" - - try: - # Try HEAD request first (faster) - response = requests.head(file_url, timeout=10, allow_redirects=True) - if response.status_code == 200: - return True - - # If HEAD fails, try GET with main/master fallback - response = requests.get(file_url, timeout=10) - if response.status_code == 200: - return True - - # Try master as fallback - if ref == "main": - file_url = f"{base_url}/{dep_ref.repo_url}/master/{dep_ref.virtual_path}" - response = requests.get(file_url, timeout=10) - return response.status_code == 200 - - return False - except requests.exceptions.RequestException: - return False + from apm_cli.deps.github_downloader import GitHubPackageDownloader + downloader = GitHubPackageDownloader() + return downloader.validate_virtual_package_exists(dep_ref) # For regular packages, use git ls-remote - # Try to do a shallow clone to test accessibility with tempfile.TemporaryDirectory() as temp_dir: try: # Try cloning with minimal fetch @@ -651,17 +631,23 @@ def prune(ctx, dry_run): try: apm_package = APMPackage.from_apm_yml(Path("apm.yml")) declared_deps = apm_package.get_apm_dependencies() - # Keep full org/repo format (e.g., "danielmeppiel/design-guidelines") - declared_repos = set() - declared_names = set() # For directory name matching + + # Build set of expected installed package paths + # For virtual packages, use the sanitized package name from get_virtual_package_name() + # For regular packages, use repo_url as-is + expected_installed = set() for dep in declared_deps: - declared_repos.add(dep.repo_url) - # Also track directory names for filesystem matching - if "/" in dep.repo_url: - package_name = dep.repo_url.split("/")[-1] - declared_names.add(package_name) - else: - 
declared_names.add(dep.repo_url) + repo_parts = dep.repo_url.split('/') + if len(repo_parts) >= 2: + org_name = repo_parts[0] + if dep.is_virtual: + # Virtual package: org/repo-name-package-name + package_name = dep.get_virtual_package_name() + expected_installed.add(f"{org_name}/{package_name}") + else: + # Regular package: org/repo-name + repo_name = repo_parts[1] + expected_installed.add(f"{org_name}/{repo_name}") except Exception as e: _rich_error(f"Failed to parse apm.yml: {e}") sys.exit(1) @@ -682,7 +668,7 @@ def prune(ctx, dry_run): # Find orphaned packages (installed but not declared) orphaned_packages = {} for org_repo_name, display_name in installed_packages.items(): - if org_repo_name not in declared_repos: + if org_repo_name not in expected_installed: orphaned_packages[org_repo_name] = display_name if not orphaned_packages: diff --git a/src/apm_cli/commands/deps.py b/src/apm_cli/commands/deps.py index 98393ade..c55b74d2 100644 --- a/src/apm_cli/commands/deps.py +++ b/src/apm_cli/commands/deps.py @@ -611,12 +611,20 @@ def _update_single_package(package_name: str, project_deps: List, apm_modules_pa _rich_error(f"Package '{package_name}' not found in apm.yml dependencies") return - # Find the installed package directory + # Find the installed package directory using namespaced structure package_dir = None if target_dep.alias: package_dir = apm_modules_path / target_dep.alias else: - package_dir = apm_modules_path / package_name + # Parse owner/repo from repo_url + repo_parts = target_dep.repo_url.split('/') + if len(repo_parts) >= 2: + owner = repo_parts[0] + repo = repo_parts[1] + package_dir = apm_modules_path / owner / repo + else: + # Fallback to simple name matching + package_dir = apm_modules_path / package_name if not package_dir.exists(): _rich_error(f"Package '{package_name}' not installed in apm_modules/") @@ -648,11 +656,20 @@ def _update_all_packages(project_deps: List, apm_modules_path: Path): updated_count = 0 for dep in project_deps: - # 
Determine package directory + # Determine package directory using namespaced structure + # APM packages are installed as: apm_modules/owner/repo-name/ if dep.alias: package_dir = apm_modules_path / dep.alias else: - package_dir = apm_modules_path / dep.repo_url.split('/')[-1] + # Parse owner/repo from repo_url + repo_parts = dep.repo_url.split('/') + if len(repo_parts) >= 2: + owner = repo_parts[0] + repo = repo_parts[1] + package_dir = apm_modules_path / owner / repo + else: + # Fallback to simple repo name (shouldn't happen) + package_dir = apm_modules_path / dep.repo_url if not package_dir.exists(): _rich_warning(f"⚠️ {dep.repo_url} not installed - skipping") diff --git a/src/apm_cli/core/script_runner.py b/src/apm_cli/core/script_runner.py index dd91cdd6..c8afe11c 100644 --- a/src/apm_cli/core/script_runner.py +++ b/src/apm_cli/core/script_runner.py @@ -45,10 +45,19 @@ def run_script(self, script_name: str, params: Dict[str, str]) -> bool: for line in header_lines: print(line) - # Load apm.yml configuration + # Check if this is a virtual package (before loading config) + is_virtual_package = self._is_virtual_package_reference(script_name) + + # Load apm.yml configuration (or create minimal one for virtual packages) config = self._load_config() if not config: - raise RuntimeError("No apm.yml found in current directory") + if is_virtual_package: + # Create minimal config for zero-config virtual package execution + print(f" ℹ️ Creating minimal apm.yml for zero-config execution...") + self._create_minimal_config() + config = self._load_config() + else: + raise RuntimeError("No apm.yml found in current directory") # 1. Check explicit scripts first (existing behavior - highest priority) scripts = config.get('scripts', {}) @@ -56,7 +65,7 @@ def run_script(self, script_name: str, params: Dict[str, str]) -> bool: command = scripts[script_name] return self._execute_script_command(command, params) - # 2. Auto-discover prompt file (NEW fallback) + # 2. 
Auto-discover prompt file (fallback) discovered_prompt = self._discover_prompt_file(script_name) if discovered_prompt: @@ -73,6 +82,24 @@ def run_script(self, script_name: str, params: Dict[str, str]) -> bool: # Execute with existing logic return self._execute_script_command(command, params) + # 2.5 Try auto-install if it looks like a virtual package reference + if self._is_virtual_package_reference(script_name): + print(f"\n📦 Auto-installing virtual package: {script_name}") + if self._auto_install_virtual_package(script_name): + # Retry discovery after install + discovered_prompt = self._discover_prompt_file(script_name) + if discovered_prompt: + runtime = self._detect_installed_runtime() + command = self._generate_runtime_command(runtime, discovered_prompt) + print(f"\n✨ Package installed and ready to run\n") + return self._execute_script_command(command, params) + else: + raise RuntimeError( + f"Package installed successfully but prompt not found.\n" + f"The package may not contain the expected prompt file.\n" + f"Check {Path('apm_modules')} for installed files." + ) + # 3. Not found anywhere available = ', '.join(scripts.keys()) if scripts else 'none' @@ -606,6 +633,165 @@ def _handle_prompt_collision(self, name: str, matches: list[Path]) -> None: raise RuntimeError(error_msg) + def _is_virtual_package_reference(self, name: str) -> bool: + """Check if a name looks like a virtual package reference. 
+ + Virtual packages have format: owner/repo/path/to/file.prompt.md + or: owner/repo/collections/name + + Args: + name: Name to check + + Returns: + True if this looks like a virtual package reference + """ + # Must have at least one slash + if '/' not in name: + return False + + # Import exception types upfront + from ..models.apm_package import DependencyReference, InvalidVirtualPackageExtensionError + + # Try to parse as dependency reference + try: + dep_ref = DependencyReference.parse(name) + return dep_ref.is_virtual + except InvalidVirtualPackageExtensionError: + # Invalid extension error - only reject if it already has a file extension + # If no extension, we should retry with .prompt.md + has_extension = '.' in name.split('/')[-1] + if has_extension: + # Path like owner/repo/path/file.txt - has wrong extension, don't retry + return False + # Path like owner/repo/path/file - no extension, fall through to retry + except Exception: + # Other parsing errors - check if we should retry + pass + + # Retry with .prompt.md if: + # 1. It doesn't already have a file extension + # 2. It might be a collection reference + has_extension = '.' in name.split('/')[-1] + is_collection_path = '/collections/' in name + + if not has_extension or is_collection_path: + # Try again with .prompt.md for paths without extensions or collections + try: + dep_ref = DependencyReference.parse(f"{name}.prompt.md") + return dep_ref.is_virtual + except Exception: + pass + return False + + def _auto_install_virtual_package(self, package_ref: str) -> bool: + """Auto-install a virtual package. 
+ + Args: + package_ref: Virtual package reference (owner/repo/path/to/file.prompt.md) + + Returns: + True if installation succeeded, False otherwise + """ + try: + from ..models.apm_package import DependencyReference + from ..deps.github_downloader import GitHubPackageDownloader + + # Normalize the reference - add .prompt.md if missing + normalized_ref = package_ref if package_ref.endswith('.prompt.md') else f"{package_ref}.prompt.md" + + # Parse the reference + dep_ref = DependencyReference.parse(normalized_ref) + + if not dep_ref.is_virtual: + return False + + # Ensure apm_modules exists + apm_modules = Path("apm_modules") + apm_modules.mkdir(parents=True, exist_ok=True) + + # Create target path for virtual package + # Format: apm_modules/owner/package-name/ + owner = dep_ref.repo_url.split('/')[0] + package_name = dep_ref.get_virtual_package_name() + target_path = apm_modules / owner / package_name + + # Check if already installed + if target_path.exists(): + print(f" ℹ️ Package already installed at {target_path}") + return True + + # Download the virtual package + downloader = GitHubPackageDownloader() + + print(f" 📥 Downloading from {dep_ref.to_github_url()}") + + if dep_ref.is_virtual_collection(): + # Download collection + package_info = downloader.download_virtual_collection_package(dep_ref, target_path) + else: + # Download individual file + package_info = downloader.download_virtual_file_package(dep_ref, target_path) + + # PackageInfo has a 'package' attribute which is an APMPackage + print(f" ✅ Installed {package_info.package.name} v{package_info.package.version}") + + # Update apm.yml to include this dependency (use normalized reference) + self._add_dependency_to_config(normalized_ref) + + return True + + except Exception as e: + print(f" ❌ Auto-install failed: {e}") + return False + + def _add_dependency_to_config(self, package_ref: str) -> None: + """Add a virtual package dependency to apm.yml. 
+ + Args: + package_ref: Virtual package reference to add + """ + config_path = Path('apm.yml') + + # Skip if apm.yml doesn't exist (e.g., in test environments) + if not config_path.exists(): + return + + # Load current config + with open(config_path, 'r') as f: + config = yaml.safe_load(f) or {} + + # Ensure dependencies.apm section exists + if 'dependencies' not in config: + config['dependencies'] = {} + if 'apm' not in config['dependencies']: + config['dependencies']['apm'] = [] + + # Add the dependency if not already present + if package_ref not in config['dependencies']['apm']: + config['dependencies']['apm'].append(package_ref) + + # Write back to file + with open(config_path, 'w') as f: + yaml.dump(config, f, default_flow_style=False, sort_keys=False) + + print(f" ℹ️ Added {package_ref} to apm.yml dependencies") + + def _create_minimal_config(self) -> None: + """Create a minimal apm.yml for zero-config usage. + + This enables running virtual packages without apm init. + """ + minimal_config = { + "name": Path.cwd().name, + "version": "1.0.0", + "description": "Auto-generated for zero-config virtual package execution" + } + + with open("apm.yml", "w") as f: + yaml.dump(minimal_config, f, default_flow_style=False, sort_keys=False) + + print(f" ℹ️ Created minimal apm.yml for zero-config execution") + def _detect_installed_runtime(self) -> str: """Detect installed runtime with priority order. 
@@ -647,8 +833,8 @@ def _generate_runtime_command(self, runtime: str, prompt_file: Path) -> str: # GitHub Copilot CLI with all recommended flags return f"copilot --log-level all --log-dir copilot-logs --allow-all-tools -p {prompt_file}" elif runtime == "codex": - # Codex CLI default - return f"codex {prompt_file}" + # Codex CLI with default sandbox and git repo check skip + return f"codex -s workspace-write --skip-git-repo-check {prompt_file}" else: raise ValueError(f"Unsupported runtime: {runtime}") diff --git a/src/apm_cli/deps/collection_parser.py b/src/apm_cli/deps/collection_parser.py new file mode 100644 index 00000000..22080163 --- /dev/null +++ b/src/apm_cli/deps/collection_parser.py @@ -0,0 +1,123 @@ +"""Parser for APM collection manifest files (.collection.yml).""" + +import yaml +from dataclasses import dataclass +from typing import List, Optional, Dict, Any + + +@dataclass +class CollectionItem: + """Represents a single item in a collection manifest.""" + path: str # Relative path to the file (e.g., "prompts/code-review.prompt.md") + kind: str # Type of primitive (e.g., "prompt", "instruction", "chat-mode") + + @property + def subdirectory(self) -> str: + """Get the .apm subdirectory for this item based on its kind. 
+ + Returns: + str: Subdirectory name (e.g., "prompts", "instructions", "chatmodes") + """ + kind_to_subdir = { + 'prompt': 'prompts', + 'instruction': 'instructions', + 'chat-mode': 'chatmodes', + 'chatmode': 'chatmodes', + 'agent': 'agents', + 'context': 'contexts', + } + return kind_to_subdir.get(self.kind.lower(), 'prompts') # Default to prompts + + +@dataclass +class CollectionManifest: + """Represents a parsed collection manifest (.collection.yml).""" + id: str + name: str + description: str + items: List[CollectionItem] + tags: Optional[List[str]] = None + display: Optional[Dict[str, Any]] = None + + @property + def item_count(self) -> int: + """Get the number of items in this collection.""" + return len(self.items) + + def get_items_by_kind(self, kind: str) -> List[CollectionItem]: + """Get all items of a specific kind. + + Args: + kind: The kind to filter by (e.g., "prompt", "instruction") + + Returns: + List of items matching the kind + """ + return [item for item in self.items if item.kind.lower() == kind.lower()] + + +def parse_collection_yml(content: bytes) -> CollectionManifest: + """Parse a collection YAML manifest. 
+ + Args: + content: Raw YAML content as bytes + + Returns: + CollectionManifest: Parsed and validated collection manifest + + Raises: + ValueError: If the YAML is invalid or missing required fields + yaml.YAMLError: If YAML parsing fails + """ + try: + # Parse YAML + data = yaml.safe_load(content) + + if not isinstance(data, dict): + raise ValueError("Collection YAML must be a dictionary") + + # Validate required fields + required_fields = ['id', 'name', 'description', 'items'] + missing_fields = [field for field in required_fields if field not in data] + + if missing_fields: + raise ValueError(f"Collection manifest missing required fields: {', '.join(missing_fields)}") + + # Validate items array + items_data = data.get('items', []) + if not isinstance(items_data, list): + raise ValueError("Collection 'items' must be a list") + + if not items_data: + raise ValueError("Collection must contain at least one item") + + # Parse items + items = [] + for idx, item_data in enumerate(items_data): + if not isinstance(item_data, dict): + raise ValueError(f"Collection item {idx} must be a dictionary") + + # Validate item fields + if 'path' not in item_data: + raise ValueError(f"Collection item {idx} missing required field 'path'") + + if 'kind' not in item_data: + raise ValueError(f"Collection item {idx} missing required field 'kind'") + + items.append(CollectionItem( + path=item_data['path'], + kind=item_data['kind'] + )) + + # Create manifest + return CollectionManifest( + id=data['id'], + name=data['name'], + description=data['description'], + items=items, + tags=data.get('tags'), + display=data.get('display') + ) + + except yaml.YAMLError as e: + raise ValueError(f"Invalid YAML format: {e}") diff --git a/src/apm_cli/deps/github_downloader.py b/src/apm_cli/deps/github_downloader.py index da90ec95..309d5fb4 100644 --- a/src/apm_cli/deps/github_downloader.py +++ b/src/apm_cli/deps/github_downloader.py @@ -279,7 +279,7 @@ def resolve_git_reference(self, repo_ref: str) -> 
ResolvedReference: ) def download_raw_file(self, dep_ref: DependencyReference, file_path: str, ref: str = "main") -> bytes: - """Download a single file from GitHub repository via raw.githubusercontent.com. + """Download a single file from GitHub repository. Args: dep_ref: Parsed dependency reference @@ -294,24 +294,30 @@ def download_raw_file(self, dep_ref: DependencyReference, file_path: str, ref: s """ host = dep_ref.host or default_host() - # Build raw file URL - # Format: https://raw.githubusercontent.com/owner/repo/ref/path/to/file + # Parse owner/repo from repo_url + owner, repo = dep_ref.repo_url.split('/', 1) + + # Build GitHub API URL - format differs by host type if host == "github.com": - base_url = "https://raw.githubusercontent.com" + # GitHub.com: https://api.github.com/repos/owner/repo/contents/path + api_url = f"https://api.github.com/repos/{owner}/{repo}/contents/{file_path}?ref={ref}" + elif host.endswith(".ghe.com"): + # GitHub Enterprise Cloud Data Residency: https://api.{subdomain}.ghe.com/repos/owner/repo/contents/path + api_url = f"https://api.{host}/repos/{owner}/{repo}/contents/{file_path}?ref={ref}" else: - # For GitHub Enterprise, use the API endpoint - base_url = f"https://{host}/raw" - - file_url = f"{base_url}/{dep_ref.repo_url}/{ref}/{file_path}" + # GitHub Enterprise Server: https://{host}/api/v3/repos/owner/repo/contents/path + api_url = f"https://{host}/api/v3/repos/{owner}/{repo}/contents/{file_path}?ref={ref}" # Set up authentication headers - headers = {} + headers = { + 'Accept': 'application/vnd.github.v3.raw' # Returns raw content directly + } if self.github_token: headers['Authorization'] = f'token {self.github_token}' # Try to download with the specified ref try: - response = requests.get(file_url, headers=headers, timeout=30) + response = requests.get(api_url, headers=headers, timeout=30) response.raise_for_status() return response.content except requests.exceptions.HTTPError as e: @@ -323,12 +329,21 @@ def 
download_raw_file(self, dep_ref: DependencyReference, file_path: str, ref: s
             # Try the other default branch
             fallback_ref = "master" if ref == "main" else "main"
-            fallback_url = f"{base_url}/{dep_ref.repo_url}/{fallback_ref}/{file_path}"
+
+            # Build fallback API URL
+            if host == "github.com":
+                fallback_url = f"https://api.github.com/repos/{owner}/{repo}/contents/{file_path}?ref={fallback_ref}"
+            elif host.endswith(".ghe.com"):
+                fallback_url = f"https://api.{host}/repos/{owner}/{repo}/contents/{file_path}?ref={fallback_ref}"
+            else:
+                fallback_url = f"https://{host}/api/v3/repos/{owner}/{repo}/contents/{file_path}?ref={fallback_ref}"
 
             try:
                 response = requests.get(fallback_url, headers=headers, timeout=30)
                 response.raise_for_status()
                 return response.content
             except requests.exceptions.HTTPError:
                 raise RuntimeError(
                     f"File not found: {file_path} in {dep_ref.repo_url} "
@@ -346,6 +361,33 @@ def download_raw_file(self, dep_ref: DependencyReference, file_path: str, ref: s
         except requests.exceptions.RequestException as e:
             raise RuntimeError(f"Network error downloading {file_path}: {e}")
 
+    def validate_virtual_package_exists(self, dep_ref: DependencyReference) -> bool:
+        """Validate that a virtual package (file or collection) exists on GitHub.
+ + Args: + dep_ref: Parsed dependency reference for virtual package + + Returns: + bool: True if the package exists and is accessible, False otherwise + """ + if not dep_ref.is_virtual: + raise ValueError("Can only validate virtual packages with this method") + + ref = dep_ref.reference or "main" + file_path = dep_ref.virtual_path + + # For collections, check for .collection.yml file + if 'collections/' in dep_ref.virtual_path: + file_path = f"{dep_ref.virtual_path}.collection.yml" + + # Try to download the file (will use existing auth and host detection) + try: + self.download_raw_file(dep_ref, file_path, ref) + return True + except RuntimeError: + # File doesn't exist or isn't accessible + return False + def download_virtual_file_package(self, dep_ref: DependencyReference, target_path: Path) -> PackageInfo: """Download a single file as a virtual APM package. @@ -455,6 +497,141 @@ def download_virtual_file_package(self, dep_ref: DependencyReference, target_pat installed_at=datetime.now().isoformat() ) + def download_collection_package(self, dep_ref: DependencyReference, target_path: Path) -> PackageInfo: + """Download a collection as a virtual APM package. + + Downloads the collection manifest, then fetches all referenced files and + organizes them into the appropriate .apm/ subdirectories. 
+ + Args: + dep_ref: Dependency reference with virtual_path pointing to collection + target_path: Local path where virtual package should be created + + Returns: + PackageInfo: Information about the created virtual package + + Raises: + ValueError: If the dependency is not a valid collection package + RuntimeError: If download fails + """ + if not dep_ref.is_virtual or not dep_ref.virtual_path: + raise ValueError("Dependency must be a virtual collection package") + + if not dep_ref.is_virtual_collection(): + raise ValueError(f"Path '{dep_ref.virtual_path}' is not a valid collection path") + + # Determine the ref to use + ref = dep_ref.reference or "main" + + # Extract collection name from path (e.g., "collections/project-planning" -> "project-planning") + collection_name = dep_ref.virtual_path.split('/')[-1] + + # Build collection manifest path - try .yml first, then .yaml as fallback + collection_manifest_path = f"{dep_ref.virtual_path}.collection.yml" + + # Download the collection manifest + try: + manifest_content = self.download_raw_file(dep_ref, collection_manifest_path, ref) + except RuntimeError as e: + # Try .yaml extension as fallback + if ".collection.yml" in str(e): + collection_manifest_path = f"{dep_ref.virtual_path}.collection.yaml" + try: + manifest_content = self.download_raw_file(dep_ref, collection_manifest_path, ref) + except RuntimeError: + raise RuntimeError(f"Collection manifest not found: {dep_ref.virtual_path}.collection.yml (also tried .yaml)") + else: + raise RuntimeError(f"Failed to download collection manifest: {e}") + + # Parse the collection manifest + from .collection_parser import parse_collection_yml + + try: + manifest = parse_collection_yml(manifest_content) + except (ValueError, Exception) as e: + raise RuntimeError(f"Invalid collection manifest '{collection_name}': {e}") + + # Create target directory structure + target_path.mkdir(parents=True, exist_ok=True) + + # Download all items from the collection + downloaded_count = 0 + 
failed_items = [] + + for item in manifest.items: + try: + # Download the file + item_content = self.download_raw_file(dep_ref, item.path, ref) + + # Determine subdirectory based on item kind + subdir = item.subdirectory + + # Create the subdirectory + apm_subdir = target_path / ".apm" / subdir + apm_subdir.mkdir(parents=True, exist_ok=True) + + # Write the file + filename = item.path.split('/')[-1] + file_path = apm_subdir / filename + file_path.write_bytes(item_content) + + downloaded_count += 1 + + except RuntimeError as e: + # Log the failure but continue with other items + failed_items.append(f"{item.path} ({e})") + continue + + # Check if we downloaded at least some items + if downloaded_count == 0: + error_msg = f"Failed to download any items from collection '{collection_name}'" + if failed_items: + error_msg += f". Failures:\n - " + "\n - ".join(failed_items) + raise RuntimeError(error_msg) + + # Generate apm.yml with collection metadata + package_name = dep_ref.get_virtual_package_name() + + apm_yml_content = f"""name: {package_name} +version: 1.0.0 +description: {manifest.description} +author: {dep_ref.repo_url.split('/')[0]} +""" + + # Add tags if present + if manifest.tags: + apm_yml_content += f"\ntags:\n" + for tag in manifest.tags: + apm_yml_content += f" - {tag}\n" + + apm_yml_path = target_path / "apm.yml" + apm_yml_path.write_text(apm_yml_content, encoding='utf-8') + + # Create APMPackage object + package = APMPackage( + name=package_name, + version="1.0.0", + description=manifest.description, + author=dep_ref.repo_url.split('/')[0], + source=dep_ref.to_github_url(), + package_path=target_path + ) + + # Log warnings for failed items if any + if failed_items: + import warnings + warnings.warn( + f"Collection '{collection_name}' installed with {downloaded_count}/{manifest.item_count} items. 
" + f"Failed items: {len(failed_items)}" + ) + + # Return PackageInfo + return PackageInfo( + package=package, + install_path=target_path, + installed_at=datetime.now().isoformat() + ) + def download_package(self, repo_ref: str, target_path: Path) -> PackageInfo: """Download a GitHub repository and validate it as an APM package. @@ -484,8 +661,8 @@ def download_package(self, repo_ref: str, target_path: Path) -> PackageInfo: # Individual file virtual package return self.download_virtual_file_package(dep_ref, target_path) elif dep_ref.is_virtual_collection(): - # Collection virtual package (Phase 2) - raise NotImplementedError("Collection virtual packages will be implemented in Phase 2") + # Collection virtual package + return self.download_collection_package(dep_ref, target_path) else: raise ValueError(f"Unknown virtual package type for {dep_ref.virtual_path}") diff --git a/src/apm_cli/models/apm_package.py b/src/apm_cli/models/apm_package.py index ee979ad8..3a3b2ad7 100644 --- a/src/apm_cli/models/apm_package.py +++ b/src/apm_cli/models/apm_package.py @@ -29,6 +29,11 @@ class ValidationError(Enum): INVALID_PRIMITIVE_STRUCTURE = "invalid_primitive_structure" +class InvalidVirtualPackageExtensionError(ValueError): + """Raised when a virtual package file has an invalid extension.""" + pass + + @dataclass class ResolvedReference: """Represents a resolved Git reference.""" @@ -197,7 +202,7 @@ def parse(cls, dependency_str: str) -> "DependencyReference": # Individual file virtual package - must end with valid extension valid_extension = any(virtual_path.endswith(ext) for ext in cls.VIRTUAL_FILE_EXTENSIONS) if not valid_extension: - raise ValueError( + raise InvalidVirtualPackageExtensionError( f"Invalid virtual package path '{virtual_path}'. 
" f"Individual files must end with one of: {', '.join(cls.VIRTUAL_FILE_EXTENSIONS)}" ) diff --git a/tests/integration/test_auto_install_e2e.py b/tests/integration/test_auto_install_e2e.py new file mode 100644 index 00000000..eb479f4c --- /dev/null +++ b/tests/integration/test_auto_install_e2e.py @@ -0,0 +1,364 @@ +""" +End-to-end tests for auto-install feature (README Hero Scenario). + +Tests the exact zero-config flow from the README: + apm run github/awesome-copilot/prompts/architecture-blueprint-generator + +This validates that users can run virtual packages without manual installation. + +Note: Tests terminate execution early (after auto-install completes) to save time. +The full execution is already tested in test_golden_scenario_e2e.py. +""" + +import os +import pytest +import subprocess +import tempfile +import shutil +from pathlib import Path + + +# Skip all tests in this module if not in E2E mode +E2E_MODE = os.environ.get('APM_E2E_TESTS', '').lower() in ('1', 'true', 'yes') + +pytestmark = pytest.mark.skipif( + not E2E_MODE, + reason="E2E tests only run when APM_E2E_TESTS=1 is set" +) + + +@pytest.fixture(scope="module") +def temp_e2e_home(): + """Create a temporary home directory for E2E testing.""" + with tempfile.TemporaryDirectory() as temp_dir: + original_home = os.environ.get('HOME') + test_home = os.path.join(temp_dir, 'e2e_home') + os.makedirs(test_home) + + # Set up test environment + os.environ['HOME'] = test_home + + yield test_home + + # Restore original environment + if original_home: + os.environ['HOME'] = original_home + else: + del os.environ['HOME'] + + +class TestAutoInstallE2E: + """E2E tests for auto-install functionality.""" + + def setup_method(self): + """Set up test environment.""" + # Create isolated test directory + self.test_dir = tempfile.mkdtemp(prefix="apm-auto-install-e2e-") + self.original_dir = os.getcwd() + os.chdir(self.test_dir) + + # Create minimal apm.yml for testing + with open("apm.yml", "w") as f: + 
f.write("""name: auto-install-test +version: 1.0.0 +description: Auto-install E2E test project +author: test +""") + + def teardown_method(self): + """Clean up test environment.""" + os.chdir(self.original_dir) + if os.path.exists(self.test_dir): + shutil.rmtree(self.test_dir) + + def test_auto_install_virtual_prompt_first_run(self, temp_e2e_home): + """Test auto-install on first run with virtual package reference. + + This is the exact README hero scenario: + apm run github/awesome-copilot/prompts/architecture-blueprint-generator + + Expected behavior: + 1. Package doesn't exist locally + 2. APM detects it's a virtual package reference + 3. Auto-installs to apm_modules/ + 4. Discovers and attempts to run the prompt + 5. Terminates before full execution to save time + """ + # Verify package doesn't exist initially + apm_modules = Path("apm_modules") + assert not apm_modules.exists(), "apm_modules should not exist initially" + + # Set up environment (like golden scenario does) + env = os.environ.copy() + env['HOME'] = temp_e2e_home + + # Run the exact README command with streaming output monitoring + process = subprocess.Popen( + [ + "apm", + "run", + "github/awesome-copilot/prompts/architecture-blueprint-generator" + ], + stdout=subprocess.PIPE, + stderr=subprocess.STDOUT, + text=True, + cwd=self.test_dir, + env=env + ) + + output_lines = [] + execution_started = False + + # Monitor output and terminate once execution starts + try: + for line in iter(process.stdout.readline, ''): + if not line: + break + output_lines.append(line) + print(line.rstrip()) # Show progress + + # Once we see "Package installed and ready to run", execution is about to start + # Terminate to avoid waiting for full prompt execution + if "✨ Package installed and ready to run" in line: + execution_started = True + print("\n⚡ Test validated - terminating to save time") + process.terminate() + break + + # Wait for graceful shutdown + try: + process.wait(timeout=5) + except 
subprocess.TimeoutExpired: + process.kill() + process.wait() + + finally: + output = ''.join(output_lines) + + # Check output for auto-install messages + assert "Auto-installing virtual package" in output or "📦" in output, \ + "Should show auto-install message" + assert "Downloading from" in output or "📥" in output, \ + "Should show download message" + assert execution_started, "Should have started execution (✨ Package installed and ready to run)" + + # Verify package was installed + package_path = apm_modules / "github" / "awesome-copilot-architecture-blueprint-generator" + assert package_path.exists(), f"Package should be installed at {package_path}" + + # Verify apm.yml was created in the virtual package + apm_yml = package_path / "apm.yml" + assert apm_yml.exists(), "Virtual package should have apm.yml" + + # Verify the prompt file exists + prompt_file = package_path / ".apm" / "prompts" / "architecture-blueprint-generator.prompt.md" + assert prompt_file.exists(), f"Prompt file should exist at {prompt_file}" + + print(f"✅ Auto-install successful: {package_path}") + + def test_auto_install_uses_cache_on_second_run(self, temp_e2e_home): + """Test that second run uses cached package (no re-download). + + Expected behavior: + 1. First run installs package + 2. Second run discovers already-installed package + 3. 
No download happens on second run + """ + # Set up environment + env = os.environ.copy() + env['HOME'] = temp_e2e_home + + # First run - install with early termination + process = subprocess.Popen( + [ + "apm", + "run", + "github/awesome-copilot/prompts/architecture-blueprint-generator" + ], + stdout=subprocess.PIPE, + stderr=subprocess.STDOUT, + text=True, + cwd=self.test_dir, + env=env + ) + + try: + for line in iter(process.stdout.readline, ''): + if not line: + break + if "✨ Package installed and ready to run" in line: + process.terminate() + break + process.wait(timeout=5) + except subprocess.TimeoutExpired: + process.kill() + process.wait() + + # Verify package exists + package_path = Path("apm_modules/github/awesome-copilot-architecture-blueprint-generator") + assert package_path.exists(), "Package should exist after first run" + + # Second run - should use cache with early termination + process = subprocess.Popen( + [ + "apm", + "run", + "github/awesome-copilot/prompts/architecture-blueprint-generator" + ], + stdout=subprocess.PIPE, + stderr=subprocess.STDOUT, + text=True, + cwd=self.test_dir, + env=env + ) + + output_lines = [] + try: + for line in iter(process.stdout.readline, ''): + if not line: + break + output_lines.append(line) + # Terminate once we see execution starting (no need for full run) + if "Executing" in line or "✨" in line: + process.terminate() + break + process.wait(timeout=5) + except subprocess.TimeoutExpired: + process.kill() + process.wait() + finally: + output = ''.join(output_lines) + + # Check output - should NOT show install/download messages + assert "Auto-installing" not in output, "Should not auto-install on second run" + assert "Auto-discovered" in output or "ℹ" in output, \ + "Should show auto-discovery message (using cached package)" + + print("✅ Second run used cached package (no re-download)") + + def test_simple_name_works_after_install(self, temp_e2e_home): + """Test that simple name works after package is installed. + + Expected behavior: + 1. 
Install package with full path + 2. Run with simple name (just the prompt name) + 3. Should discover and run from installed package + """ + # Set up environment + env = os.environ.copy() + env['HOME'] = temp_e2e_home + + # First install with full path - early termination + process = subprocess.Popen( + [ + "apm", + "run", + "github/awesome-copilot/prompts/architecture-blueprint-generator" + ], + stdout=subprocess.PIPE, + stderr=subprocess.STDOUT, + text=True, + cwd=self.test_dir, + env=env + ) + + try: + for line in iter(process.stdout.readline, ''): + if not line: + break + if "✨ Package installed and ready to run" in line: + process.terminate() + break + process.wait(timeout=5) + except subprocess.TimeoutExpired: + process.kill() + process.wait() + + # Run with simple name - early termination + process = subprocess.Popen( + [ + "apm", + "run", + "architecture-blueprint-generator" + ], + stdout=subprocess.PIPE, + stderr=subprocess.STDOUT, + text=True, + cwd=self.test_dir, + env=env + ) + + output_lines = [] + try: + for line in iter(process.stdout.readline, ''): + if not line: + break + output_lines.append(line) + # Terminate once we see execution starting + if "Executing" in line or "Auto-discovered" in line: + process.terminate() + break + process.wait(timeout=5) + except subprocess.TimeoutExpired: + process.kill() + process.wait() + finally: + output = ''.join(output_lines) + + # Check output - should discover the installed prompt + assert "Auto-discovered" in output or "ℹ" in output, \ + "Should auto-discover prompt from installed package" + + print("✅ Simple name works after installation") + + def test_auto_install_with_qualified_path(self, temp_e2e_home): + """Test auto-install works with qualified path. 
+ + Runs the file path form without its .prompt.md extension: + github/awesome-copilot/prompts/architecture-blueprint-generator + """ + # Set up environment + env = os.environ.copy() + env['HOME'] = temp_e2e_home + + # Test with qualified path (without .prompt.md extension) - early termination + process = subprocess.Popen( + [ + "apm", + "run", + "github/awesome-copilot/prompts/architecture-blueprint-generator" + ], + stdout=subprocess.PIPE, + stderr=subprocess.STDOUT, + text=True, + cwd=self.test_dir, + env=env + ) + + try: + for line in iter(process.stdout.readline, ''): + if not line: + break + # Terminate once installation completes + if "✨ Package installed and ready to run" in line: + process.terminate() + break + process.wait(timeout=5) + except subprocess.TimeoutExpired: + process.kill() + process.wait() + + # Check that package was installed + package_path = Path("apm_modules/github/awesome-copilot-architecture-blueprint-generator") + assert package_path.exists(), "Package should be installed" + + # Check that prompt file exists + prompt_file = package_path / ".apm" / "prompts" / "architecture-blueprint-generator.prompt.md" + assert prompt_file.exists(), "Prompt file should exist" + + print("✅ Auto-install works with qualified path") + + +if __name__ == "__main__": + pytest.main([__file__, "-v", "-s"]) diff --git a/tests/integration/test_collection_install.py b/tests/integration/test_collection_install.py new file mode 100644 index 00000000..f14becce --- /dev/null +++ b/tests/integration/test_collection_install.py @@ -0,0 +1,165 @@ +"""Integration tests for collection virtual package installation.""" + +import pytest +from pathlib import Path +import tempfile +import shutil + +from apm_cli.deps.github_downloader import GitHubPackageDownloader +from apm_cli.models.apm_package import DependencyReference + + +class TestCollectionInstallation: + """Test collection virtual package installation from GitHub.""" + + def test_parse_collection_dependency(self): 
"""Test parsing a collection dependency reference.""" + dep_ref = DependencyReference.parse("github/awesome-copilot/collections/awesome-copilot") + + assert dep_ref.is_virtual is True + assert dep_ref.is_virtual_collection() is True + assert dep_ref.is_virtual_file() is False + assert dep_ref.repo_url == "github/awesome-copilot" + assert dep_ref.virtual_path == "collections/awesome-copilot" + assert dep_ref.get_virtual_package_name() == "awesome-copilot-awesome-copilot" + + def test_parse_collection_with_reference(self): + """Test parsing a collection dependency with git reference.""" + dep_ref = DependencyReference.parse("github/awesome-copilot/collections/project-planning#main") + + assert dep_ref.is_virtual is True + assert dep_ref.is_virtual_collection() is True + assert dep_ref.reference == "main" + assert dep_ref.virtual_path == "collections/project-planning" + + @pytest.mark.integration + @pytest.mark.slow + def test_download_small_collection(self): + """Test downloading a small collection from awesome-copilot. 
+ + This is a real integration test that requires: + - Network access + - GitHub API access + - The github/awesome-copilot repository to be accessible + """ + with tempfile.TemporaryDirectory() as temp_dir: + target_path = Path(temp_dir) / "test-collection" + + downloader = GitHubPackageDownloader() + + # Download the smallest collection (awesome-copilot has 6 items) + package_info = downloader.download_package( + "github/awesome-copilot/collections/awesome-copilot", + target_path + ) + + # Verify package was created + assert package_info is not None + assert package_info.package.name == "awesome-copilot-awesome-copilot" + assert "Meta prompts" in package_info.package.description + + # Verify apm.yml was generated + apm_yml = target_path / "apm.yml" + assert apm_yml.exists() + + # Verify .apm directory structure + apm_dir = target_path / ".apm" + assert apm_dir.exists() + + # Verify files were downloaded to correct subdirectories + # The collection should have prompts and chatmodes + prompts_dir = apm_dir / "prompts" + chatmodes_dir = apm_dir / "chatmodes" + + # At least one of these should exist and have files + has_prompts = prompts_dir.exists() and any(prompts_dir.iterdir()) + has_chatmodes = chatmodes_dir.exists() and any(chatmodes_dir.iterdir()) + + assert has_prompts or has_chatmodes, "Collection should have downloaded some files" + + def test_collection_manifest_parsing(self): + """Test parsing a collection manifest.""" + from apm_cli.deps.collection_parser import parse_collection_yml + + manifest_yaml = b""" +id: test-collection +name: Test Collection +description: A test collection for unit testing +tags: [testing, example] +items: + - path: prompts/test-prompt.prompt.md + kind: prompt + - path: instructions/test-instruction.instructions.md + kind: instruction + - path: chatmodes/test-mode.chatmode.md + kind: chat-mode +display: + ordering: alpha + show_badge: true +""" + + manifest = parse_collection_yml(manifest_yaml) + + assert manifest.id == 
"test-collection" + assert manifest.name == "Test Collection" + assert manifest.description == "A test collection for unit testing" + assert len(manifest.items) == 3 + assert manifest.tags == ["testing", "example"] + + # Check item parsing + assert manifest.items[0].path == "prompts/test-prompt.prompt.md" + assert manifest.items[0].kind == "prompt" + assert manifest.items[0].subdirectory == "prompts" + + assert manifest.items[1].kind == "instruction" + assert manifest.items[1].subdirectory == "instructions" + + assert manifest.items[2].kind == "chat-mode" + assert manifest.items[2].subdirectory == "chatmodes" + + def test_collection_manifest_validation_missing_fields(self): + """Test that collection manifest validation catches missing fields.""" + from apm_cli.deps.collection_parser import parse_collection_yml + + # Missing required field 'description' + invalid_yaml = b""" +id: test +name: Test +items: + - path: test.prompt.md + kind: prompt +""" + + with pytest.raises(ValueError, match="missing required fields"): + parse_collection_yml(invalid_yaml) + + def test_collection_manifest_validation_empty_items(self): + """Test that collection manifest validation catches empty items.""" + from apm_cli.deps.collection_parser import parse_collection_yml + + # Empty items array + invalid_yaml = b""" +id: test +name: Test +description: Test collection +items: [] +""" + + with pytest.raises(ValueError, match="must contain at least one item"): + parse_collection_yml(invalid_yaml) + + def test_collection_manifest_validation_invalid_item(self): + """Test that collection manifest validation catches invalid items.""" + from apm_cli.deps.collection_parser import parse_collection_yml + + # Item missing 'kind' field + invalid_yaml = b""" +id: test +name: Test +description: Test collection +items: + - path: test.prompt.md +""" + + with pytest.raises(ValueError, match="missing required field"): + parse_collection_yml(invalid_yaml) diff --git 
a/tests/integration/test_guardrailing_hero_e2e.py b/tests/integration/test_guardrailing_hero_e2e.py new file mode 100644 index 00000000..32521af8 --- /dev/null +++ b/tests/integration/test_guardrailing_hero_e2e.py @@ -0,0 +1,253 @@ +""" +End-to-end test for README Hero Scenario 2: 2-Minute Guardrailing + +Tests the exact 2-minute guardrailing flow from README (lines 46-60): +1. apm init my-project && cd my-project +2. apm install danielmeppiel/design-guidelines +3. apm install danielmeppiel/compliance-rules +4. apm compile +5. apm run design-review + +This validates that: +- Multiple APM packages can be installed +- AGENTS.md is generated with combined guardrails +- Prompts from installed packages work correctly +""" + +import os +import subprocess +import tempfile +import pytest +from pathlib import Path + + +# Skip all tests in this module if not in E2E mode +E2E_MODE = os.environ.get('APM_E2E_TESTS', '').lower() in ('1', 'true', 'yes') + +# Token detection for test requirements +GITHUB_APM_PAT = os.environ.get('GITHUB_APM_PAT') +GITHUB_TOKEN = os.environ.get('GITHUB_TOKEN') +PRIMARY_TOKEN = GITHUB_APM_PAT or GITHUB_TOKEN + +pytestmark = pytest.mark.skipif( + not E2E_MODE, + reason="E2E tests only run when APM_E2E_TESTS=1 is set" +) + + +def run_command(cmd, check=True, capture_output=True, timeout=180, cwd=None, show_output=False, env=None): + """Run a shell command with proper error handling.""" + try: + if show_output: + print(f"\n>>> Running command: {cmd}") + # Run the command once with output captured, then echo it. + # Running it twice (once streamed, once captured) would repeat + # side effects such as package installs. + result = subprocess.run( + cmd, + shell=True, + check=check, + capture_output=True, + text=True, + timeout=timeout, + cwd=cwd, + env=env + ) + if result.stdout: + print(result.stdout) + else: + result = subprocess.run( + cmd, + shell=True, + check=check, + capture_output=capture_output, + text=True, + timeout=timeout, 
+ cwd=cwd, + env=env + ) + return result + except subprocess.TimeoutExpired: + pytest.fail(f"Command timed out after {timeout}s: {cmd}") + except subprocess.CalledProcessError as e: + pytest.fail(f"Command failed: {cmd}\nStdout: {e.stdout}\nStderr: {e.stderr}") + + +@pytest.fixture(scope="module") +def apm_binary(): + """Get path to APM binary for testing.""" + possible_paths = [ + "apm", + "./apm", + "./dist/apm", + Path(__file__).parent.parent.parent / "dist" / "apm", + ] + + for path in possible_paths: + try: + result = subprocess.run([str(path), "--version"], capture_output=True, text=True) + if result.returncode == 0: + return str(path) + except (subprocess.CalledProcessError, FileNotFoundError): + continue + + pytest.skip("APM binary not found. Build it first with: python -m build") + + +class TestGuardrailingHeroScenario: + """Test README Hero Scenario 2: 2-Minute Guardrailing""" + + @pytest.mark.skipif(not PRIMARY_TOKEN, reason="GitHub token required for E2E tests") + def test_2_minute_guardrailing_flow(self, apm_binary): + """Test the exact 2-minute guardrailing flow from README. + + Validates: + 1. apm init my-project creates minimal project + 2. apm install danielmeppiel/design-guidelines succeeds + 3. apm install danielmeppiel/compliance-rules succeeds + 4. apm compile generates AGENTS.md with both packages + 5. 
apm run design-review executes prompt from installed package + """ + + with tempfile.TemporaryDirectory() as workspace: + # Step 1: apm init my-project + print("\n=== Step 1: apm init my-project ===") + result = run_command(f"{apm_binary} init my-project --yes", cwd=workspace, show_output=True) + assert result.returncode == 0, f"Project init failed: {result.stderr}" + + project_dir = Path(workspace) / "my-project" + assert project_dir.exists(), "Project directory not created" + assert (project_dir / "apm.yml").exists(), "apm.yml not created" + + print("✓ Project initialized") + + # Step 2: apm install danielmeppiel/design-guidelines + print("\n=== Step 2: apm install danielmeppiel/design-guidelines ===") + env = os.environ.copy() + result = run_command( + f"{apm_binary} install danielmeppiel/design-guidelines", + cwd=project_dir, + show_output=True, + env=env + ) + assert result.returncode == 0, f"design-guidelines install failed: {result.stderr}" + + # Verify installation + design_pkg = project_dir / "apm_modules" / "danielmeppiel" / "design-guidelines" + assert design_pkg.exists(), "design-guidelines package not installed" + assert (design_pkg / "apm.yml").exists(), "design-guidelines apm.yml not found" + + print("✓ design-guidelines installed") + + # Step 3: apm install danielmeppiel/compliance-rules + print("\n=== Step 3: apm install danielmeppiel/compliance-rules ===") + result = run_command( + f"{apm_binary} install danielmeppiel/compliance-rules", + cwd=project_dir, + show_output=True, + env=env + ) + assert result.returncode == 0, f"compliance-rules install failed: {result.stderr}" + + # Verify installation + compliance_pkg = project_dir / "apm_modules" / "danielmeppiel" / "compliance-rules" + assert compliance_pkg.exists(), "compliance-rules package not installed" + assert (compliance_pkg / "apm.yml").exists(), "compliance-rules apm.yml not found" + + print("✓ compliance-rules installed") + + # Step 4: apm compile + print("\n=== Step 4: apm compile ===") + 
result = run_command(f"{apm_binary} compile", cwd=project_dir, show_output=True) + assert result.returncode == 0, f"Compilation failed: {result.stderr}" + + # Verify AGENTS.md was generated + agents_md = project_dir / "AGENTS.md" + assert agents_md.exists(), "AGENTS.md not generated" + + # Verify AGENTS.md contains content from both packages + agents_content = agents_md.read_text() + assert "design-guidelines" in agents_content.lower() or "design" in agents_content.lower(), \ + "AGENTS.md doesn't contain design-guidelines content" + assert "compliance" in agents_content.lower() or "gdpr" in agents_content.lower(), \ + "AGENTS.md doesn't contain compliance-rules content" + + print(f"✓ AGENTS.md generated ({len(agents_content)} bytes)") + print(f" Contains design-guidelines: ✓") + print(f" Contains compliance-rules: ✓") + + # Step 5: apm run design-review + print("\n=== Step 5: apm run design-review ===") + + # Use early termination pattern - we only need to verify prompt starts correctly + # Don't wait for full Copilot CLI execution (takes minutes) + process = subprocess.Popen( + f"{apm_binary} run design-review", + shell=True, + stdout=subprocess.PIPE, + stderr=subprocess.STDOUT, + text=True, + cwd=project_dir, + env=env + ) + + # Monitor output for success signals + output_lines = [] + prompt_started = False + + try: + for line in iter(process.stdout.readline, ''): + if not line: + break + + output_lines.append(line.rstrip()) + print(f" {line.rstrip()}") + + # Look for signals that prompt execution started successfully + if any(signal in line for signal in [ + "Subprocess execution:", # Codex about to run + ]): + prompt_started = True + print("✓ design-review prompt execution started") + break + + # Terminate the process gracefully + if process.poll() is None: + process.terminate() + try: + process.wait(timeout=5) + except subprocess.TimeoutExpired: + process.kill() + process.wait() + + except Exception as e: + process.kill() + process.wait() + pytest.fail(f"Error 
monitoring design-review execution: {e}") + + # Verify prompt was found and started + full_output = '\n'.join(output_lines) + assert prompt_started or "design-review" in full_output, \ + f"Prompt execution didn't start correctly. Output:\n{full_output}" + + print("✓ design-review prompt found and started successfully") + + print("\n=== 2-Minute Guardrailing Hero Scenario: PASSED ✨ ===") + print("✓ Project initialization") + print("✓ Multiple APM package installation") + print("✓ AGENTS.md compilation with combined guardrails") + print("✓ Prompt execution from installed package") + + +if __name__ == "__main__": + if E2E_MODE: + pytest.main([__file__, "-v", "-s"]) + else: + print("E2E mode not enabled. Set APM_E2E_TESTS=1 to run these tests.") diff --git a/tests/integration/test_virtual_package_orphan_detection.py b/tests/integration/test_virtual_package_orphan_detection.py new file mode 100644 index 00000000..e8e416f2 --- /dev/null +++ b/tests/integration/test_virtual_package_orphan_detection.py @@ -0,0 +1,237 @@ +""" +Integration tests for orphan detection with virtual packages. + +Tests that virtual packages (individual files and collections) are correctly +recognized and not flagged as orphaned when they are declared in apm.yml. 
+""" + +import tempfile +from pathlib import Path +import pytest +import yaml +from apm_cli.models.apm_package import APMPackage + + +@pytest.mark.integration +def test_virtual_collection_not_flagged_as_orphan(tmp_path): + """Test that installed virtual collection is not flagged as orphaned.""" + # Create test project structure + project_dir = tmp_path / "test-project" + project_dir.mkdir() + + # Create apm.yml with collection dependency + apm_yml_content = { + "name": "test-project", + "version": "1.0.0", + "dependencies": { + "apm": [ + "github/awesome-copilot/collections/awesome-copilot" + ] + } + } + + with open(project_dir / "apm.yml", "w") as f: + yaml.dump(apm_yml_content, f) + + # Simulate installed virtual collection package + # Virtual collections are installed as: apm_modules/{org}/{repo-name}-{collection-name}/ + collection_dir = project_dir / "apm_modules" / "github" / "awesome-copilot-awesome-copilot" + collection_dir.mkdir(parents=True) + + # Create generated apm.yml in the collection + collection_apm = { + "name": "awesome-copilot-awesome-copilot", + "version": "1.0.0", + "description": "Virtual collection package" + } + with open(collection_dir / "apm.yml", "w") as f: + yaml.dump(collection_apm, f) + + # Add some files to make it realistic + (collection_dir / ".apm").mkdir() + (collection_dir / ".apm" / "prompts").mkdir() + (collection_dir / ".apm" / "prompts" / "test.prompt.md").write_text("# Test prompt") + + # Parse apm.yml and check for orphans + apm_package = APMPackage.from_apm_yml(project_dir / "apm.yml") + declared_deps = apm_package.get_apm_dependencies() + + # Build expected installed packages set (same logic as _check_orphaned_packages) + expected_installed = set() + for dep in declared_deps: + repo_parts = dep.repo_url.split('/') + if len(repo_parts) >= 2: + org_name = repo_parts[0] + if dep.is_virtual: + package_name = dep.get_virtual_package_name() + expected_installed.add(f"{org_name}/{package_name}") + else: + repo_name = 
repo_parts[1] + expected_installed.add(f"{org_name}/{repo_name}") + + # Check installed packages + apm_modules_dir = project_dir / "apm_modules" + orphaned_packages = [] + for org_dir in apm_modules_dir.iterdir(): + if org_dir.is_dir() and not org_dir.name.startswith("."): + for repo_dir in org_dir.iterdir(): + if repo_dir.is_dir() and not repo_dir.name.startswith("."): + org_repo_name = f"{org_dir.name}/{repo_dir.name}" + if org_repo_name not in expected_installed: + orphaned_packages.append(org_repo_name) + + # Assert no orphans found + assert len(orphaned_packages) == 0, \ + f"Virtual collection should not be flagged as orphaned. Found: {orphaned_packages}" + + +@pytest.mark.integration +def test_virtual_file_not_flagged_as_orphan(tmp_path): + """Test that installed virtual file package is not flagged as orphaned.""" + # Create test project structure + project_dir = tmp_path / "test-project" + project_dir.mkdir() + + # Create apm.yml with virtual file dependency + apm_yml_content = { + "name": "test-project", + "version": "1.0.0", + "dependencies": { + "apm": [ + "github/awesome-copilot/prompts/code-review.prompt.md" + ] + } + } + + with open(project_dir / "apm.yml", "w") as f: + yaml.dump(apm_yml_content, f) + + # Simulate installed virtual file package + # Virtual files are installed as: apm_modules/{org}/{repo-name}-{file-name}/ + file_pkg_dir = project_dir / "apm_modules" / "github" / "awesome-copilot-code-review" + file_pkg_dir.mkdir(parents=True) + + # Create generated apm.yml in the package + file_pkg_apm = { + "name": "awesome-copilot-code-review", + "version": "1.0.0", + "description": "Virtual file package" + } + with open(file_pkg_dir / "apm.yml", "w") as f: + yaml.dump(file_pkg_apm, f) + + # Add the prompt file + (file_pkg_dir / ".apm").mkdir() + (file_pkg_dir / ".apm" / "prompts").mkdir() + (file_pkg_dir / ".apm" / "prompts" / "code-review.prompt.md").write_text("# Code review prompt") + + # Parse apm.yml and check for orphans + apm_package = 
APMPackage.from_apm_yml(project_dir / "apm.yml") + declared_deps = apm_package.get_apm_dependencies() + + # Build expected installed packages set + expected_installed = set() + for dep in declared_deps: + repo_parts = dep.repo_url.split('/') + if len(repo_parts) >= 2: + org_name = repo_parts[0] + if dep.is_virtual: + package_name = dep.get_virtual_package_name() + expected_installed.add(f"{org_name}/{package_name}") + else: + repo_name = repo_parts[1] + expected_installed.add(f"{org_name}/{repo_name}") + + # Check installed packages + apm_modules_dir = project_dir / "apm_modules" + orphaned_packages = [] + for org_dir in apm_modules_dir.iterdir(): + if org_dir.is_dir() and not org_dir.name.startswith("."): + for repo_dir in org_dir.iterdir(): + if repo_dir.is_dir() and not repo_dir.name.startswith("."): + org_repo_name = f"{org_dir.name}/{repo_dir.name}" + if org_repo_name not in expected_installed: + orphaned_packages.append(org_repo_name) + + # Assert no orphans found + assert len(orphaned_packages) == 0, \ + f"Virtual file should not be flagged as orphaned. 
Found: {orphaned_packages}" + + +@pytest.mark.integration +def test_mixed_dependencies_orphan_detection(tmp_path): + """Test orphan detection with mix of regular and virtual packages.""" + # Create test project structure + project_dir = tmp_path / "test-project" + project_dir.mkdir() + + # Create apm.yml with mixed dependencies + apm_yml_content = { + "name": "test-project", + "version": "1.0.0", + "dependencies": { + "apm": [ + "danielmeppiel/design-guidelines", # Regular package + "github/awesome-copilot/collections/awesome-copilot", # Virtual collection + "danielmeppiel/compliance-rules/prompts/gdpr.prompt.md" # Virtual file + ] + } + } + + with open(project_dir / "apm.yml", "w") as f: + yaml.dump(apm_yml_content, f) + + # Simulate installed packages + apm_modules_dir = project_dir / "apm_modules" + + # Regular package + regular_dir = apm_modules_dir / "danielmeppiel" / "design-guidelines" + regular_dir.mkdir(parents=True) + (regular_dir / "apm.yml").write_text("name: design-guidelines\nversion: 1.0.0") + + # Virtual collection + collection_dir = apm_modules_dir / "github" / "awesome-copilot-awesome-copilot" + collection_dir.mkdir(parents=True) + (collection_dir / "apm.yml").write_text("name: awesome-copilot-awesome-copilot\nversion: 1.0.0") + + # Virtual file + file_dir = apm_modules_dir / "danielmeppiel" / "compliance-rules-gdpr" + file_dir.mkdir(parents=True) + (file_dir / "apm.yml").write_text("name: compliance-rules-gdpr\nversion: 1.0.0") + + # Parse apm.yml and check for orphans + apm_package = APMPackage.from_apm_yml(project_dir / "apm.yml") + declared_deps = apm_package.get_apm_dependencies() + + # Build expected installed packages set + expected_installed = set() + for dep in declared_deps: + repo_parts = dep.repo_url.split('/') + if len(repo_parts) >= 2: + org_name = repo_parts[0] + if dep.is_virtual: + package_name = dep.get_virtual_package_name() + expected_installed.add(f"{org_name}/{package_name}") + else: + repo_name = repo_parts[1] + 
+        expected_installed.add(f"{org_name}/{repo_name}")
+
+    # Check installed packages
+    orphaned_packages = []
+    for org_dir in apm_modules_dir.iterdir():
+        if org_dir.is_dir() and not org_dir.name.startswith("."):
+            for repo_dir in org_dir.iterdir():
+                if repo_dir.is_dir() and not repo_dir.name.startswith("."):
+                    org_repo_name = f"{org_dir.name}/{repo_dir.name}"
+                    if org_repo_name not in expected_installed:
+                        orphaned_packages.append(org_repo_name)
+
+    # Assert no orphans found
+    assert len(orphaned_packages) == 0, \
+        f"No packages should be flagged as orphaned. Found: {orphaned_packages}"
+
+    # Verify expected counts
+    assert len(expected_installed) == 3, "Should have 3 expected packages"
+    assert "danielmeppiel/design-guidelines" in expected_installed
+    assert "github/awesome-copilot-awesome-copilot" in expected_installed
+    assert "danielmeppiel/compliance-rules-gdpr" in expected_installed
diff --git a/tests/unit/test_script_runner.py b/tests/unit/test_script_runner.py
index 1af36bec..0ef3414e 100644
--- a/tests/unit/test_script_runner.py
+++ b/tests/unit/test_script_runner.py
@@ -480,3 +480,237 @@ def test_compile_with_dependency_resolution(self, mock_file, mock_mkdir):
         mock_file.assert_called()
         opened_path = mock_file.call_args_list[0][0][0]
         assert str(opened_path) == "apm_modules/danielmeppiel/design-guidelines/test.prompt.md"
+
+
+class TestScriptRunnerAutoInstall:
+    """Test ScriptRunner auto-install functionality."""
+
+    def setup_method(self):
+        """Set up test fixtures."""
+        self.script_runner = ScriptRunner()
+
+    def test_is_virtual_package_reference_valid_file(self):
+        """Test detection of valid virtual file package references."""
+        # Valid virtual file package reference
+        ref = "github/awesome-copilot/prompts/architecture-blueprint-generator.prompt.md"
+        assert self.script_runner._is_virtual_package_reference(ref) is True
+
+    def test_is_virtual_package_reference_valid_collection(self):
+        """Test detection of valid virtual collection package references."""
+        # Valid virtual collection package reference
+        ref = "github/awesome-copilot/collections/project-planning"
+        assert self.script_runner._is_virtual_package_reference(ref) is True
+
+    def test_is_virtual_package_reference_regular_package(self):
+        """Test detection rejects regular packages."""
+        # Regular package (not virtual)
+        ref = "danielmeppiel/design-guidelines"
+        assert self.script_runner._is_virtual_package_reference(ref) is False
+
+    def test_is_virtual_package_reference_simple_name(self):
+        """Test detection rejects simple names without slashes."""
+        # Simple name (not a virtual package)
+        ref = "code-review"
+        assert self.script_runner._is_virtual_package_reference(ref) is False
+
+    def test_is_virtual_package_reference_invalid_format(self):
+        """Test detection rejects invalid formats."""
+        # Invalid format
+        ref = "owner/repo/some/invalid/path.txt"
+        assert self.script_runner._is_virtual_package_reference(ref) is False
+
+    @patch('apm_cli.deps.github_downloader.GitHubPackageDownloader')
+    @patch('apm_cli.core.script_runner.Path.mkdir')
+    @patch('apm_cli.core.script_runner.Path.exists')
+    def test_auto_install_virtual_package_file_success(self, mock_exists, mock_mkdir, mock_downloader_class):
+        """Test successful auto-install of virtual file package."""
+        # Setup mocks
+        mock_exists.return_value = False  # Package not already installed
+        mock_downloader = MagicMock()
+        mock_downloader_class.return_value = mock_downloader
+
+        # Mock package info
+        mock_package = MagicMock()
+        mock_package.name = "awesome-copilot-architecture-blueprint-generator"
+        mock_package.version = "1.0.0"
+        mock_package_info = MagicMock()
+        mock_package_info.package = mock_package
+        mock_downloader.download_virtual_file_package.return_value = mock_package_info
+
+        # Test auto-install
+        ref = "github/awesome-copilot/prompts/architecture-blueprint-generator.prompt.md"
+        result = self.script_runner._auto_install_virtual_package(ref)
+
+        assert result is True
+        mock_downloader.download_virtual_file_package.assert_called_once()
+
+    @patch('apm_cli.deps.github_downloader.GitHubPackageDownloader')
+    @patch('apm_cli.core.script_runner.Path.mkdir')
+    @patch('apm_cli.core.script_runner.Path.exists')
+    def test_auto_install_virtual_package_collection_success(self, mock_exists, mock_mkdir, mock_downloader_class):
+        """Test successful auto-install of virtual collection package."""
+        # Setup mocks
+        mock_exists.return_value = False  # Package not already installed
+        mock_downloader = MagicMock()
+        mock_downloader_class.return_value = mock_downloader
+
+        # Mock package info
+        mock_package = MagicMock()
+        mock_package.name = "awesome-copilot-project-planning"
+        mock_package.version = "1.0.0"
+        mock_package_info = MagicMock()
+        mock_package_info.package = mock_package
+        mock_downloader.download_virtual_collection_package.return_value = mock_package_info
+
+        # Test auto-install
+        ref = "github/awesome-copilot/collections/project-planning"
+        result = self.script_runner._auto_install_virtual_package(ref)
+
+        assert result is True
+        mock_downloader.download_virtual_collection_package.assert_called_once()
+
+    @patch('apm_cli.core.script_runner.Path.exists')
+    def test_auto_install_virtual_package_already_installed(self, mock_exists):
+        """Test auto-install skips when package already installed."""
+        # Package already exists
+        mock_exists.return_value = True
+
+        ref = "github/awesome-copilot/prompts/architecture-blueprint-generator.prompt.md"
+        result = self.script_runner._auto_install_virtual_package(ref)
+
+        assert result is True  # Should return True (success) without downloading
+
+    @patch('apm_cli.deps.github_downloader.GitHubPackageDownloader')
+    @patch('apm_cli.core.script_runner.Path.mkdir')
+    @patch('apm_cli.core.script_runner.Path.exists')
+    def test_auto_install_virtual_package_download_failure(self, mock_exists, mock_mkdir, mock_downloader_class):
+        """Test auto-install handles download failures gracefully."""
+        # Setup mocks
+        mock_exists.return_value = False
+        mock_downloader = MagicMock()
+        mock_downloader_class.return_value = mock_downloader
+
+        # Simulate download failure
+        mock_downloader.download_virtual_file_package.side_effect = RuntimeError("Download failed")
+
+        # Test auto-install
+        ref = "github/awesome-copilot/prompts/architecture-blueprint-generator.prompt.md"
+        result = self.script_runner._auto_install_virtual_package(ref)
+
+        assert result is False  # Should return False on failure
+
+    def test_auto_install_virtual_package_invalid_reference(self):
+        """Test auto-install rejects invalid references."""
+        # Not a virtual package
+        ref = "danielmeppiel/design-guidelines"
+        result = self.script_runner._auto_install_virtual_package(ref)
+
+        assert result is False
+
+    @patch('apm_cli.core.script_runner.ScriptRunner._auto_install_virtual_package')
+    @patch('apm_cli.core.script_runner.ScriptRunner._discover_prompt_file')
+    @patch('apm_cli.core.script_runner.ScriptRunner._detect_installed_runtime')
+    @patch('apm_cli.core.script_runner.ScriptRunner._execute_script_command')
+    @patch('apm_cli.core.script_runner.Path.exists')
+    @patch('builtins.open', new_callable=mock_open, read_data="name: test\nscripts: {}")
+    def test_run_script_triggers_auto_install(self, mock_file, mock_exists, mock_execute,
+                                              mock_runtime, mock_discover, mock_auto_install):
+        """Test that run_script triggers auto-install for virtual package references."""
+        mock_exists.return_value = True  # apm.yml exists
+        mock_discover.side_effect = [None, Path("apm_modules/github/awesome-copilot-architecture-blueprint-generator/.apm/prompts/architecture-blueprint-generator.prompt.md")]
+        mock_auto_install.return_value = True
+        mock_runtime.return_value = "copilot"
+        mock_execute.return_value = True
+
+        ref = "github/awesome-copilot/prompts/architecture-blueprint-generator.prompt.md"
+        result = self.script_runner.run_script(ref, {})
+
+        # Verify auto-install was called
+        mock_auto_install.assert_called_once_with(ref)
+        # Verify discovery was attempted twice (before and after install)
+        assert mock_discover.call_count == 2
+        # Verify script was executed
+        mock_execute.assert_called_once()
+        assert result is True
+
+    @patch('apm_cli.core.script_runner.ScriptRunner._auto_install_virtual_package')
+    @patch('apm_cli.core.script_runner.ScriptRunner._discover_prompt_file')
+    @patch('apm_cli.core.script_runner.Path.exists')
+    @patch('builtins.open', new_callable=mock_open, read_data="name: test\nscripts: {}")
+    def test_run_script_auto_install_failure_shows_error(self, mock_file, mock_exists,
+                                                         mock_discover, mock_auto_install):
+        """Test that run_script shows helpful error when auto-install fails."""
+        mock_exists.return_value = True  # apm.yml exists
+        mock_discover.return_value = None
+        mock_auto_install.return_value = False  # Auto-install failed
+
+        ref = "github/awesome-copilot/prompts/architecture-blueprint-generator.prompt.md"
+
+        with pytest.raises(RuntimeError) as exc_info:
+            self.script_runner.run_script(ref, {})
+
+        error_msg = str(exc_info.value)
+        assert "Script or prompt" in error_msg
+        assert "not found" in error_msg
+
+    @patch('apm_cli.core.script_runner.ScriptRunner._auto_install_virtual_package')
+    @patch('apm_cli.core.script_runner.ScriptRunner._discover_prompt_file')
+    @patch('apm_cli.core.script_runner.Path.exists')
+    @patch('builtins.open', new_callable=mock_open, read_data="name: test\nscripts: {}")
+    def test_run_script_skips_auto_install_for_simple_names(self, mock_file, mock_exists,
+                                                            mock_discover, mock_auto_install):
+        """Test that run_script doesn't trigger auto-install for simple names."""
+        mock_exists.return_value = True  # apm.yml exists
+        mock_discover.return_value = None
+
+        # Simple name (not a virtual package reference)
+        ref = "code-review"
+
+        with pytest.raises(RuntimeError):
+            self.script_runner.run_script(ref, {})
+
+        # Auto-install should NOT be called for simple names
+        mock_auto_install.assert_not_called()
+
+    @patch('apm_cli.core.script_runner.ScriptRunner._discover_prompt_file')
+    @patch('apm_cli.core.script_runner.ScriptRunner._detect_installed_runtime')
+    @patch('apm_cli.core.script_runner.ScriptRunner._execute_script_command')
+    @patch('apm_cli.core.script_runner.Path.exists')
+    @patch('builtins.open', new_callable=mock_open, read_data="name: test\nscripts: {}")
+    def test_run_script_uses_cached_package(self, mock_file, mock_exists, mock_execute,
+                                            mock_runtime, mock_discover):
+        """Test that run_script uses already-installed package without re-downloading."""
+        mock_exists.return_value = True  # apm.yml exists
+        # Package already discovered (no auto-install needed)
+        mock_discover.return_value = Path("apm_modules/github/awesome-copilot-architecture-blueprint-generator/.apm/prompts/architecture-blueprint-generator.prompt.md")
+        mock_runtime.return_value = "copilot"
+        mock_execute.return_value = True
+
+        ref = "github/awesome-copilot/prompts/architecture-blueprint-generator.prompt.md"
+        result = self.script_runner.run_script(ref, {})
+
+        # Verify discovery found it on first try
+        mock_discover.assert_called_once()
+        # Verify script was executed
+        mock_execute.assert_called_once()
+        assert result is True
+
+    @patch('apm_cli.core.script_runner.ScriptRunner._auto_install_virtual_package')
+    @patch('apm_cli.core.script_runner.ScriptRunner._discover_prompt_file')
+    @patch('apm_cli.core.script_runner.Path.exists')
+    @patch('builtins.open', new_callable=mock_open, read_data="name: test\nscripts: {}")
+    def test_run_script_handles_install_success_but_no_prompt(self, mock_file, mock_exists,
+                                                              mock_discover, mock_auto_install):
+        """Test error when package installs successfully but prompt not found."""
+        mock_exists.return_value = True  # apm.yml exists
+        mock_discover.side_effect = [None, None]  # Not found before or after install
+        mock_auto_install.return_value = True  # Install succeeded
+
+        ref = "github/awesome-copilot/prompts/architecture-blueprint-generator.prompt.md"
+
+        with pytest.raises(RuntimeError) as exc_info:
+            self.script_runner.run_script(ref, {})
+
+        error_msg = str(exc_info.value)
+        assert "Package installed successfully but prompt not found" in error_msg
+        assert "may not contain the expected prompt file" in error_msg
diff --git a/uv.lock b/uv.lock
index a5ca16bc..3c0a00a1 100644
--- a/uv.lock
+++ b/uv.lock
@@ -166,7 +166,7 @@ wheels = [
 
 [[package]]
 name = "apm-cli"
-version = "0.4.3"
+version = "0.5.0"
 source = { editable = "." }
 dependencies = [
     { name = "click", version = "8.1.8", source = { registry = "https://pypi.org/simple" }, marker = "python_full_version < '3.10'" },
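Taken together, the `_is_virtual_package_reference` tests above pin down a small classification contract. The following standalone predicate is a minimal sketch of that contract for reference only, not the actual `ScriptRunner` implementation; in particular, it recognizes only the `.prompt.md` extension and the `collections/<name>` shape seen in the tests, while the real extension set may be broader.

```python
def is_virtual_package_reference(ref: str) -> bool:
    """Sketch of the detection rule the tests exercise.

    A reference is "virtual" when it points past owner/repo into the
    repository: either an individual primitive file or a
    collections/<name> path. Plain owner/repo references and bare
    script names are not virtual.
    """
    parts = ref.split("/")
    if len(parts) < 3:
        return False  # "code-review" or "owner/repo" are not virtual
    if len(parts) == 4 and parts[2] == "collections":
        return True  # owner/repo/collections/<name>
    # Individual file packages must target a recognized primitive file
    # (only .prompt.md is modeled here; an assumption of this sketch)
    return ref.endswith(".prompt.md")
```

Under this sketch, `github/awesome-copilot/prompts/x.prompt.md` and `github/awesome-copilot/collections/project-planning` classify as virtual, while `danielmeppiel/design-guidelines`, `code-review`, and `owner/repo/some/invalid/path.txt` do not, matching the five detection tests.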