Merged
4 changes: 4 additions & 0 deletions CHANGELOG.md
@@ -8,6 +8,10 @@ and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0

## [Unreleased]

### Added
Collaborator:

CHANGELOG entries must conform to the style of previous entries (one line per PR, like this one)

Contributor Author:

Fixed in 558ef60 — consolidated to a single line and corrected the PR reference from #261 to #260.


- Diff-aware `apm install` — manifest as source of truth: removed packages, ref/version changes, and MCP config drift in `apm.yml` all self-correct on the next `apm install` without `--update` or `--force`; introduces `drift.py` with pure helper functions (#260)

## [0.7.7] - 2026-03-10

### Added
7 changes: 7 additions & 0 deletions docs/src/content/docs/reference/cli-commands.md
@@ -92,6 +92,13 @@ apm install [PACKAGES...] [OPTIONS]
- `apm install` (no args): Installs **all** packages from `apm.yml`
- `apm install <package>`: Installs **only** the specified package (adds to `apm.yml` if not present)

**Diff-Aware Installation (manifest as source of truth):**
- MCP servers already configured with matching config are skipped (`already configured`)
- MCP servers already configured but with changed manifest config are re-applied automatically (`updated`)
- APM packages removed from `apm.yml` have their deployed files cleaned up on the next full `apm install`
- APM packages whose ref/version changed in `apm.yml` are re-downloaded automatically (no `--update` needed)
- `--force` remains available for full overwrite/reset scenarios
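The ref-drift rule in the bullets above boils down to a small pure predicate. A minimal sketch, assuming a hypothetical helper name (`drift.py`'s real `detect_ref_change` signature may differ):

```python
# Hypothetical sketch of the ref-drift check, not the actual drift.py API.

def ref_changed(manifest_ref, locked_ref, update_refs=False):
    """Return True when the manifest pins a different ref than the lockfile."""
    if update_refs:
        # --update refreshes everything anyway, so the drift check is moot.
        return False
    if locked_ref is None:
        # Nothing recorded yet: any manifest ref counts as a change.
        return manifest_ref is not None
    return manifest_ref != locked_ref

# A ref edited in apm.yml (v1.0 -> v1.1) triggers a re-download:
assert ref_changed("v1.1", "v1.0") is True
# Matching refs are left alone, keeping the install idempotent:
assert ref_changed("v1.0", "v1.0") is False
```

Keeping the predicate pure (no filesystem or git access) is what makes the drift helpers easy to unit-test in isolation.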

**Examples:**
```bash
# Install all dependencies from apm.yml
6 changes: 6 additions & 0 deletions docs/src/content/docs/reference/lockfile-spec.md
@@ -101,6 +101,7 @@ dependencies:
| `apm_version` | string | MUST | Version of APM that generated this lock file. |
| `dependencies` | array | MUST | Ordered list of resolved dependencies (see [section 4.2](#42-dependency-entries)). |
| `mcp_servers` | array | MAY | List of MCP server identifiers registered by installed packages. |
| `mcp_configs` | mapping | MAY | Mapping of MCP server name to its manifest configuration dict. Used for diff-aware installation — when config in `apm.yml` changes, `apm install` detects the drift and re-applies without `--force`. |
Copilot AI, Mar 12, 2026:

The new mcp_configs field is documented as storing the manifest configuration dict. Since this can include env / headers values, it can unintentionally persist secrets in apm.lock (which many users commit). Consider adding an explicit warning here (and/or changing the design to store hashes/redacted values) so users understand the security implications.

Suggested change
| `mcp_configs` | mapping | MAY | Mapping of MCP server name to its manifest configuration dict. Used for diff-aware installation — when config in `apm.yml` changes, `apm install` detects the drift and re-applies without `--force`. |
| `mcp_configs` | mapping | MAY | Mapping of MCP server name to its manifest configuration dict. Used for diff-aware installation — when config in `apm.yml` changes, `apm install` detects the drift and re-applies without `--force`. **Security note:** these configs may include sensitive values (for example `env` or HTTP `headers`), which are written in plain text to `apm.lock`; avoid storing secrets here if the lockfile is committed to version control. |

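The reviewer's redaction idea could look roughly like the following sketch. It assumes `env` and `headers` are the sensitive keys; APM does not currently ship such a helper, so names and behavior here are illustrative only:

```python
# Illustrative sketch of redacting sensitive MCP config values before
# persisting them to apm.lock. Assumed key names; not a real APM API.

SENSITIVE_KEYS = {"env", "headers"}

def redact_config(config: dict) -> dict:
    """Return a copy of an MCP server config that is safe to commit."""
    redacted = {}
    for key, value in config.items():
        if key in SENSITIVE_KEYS and isinstance(value, dict):
            # Keep the key names (still useful for drift detection on
            # added/removed keys) but mask the values themselves.
            redacted[key] = {k: "<redacted>" for k in value}
        else:
            redacted[key] = value
    return redacted

cfg = {"name": "scanner", "env": {"API_KEY": "s3cret"}}
safe = redact_config(cfg)
assert safe["env"] == {"API_KEY": "<redacted>"}
assert safe["name"] == "scanner"
```

Note the trade-off: masking values means a changed secret no longer registers as drift; storing a salted hash per value instead would preserve value-level drift detection without exposing the secret.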

### 4.2 Dependency Entries

@@ -264,6 +265,11 @@ dependencies:

mcp_servers:
- security-scanner

mcp_configs:
security-scanner:
name: security-scanner
transport: stdio
```
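The stored `mcp_configs` mapping is what drives the per-server statuses described in the CLI docs. A minimal sketch using plain dict equality (the actual MCPIntegrator logic may be more involved):

```python
# Sketch of how stored mcp_configs enable diff-aware MCP installation.
# Hypothetical function name; plain dict comparison as the drift check.

def classify_server(name: str, manifest_cfg: dict, stored_configs: dict) -> str:
    if name not in stored_configs:
        return "install"              # never recorded in the lockfile
    if stored_configs[name] == manifest_cfg:
        return "already configured"   # skip, nothing changed
    return "updated"                  # config drifted, re-apply

stored = {"security-scanner": {"name": "security-scanner", "transport": "stdio"}}
assert classify_server(
    "security-scanner",
    {"name": "security-scanner", "transport": "stdio"},
    stored,
) == "already configured"
assert classify_server(
    "security-scanner",
    {"name": "security-scanner", "transport": "http"},
    stored,
) == "updated"
assert classify_server("new-server", {"name": "new-server"}, stored) == "install"
```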

---
154 changes: 122 additions & 32 deletions src/apm_cli/commands/install.py
@@ -7,6 +7,7 @@

import click

from ..drift import build_download_ref, detect_orphans, detect_ref_change
from ..utils.console import _rich_error, _rich_info, _rich_success, _rich_warning
from ..utils.diagnostics import DiagnosticCollector
from ..utils.github_host import default_host, is_valid_fqdn
@@ -391,15 +392,17 @@ def install(ctx, packages, runtime, exclude, only, update, dry_run, force, verbo
prompt_count = 0
agent_count = 0

# Capture old MCP servers from lockfile BEFORE _install_apm_dependencies
# regenerates it (which drops the mcp_servers field).
# Capture old MCP servers and configs from lockfile BEFORE
# _install_apm_dependencies regenerates it (which drops the fields).
# We always read this — even when --only=apm — so we can restore the
# field after the lockfile is regenerated by the APM install step.
old_mcp_servers: builtins.set = builtins.set()
old_mcp_configs: builtins.dict = {}
_lock_path = Path.cwd() / "apm.lock"
_existing_lock = LockFile.read(_lock_path)
if _existing_lock:
old_mcp_servers = builtins.set(_existing_lock.mcp_servers)
old_mcp_configs = builtins.dict(_existing_lock.mcp_configs)

apm_diagnostics = None
if should_install_apm and apm_deps:
@@ -441,26 +444,30 @@ def install(ctx, packages, runtime, exclude, only, update, dry_run, force, verbo
mcp_count = 0
new_mcp_servers: builtins.set = builtins.set()
if should_install_mcp and mcp_deps:
mcp_count = MCPIntegrator.install(mcp_deps, runtime, exclude, verbose)
mcp_count = MCPIntegrator.install(
mcp_deps, runtime, exclude, verbose,
stored_mcp_configs=old_mcp_configs,
)
new_mcp_servers = MCPIntegrator.get_server_names(mcp_deps)
new_mcp_configs = MCPIntegrator.get_server_configs(mcp_deps)

# Remove stale MCP servers that are no longer needed
stale_servers = old_mcp_servers - new_mcp_servers
if stale_servers:
MCPIntegrator.remove_stale(stale_servers, runtime, exclude)

# Persist the new MCP server set in the lockfile
MCPIntegrator.update_lockfile(new_mcp_servers)
# Persist the new MCP server set and configs in the lockfile
MCPIntegrator.update_lockfile(new_mcp_servers, mcp_configs=new_mcp_configs)
elif should_install_mcp and not mcp_deps:
# No MCP deps at all — remove any old APM-managed servers
if old_mcp_servers:
MCPIntegrator.remove_stale(old_mcp_servers, runtime, exclude)
MCPIntegrator.update_lockfile(builtins.set())
MCPIntegrator.update_lockfile(builtins.set(), mcp_configs={})
_rich_warning("No MCP dependencies found in apm.yml")
elif not should_install_mcp and old_mcp_servers:
# --only=apm: APM install regenerated the lockfile and dropped
# mcp_servers. Restore the previous set so it is not lost.
MCPIntegrator.update_lockfile(old_mcp_servers)
MCPIntegrator.update_lockfile(old_mcp_servers, mcp_configs=old_mcp_configs)

# Show beautiful post-install summary
_rich_blank_line()
@@ -645,6 +652,17 @@ def _collect_descendants(node, visited=None):
_rich_info("No APM dependencies to install", symbol="check")
return 0, 0, 0

# ------------------------------------------------------------------
# Orphan detection: packages in lockfile no longer in the manifest.
# Only relevant for a full install (not apm install <pkg>).
# We compute this NOW, before the download loop, so we know which old
# lockfile entries to remove from the merge and which deployed files
# to clean up after the loop.
# ------------------------------------------------------------------
intended_dep_keys: builtins.set = builtins.set(
d.get_unique_key() for d in deps_to_install
)

# apm_modules directory already created above

# Auto-detect target for integration (same logic as compile)
@@ -719,6 +737,14 @@ def _collect_descendants(node, visited=None):
from apm_cli.integration.base_integrator import BaseIntegrator
managed_files = BaseIntegrator.normalize_managed_files(managed_files)

# Collect deployed file paths for packages that are no longer in the manifest.
# detect_orphans() returns an empty set for partial installs automatically.
orphaned_deployed_files = detect_orphans(
existing_lockfile,
intended_dep_keys,
only_packages=only_packages,
)

# Install each dependency with Rich progress display
from rich.progress import (
Progress,
@@ -744,8 +770,19 @@ def _collect_descendants(node, visited=None):
# Skip if already downloaded during BFS resolution
if _pd_key in callback_downloaded:
continue
# Detect if manifest ref changed from what's recorded in the lockfile.
# detect_ref_change() handles all transitions including None→ref.
_pd_locked_chk = (
existing_lockfile.get_dependency(_pd_key)
if existing_lockfile and not update_refs
else None
)
_pd_ref_changed = detect_ref_change(
_pd_ref, _pd_locked_chk, update_refs=update_refs
)
# Skip if lockfile SHA matches local HEAD (Phase 5 check)
if _pd_path.exists() and existing_lockfile and not update_refs:
# — but only if the ref itself has not changed in the manifest.
if _pd_path.exists() and existing_lockfile and not update_refs and not _pd_ref_changed:
_pd_locked = existing_lockfile.get_dependency(_pd_key)
if _pd_locked and _pd_locked.resolved_commit and _pd_locked.resolved_commit != "cached":
try:
Expand All @@ -754,15 +791,11 @@ def _collect_descendants(node, visited=None):
continue
except Exception:
pass
# Build download ref (use locked commit for reproducibility)
_pd_dlref = str(_pd_ref)
if existing_lockfile and not update_refs:
_pd_locked = existing_lockfile.get_dependency(_pd_key)
if _pd_locked and _pd_locked.resolved_commit and _pd_locked.resolved_commit != "cached":
_pd_base = _pd_ref.repo_url
if _pd_ref.virtual_path:
_pd_base = f"{_pd_base}/{_pd_ref.virtual_path}"
_pd_dlref = f"{_pd_base}#{_pd_locked.resolved_commit}"
# Build download ref (use locked commit for reproducibility).
# build_download_ref() uses the manifest ref when ref_changed is True.
_pd_dlref = build_download_ref(
_pd_ref, existing_lockfile, update_refs=update_refs, ref_changed=_pd_ref_changed
)
_need_download.append((_pd_ref, _pd_path, _pd_dlref))

if _need_download and parallel_downloads > 0:
@@ -840,9 +873,20 @@ def _collect_descendants(node, visited=None):
]
# Skip download if: already fetched by resolver callback, or cached tag/commit
already_resolved = dep_ref.get_unique_key() in callback_downloaded
# Detect if manifest ref changed vs what the lockfile recorded.
# detect_ref_change() handles all transitions including None→ref.
_dep_locked_chk = (
existing_lockfile.get_dependency(dep_ref.get_unique_key())
if existing_lockfile and not update_refs
else None
)
ref_changed = detect_ref_change(
dep_ref, _dep_locked_chk, update_refs=update_refs
)
# Phase 5 (#171): Also skip when lockfile SHA matches local HEAD
# — but not when the manifest ref has changed (user wants different version).
lockfile_match = False
if install_path.exists() and existing_lockfile and not update_refs:
if install_path.exists() and existing_lockfile and not update_refs and not ref_changed:
locked_dep = existing_lockfile.get_dependency(dep_ref.get_unique_key())
if locked_dep and locked_dep.resolved_commit and locked_dep.resolved_commit != "cached":
try:
@@ -1133,16 +1177,11 @@ def _collect_descendants(node, visited=None):
total=None, # Indeterminate initially; git will update with actual counts
)

# T5: Build download ref - use locked commit if available
download_ref = str(dep_ref)
if existing_lockfile and not update_refs:
locked_dep = existing_lockfile.get_dependency(dep_ref.get_unique_key())
if locked_dep and locked_dep.resolved_commit and locked_dep.resolved_commit != "cached":
# Override with locked commit for reproducible install
base_ref = dep_ref.repo_url
if dep_ref.virtual_path:
base_ref = f"{base_ref}/{dep_ref.virtual_path}"
download_ref = f"{base_ref}#{locked_dep.resolved_commit}"
# T5: Build download ref - use locked commit if available.
# build_download_ref() uses manifest ref when ref_changed is True.
download_ref = build_download_ref(
dep_ref, existing_lockfile, update_refs=update_refs, ref_changed=ref_changed
)

# Phase 4 (#171): Use pre-downloaded result if available
_dep_key = dep_ref.get_unique_key()
@@ -1388,6 +1427,45 @@ def _collect_descendants(node, visited=None):
# Update .gitignore
_update_gitignore_for_apm_modules()

# ------------------------------------------------------------------
# Orphan cleanup: remove deployed files for packages that were
# removed from the manifest. This happens on every full install
# (no only_packages), making apm install idempotent with the manifest.
# Handles both regular files and directory entries (e.g., legacy skills).
# ------------------------------------------------------------------
if orphaned_deployed_files:
import shutil as _shutil
_removed_orphan_count = 0
_failed_orphan_count = 0
_deleted_orphan_paths: builtins.list = []
for _orphan_path in sorted(orphaned_deployed_files):
# validate_deploy_path() is the safety gate: it rejects path-traversal,
# requires .github/ or .claude/ prefix, and checks the resolved path
# stays within project_root — so rmtree is safe here.
if BaseIntegrator.validate_deploy_path(_orphan_path, project_root):
_target = project_root / _orphan_path
if _target.exists():
try:
if _target.is_dir():
_shutil.rmtree(_target)
else:
_target.unlink()
_deleted_orphan_paths.append(_target)
_removed_orphan_count += 1
except Exception as _orphan_err:
_rich_warning(
f" └─ Could not remove orphaned path {_orphan_path}: {_orphan_err}"
)
_failed_orphan_count += 1
# Clean up empty parent directories left after file removal
if _deleted_orphan_paths:
BaseIntegrator.cleanup_empty_parents(_deleted_orphan_paths, project_root)
if _removed_orphan_count > 0:
_rich_info(
f"Removed {_removed_orphan_count} file(s) from packages "
"no longer in apm.yml"
)
Comment on lines +1436 to +1467
Copilot AI, Mar 12, 2026:

Orphan cleanup only calls Path.unlink(). deployed_files can include directory paths (e.g., legacy deployed_skills migration stores .github/skills/<name>/), which will raise IsADirectoryError and leave orphaned directories behind. Consider mirroring the prune command’s behavior: remove files vs directories appropriately (unlink vs shutil.rmtree) and then call BaseIntegrator.cleanup_empty_parents(...) to avoid leaving empty parent folders.


# Generate apm.lock for reproducible installs (T4: lockfile generation)
if installed_packages:
try:
@@ -1399,13 +1477,25 @@ def _collect_descendants(node, visited=None):
for dep_key, pkg_type in package_types.items():
if dep_key in lockfile.dependencies:
lockfile.dependencies[dep_key].package_type = pkg_type
# Merge with existing lockfile to preserve entries for packages
# not processed in this run (e.g. `apm install X` only installs X).
# Skip merge when update_refs is set — stale entries must not survive.
# Selectively merge entries from the existing lockfile:
# - For partial installs (only_packages): preserve all old entries
# (sequential install — only the specified package was processed).
# - For full installs: only preserve entries for packages still in
# the manifest that failed to download (in intended_dep_keys but
# not in the new lockfile due to a download error).
# - Orphaned entries (not in intended_dep_keys) are intentionally
# dropped so the lockfile matches the manifest.
# Skip merge entirely when update_refs is set — stale entries must not survive.
if existing_lockfile and not update_refs:
for dep_key, dep in existing_lockfile.dependencies.items():
if dep_key not in lockfile.dependencies:
lockfile.dependencies[dep_key] = dep
if only_packages or dep_key in intended_dep_keys:
# Preserve: partial install (sequential install support)
# OR package still in manifest but failed to download.
lockfile.dependencies[dep_key] = dep
# else: orphan — package was in lockfile but is no longer in
# the manifest (full install only). Don't preserve so the
# lockfile stays in sync with what apm.yml declares.
lockfile_path = get_lockfile_path(project_root)

# When installing a subset of packages (apm install <pkg>),
4 changes: 4 additions & 0 deletions src/apm_cli/deps/lockfile.py
@@ -123,6 +123,7 @@ class LockFile:
apm_version: Optional[str] = None
dependencies: Dict[str, LockedDependency] = field(default_factory=dict)
mcp_servers: List[str] = field(default_factory=list)
mcp_configs: Dict[str, dict] = field(default_factory=dict)

def add_dependency(self, dep: LockedDependency) -> None:
"""Add a dependency to the lock file."""
@@ -153,6 +154,8 @@ def to_yaml(self) -> str:
data["dependencies"] = [dep.to_dict() for dep in self.get_all_dependencies()]
if self.mcp_servers:
data["mcp_servers"] = sorted(self.mcp_servers)
if self.mcp_configs:
data["mcp_configs"] = dict(sorted(self.mcp_configs.items()))
return yaml.dump(
data, default_flow_style=False, sort_keys=False, allow_unicode=True
)
@@ -173,6 +176,7 @@ def from_yaml(cls, yaml_str: str) -> "LockFile":
for dep_data in data.get("dependencies", []):
lock.add_dependency(LockedDependency.from_dict(dep_data))
lock.mcp_servers = list(data.get("mcp_servers", []))
lock.mcp_configs = dict(data.get("mcp_configs") or {})
return lock

def write(self, path: Path) -> None: