Merged
Conversation
When root creates /tmp/fcvm-layer2-initrd and then non-root
tries to use it, permission denied errors occur.
Fix by including UID in temp directory names:
- /tmp/fcvm-layer2-initrd-{uid}
- /tmp/fcvm-layer2-setup-{uid}
Each user gets their own temp directory, avoiding conflicts.
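A minimal sketch, assuming the libc crate for getuid(), of how the UID-suffixed paths might be built; layer2_temp_dirs() is an illustrative name, not the actual fcvm helper.

```rust
use std::path::PathBuf;

// Build per-user temp paths so root and non-root never collide on the same
// /tmp/fcvm-* directories. libc::getuid() is safe to call and never fails.
fn layer2_temp_dirs() -> (PathBuf, PathBuf) {
    let uid = unsafe { libc::getuid() };
    (
        PathBuf::from(format!("/tmp/fcvm-layer2-initrd-{uid}")),
        PathBuf::from(format!("/tmp/fcvm-layer2-setup-{uid}")),
    )
}
```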
For clones, port mappings now DNAT to veth_inner_ip (10.x.y.2), which the host can route to. The existing blanket DNAT rule inside the namespace (set up by setup_in_namespace_nat) forwards traffic from veth_inner_ip to guest_ip.

Changes:
- Track veth_inner_ip in BridgedNetwork for clones
- Port mappings target veth_inner_ip for clones, guest_ip for baseline
- Update test to expect direct guest access N/A for clones (by design)

The test now passes:
- Port forward (host IP): curl host:19080 → clone nginx ✓
- Localhost port forward: curl localhost:19080 → clone nginx ✓
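A hedged sketch of installing one such per-port DNAT rule from Rust; the PREROUTING chain placement and the add_port_forward() name are assumptions for illustration, not fcvm's actual rule set.

```rust
use std::process::Command;

/// Illustrative only: redirect a host TCP port to the clone's inner veth IP
/// (e.g. 10.x.y.2); the in-namespace blanket DNAT then carries traffic on to
/// the guest IP as described above.
fn add_port_forward(host_port: u16, veth_inner_ip: &str, guest_port: u16) -> std::io::Result<()> {
    let status = Command::new("iptables")
        .args(["-t", "nat", "-A", "PREROUTING", "-p", "tcp", "--dport"])
        .arg(host_port.to_string())
        .args(["-j", "DNAT", "--to-destination"])
        .arg(format!("{veth_inner_ip}:{guest_port}"))
        .status()?;
    if !status.success() {
        return Err(std::io::Error::new(
            std::io::ErrorKind::Other,
            "iptables DNAT rule failed",
        ));
    }
    Ok(())
}
```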
- Delete snapshot directory (memory.bin, disk.raw, etc.) on SIGTERM/SIGINT
- Add double Ctrl-C protection: warns about running clones first, requires confirmation within 3 seconds to force shutdown
- Prevents disk space exhaustion from orphaned snapshots (5.6GB each)
- Each snapshot has ~2GB memory.bin that cannot be reflinked, so cleanup is essential for repeated test runs
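A minimal sketch of the double Ctrl-C confirmation window using tokio (assuming the signal and time features are enabled); the messages and the cleanup hook are illustrative, not fcvm's actual code.

```rust
use std::time::Duration;
use tokio::{signal, time};

#[tokio::main]
async fn main() {
    // First Ctrl-C warns about running clones; a second Ctrl-C within 3
    // seconds forces shutdown, otherwise the warning is ignored.
    loop {
        signal::ctrl_c().await.expect("failed to listen for Ctrl-C");
        eprintln!("clones may still be running; press Ctrl-C again within 3s to force shutdown");
        if time::timeout(Duration::from_secs(3), signal::ctrl_c()).await.is_ok() {
            break; // confirmed: proceed to cleanup
        }
        // window expired: keep running and wait for the next Ctrl-C
    }
    // hypothetical cleanup: delete the snapshot dir (memory.bin, disk.raw, ...)
}
```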
- delete_state() now removes .json.lock and .json.tmp files
- Prevents accumulation of orphaned lock files during test runs
- Lock files are harmless but clutter the state directory
The rootless user_data_dir() and is_writable() fallback was overly complex and not needed. All fcvm operations require the btrfs filesystem at /mnt/fcvm-btrfs anyway, so the automatic fallback to ~/.local/share/fcvm was misleading: it would fail later when btrfs operations were attempted.

Changes:
- Remove user_data_dir() and is_writable() helpers
- Simplify base_dir(), kernel_dir(), rootfs_dir() to just use DEFAULT_BASE_DIR
- Remove fallback paths that check both user and system locations
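A sketch of what the simplified helpers might look like after this change; only /mnt/fcvm-btrfs comes from the description above, and the kernel/rootfs subdirectory names are assumptions.

```rust
use std::path::PathBuf;

// Assumed constant: the commit implies all data lives under the btrfs mount.
const DEFAULT_BASE_DIR: &str = "/mnt/fcvm-btrfs";

// After the change these helpers no longer probe ~/.local/share/fcvm or
// check writability: they always resolve under DEFAULT_BASE_DIR.
fn base_dir() -> PathBuf {
    PathBuf::from(DEFAULT_BASE_DIR)
}

fn kernel_dir() -> PathBuf {
    base_dir().join("kernel") // subdirectory name is illustrative
}

fn rootfs_dir() -> PathBuf {
    base_dir().join("rootfs") // subdirectory name is illustrative
}
```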
Changes to disk format and error handling:
- Rename disk files from .ext4 to .raw (reflects raw disk format)
- Remove fallback to regular cp when reflink fails
- Require btrfs filesystem explicitly with clear error message
- Update test assertions to use .raw extension

The fallback copy was problematic because:
1. Without reflinks, each VM would use ~10GB disk space
2. Regular copy would succeed but defeat the CoW benefit
3. Better to fail fast with a clear error about btrfs requirement
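A sketch of a reflink-only copy that fails fast instead of falling back; shelling out to GNU cp --reflink=always is an assumption about the copy mechanism, and the actual code may instead use the FICLONE ioctl.

```rust
use std::path::Path;
use std::process::Command;

// Reflink-only copy: succeeds on CoW filesystems like btrfs, and fails with
// a clear error elsewhere instead of silently doing a full ~10GB copy.
fn reflink_copy(src: &Path, dst: &Path) -> std::io::Result<()> {
    let status = Command::new("cp")
        .arg("--reflink=always")
        .arg(src)
        .arg(dst)
        .status()?;
    if !status.success() {
        return Err(std::io::Error::new(
            std::io::ErrorKind::Other,
            format!(
                "reflink copy of {} failed; the data directory must be on btrfs (e.g. /mnt/fcvm-btrfs)",
                src.display()
            ),
        ));
    }
    Ok(())
}
```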
Implements bidirectional I/O channel between fc-agent and host for container stdout/stderr streaming.

fc-agent changes:
- Add OUTPUT_VSOCK_PORT (4997) for dedicated I/O channel
- Create vsock connection on container start
- Stream stdout/stderr to host as "stdout:line" / "stderr:line"
- Accept stdin from host as "stdin:line" (bidirectional)
- Wait for output tasks to complete before closing connection

Host changes (podman.rs):
- Add run_output_listener() for vsock output handling
- Parse raw line format and print with [ctr:stream] prefix
- Send ack for bidirectional protocol

This separates container output from the status channel (port 4999) for cleaner protocol handling.
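A minimal sketch of the host side parsing the "stdout:" / "stderr:" line prefixes described above; the functions are illustrative and not the actual run_output_listener() implementation.

```rust
/// Parse one line from the output vsock channel into (stream, payload).
/// The wire format described above is "stdout:<line>" or "stderr:<line>";
/// anything else is treated as stdout so no output is lost.
fn parse_output_line(line: &str) -> (&str, &str) {
    match line.split_once(':') {
        Some(("stdout", rest)) => ("stdout", rest),
        Some(("stderr", rest)) => ("stderr", rest),
        _ => ("stdout", line),
    }
}

fn print_container_line(container: &str, line: &str) {
    let (stream, payload) = parse_output_line(line);
    // Matches the "[ctr:stream]" prefix mentioned in the host changes.
    println!("[{container}:{stream}] {payload}");
}
```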
Tests that use bridged networking or modify iptables require root. Adding #[cfg(feature = "privileged-tests")] allows running unprivileged tests separately from privileged ones.

Affected tests:
- test_sanity_bridged
- test_egress_fresh_bridged, test_egress_clone_bridged
- test_egress_stress_bridged
- test_exec_bridged
- test_fuse_in_vm_smoke, test_fuse_in_vm_full
- test_posix_all_sequential_bridged (renamed for clarity)
- test_port_forward_bridged

Rootless variants remain unprivileged and run without the feature flag.
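A sketch of how one of these tests might be gated; the Cargo.toml feature declaration and the cargo invocation in the comment are assumptions about the project's setup.

```rust
// Gate a root-only test behind the feature flag so a default `cargo test`
// run skips it. Assumes Cargo.toml declares:
//   [features]
//   privileged-tests = []
// and that privileged runs opt in with:
//   cargo test --features privileged-tests
#[cfg(feature = "privileged-tests")]
#[test]
fn test_port_forward_bridged() {
    // requires root: sets up a bridge and iptables DNAT rules
    // ... test body elided ...
}
```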
Changes enable tests to run concurrently without resource conflicts.

tests/common/mod.rs:
- Make require_non_root() a no-op (testing shows unshare works as root)
- Keep for API compatibility

test_health_monitor.rs:
- Use create_unique_test_dir() instead of shared base dir
- Remove serial_test dependency for this file

test_clone_connection.rs:
- Use unique_names() helper for VM/snapshot names
- Update name pattern for clarity

test_localhost_image.rs:
- Use unique_names() for test isolation
- Update assertions for new naming

test_readme_examples.rs:
- Use unique_names() throughout
- Fix test_quick_start to use unique names

test_signal_cleanup.rs:
- Use unique VM names per test run

This fixes failures when tests run in parallel by ensuring each test uses unique resource names (VMs, snapshots, temp directories).
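A sketch of what a unique_names()-style helper could look like; the PID/timestamp/counter scheme is just one plausible way to get per-run uniqueness and may not match the real helper in tests/common/mod.rs.

```rust
use std::sync::atomic::{AtomicU64, Ordering};
use std::time::{SystemTime, UNIX_EPOCH};

static COUNTER: AtomicU64 = AtomicU64::new(0);

/// Produce a (vm_name, snapshot_name) pair unique per process and per call,
/// so parallel tests never collide on VM or snapshot names.
fn unique_names(prefix: &str) -> (String, String) {
    let pid = std::process::id();
    let ts = SystemTime::now()
        .duration_since(UNIX_EPOCH)
        .expect("clock before epoch")
        .as_millis();
    let n = COUNTER.fetch_add(1, Ordering::Relaxed);
    let base = format!("{prefix}-{pid}-{ts}-{n}");
    (format!("vm-{base}"), format!("snap-{base}"))
}
```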
Documentation:
- CLAUDE.md: Update development patterns and test isolation notes
- DESIGN.md: Reflect current architecture changes
- README.md: Update usage examples and descriptions

Build system:
- Makefile: Improve test targets and feature flag handling
- .gitignore: Add container marker files

Minor code:
- args.rs: Add example to --cmd flag documentation
- setup/mod.rs: Minor cleanup
When multiple VMs start simultaneously, they all try to create the same fc-agent initrd. The previous code had a TOCTOU race where:
1. Process A checks if initrd exists (no)
2. Process B checks if initrd exists (no)
3. Process A creates temp dir and starts building
4. Process B does remove_dir_all(&temp_dir), deleting A's work
5. Process A fails with "No such file or directory"

Fix:
- Add flock-based exclusive lock around initrd creation
- Double-check pattern: check existence before AND after acquiring lock
- Use PID in temp dir name as extra safety measure
- Release lock on error and success paths
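A sketch of the flock-plus-double-check pattern, assuming the libc crate for flock(); ensure_initrd(), build_initrd(), and the lock-file path are illustrative names, not the actual fcvm functions.

```rust
use std::fs::{File, OpenOptions};
use std::os::unix::io::AsRawFd;
use std::path::Path;

// Illustrative pattern: check, lock, re-check, then build.
fn ensure_initrd(initrd: &Path, lock_path: &Path) -> std::io::Result<()> {
    // Fast path: someone already built it.
    if initrd.exists() {
        return Ok(());
    }
    // Take an exclusive advisory lock; other processes block here.
    let lock_file: File = OpenOptions::new()
        .create(true)
        .write(true)
        .open(lock_path)?;
    if unsafe { libc::flock(lock_file.as_raw_fd(), libc::LOCK_EX) } != 0 {
        return Err(std::io::Error::last_os_error());
    }
    // Double-check: another process may have built it while we waited.
    let result = if initrd.exists() { Ok(()) } else { build_initrd(initrd) };
    // Release the lock on both success and error paths.
    unsafe { libc::flock(lock_file.as_raw_fd(), libc::LOCK_UN) };
    result
}

// Hypothetical builder using a PID-suffixed temp dir as extra isolation.
fn build_initrd(_initrd: &Path) -> std::io::Result<()> {
    let _temp_dir = format!("/tmp/fcvm-layer2-initrd-{}", std::process::id());
    // ... assemble initrd contents under _temp_dir, then move into place ...
    Ok(())
}
```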
When multiple VMs start simultaneously and the kernel isn't cached, they would all try to download it. Now uses flock to ensure only one process downloads while others wait and use the result. Same double-check pattern as initrd: check before lock, acquire lock, check again after lock, then download if still needed.
Rustdoc has proc-macro linking issues that cause spurious failures when running doctests (can't find serde attributes). Since we have no actual doc examples (all code blocks are ```text), skip doctests with --tests flag.
ejc3 added a commit that referenced this pull request on Feb 23, 2026:
…hang

Replace the racy double-rebind approach with a deterministic handshake chain that guarantees the exec server's AsyncFd epoll is re-registered before the host starts health-checking. Reduces restore-to-healthy from ~61s to ~0.5s.

## The Problem

After snapshot restore, Firecracker's vsock transport reset (VIRTIO_VSOCK_EVENT_TRANSPORT_RESET) leaves the exec server's AsyncFd epoll registration stale. The previous fix (c15aa6b) removed the duplicate rebind signal from agent.rs but left a timing gap: if the restore-epoch watcher's single signal arrived late, the host's health monitor would start exec calls against a stale listener, hanging for ~60s until the kernel's vsock cleanup expired the stale connections.

## Trace Evidence (the smoking gun)

From the vsock muxer log of a failing run (vm-ba97c):

    T+0.009s   Exec call #1 → WORKS (167+144+176+123+71+27 bytes response)
    T+0.076s   Exec call #2 → WORKS
    T+0.520s   Exec call #3 → guest ACKs, receives request, sends NOTHING → 5s timeout
    T+5.5-55s  Exec calls #4-#9 → same pattern: kernel accepts, app never processes
    T+60.5s    Guest sends RST for ALL stale connections simultaneously
    T+60.5s    Exec call #10 → WORKS → "container running status running=true"

The container started at T+0.28s. The exec server was broken for 60 more seconds because the duplicate re_register() from agent.rs corrupted the edge-triggered epoll: the old AsyncFd consumed the edge notification, and the new AsyncFd never received events for pending connections.

## The Fix: Deterministic Handshake Chain

    exec_rebind_signal → exec_re_register → rebind_done → output.reconnect()
                                                               ↓
                                                 host accepts output connection
                                                               ↓
                                                     health monitor spawns

Every transition has an explicit signal. Zero timing dependencies.

### fc-agent side (4 files):
- exec.rs: After re_register(), signals rebind_done (AtomicBool + Notify)
- restore.rs: Signals exec rebind, waits for rebind_done confirmation (5s timeout), THEN reconnects output vsock
- agent.rs: Removed duplicate rebind signal after notify_cache_ready_and_wait; added exec_rebind_done/exec_rebind_done_notify Arcs
- mmds.rs: Threads new params through watch_restore_epoch to both handle_clone_restore call sites

### Host side (3 files):
- listeners.rs: Added connected_tx oneshot to run_output_listener(), fired on first output connection accept
- snapshot.rs: Waits for output_connected_rx (30s timeout) before spawning health monitor; removed stale output_reconnect.notify_one() for startup snapshots
- podman/mod.rs: Passes None for connected_tx (non-snapshot path)

## Results

Before: restore-to-healthy = ~61s (exec broken, 9 consecutive 5s timeouts)
After: restore-to-healthy = ~0.5s (35ms to output connected, 533ms to healthy)

Post-restore exec stress test: 10 parallel calls completed in 16.3ms (max single: 15.3ms), zero timeouts.

Tested: make test-root FILTER=localhost_rootless_btrfs_snapshot_restore STREAM=1
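A minimal sketch of the rebind_done handshake (AtomicBool + Notify) between the exec and restore sides, assuming tokio; the names mirror the commit description, but this is not the actual fc-agent code.

```rust
use std::sync::atomic::{AtomicBool, Ordering};
use std::sync::Arc;
use std::time::Duration;
use tokio::sync::Notify;
use tokio::time::timeout;

#[derive(Clone)]
struct RebindSignal {
    done: Arc<AtomicBool>,
    notify: Arc<Notify>,
}

impl RebindSignal {
    // Exec side: called right after re_register() succeeds.
    fn signal_done(&self) {
        self.done.store(true, Ordering::SeqCst);
        self.notify.notify_one();
    }

    // Restore side: wait (with timeout) for confirmation before reconnecting
    // the output vsock. Returns false if the 5s window expires.
    async fn wait_done(&self, dur: Duration) -> bool {
        if self.done.load(Ordering::SeqCst) {
            return true;
        }
        timeout(dur, self.notify.notified()).await.is_ok()
    }
}
```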
ejc3 added a commit that referenced this pull request on Mar 2, 2026.
ejc3 added a commit that referenced this pull request on Mar 2, 2026.
Summary
Part 2 of 5.