
Fix rootfs race condition causing missing podman #4

Merged
ejc3 merged 14 commits into main from fix/rootfs-race-condition
Dec 21, 2025

Conversation


@ejc3 ejc3 commented Dec 20, 2025

Summary

When multiple VM tests run in parallel (VM Exec, VM Egress, VM Sanity), a race condition occurs:

  1. First process creates rootfs file via dd during extract_root_partition()
  2. Second process acquires lock, checks if file exists, sees it, returns immediately
  3. But first process hasn't finished install_packages_in_rootfs() yet

Result: VMs start with a rootfs that has no podman installed, causing:

Error: running podman pull
    No such file or directory (os error 2)

Fix

Create rootfs at temp path (base.ext4.tmp), then rename to final path (base.ext4) only after all package installation completes. This makes the existence check atomic with completion.
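
As a sketch of that pattern (the helper names and the build closure below are illustrative, not the repo's actual functions), the rootfs is assembled at the temp path and only renamed once every step has succeeded:

```rust
use std::fs;
use std::io;
use std::path::Path;

/// Build the rootfs at `base.ext4.tmp` and rename it to `base.ext4` only
/// after every step completes, so a concurrent process that sees `base.ext4`
/// can trust it is fully built.
fn build_rootfs(final_path: &Path, build: impl FnOnce(&Path) -> io::Result<()>) -> io::Result<()> {
    if final_path.exists() {
        // A finished rootfs already exists; nothing to do.
        return Ok(());
    }
    let tmp_path = final_path.with_extension("ext4.tmp");
    // dd, extract_root_partition(), install_packages_in_rootfs(), ...
    build(&tmp_path)?;
    // rename() is atomic on the same filesystem, so readers never observe a
    // partially built base.ext4.
    fs::rename(&tmp_path, final_path)
}
```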

Also fixes

A formatting fix (import ordering in integration_tests.rs) has also been pushed to the fuser fork.

ejc3 added 14 commits December 20, 2025 20:11
When multiple VM tests run in parallel, a race condition occurs:
1. First process creates rootfs file via dd
2. Second process checks if file exists, sees it, uses it
3. But first process hasn't finished installing packages yet

Result: VMs start with rootfs that has no podman installed.

Fix: Create rootfs at temp path (base.ext4.tmp), then rename to
final path (base.ext4) only after package installation completes.
This ensures the file check is atomic with completion.
Rootfs creation in CI can take 2+ minutes due to:
- Cloud image download (~7 seconds)
- virt-customize (~10-60+ seconds, variable)
- NBD extraction (~30 seconds)
- Package installation (~60 seconds)

Increase timeouts:
- test_sanity: 180 -> 300 seconds
- test_egress: 60 -> 180 seconds (baseline)
- test_exec: 60 -> 180 seconds

Clone VMs keep 60s timeout since they use cached rootfs.
Include the actual error message from Firecracker when API calls fail.
This helps diagnose issues like 400 Bad Request during snapshot load.
When poll_health_by_pid times out, kill the baseline VM process
before returning the error. This releases any locks held by that
process (e.g., rootfs creation lock).

Also increase timeout from 180 to 300 seconds to match fresh test.
- Add rust-toolchain.toml to lock Rust version (prevents format drift)
- Add dependabot.yml for weekly dependency updates
- Apply rustfmt 1.92.0 formatting fixes
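
For reference, a rust-toolchain.toml pin of this shape locks both the compiler and rustfmt; the channel and components below are placeholders, since the PR's actual values are not shown here:

```toml
# Hypothetical pin; substitute the channel this repo actually standardizes on.
[toolchain]
channel = "1.92.0"
components = ["rustfmt", "clippy"]
```
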
Required for network_overrides support in snapshot load API.
This enables clone VMs to use different TAP devices from baseline.
- Add find_firecracker() that validates version before returning path
- Fail early with clear error if Firecracker is too old
- Required for network_overrides support in snapshot cloning
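
A minimal sketch of that version gate (binary lookup, output parsing, and the minimum version tuple are illustrative; the repo's find_firecracker() may differ):

```rust
use std::process::Command;

/// Locate the Firecracker binary and fail early if it is older than `min`.
fn find_firecracker(min: (u32, u32, u32)) -> Result<String, String> {
    let path = std::env::var("FIRECRACKER_BIN").unwrap_or_else(|_| "firecracker".to_string());
    let out = Command::new(&path)
        .arg("--version")
        .output()
        .map_err(|e| format!("failed to run {path}: {e}"))?;
    // Output looks roughly like "Firecracker v1.13.1"; pull out the x.y.z part.
    let text = String::from_utf8_lossy(&out.stdout);
    let ver = text
        .split_whitespace()
        .find_map(|w| w.strip_prefix('v').map(str::to_string))
        .ok_or_else(|| format!("could not parse version from: {text}"))?;
    let mut nums = ver.split('.').filter_map(|p| p.parse::<u32>().ok());
    let got = (
        nums.next().unwrap_or(0),
        nums.next().unwrap_or(0),
        nums.next().unwrap_or(0),
    );
    if got < min {
        return Err(format!(
            "firecracker {ver} at {path} is too old; need >= {}.{}.{}",
            min.0, min.1, min.2
        ));
    }
    Ok(path)
}
```
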
The BuildJet runners have systemd-resolved holding port 53,
which prevents dnsmasq from starting. This caused DNS resolution
failures in the VM Egress bridged clone tests.

Stop systemd-resolved before installing/starting dnsmasq.
Health check fixes (rootless mode):
- Use nsenter to curl guest directly instead of slirp4netns port forwarding
- slirp4netns port forwarding cannot reach addresses outside its 10.0.2.0/24
  network, so the DNAT approach failed
- Health checks now enter the namespace via nsenter and curl 192.168.1.2:80
- Simplified slirp.rs: removed DNAT rules, kept MASQUERADE for egress

DNS fixes:
- Add get_host_dns_servers() to read DNS from /etc/resolv.conf
- Falls back to /run/systemd/resolve/resolv.conf for systemd-resolved
- Remove dnsmasq setup from CI workflows (no longer needed)
- Update bridged.rs to use host DNS instead of dnsmasq
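
A sketch of that lookup, assuming a small resolv.conf parser and a loopback filter for systemd-resolved's 127.0.0.53 stub (illustrative, not the repo's exact get_host_dns_servers()):

```rust
use std::fs;
use std::net::IpAddr;
use std::path::Path;

/// Read usable DNS servers from /etc/resolv.conf, falling back to
/// systemd-resolved's upstream file when only the local stub is listed.
fn get_host_dns_servers() -> Vec<IpAddr> {
    for path in ["/etc/resolv.conf", "/run/systemd/resolve/resolv.conf"] {
        let servers = parse_resolv_conf(Path::new(path));
        // Skip loopback entries; the VM cannot reach 127.0.0.53 on the host.
        let usable: Vec<IpAddr> = servers.into_iter().filter(|ip| !ip.is_loopback()).collect();
        if !usable.is_empty() {
            return usable;
        }
    }
    Vec::new()
}

fn parse_resolv_conf(path: &Path) -> Vec<IpAddr> {
    fs::read_to_string(path)
        .unwrap_or_default()
        .lines()
        .filter_map(|line| line.strip_prefix("nameserver"))
        .filter_map(|rest| rest.trim().parse().ok())
        .collect()
}
```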

Other:
- Lower minimum Firecracker version to 1.13.1 for CI compatibility
VMs now use host DNS servers directly (read from /etc/resolv.conf)
instead of relying on dnsmasq. This simplifies the network setup
and removes the need to wait for dnsmasq to bind to the veth IP.
The VM tests (Sanity, Exec, Egress) all share /dev/nbd0 via bind mount
and flock doesn't work across podman containers. When they ran in
parallel, multiple virt-customize processes tried to access the same
cloud image simultaneously, causing one to hang indefinitely.

Fix by adding job dependencies so they run sequentially:
- VM Sanity runs first
- VM Exec waits for Sanity
- VM Egress waits for Exec

Added `if: always()` so later jobs still run even if earlier ones fail,
since the rootfs will be cached after the first successful creation.
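
In GitHub Actions terms, the chaining looks roughly like the excerpt below; the job names and placeholder steps are illustrative, not copied from this repo's workflows:

```yaml
jobs:
  vm-sanity:
    runs-on: ubuntu-latest
    steps:
      - run: echo "build rootfs, run sanity tests"   # placeholder
  vm-exec:
    needs: vm-sanity
    if: always()   # still run if vm-sanity failed; rootfs is cached after the first success
    runs-on: ubuntu-latest
    steps:
      - run: echo "run exec tests"   # placeholder
  vm-egress:
    needs: vm-exec
    if: always()
    runs-on: ubuntu-latest
    steps:
      - run: echo "run egress tests"   # placeholder
```
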
@ejc3 ejc3 merged commit c2aeb55 into main Dec 21, 2025
21 of 22 checks passed
@ejc3 ejc3 deleted the fix/rootfs-race-condition branch December 21, 2025 03:02
ejc3 added a commit that referenced this pull request Feb 23, 2026
…hang

Replace the racy double-rebind approach with a deterministic handshake chain
that guarantees the exec server's AsyncFd epoll is re-registered before the
host starts health-checking. Reduces restore-to-healthy from ~61s to ~0.5s.

## The Problem

After snapshot restore, Firecracker's vsock transport reset
(VIRTIO_VSOCK_EVENT_TRANSPORT_RESET) leaves the exec server's AsyncFd epoll
registration stale. The previous fix (c15aa6b) removed the duplicate rebind
signal from agent.rs but left a timing gap: if the restore-epoch watcher's
single signal arrived late, the host's health monitor would start exec calls
against a stale listener, hanging for ~60s until the kernel's vsock cleanup
expired the stale connections.

## Trace Evidence (the smoking gun)

From the vsock muxer log of a failing run (vm-ba97c):

  T+0.009s  Exec call #1 → WORKS (167+144+176+123+71+27 bytes response)
  T+0.076s  Exec call #2 → WORKS
  T+0.520s  Exec call #3 → guest ACKs, receives request, sends NOTHING → 5s timeout
  T+5.5-55s Exec calls #4-#9 → same pattern: kernel accepts, app never processes
  T+60.5s   Guest sends RST for ALL stale connections simultaneously
  T+60.5s   Exec call #10 → WORKS → "container running status running=true"

The container started at T+0.28s. The exec server was broken for 60 more
seconds because the duplicate re_register() from agent.rs corrupted the
edge-triggered epoll: the old AsyncFd consumed the edge notification, and
the new AsyncFd never received events for pending connections.

## The Fix: Deterministic Handshake Chain

  exec_rebind_signal → exec_re_register → rebind_done → output.reconnect()
                                                              ↓
                                               host accepts output connection
                                                              ↓
                                                  health monitor spawns

Every transition has an explicit signal. Zero timing dependencies.

### fc-agent side (4 files):

- exec.rs: After re_register(), signals rebind_done (AtomicBool + Notify)
- restore.rs: Signals exec rebind, waits for rebind_done confirmation (5s
  timeout), THEN reconnects output vsock
- agent.rs: Removed duplicate rebind signal after notify_cache_ready_and_wait;
  added exec_rebind_done/exec_rebind_done_notify Arcs
- mmds.rs: Threads new params through watch_restore_epoch to both
  handle_clone_restore call sites
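
The rebind_done handshake named above boils down to an AtomicBool plus a tokio Notify; the sketch below shows only that pattern, not the actual exec.rs/restore.rs wiring:

```rust
use std::sync::Arc;
use std::sync::atomic::{AtomicBool, Ordering};
use tokio::sync::Notify;
use tokio::time::{timeout, Duration};

#[derive(Clone, Default)]
struct RebindDone {
    done: Arc<AtomicBool>,
    notify: Arc<Notify>,
}

impl RebindDone {
    /// Exec-server side: called right after re_register() succeeds.
    fn signal(&self) {
        self.done.store(true, Ordering::SeqCst);
        // notify_one() stores a permit if nobody is waiting yet, so a signal
        // sent before wait() starts listening is not lost.
        self.notify.notify_one();
    }

    /// Restore side: wait up to 5s for confirmation before reconnecting the
    /// output vsock; returns false on timeout.
    async fn wait(&self) -> bool {
        if self.done.load(Ordering::SeqCst) {
            return true;
        }
        timeout(Duration::from_secs(5), self.notify.notified())
            .await
            .is_ok()
    }
}
```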

### Host side (3 files):

- listeners.rs: Added connected_tx oneshot to run_output_listener(), fired
  on first output connection accept
- snapshot.rs: Waits for output_connected_rx (30s timeout) before spawning
  health monitor; removed stale output_reconnect.notify_one() for startup
  snapshots
- podman/mod.rs: Passes None for connected_tx (non-snapshot path)
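
On the host side, connected_tx is a oneshot fired on the first accepted output connection; the sketch below is illustrative (the listener type and function signatures in listeners.rs/snapshot.rs differ):

```rust
use tokio::net::UnixListener;
use tokio::sync::oneshot;
use tokio::time::{timeout, Duration};

/// Accept output connections and fire `connected_tx` exactly once, on the
/// first accept.
async fn run_output_listener(listener: UnixListener, mut connected_tx: Option<oneshot::Sender<()>>) {
    loop {
        match listener.accept().await {
            Ok((_stream, _addr)) => {
                if let Some(tx) = connected_tx.take() {
                    let _ = tx.send(());
                }
                // ... hand _stream off to the output forwarding task ...
            }
            Err(_) => break,
        }
    }
}

/// Restore path: wait up to 30s for the guest's output vsock to reconnect
/// before spawning the health monitor.
async fn wait_for_output(connected_rx: oneshot::Receiver<()>) -> bool {
    matches!(timeout(Duration::from_secs(30), connected_rx).await, Ok(Ok(())))
}
```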

## Results

Before: restore-to-healthy = ~61s (exec broken, 9 consecutive 5s timeouts)
After:  restore-to-healthy = ~0.5s (35ms to output connected, 533ms to healthy)

Post-restore exec stress test: 10 parallel calls completed in 16.3ms
(max single: 15.3ms), zero timeouts.

Tested: make test-root FILTER=localhost_rootless_btrfs_snapshot_restore STREAM=1
ejc3 added a commit that referenced this pull request Mar 2, 2026
Fix rootfs race condition causing missing podman