Conversation
|
|
🔧 CI Auto-FixCreated fix PR: #213 Issue: Port forward tests were flaky due to race condition between container-ready health check and nginx startup. Fix: Added retry loops to give nginx time to bind to port 80 after container starts. |
🔧 CI Auto-FixCreated fix PR: #214 The lint job was failing due to an unused import ( |
⛔ Lint FailedThe Lint check has failed on this PR. Please fix the linting issues before I can proceed with code review. Check the lint failures here: https://github.com/ejc3/fcvm/actions/runs/21619566606/job/62305531034 Once lint passes, I'll automatically review the code. Review by Claude | Run |
|
CI Failure AnalysisI analyzed CI #21619688864 but couldn't create an automatic fix. DiagnosisThe CI is failing in 4 jobs with 2 distinct test failures: Test Failures:
Root Cause: Why Not FixableThis is an environmental/infrastructure issue, not a code issue:
RecommendationThe repository maintainers should either:
|
Add slirp_ipv6 test module with tests for: - libslirp version detection (4.7+ supports native IPv6 DNS) - DNS resolution in VMs - IPv6 connectivity on eth0 Change health check default: don't auto-assign HTTP health check URL from network config. HTTP health checks require an HTTP server in the container. Use container-ready file mechanism by default instead. Users can explicitly set --health-check for HTTP checks. Also add find_available_port() helper to tests/common for port scanning. Tested: make test-root FILTER=slirp_ipv6 - all 3 tests pass make test-root FILTER=sanity - sanity test still passes
- Switch from wget to curl for HTTPS testing (busybox wget doesn't properly support HTTPS through HTTP proxy) - Use dynamic port allocation (find_available_high_port) instead of hardcoded 8080 to avoid conflicts with system services - Add --vm flag to clone tests for nslookup/curl (run in VM, not container) - Update test URLs: google.com → facebook.com, httpbin.org → checkip.amazonaws.com Tests verified: test_egress_fresh_rootless - passed test_exec_rootless - passed test_port_forward_rootless - passed
test_ipv6_egress_to_host starts an IPv6-only server on host and verifies VM cannot reach it. This documents that slirp4netns only provides IPv4 NAT - IPv6 egress requires either: 1. IPv6 NAT (not supported by slirp4netns) 2. Bridged networking with IPv6 on the bridge 3. A different networking solution like pasta
Enable IPv6 connectivity for VMs using slirp4netns: - Add guest_ipv6 and host_ipv6 fields to NetworkConfig - Configure IPv6 addresses on TAP devices (fd00:1::2/64 for guest) - Enable IPv6 forwarding and NAT66 in namespace setup script - Pass --enable-ipv6 and --outbound-addr6 to slirp4netns - Detect host's global IPv6 address for outbound traffic - Add configure_ipv6_from_cmdline() in fc-agent to parse ipv6= boot param - Pass ipv6=<client>|<gateway> in kernel cmdline from podman.rs Tested: make test FILTER=ipv6 - all 3 tests pass - test_ipv6_connectivity_in_vm: ping to fd00::3 works - test_ipv6_egress_to_host: VM reaches host's global IPv6 - test_ipv6_egress_internet: documents slirp4netns limitation
The test was checking for fd00::100 (namespace slirp0 address) but the guest VM is configured with fd00:1::2. Also changed ping target from fd00::3 (slirp DNS, unreachable without NAT66) to fd00:1::1 (gateway). Tested: make test-root FILTER=ipv6 (2 tests pass)
🔍 Claude ReviewSEVERITY: low Findings[LOW] In
[LOW] In if \!addr.starts_with("fe80:") && \!addr.starts_with("fc") && \!addr.starts_with("fd")Actually, this is already checking for "fd" prefix correctly, so this is fine. The current logic correctly filters both link-local (fe80::) and ULA (fc00::/7, which includes both fc and fd prefixes). SummaryThe IPv6 implementation is well-designed and comprehensive. The code quality is high with good error handling, clear documentation, and thorough testing. The only minor issues are in test design where one test doesn't properly validate failure modes. Review by Claude | Run |
The PR changed health checks from HTTP-based (which verified nginx was responding) to container-ready file (which only verifies container started). This causes a race condition where the container is marked healthy but nginx hasn't finished binding to port 80 yet. Add 30-second retry loops with 500ms intervals to both test_port_forward_bridged and test_port_forward_rootless, giving nginx time to start after the container becomes ready. 🤖 Generated with [Claude Code](https://claude.com/claude-code) Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
|
Add Delegate=yes to fc-agent.service so podman can use cgroup controllers (pids, memory, cpu) when running containers inside the VM. Without this, crun fails with "the requested cgroup controller 'pids' is not available" on fresh VM boots. Snapshot restores worked because cgroups were already configured when the snapshot was taken. Tested: FCVM_NO_SNAPSHOT=1 make test-root FILTER=test_dns_resolution_in_vm
❌ CI Check FailedThe Host-Root-arm64-SnapshotDisabled test has failed. Please fix the failing test before I can review this PR. Review by Claude | Run |
🔧 CI Auto-FixCreated fix PR: #216 Problem: The Solution: Changed to use |
|
Superseded by PR #217 (http-proxy) which includes all IPv6 changes plus proxy support |
Summary
Adds IPv6 support to rootless (slirp4netns) networking mode. VMs now get both IPv4 and IPv6 connectivity when the host has a global IPv6 address.
Changes
IPv6 network setup in namespace:
fd00:1::2/64fd00:1::1/64--enable-ipv6and--outbound-addr6flags passed to slirp4netnsfc-agent IPv6 configuration:
ipv6=<client>|<gateway>from kernel cmdlineTest improvements:
test_ipv6_connectivity_in_vm: Verifies guest IPv6 address and gateway reachabilitytest_ipv6_egress_to_host: Tests egress to host's global IPv6 addressTest Results
Test plan