
fix(fleet/installer): fix ~90s daemon stop delay during installer setup #46757

Closed

BaptisteFoy wants to merge 10 commits into main from baptiste.foy/FA/installer-cancellable-ctx-span-fix

Conversation


BaptisteFoy (Contributor) commented Feb 21, 2026

What does this PR do?

Fixes the ~90s stop delay of `datadog-agent-installer.service` during installer setup (e.g. `--flavor databricks`), introduced when `remote_updates: true` became the default.

Root cause

`datadog-agent-installer.service` has `BindsTo=datadog-agent.service`, so every `systemctl restart datadog-agent` during setup also stops the installer daemon.
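The unit relationship driving this propagation looks like the following (fragment reconstructed for illustration; only the relevant directive is shown):

```ini
# datadog-agent-installer.service
[Unit]
# BindsTo stops this unit whenever datadog-agent.service stops,
# so a restart of the agent (stop + start) also stops the installer daemon.
BindsTo=datadog-agent.service
```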

The daemon stop hangs because installer subprocesses (`get-states`, `garbage-collect`) open `packages.db` via bbolt's exclusive flock, which the setup process holds for the entire setup duration. These subprocesses block indefinitely in `bbolt.Open` (not interruptible by context cancellation or SIGINT) and are orphaned in the systemd cgroup when the daemon process exits, so systemd waits the full `TimeoutStopSec` (90s) before sending SIGKILL.

Two related issues compound this:

  1. `newDaemon()` called `refreshState()` synchronously, spawning a `get-states` subprocess before FX init completed. If the daemon was stopped before init finished, signal handlers were never registered, so SIGTERM used Go's default handler (immediate exit) instead of calling `daemon.Stop()`, orphaning the subprocess without any cleanup.

  2. `Stop()` returned before the background goroutine exited. Even when `daemon.Stop()` was called correctly, it returned as soon as `stopChan` was closed. The daemon process exited, orphaning any subprocess still blocked on `bbolt.Open`. `WaitDelay` never fired because it requires the parent process to stay alive.

Fixes

pkg/fleet/daemon/daemon.go

  • Move `refreshState` to a background goroutine: Removed `refreshState()` from `newDaemon()` and moved it to the start of the `Start()` goroutine (without holding `d.m`; safe because `refreshState` only reads external state). FX init now completes immediately, signal handlers are registered, and SIGTERM is handled via `daemon.Stop()` before any blocking subprocess is spawned.

  • `goroutineWG.Wait()` in `Stop()`: Added a `goroutineWG` tracking the background goroutine. `Stop()` waits for the goroutine to exit before returning, keeping the daemon process alive until all child subprocesses have been waited on. This is the core fix: the parent process stays alive long enough for `WaitDelay` (15s) to fire and SIGKILL any subprocess blocked on `bbolt.Open`.

  • Cancellable daemon context: Added `context.WithCancel` to the daemon. `d.cancel()` is called at the start of `Stop()` (before acquiring the mutex) so in-flight subprocesses receive SIGINT immediately, without waiting for the mutex.

  • Release the mutex before waiting: Replaced `defer d.m.Unlock()` with explicit unlocks before `d.goroutineWG.Wait()`, so the background goroutine can still acquire `d.m` if needed while draining.

  • `scheduleRemoteAPIRequest` stop-awareness: Added a select on `d.stopChan` to avoid calling `requestsWG.Add(1)` without a corresponding `Done()` after the goroutine has exited.

  • Start RC after the initial `refreshState`: Moved `rc.Start()` from the end of `Start()` into the background goroutine, after the initial `refreshState()` completes. This ensures the first RC payload sent to the backend contains actual package state instead of an empty state. The call is guarded by the mutex and a context check to prevent a race with `rc.Close()` in `Stop()`.

pkg/fleet/installer/exec/installer_exec_nix.go

  • `WaitDelay = 15s`: Ensures SIGKILL fires 15s after SIGINT for subprocesses blocked in bbolt's exclusive flock, bounding the daemon stop time to ~15s in the worst case.

pkg/fleet/installer/setup/common/services_nix.go

  • Span leak fix: Added the missing `defer func() { span.Finish(err) }()` to `restartServices`.

Validate

  • `go test ./pkg/fleet/daemon/...` passes
  • End-to-end: `installer setup --flavor databricks` with `remote_updates: true` completes without the ~90s hang (previously observed on every `systemctl restart datadog-agent` during setup)

BaptisteFoy added the changelog/no-changelog (No changelog entry needed) and qa/done (QA done before merge and regressions are covered by tests) labels Feb 21, 2026
dd-octo-sts Bot added the internal (Identify a non-fork PR) and team/windows-products labels Feb 21, 2026
github-actions Bot added the medium review (PR review might take time) label Feb 21, 2026

agent-platform-auto-pr Bot commented Feb 21, 2026

Static quality checks

✅ Please find below the results from static quality gates
Comparison made with ancestor 4623533
📊 Static Quality Gates Dashboard
🔗 SQG Job

Successful checks

Info

| Quality gate | Change | Size (prev → curr → max) |
| --- | --- | --- |
| agent_heroku_amd64 | +4.0 KiB (0.00% increase) | 323.754 → 323.758 → 329.530 |
| docker_agent_amd64 | -13.05 KiB (0.00% reduction) | 817.041 → 817.028 → 821.990 |
| docker_agent_arm64 | -13.77 KiB (0.00% reduction) | 819.927 → 819.914 → 828.520 |
| docker_agent_jmx_amd64 | -13.05 KiB (0.00% reduction) | 1007.953 → 1007.940 → 1012.870 |
| docker_agent_jmx_arm64 | -13.77 KiB (0.00% reduction) | 999.621 → 999.607 → 1008.120 |
| docker_cluster_agent_amd64 | -3.33 KiB (0.00% reduction) | 203.002 → 202.999 → 204.270 |
| docker_cluster_agent_arm64 | -3.31 KiB (0.00% reduction) | 217.457 → 217.454 → 218.000 |
24 successful checks with minimal change (< 2 KiB)
| Quality gate | Current Size |
| --- | --- |
| agent_deb_amd64 | 756.007 MiB |
| agent_deb_amd64_fips | 715.097 MiB |
| agent_msi | 622.076 MiB |
| agent_rpm_amd64 | 755.991 MiB |
| agent_rpm_amd64_fips | 715.080 MiB |
| agent_rpm_arm64 | 734.141 MiB |
| agent_rpm_arm64_fips | 696.216 MiB |
| agent_suse_amd64 | 755.991 MiB |
| agent_suse_amd64_fips | 715.080 MiB |
| agent_suse_arm64 | 734.141 MiB |
| agent_suse_arm64_fips | 696.216 MiB |
| docker_cws_instrumentation_amd64 | 7.135 MiB |
| docker_cws_instrumentation_arm64 | 6.689 MiB |
| docker_dogstatsd_amd64 | 38.500 MiB |
| docker_dogstatsd_arm64 | 36.812 MiB |
| dogstatsd_deb_amd64 | 29.720 MiB |
| dogstatsd_deb_arm64 | 27.881 MiB |
| dogstatsd_rpm_amd64 | 29.720 MiB |
| dogstatsd_suse_amd64 | 29.720 MiB |
| iot_agent_deb_amd64 | 42.617 MiB |
| iot_agent_deb_arm64 | 39.723 MiB |
| iot_agent_deb_armhf | 40.447 MiB |
| iot_agent_rpm_amd64 | 42.618 MiB |
| iot_agent_suse_amd64 | 42.618 MiB |
On-wire sizes (compressed)

| Quality gate | Change | Size (prev → curr → max) |
| --- | --- | --- |
| agent_deb_amd64 | -4.46 KiB (0.00% reduction) | 185.464 → 185.459 → 186.090 |
| agent_deb_amd64_fips | +21.31 KiB (0.01% increase) | 176.270 → 176.291 → 180.330 |
| agent_heroku_amd64 | neutral | 87.102 MiB → 88.440 |
| agent_msi | +20.0 KiB (0.01% increase) | 149.199 → 149.219 → 154.470 |
| agent_rpm_amd64 | -20.46 KiB (0.01% reduction) | 187.339 → 187.319 → 189.170 |
| agent_rpm_amd64_fips | +4.03 KiB (0.00% increase) | 178.429 → 178.433 → 181.060 |
| agent_rpm_arm64 | -48.74 KiB (0.03% reduction) | 169.790 → 169.742 → 170.020 |
| agent_rpm_arm64_fips | +18.58 KiB (0.01% increase) | 162.498 → 162.516 → 164.130 |
| agent_suse_amd64 | -20.46 KiB (0.01% reduction) | 187.339 → 187.319 → 189.170 |
| agent_suse_amd64_fips | +4.03 KiB (0.00% increase) | 178.429 → 178.433 → 181.060 |
| agent_suse_arm64 | -48.74 KiB (0.03% reduction) | 169.790 → 169.742 → 170.020 |
| agent_suse_arm64_fips | +18.58 KiB (0.01% increase) | 162.498 → 162.516 → 164.130 |
| docker_agent_amd64 | neutral | 277.778 MiB → 279.410 |
| docker_agent_arm64 | +17.75 KiB (0.01% increase) | 265.034 → 265.051 → 267.960 |
| docker_agent_jmx_amd64 | +2.59 KiB (0.00% increase) | 346.426 → 346.429 → 348.040 |
| docker_agent_jmx_arm64 | -4.65 KiB (0.00% reduction) | 329.680 → 329.676 → 332.560 |
| docker_cluster_agent_amd64 | +9.88 KiB (0.01% increase) | 71.121 → 71.131 → 71.920 |
| docker_cluster_agent_arm64 | +8.3 KiB (0.01% increase) | 66.767 → 66.775 → 67.220 |
| docker_cws_instrumentation_amd64 | neutral | 2.995 MiB → 3.330 |
| docker_cws_instrumentation_arm64 | neutral | 2.726 MiB → 3.090 |
| docker_dogstatsd_amd64 | neutral | 14.900 MiB → 15.820 |
| docker_dogstatsd_arm64 | neutral | 14.239 MiB → 14.830 |
| dogstatsd_deb_amd64 | neutral | 7.853 MiB → 8.790 |
| dogstatsd_deb_arm64 | neutral | 6.741 MiB → 7.710 |
| dogstatsd_rpm_amd64 | neutral | 7.867 MiB → 8.800 |
| dogstatsd_suse_amd64 | neutral | 7.867 MiB → 8.800 |
| iot_agent_deb_amd64 | +2.5 KiB (0.02% increase) | 11.236 → 11.239 → 12.040 |
| iot_agent_deb_arm64 | +2.22 KiB (0.02% increase) | 9.602 → 9.604 → 10.450 |
| iot_agent_deb_armhf | neutral | 9.804 MiB → 10.620 |
| iot_agent_rpm_amd64 | neutral | 11.257 MiB → 12.060 |
| iot_agent_suse_amd64 | neutral | 11.257 MiB → 12.060 |


cit-pr-commenter-54b7da Bot commented Feb 21, 2026

Regression Detector

Regression Detector Results

Metrics dashboard
Target profiles
Run ID: fb75c764-7429-4484-b4fb-31214e6a4249

Baseline: 4623533
Comparison: 60cec15
Diff

Optimization Goals: ✅ No significant changes detected

Experiments ignored for regressions

Regressions in experiments with settings containing erratic: true are ignored.

| perf | experiment | goal | Δ mean % | Δ mean % CI | trials | links |
| --- | --- | --- | --- | --- | --- | --- |
| ➖ | docker_containers_cpu | % cpu utilization | -0.92 | [-3.98, +2.14] | 1 | Logs |

Fine details of change detection per experiment

| perf | experiment | goal | Δ mean % | Δ mean % CI | trials | links |
| --- | --- | --- | --- | --- | --- | --- |
| ➖ | ddot_metrics_sum_cumulative | memory utilization | +0.55 | [+0.40, +0.71] | 1 | Logs |
| ➖ | docker_containers_memory | memory utilization | +0.43 | [+0.35, +0.52] | 1 | Logs |
| ➖ | file_tree | memory utilization | +0.21 | [+0.16, +0.26] | 1 | Logs |
| ➖ | quality_gate_metrics_logs | memory utilization | +0.05 | [-0.16, +0.26] | 1 | Logs, bounds checks dashboard |
| ➖ | file_to_blackhole_500ms_latency | egress throughput | +0.05 | [-0.34, +0.43] | 1 | Logs |
| ➖ | file_to_blackhole_100ms_latency | egress throughput | +0.01 | [-0.03, +0.06] | 1 | Logs |
| ➖ | uds_dogstatsd_to_api | ingress throughput | +0.01 | [-0.12, +0.14] | 1 | Logs |
| ➖ | quality_gate_idle | memory utilization | +0.00 | [-0.04, +0.05] | 1 | Logs, bounds checks dashboard |
| ➖ | tcp_dd_logs_filter_exclude | ingress throughput | -0.00 | [-0.10, +0.09] | 1 | Logs |
| ➖ | ddot_logs | memory utilization | -0.00 | [-0.07, +0.06] | 1 | Logs |
| ➖ | uds_dogstatsd_20mb_12k_contexts_20_senders | memory utilization | -0.02 | [-0.08, +0.03] | 1 | Logs |
| ➖ | uds_dogstatsd_to_api_v3 | ingress throughput | -0.02 | [-0.14, +0.09] | 1 | Logs |
| ➖ | file_to_blackhole_1000ms_latency | egress throughput | -0.04 | [-0.47, +0.38] | 1 | Logs |
| ➖ | file_to_blackhole_0ms_latency | egress throughput | -0.05 | [-0.54, +0.43] | 1 | Logs |
| ➖ | ddot_metrics_sum_cumulativetodelta_exporter | memory utilization | -0.10 | [-0.33, +0.14] | 1 | Logs |
| ➖ | quality_gate_idle_all_features | memory utilization | -0.18 | [-0.21, -0.15] | 1 | Logs, bounds checks dashboard |
| ➖ | otlp_ingest_logs | memory utilization | -0.20 | [-0.29, -0.11] | 1 | Logs |
| ➖ | otlp_ingest_metrics | memory utilization | -0.27 | [-0.42, -0.11] | 1 | Logs |
| ➖ | ddot_metrics_sum_delta | memory utilization | -0.32 | [-0.51, -0.12] | 1 | Logs |
| ➖ | docker_containers_cpu | % cpu utilization | -0.92 | [-3.98, +2.14] | 1 | Logs |
| ➖ | ddot_metrics | memory utilization | -1.15 | [-1.36, -0.95] | 1 | Logs |
| ➖ | tcp_syslog_to_blackhole | ingress throughput | -1.63 | [-1.72, -1.54] | 1 | Logs |
| ➖ | quality_gate_logs | % cpu utilization | -2.83 | [-4.32, -1.35] | 1 | Logs, bounds checks dashboard |

Bounds Checks: ✅ Passed

| perf | experiment | bounds_check_name | replicates_passed | links |
| --- | --- | --- | --- | --- |
| ✅ | docker_containers_cpu | simple_check_run | 10/10 | |
| ✅ | docker_containers_memory | memory_usage | 10/10 | |
| ✅ | docker_containers_memory | simple_check_run | 10/10 | |
| ✅ | file_to_blackhole_0ms_latency | lost_bytes | 10/10 | |
| ✅ | file_to_blackhole_0ms_latency | memory_usage | 10/10 | |
| ✅ | file_to_blackhole_1000ms_latency | lost_bytes | 10/10 | |
| ✅ | file_to_blackhole_1000ms_latency | memory_usage | 10/10 | |
| ✅ | file_to_blackhole_100ms_latency | lost_bytes | 10/10 | |
| ✅ | file_to_blackhole_100ms_latency | memory_usage | 10/10 | |
| ✅ | file_to_blackhole_500ms_latency | lost_bytes | 10/10 | |
| ✅ | file_to_blackhole_500ms_latency | memory_usage | 10/10 | |
| ✅ | quality_gate_idle | intake_connections | 10/10 | bounds checks dashboard |
| ✅ | quality_gate_idle | memory_usage | 10/10 | bounds checks dashboard |
| ✅ | quality_gate_idle_all_features | intake_connections | 10/10 | bounds checks dashboard |
| ✅ | quality_gate_idle_all_features | memory_usage | 10/10 | bounds checks dashboard |
| ✅ | quality_gate_logs | intake_connections | 10/10 | bounds checks dashboard |
| ✅ | quality_gate_logs | lost_bytes | 10/10 | bounds checks dashboard |
| ✅ | quality_gate_logs | memory_usage | 10/10 | bounds checks dashboard |
| ✅ | quality_gate_metrics_logs | cpu_usage | 10/10 | bounds checks dashboard |
| ✅ | quality_gate_metrics_logs | intake_connections | 10/10 | bounds checks dashboard |
| ✅ | quality_gate_metrics_logs | lost_bytes | 10/10 | bounds checks dashboard |
| ✅ | quality_gate_metrics_logs | memory_usage | 10/10 | bounds checks dashboard |

Explanation

Confidence level: 90.00%
Effect size tolerance: |Δ mean %| ≥ 5.00%

Performance changes are noted in the perf column of each table:

  • ✅ = significantly better comparison variant performance
  • ❌ = significantly worse comparison variant performance
  • ➖ = no significant change in performance

A regression test is an A/B test of target performance in a repeatable rig, where "performance" is measured as "comparison variant minus baseline variant" for an optimization goal (e.g., ingress throughput). Due to intrinsic variability in measuring that goal, we can only estimate its mean value for each experiment; we report uncertainty in that value as a 90.00% confidence interval denoted "Δ mean % CI".

For each experiment, we decide that a change in performance is a "regression" (a change worth investigating further) if all of the following criteria are true:

  1. Its estimated |Δ mean %| ≥ 5.00%, indicating the change is big enough to merit a closer look.

  2. Its 90.00% confidence interval "Δ mean % CI" does not contain zero, indicating that if our statistical model is accurate, there is at least a 90.00% chance there is a difference in performance between baseline and comparison variants.

  3. Its configuration does not mark it "erratic".

CI Pass/Fail Decision

Passed. All Quality Gates passed.

  • quality_gate_idle_all_features, bounds check memory_usage: 10/10 replicas passed. Gate passed.
  • quality_gate_idle_all_features, bounds check intake_connections: 10/10 replicas passed. Gate passed.
  • quality_gate_metrics_logs, bounds check cpu_usage: 10/10 replicas passed. Gate passed.
  • quality_gate_metrics_logs, bounds check intake_connections: 10/10 replicas passed. Gate passed.
  • quality_gate_metrics_logs, bounds check lost_bytes: 10/10 replicas passed. Gate passed.
  • quality_gate_metrics_logs, bounds check memory_usage: 10/10 replicas passed. Gate passed.
  • quality_gate_logs, bounds check memory_usage: 10/10 replicas passed. Gate passed.
  • quality_gate_logs, bounds check intake_connections: 10/10 replicas passed. Gate passed.
  • quality_gate_logs, bounds check lost_bytes: 10/10 replicas passed. Gate passed.
  • quality_gate_idle, bounds check memory_usage: 10/10 replicas passed. Gate passed.
  • quality_gate_idle, bounds check intake_connections: 10/10 replicas passed. Gate passed.

BaptisteFoy changed the title from "fix(fleet/installer): cancellable daemon context, span leak fix, and stdout logging" to "fix(fleet/installer): fix 90s daemon stop delay during installer setup" Feb 24, 2026
BaptisteFoy changed the title from "fix(fleet/installer): fix 90s daemon stop delay during installer setup" to "fix(fleet/installer): fix ~90s daemon stop delay during installer setup" Feb 24, 2026
BaptisteFoy marked this pull request as ready for review February 24, 2026 13:03
BaptisteFoy requested review from a team as code owners February 24, 2026 13:03
BaptisteFoy marked this pull request as draft February 24, 2026 16:38