feat(health-platform): add modular issue registry and docker check #46008
Conversation
Add a new diagnose suite "health-platform-issues" that displays health platform issues in the `agent diagnose` command output.

Changes:
- Add HealthPlatformIssues constant to diagnose suite definitions
- Create diagnose.go in healthplatform/impl with Diagnose function
- Register health platform diagnose in agent run command
- Only display issues when health_platform.enabled is true
- Map issue severity to diagnose status (critical/high -> FAIL, others -> WARNING)
- Format full remediation including summary and numbered steps (see the sketch after this list)
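A minimal sketch of the severity-to-status mapping and remediation formatting described above. The type and field names (`issue`, `diagnoseStatus`, etc.) are illustrative assumptions, not the exact types used in this PR.

```go
// Sketch only: map a health platform issue severity to a diagnose status
// and format the remediation text, as described in the PR summary.
package healthplatformimpl

import (
	"fmt"
	"strings"
)

type diagnoseStatus int

const (
	statusWarning diagnoseStatus = iota
	statusFail
)

// issue is a simplified stand-in for the real issue type.
type issue struct {
	Severity    string   // e.g. "critical", "high", "medium", "low"
	Summary     string   // short description of the problem
	Remediation []string // ordered remediation steps
}

// statusForSeverity maps critical/high to FAIL and everything else to WARNING.
func statusForSeverity(severity string) diagnoseStatus {
	switch strings.ToLower(severity) {
	case "critical", "high":
		return statusFail
	default:
		return statusWarning
	}
}

// formatRemediation builds the full remediation text: summary plus numbered steps.
func formatRemediation(i issue) string {
	var b strings.Builder
	b.WriteString(i.Summary)
	for n, step := range i.Remediation {
		b.WriteString(fmt.Sprintf("\n%d. %s", n+1, step))
	}
	return b.String()
}
```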
Static quality checks ✅
Please find below the results from static quality gates.
Successful checks
Info
On-wire sizes (compressed)
Regression Detector
Regression Detector Results
Metrics dashboard
Baseline: f2a4385
Optimization Goals: ✅ No significant changes detected
| perf | experiment | goal | Δ mean % | Δ mean % CI | trials | links |
|---|---|---|---|---|---|---|
| ➖ | docker_containers_cpu | % cpu utilization | +3.36 | [+0.21, +6.51] | 1 | Logs |
Fine details of change detection per experiment
| perf | experiment | goal | Δ mean % | Δ mean % CI | trials | links |
|---|---|---|---|---|---|---|
| ➖ | docker_containers_cpu | % cpu utilization | +3.36 | [+0.21, +6.51] | 1 | Logs |
| ➖ | ddot_metrics | memory utilization | +1.72 | [+1.52, +1.93] | 1 | Logs |
| ➖ | quality_gate_logs | % cpu utilization | +0.92 | [-0.59, +2.42] | 1 | Logs bounds checks dashboard |
| ➖ | otlp_ingest_logs | memory utilization | +0.25 | [+0.15, +0.36] | 1 | Logs |
| ➖ | ddot_metrics_sum_cumulativetodelta_exporter | memory utilization | +0.22 | [-0.01, +0.45] | 1 | Logs |
| ➖ | file_to_blackhole_0ms_latency | egress throughput | +0.16 | [-0.31, +0.63] | 1 | Logs |
| ➖ | file_to_blackhole_1000ms_latency | egress throughput | -0.00 | [-0.43, +0.43] | 1 | Logs |
| ➖ | uds_dogstatsd_to_api_v3 | ingress throughput | -0.00 | [-0.15, +0.15] | 1 | Logs |
| ➖ | file_to_blackhole_100ms_latency | egress throughput | -0.00 | [-0.05, +0.04] | 1 | Logs |
| ➖ | tcp_dd_logs_filter_exclude | ingress throughput | -0.01 | [-0.10, +0.08] | 1 | Logs |
| ➖ | file_to_blackhole_500ms_latency | egress throughput | -0.02 | [-0.40, +0.36] | 1 | Logs |
| ➖ | uds_dogstatsd_to_api | ingress throughput | -0.03 | [-0.16, +0.10] | 1 | Logs |
| ➖ | uds_dogstatsd_20mb_12k_contexts_20_senders | memory utilization | -0.04 | [-0.09, +0.02] | 1 | Logs |
| ➖ | file_tree | memory utilization | -0.05 | [-0.10, +0.00] | 1 | Logs |
| ➖ | quality_gate_idle | memory utilization | -0.06 | [-0.10, -0.02] | 1 | Logs bounds checks dashboard |
| ➖ | otlp_ingest_metrics | memory utilization | -0.08 | [-0.23, +0.08] | 1 | Logs |
| ➖ | docker_containers_memory | memory utilization | -0.18 | [-0.26, -0.11] | 1 | Logs |
| ➖ | quality_gate_idle_all_features | memory utilization | -0.20 | [-0.24, -0.16] | 1 | Logs bounds checks dashboard |
| ➖ | ddot_metrics_sum_cumulative | memory utilization | -0.24 | [-0.41, -0.08] | 1 | Logs |
| ➖ | tcp_syslog_to_blackhole | ingress throughput | -0.28 | [-0.36, -0.20] | 1 | Logs |
| ➖ | quality_gate_metrics_logs | memory utilization | -0.46 | [-0.68, -0.25] | 1 | Logs bounds checks dashboard |
| ➖ | ddot_logs | memory utilization | -0.52 | [-0.58, -0.45] | 1 | Logs |
| ➖ | ddot_metrics_sum_delta | memory utilization | -1.00 | [-1.18, -0.82] | 1 | Logs |
Bounds Checks: ✅ Passed
| perf | experiment | bounds_check_name | replicates_passed | links |
|---|---|---|---|---|
| ✅ | docker_containers_cpu | simple_check_run | 10/10 | |
| ✅ | docker_containers_memory | memory_usage | 10/10 | |
| ✅ | docker_containers_memory | simple_check_run | 10/10 | |
| ✅ | file_to_blackhole_0ms_latency | lost_bytes | 10/10 | |
| ✅ | file_to_blackhole_0ms_latency | memory_usage | 10/10 | |
| ✅ | file_to_blackhole_1000ms_latency | lost_bytes | 10/10 | |
| ✅ | file_to_blackhole_1000ms_latency | memory_usage | 10/10 | |
| ✅ | file_to_blackhole_100ms_latency | lost_bytes | 10/10 | |
| ✅ | file_to_blackhole_100ms_latency | memory_usage | 10/10 | |
| ✅ | file_to_blackhole_500ms_latency | lost_bytes | 10/10 | |
| ✅ | file_to_blackhole_500ms_latency | memory_usage | 10/10 | |
| ✅ | quality_gate_idle | intake_connections | 10/10 | bounds checks dashboard |
| ✅ | quality_gate_idle | memory_usage | 10/10 | bounds checks dashboard |
| ✅ | quality_gate_idle_all_features | intake_connections | 10/10 | bounds checks dashboard |
| ✅ | quality_gate_idle_all_features | memory_usage | 10/10 | bounds checks dashboard |
| ✅ | quality_gate_logs | intake_connections | 10/10 | bounds checks dashboard |
| ✅ | quality_gate_logs | lost_bytes | 10/10 | bounds checks dashboard |
| ✅ | quality_gate_logs | memory_usage | 10/10 | bounds checks dashboard |
| ✅ | quality_gate_metrics_logs | cpu_usage | 10/10 | bounds checks dashboard |
| ✅ | quality_gate_metrics_logs | intake_connections | 10/10 | bounds checks dashboard |
| ✅ | quality_gate_metrics_logs | lost_bytes | 10/10 | bounds checks dashboard |
| ✅ | quality_gate_metrics_logs | memory_usage | 10/10 | bounds checks dashboard |
Explanation
Confidence level: 90.00%
Effect size tolerance: |Δ mean %| ≥ 5.00%
Performance changes are noted in the perf column of each table:
- ✅ = significantly better comparison variant performance
- ❌ = significantly worse comparison variant performance
- ➖ = no significant change in performance
A regression test is an A/B test of target performance in a repeatable rig, where "performance" is measured as "comparison variant minus baseline variant" for an optimization goal (e.g., ingress throughput). Due to intrinsic variability in measuring that goal, we can only estimate its mean value for each experiment; we report uncertainty in that value as a 90.00% confidence interval denoted "Δ mean % CI".
For each experiment, we decide whether a change in performance is a "regression" -- a change worth investigating further -- if all of the following criteria are true:
- Its estimated |Δ mean %| ≥ 5.00%, indicating the change is big enough to merit a closer look.
- Its 90.00% confidence interval "Δ mean % CI" does not contain zero, indicating that if our statistical model is accurate, there is at least a 90.00% chance there is a difference in performance between baseline and comparison variants.
- Its configuration does not mark it "erratic" (see the sketch after this list).
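A hedged sketch of the decision rule the three criteria above describe; the struct and function names are illustrative, not the Regression Detector's actual implementation.

```go
// Sketch of the regression decision described above.
package regressioncheck

// experimentResult summarizes one experiment's estimated change.
type experimentResult struct {
	DeltaMeanPct  float64 // estimated Δ mean %
	CILow, CIHigh float64 // bounds of the 90% confidence interval on Δ mean %
	Erratic       bool    // experiment marked "erratic" in its configuration
}

// isRegression reports whether a change is flagged for further investigation:
// the estimated effect exceeds the tolerance, the confidence interval excludes
// zero, and the experiment is not marked erratic.
func isRegression(r experimentResult, tolerancePct float64) bool {
	bigEnough := r.DeltaMeanPct >= tolerancePct || r.DeltaMeanPct <= -tolerancePct
	ciExcludesZero := r.CILow > 0 || r.CIHigh < 0
	return bigEnough && ciExcludesZero && !r.Erratic
}
```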
CI Pass/Fail Decision
✅ Passed. All Quality Gates passed.
- quality_gate_idle_all_features, bounds check memory_usage: 10/10 replicas passed. Gate passed.
- quality_gate_idle_all_features, bounds check intake_connections: 10/10 replicas passed. Gate passed.
- quality_gate_logs, bounds check intake_connections: 10/10 replicas passed. Gate passed.
- quality_gate_logs, bounds check memory_usage: 10/10 replicas passed. Gate passed.
- quality_gate_logs, bounds check lost_bytes: 10/10 replicas passed. Gate passed.
- quality_gate_metrics_logs, bounds check cpu_usage: 10/10 replicas passed. Gate passed.
- quality_gate_metrics_logs, bounds check lost_bytes: 10/10 replicas passed. Gate passed.
- quality_gate_metrics_logs, bounds check memory_usage: 10/10 replicas passed. Gate passed.
- quality_gate_metrics_logs, bounds check intake_connections: 10/10 replicas passed. Gate passed.
- quality_gate_idle, bounds check memory_usage: 10/10 replicas passed. Gate passed.
- quality_gate_idle, bounds check intake_connections: 10/10 replicas passed. Gate passed.
Go Package Import Differences
Baseline: f2a4385
```go
const (
	// IssueID is the unique identifier for check failure issues
	IssueID = "check-execution-failure"
```
❓ question
What happens when multiple checks fail? Do we report them all with the same issueID? Should we format the name at runtime, including an id for the single failing check?
That's a good question, I faced the same one while coding this PR. IMO we should always have 1 issue = 1 ID; if a check reports multiple issues, they should have different IDs.
I would also like to avoid the edge case where a check that fails consistently reports the issue multiple times; it should just say "this issue is still ongoing". Moreover, we could detect check flakiness at that point if a check goes from new -> ongoing -> resolved -> new, etc.
I started implementing this logic here as well: #46069; some related metrics will follow in another PR.
I do agree, let's keep IssueId unique per host, so that we can use host + IssueId as the fingerprint.
Are you going to change the id to include the check name in #46069?
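A rough sketch of the ID scheme discussed in this thread: one distinct issue ID per failing check, deduplicated per host via a host + IssueId fingerprint. The function names are hypothetical and not taken from #46069.

```go
// Sketch of the id scheme discussed above; names are illustrative.
package healthplatformimpl

import "fmt"

// checkFailureIssueID derives a distinct issue id for each failing check,
// so different checks never collide on a single "check-execution-failure" id.
func checkFailureIssueID(checkName string) string {
	return fmt.Sprintf("check-execution-failure-%s", checkName)
}

// issueFingerprint combines the host with the issue id, so a recurring
// failure on the same host updates the existing issue ("still ongoing")
// instead of creating a new one.
func issueFingerprint(hostname, issueID string) string {
	return hostname + ":" + issueID
}
```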
pducolin left a comment
OK, as long as you implement dynamic and unique issue IDs in a follow-up.
```go
if exists && !reachable {
	// Docker socket exists but is not reachable - permission issue
	return &healthplatform.IssueReport{
		IssueId: IssueID,
```
💬 suggestion
This check should report a single issue per id. Here, if multiple sockets are unreachable, we will overwrite them.
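One possible way to address this, sketched below: emit one report per unreachable socket and fold the socket path into the issue ID so reports do not overwrite each other. This assumes the surrounding file's imports (`fmt`, `healthplatform`); only the `IssueId` field is shown, and the helper name is hypothetical.

```go
// Sketch: avoid overwriting issues when several Docker sockets are
// unreachable by emitting one report per socket.
func reportsForUnreachableSockets(socketPaths []string) []*healthplatform.IssueReport {
	reports := make([]*healthplatform.IssueReport, 0, len(socketPaths))
	for _, socketPath := range socketPaths {
		reports = append(reports, &healthplatform.IssueReport{
			// Distinct id per socket so one report does not overwrite another.
			IssueId: fmt.Sprintf("%s-%s", IssueID, socketPath),
		})
	}
	return reports
}
```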
/merge
View all feedback in the Devflow UI.
This pull request is not mergeable according to GitHub. Common reasons include pending required checks, missing approvals, or merge conflicts — but it could also be blocked by other repository rules or settings.
The expected merge time in
What does this PR do?
Adds a modular issue registry under `impl/issues/*` and wires Health Platform to build issues via the new registry.
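A minimal sketch of what a modular issue registry like the one described above could look like: each issue definition under `impl/issues/*` registers a builder, and Health Platform iterates the registry to build issues. All names (`Register`, `BuildAll`, the simplified `IssueReport`) are illustrative assumptions, not the PR's actual API.

```go
// Sketch of a modular issue registry; names are illustrative.
package issues

import "sync"

// IssueReport is a simplified stand-in for the real report type.
type IssueReport struct {
	IssueId string
	Summary string
}

// Builder produces zero or more issue reports when invoked.
type Builder func() []IssueReport

var (
	mu       sync.Mutex
	registry = map[string]Builder{}
)

// Register adds a named issue builder; typically called from an init()
// in each impl/issues/<name> package.
func Register(name string, b Builder) {
	mu.Lock()
	defer mu.Unlock()
	registry[name] = b
}

// BuildAll runs every registered builder and collects the reports.
func BuildAll() []IssueReport {
	mu.Lock()
	defer mu.Unlock()
	var out []IssueReport
	for _, b := range registry {
		out = append(out, b()...)
	}
	return out
}
```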
Motivation
Make Health Platform production-ready by:
Describe how you validated your changes
CI + VM QA
Additional Notes