Fix Viper AllFlattenedSettingsWithSequenceID implementation #46997

gh-worker-dd-mergequeue-cf854d[bot] merged 4 commits into main.

Conversation
Files inventory check summary

File check results against ancestor f9067ed5, for datadog-agent_7.78.0~devel.git.217.b87e93f.pipeline.99782179-1_amd64.deb:

Detected file changes:
Static quality checks

✅ Please find below the results from static quality gates.

Successful checks: 23 successful checks with minimal change (< 2 KiB)

On-wire sizes (compressed)
Regression Detector

Regression Detector Results (Metrics dashboard)

Baseline: 47e0eb3

Optimization Goals: ✅ No significant changes detected
| perf | experiment | goal | Δ mean % | Δ mean % CI | trials | links |
|---|---|---|---|---|---|---|
| ➖ | docker_containers_cpu | % cpu utilization | -0.26 | [-3.37, +2.85] | 1 | Logs |
Fine details of change detection per experiment
| perf | experiment | goal | Δ mean % | Δ mean % CI | trials | links |
|---|---|---|---|---|---|---|
| ➖ | tcp_syslog_to_blackhole | ingress throughput | +0.52 | [+0.40, +0.65] | 1 | Logs |
| ➖ | otlp_ingest_metrics | memory utilization | +0.33 | [+0.17, +0.49] | 1 | Logs |
| ➖ | ddot_logs | memory utilization | +0.24 | [+0.19, +0.30] | 1 | Logs |
| ➖ | quality_gate_idle_all_features | memory utilization | +0.17 | [+0.13, +0.21] | 1 | Logs bounds checks dashboard |
| ➖ | docker_containers_memory | memory utilization | +0.09 | [+0.02, +0.17] | 1 | Logs |
| ➖ | file_to_blackhole_1000ms_latency | egress throughput | +0.08 | [-0.33, +0.50] | 1 | Logs |
| ➖ | ddot_metrics_sum_delta | memory utilization | +0.07 | [-0.12, +0.25] | 1 | Logs |
| ➖ | quality_gate_logs | % cpu utilization | +0.05 | [-1.44, +1.54] | 1 | Logs bounds checks dashboard |
| ➖ | quality_gate_metrics_logs | memory utilization | +0.02 | [-0.19, +0.24] | 1 | Logs bounds checks dashboard |
| ➖ | uds_dogstatsd_to_api | ingress throughput | +0.01 | [-0.11, +0.12] | 1 | Logs |
| ➖ | file_to_blackhole_100ms_latency | egress throughput | +0.01 | [-0.04, +0.05] | 1 | Logs |
| ➖ | uds_dogstatsd_to_api_v3 | ingress throughput | -0.00 | [-0.13, +0.12] | 1 | Logs |
| ➖ | tcp_dd_logs_filter_exclude | ingress throughput | -0.00 | [-0.09, +0.08] | 1 | Logs |
| ➖ | file_to_blackhole_0ms_latency | egress throughput | -0.09 | [-0.57, +0.40] | 1 | Logs |
| ➖ | file_to_blackhole_500ms_latency | egress throughput | -0.09 | [-0.47, +0.29] | 1 | Logs |
| ➖ | quality_gate_idle | memory utilization | -0.12 | [-0.17, -0.07] | 1 | Logs bounds checks dashboard |
| ➖ | ddot_metrics | memory utilization | -0.23 | [-0.45, -0.02] | 1 | Logs |
| ➖ | docker_containers_cpu | % cpu utilization | -0.26 | [-3.37, +2.85] | 1 | Logs |
| ➖ | otlp_ingest_logs | memory utilization | -0.27 | [-0.36, -0.18] | 1 | Logs |
| ➖ | file_tree | memory utilization | -0.33 | [-0.38, -0.27] | 1 | Logs |
| ➖ | ddot_metrics_sum_cumulative | memory utilization | -0.34 | [-0.49, -0.19] | 1 | Logs |
| ➖ | uds_dogstatsd_20mb_12k_contexts_20_senders | memory utilization | -0.38 | [-0.43, -0.32] | 1 | Logs |
| ➖ | ddot_metrics_sum_cumulativetodelta_exporter | memory utilization | -0.39 | [-0.62, -0.16] | 1 | Logs |
Bounds Checks: ❌ Failed
| perf | experiment | bounds_check_name | replicates_passed | links |
|---|---|---|---|---|
| ✅ | docker_containers_cpu | simple_check_run | 10/10 | |
| ✅ | docker_containers_memory | memory_usage | 10/10 | |
| ✅ | docker_containers_memory | simple_check_run | 10/10 | |
| ✅ | file_to_blackhole_0ms_latency | lost_bytes | 10/10 | |
| ✅ | file_to_blackhole_0ms_latency | memory_usage | 10/10 | |
| ✅ | file_to_blackhole_1000ms_latency | lost_bytes | 10/10 | |
| ✅ | file_to_blackhole_1000ms_latency | memory_usage | 10/10 | |
| ✅ | file_to_blackhole_100ms_latency | lost_bytes | 10/10 | |
| ✅ | file_to_blackhole_100ms_latency | memory_usage | 10/10 | |
| ✅ | file_to_blackhole_500ms_latency | lost_bytes | 10/10 | |
| ✅ | file_to_blackhole_500ms_latency | memory_usage | 10/10 | |
| ✅ | quality_gate_idle | intake_connections | 10/10 | bounds checks dashboard |
| ❌ | quality_gate_idle | memory_usage | 9/10 | bounds checks dashboard |
| ✅ | quality_gate_idle_all_features | intake_connections | 10/10 | bounds checks dashboard |
| ✅ | quality_gate_idle_all_features | memory_usage | 10/10 | bounds checks dashboard |
| ✅ | quality_gate_logs | intake_connections | 10/10 | bounds checks dashboard |
| ✅ | quality_gate_logs | lost_bytes | 10/10 | bounds checks dashboard |
| ✅ | quality_gate_logs | memory_usage | 10/10 | bounds checks dashboard |
| ✅ | quality_gate_metrics_logs | cpu_usage | 10/10 | bounds checks dashboard |
| ✅ | quality_gate_metrics_logs | intake_connections | 10/10 | bounds checks dashboard |
| ✅ | quality_gate_metrics_logs | lost_bytes | 10/10 | bounds checks dashboard |
| ✅ | quality_gate_metrics_logs | memory_usage | 10/10 | bounds checks dashboard |
Explanation
Confidence level: 90.00%
Effect size tolerance: |Δ mean %| ≥ 5.00%
Performance changes are noted in the perf column of each table:
- ✅ = significantly better comparison variant performance
- ❌ = significantly worse comparison variant performance
- ➖ = no significant change in performance
A regression test is an A/B test of target performance in a repeatable rig, where "performance" is measured as "comparison variant minus baseline variant" for an optimization goal (e.g., ingress throughput). Due to intrinsic variability in measuring that goal, we can only estimate its mean value for each experiment; we report uncertainty in that value as a 90.00% confidence interval denoted "Δ mean % CI".
For each experiment, we flag a change in performance as a "regression" -- a change worth investigating further -- only if all of the following criteria are true:
- Its estimated |Δ mean %| ≥ 5.00%, indicating the change is big enough to merit a closer look.
- Its 90.00% confidence interval "Δ mean % CI" does not contain zero, indicating that, if our statistical model is accurate, there is at least a 90.00% chance of a difference in performance between the baseline and comparison variants.
- Its configuration does not mark it "erratic".
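The three criteria above can be sketched as a small decision helper. This is a hedged illustration: the `experiment` type, its field names, and `isRegression` are all hypothetical; only the 5% effect-size threshold, the CI-excludes-zero rule, and the "erratic" exemption come from the text.

```go
package main

import "fmt"

// experiment is a hypothetical record of one A/B rig result.
type experiment struct {
	name          string
	deltaMeanPct  float64 // estimated Δ mean %
	ciLow, ciHigh float64 // 90% confidence interval bounds for Δ mean %
	erratic       bool    // marked "erratic" in configuration
}

// isRegression flags a change worth investigating: the effect size is at
// least 5%, the 90% CI excludes zero, and the rig is not marked erratic.
func isRegression(e experiment) bool {
	bigEnough := e.deltaMeanPct >= 5.0 || e.deltaMeanPct <= -5.0
	ciExcludesZero := e.ciLow > 0 || e.ciHigh < 0
	return bigEnough && ciExcludesZero && !e.erratic
}

func main() {
	// Values taken from the docker_containers_cpu row above.
	e := experiment{name: "docker_containers_cpu", deltaMeanPct: -0.26, ciLow: -3.37, ciHigh: 2.85}
	fmt.Println(isRegression(e)) // small effect and CI contains zero → false
}
```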
CI Pass/Fail Decision
❌ Failed. Some Quality Gates were violated.
- quality_gate_logs, bounds check intake_connections: 10/10 replicas passed. Gate passed.
- quality_gate_logs, bounds check memory_usage: 10/10 replicas passed. Gate passed.
- quality_gate_logs, bounds check lost_bytes: 10/10 replicas passed. Gate passed.
- quality_gate_metrics_logs, bounds check cpu_usage: 10/10 replicas passed. Gate passed.
- quality_gate_metrics_logs, bounds check lost_bytes: 10/10 replicas passed. Gate passed.
- quality_gate_metrics_logs, bounds check memory_usage: 10/10 replicas passed. Gate passed.
- quality_gate_metrics_logs, bounds check intake_connections: 10/10 replicas passed. Gate passed.
- quality_gate_idle, bounds check memory_usage: 9/10 replicas passed. Failed 1 which is > 0. Gate FAILED.
- quality_gate_idle, bounds check intake_connections: 10/10 replicas passed. Gate passed.
- quality_gate_idle_all_features, bounds check memory_usage: 10/10 replicas passed. Gate passed.
- quality_gate_idle_all_features, bounds check intake_connections: 10/10 replicas passed. Gate passed.
hush-hush left a comment:

Added some nit-picks; feel free to merge after addressing.
```go
// 1. Collect known keys and filter for parent-child relationships (keep only leaves)
// 2. Add all unknown keys as-is
// Must be called while holding at least a read lock.
func (c *safeConfig) collectFlattenedKeys() []string {
```

Could we add a test for this?
```go
assert.ElementsMatch(t, viperKeys, ntmKeys)
assert.ElementsMatch(t, []string{"apm_config.foo", "apm_config.bar.baz", "proxy.http"}, viperKeys)
```
Small style nit: for these compatibility tests, it's more in line with other tests to write them like this instead:

```diff
-assert.ElementsMatch(t, viperKeys, ntmKeys)
-assert.ElementsMatch(t, []string{"apm_config.foo", "apm_config.bar.baz", "proxy.http"}, viperKeys)
+expectedKeys := []string{"apm_config.foo", "apm_config.bar.baz", "proxy.http"}
+assert.ElementsMatch(t, expectedKeys, viperKeys)
+assert.ElementsMatch(t, expectedKeys, ntmKeys)
```
The rationale: let's just suppose something changes in the future that causes these tests to fail. In your version, the first assertion would fail, saying the viper and ntm don't match, but that doesn't tell you which is acting incorrectly. In my suggested version, the failure would tell you what the expectations are, and which implementation fails to meet them. Both versions of the code accomplish the same thing, but they communicate differently.
```diff
 defer c.RUnlock()

-keys := c.Viper.AllKeys()
+keys := c.collectFlattenedKeys()
```

Could you add a unit test that specifically demonstrates how AllKeys() and collectFlattenedKeys() differ?
Added a specific comparison in TestAllKeysVsCollectAllLeafKeys. For

```go
config.Set("additional_endpoints", map[string]interface{}{
    "https://app.datadoghq.com": []interface{}{"api_key"},...
```

`viper.AllKeys()` contains `additional_endpoints.https://app.datadoghq.com` as a key.
```go
	return res
}

// collectFlattenedKeys returns flattened keys that match nodetreemodel semantics:
```

I don't think this method name properly distinguishes what this function does compared to other viper methods. All of the viper methods that retrieve lists of keys (AllKeys, GetKnownKeys, AllKeysLowercased) return dot-separated settings as a flat string list, as opposed to the AllSettings method, which returns nested data. Perhaps this should be collectAllLeafKeys instead?
Renamed to collectAllLeafKeys 👍
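The leaf-only semantics behind the renamed method can be sketched in isolation. `collectLeafKeys` below is an illustrative stand-in, not the actual `safeConfig` method: among the known dot-separated keys, it drops any key that is a strict prefix (parent) of another key, keeping only the leaves, per the doc comment quoted earlier in this review.

```go
package main

import (
	"fmt"
	"sort"
	"strings"
)

// collectLeafKeys keeps only leaf keys: a key is dropped if some other key
// extends it with a "." separator (i.e. it names an intermediate node).
func collectLeafKeys(known []string) []string {
	var leaves []string
	for _, k := range known {
		isParent := false
		for _, other := range known {
			if other != k && strings.HasPrefix(other, k+".") {
				isParent = true
				break
			}
		}
		if !isParent {
			leaves = append(leaves, k)
		}
	}
	sort.Strings(leaves)
	return leaves
}

func main() {
	known := []string{"apm_config", "apm_config.foo", "apm_config.bar.baz", "proxy.http"}
	// "apm_config" is a parent of the two keys beneath it, so it is dropped.
	fmt.Println(collectLeafKeys(known)) // [apm_config.bar.baz apm_config.foo proxy.http]
}
```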
dustmop left a comment:

LGTM, with a request for one small fix. Thanks for the additional tests!
```go
func TestCollectAllLeafKeys(t *testing.T) {
	config := NewViperConfig("test", "DD", strings.NewReplacer(".", "_")).(*safeConfig) // nolint: forbidigo

	config.SetKnown("apm_config.foo") //nolint:forbidigo // TODO: replace by 'SetDefaultAndBindEnv'
```
Instead of the second comment being // TODO: replace by 'SetDefaultAndBindEnv' could you make it // testing behavior. Because (1) there's no method named SetDefaultAndBindEnv and (2) we won't ever replace this call, it's here on purpose to test the behavior it leads to. Same for when it shows up in TestAllKeysVsCollectAllLeafKeys please.
|
/merge

This pull request is not mergeable according to GitHub. Common reasons include pending required checks, missing approvals, or merge conflicts, but it could also be blocked by other repository rules or settings.
… ### What does this PR do? Fixes `AllFlattenedSettingsWithSequenceID()` in Viper to match nodetreemodel (NTM) semantics by replacing `Viper.AllKeys()` with a custom leaf-key implementation. ### Motivation `Viper.AllKeys()` recursively flattens map **values** with `.` delimiters, emitting non-schema keys like: `additional_endpoints.https://url1.com` ### Describe how you validated your changes - Additional unit tests - E2E test: 1. Run the agent with `additional_endpoints` configured. 2. Run ADP with `DD_DATA_PLANE_USE_NEW_CONFIG_STREAM_ENDPOINT=true` 3. Observe that ADP no longer crashes. ### Additional Notes Co-authored-by: raymond.zhao <raymond.zhao@datadoghq.com> (cherry picked from commit 1782958) ___ Co-authored-by: Raymond Zhao <35050708+rayz@users.noreply.github.com>
…ementation (#47185) Backport 1782958 from #46997. ___ ### What does this PR do? Fixes `AllFlattenedSettingsWithSequenceID()` in Viper to match nodetreemodel (NTM) semantics by replacing `Viper.AllKeys()` with a custom leaf-key implementation. ### Motivation `Viper.AllKeys()` recursively flattens map **values** with `.` delimiters, emitting non-schema keys like: `additional_endpoints.https://url1.com` ### Describe how you validated your changes - Additional unit tests - E2E test: 1. Run the agent with `additional_endpoints` configured. 2. Run ADP with `DD_DATA_PLANE_USE_NEW_CONFIG_STREAM_ENDPOINT=true` 3. Observe that ADP no longer crashes. ### Additional Notes Co-authored-by: sabrina-datadog <sabrina.lu@datadoghq.com>
What does this PR do?

Fixes `AllFlattenedSettingsWithSequenceID()` in Viper to match nodetreemodel (NTM) semantics by replacing `Viper.AllKeys()` with a custom leaf-key implementation.

Motivation

`Viper.AllKeys()` recursively flattens map values with `.` delimiters, emitting non-schema keys like: `additional_endpoints.https://url1.com`

Describe how you validated your changes

- Additional unit tests
- E2E test:
  1. Run the agent with `additional_endpoints` configured.
  2. Run ADP with `DD_DATA_PLANE_USE_NEW_CONFIG_STREAM_ENDPOINT=true`.
  3. Observe that ADP no longer crashes.

Additional Notes