[SDK] Implement Forceflush for Periodic Metric Reader #2064
lalitb merged 16 commits into open-telemetry:main from …-cpp into periodic-forceflush
Conversation
Codecov Report
Additional details and impacted files

@@ Coverage Diff @@
##             main    #2064      +/-   ##
==========================================
+ Coverage   87.17%   87.19%   +0.02%
==========================================
  Files         166      166
  Lines        4746     4784      +38
==========================================
+ Hits         4137     4171      +34
- Misses        609      613       +4
This is ready for review now.
@owent - Once you have time, can you briefly look into the
if (result)
{
  result = exporter_->ForceFlush(timeout);
}
Is it better to use timeout_steady if it's greater than zero here? We have already waited for some time before this point, so the remaining timeout should be smaller.
Good point. Have changed it to use timeout_steady now.
is_force_flush_notified_.store(false, std::memory_order_release);

if (timeout == std::chrono::steady_clock::duration::zero())
Do we need to remove this if branch? It does nothing.
if (timeout_steady <= std::chrono::steady_clock::duration::zero())
{
  // forceflush timeout, exporter force-flush won't be triggered.
  result = false;
Do we need to reset result to false here? I think result will already be false and will break the loop above when timeout_steady <= std::chrono::steady_clock::duration::zero().
And if result is true while timeout_steady <= std::chrono::steady_clock::duration::zero(), result will also be set to false below.
Yes, the while loop around wait_for will break with result as false when timeout_steady <= std::chrono::steady_clock::duration::zero(). Have removed the check. Thanks for noticing.
cv_.wait_for(lk, remaining_wait_interval_ms, [this]() {
  if (is_force_wakeup_background_worker_.load(std::memory_order_acquire))
  {
    is_force_wakeup_background_worker_.store(false, std::memory_order_release);
Can store fail? If so, should false be returned?
I don't know if it can ever fail. It's an atomic assignment operation, so it should always succeed. Also, store returns void, so I am not sure how we would get any failure status from it.
Let me know if I am missing something :)
    is_force_wakeup_background_worker_.store(false, std::memory_order_release);
    return true;
  }
  if (IsShutdown())
nit: return IsShutdown();.
timeout_steady -= std::chrono::steady_clock::now() - start_timepoint;
}

// If it will be already signaled, we must wait util notified.
Suggested change:
- // If it will be already signaled, we must wait util notified.
+ // If it will be already signaled, we must wait until notified.
* commit '7887d32da60f54984a597abccbb0c883f3a51649': (82 commits)
  [RELEASE] Release version 1.9.0 (open-telemetry#2091)
  Use sdk_start_ts for MetricData start_ts for instruments having cumulative aggregation temporality. (open-telemetry#2086)
  [SEMANTIC CONVENTIONS] Upgrade to version 1.20.0 (open-telemetry#2088)
  [EXPORTER] Add OTLP HTTP SSL support (open-telemetry#1793)
  Make Windows build environment parallel (open-telemetry#2080)
  make some hints (open-telemetry#2078)
  Make some targets parallel in CI pipeline (open-telemetry#2076)
  [Metrics SDK] Implement Forceflush for Periodic Metric Reader (open-telemetry#2064)
  Upgraded semantic conventions to 1.19.0 (open-telemetry#2017)
  Bump actions/stale from 7 to 8 (open-telemetry#2070)
  Include directory path added for Zipkin exporter example (open-telemetry#2069)
  Ignore more warning of generated protobuf files than not included in `-Wall` and `-Wextra` (open-telemetry#2067)
  Add `ForceFlush` for all `LogRecordExporter`s and `SpanExporter`s. (open-telemetry#2000)
  Remove unused 'alerting' section from prometheus.yml in examples (open-telemetry#2055)
  Clean warnings in ETW exporters (open-telemetry#2063)
  Fix default value of `OPENTELEMETRY_INSTALL_default`. (open-telemetry#2062)
  [EXPORTER] GRPC endpoint scheme should take precedence over OTEL_EXPORTER_OTLP_TRACES_INSECURE (open-telemetry#2060)
  Fix view names in Prometheus example (open-telemetry#2034)
  Fix some docs typo (open-telemetry#2057)
  Checking indices before dereference (open-telemetry#2040)
  ...

# Conflicts:
#   exporters/ostream/CMakeLists.txt
#   sdk/src/metrics/state/metric_collector.cc
#   sdk/src/metrics/state/temporal_metric_storage.cc
Fixes #2056

Changes

As of now, the periodic metric reader tries to perform one last collection and export during Shutdown, and this fetch fails with a warning that shutdown is already in progress.
The correct approach is NOT to do this fetch during shutdown; instead, implement the PeriodicMetricReader::ForceFlush() method to perform the fetch whenever invoked by the application. The application can then invoke this method before doing the shutdown.
WIP, as unit tests still need to be added for this.
For significant contributions please make sure you have completed the following items:
CHANGELOG.md updated for non-trivial changes