Conversation
Implements Prometheus metrics collection for Demos Network node monitoring.

Components:
- MetricsService: Singleton service for metric registration and collection
- MetricsServer: HTTP server exposing /metrics and /health endpoints

Metrics Categories:
- System: node_uptime_seconds, node_info
- Consensus: rounds_total, round_duration, block_height, mempool_size
- Network: peers_connected, peers_total, messages_sent/received, peer_latency
- Transactions: transactions_total/failed, tps, processing_seconds
- API: requests_total, request_duration, errors_total
- IPFS: pins_total, storage_bytes, peers, operations_total
- GCR: accounts_total, total_supply

Configuration:
- METRICS_ENABLED=true (default)
- METRICS_PORT=9090 (default)
- METRICS_HOST=0.0.0.0 (default)

Dependency: prom-client@15.1.3
Part of Grafana Dashboard epic (DEM-540)
Closes: DEM-541

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
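The MetricsService/MetricsServer split described above can be illustrated with a minimal sketch. All names here (MiniMetricsService, setGauge, render) are hypothetical simplifications; the real implementation registers metrics through prom-client@15.1.3, which is omitted so the example stays dependency-free:

```typescript
// Minimal sketch of a singleton metrics registry that renders the
// Prometheus text exposition format. Illustrative only — the actual
// MetricsService delegates registration and rendering to prom-client.
class MiniMetricsService {
    private static instance: MiniMetricsService | null = null
    private gauges = new Map<string, number>()

    // Singleton accessor, mirroring the getInstance() pattern in the PR
    static getInstance(): MiniMetricsService {
        if (!MiniMetricsService.instance) {
            MiniMetricsService.instance = new MiniMetricsService()
        }
        return MiniMetricsService.instance
    }

    setGauge(name: string, value: number): void {
        this.gauges.set(name, value)
    }

    // Render all gauges as Prometheus text format (TYPE line + sample line).
    // This is what an HTTP /metrics handler would return to a scraper.
    render(): string {
        const lines: string[] = []
        this.gauges.forEach((value, name) => {
            lines.push(`# TYPE ${name} gauge`)
            lines.push(`${name} ${value}`)
        })
        return lines.join("\n") + "\n"
    }
}

const svc = MiniMetricsService.getInstance()
svc.setGauge("node_uptime_seconds", 42)
```

In the real subsystem, MetricsServer's /metrics handler would serve the rendered payload, which is what Prometheus scrapes.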
The peerlist format changed from:
{ "pubkey": "http://url" }
to:
{ "pubkey": { "url": "http://...", "capabilities": {...} } }
PeerManager.loadPeerList() now handles both formats.
Fixes: TypeError "[object Object]" cannot be parsed as a URL
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
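The dual-format handling described in this commit can be sketched as follows. The helper name extractPeerUrl and the type shapes are illustrative assumptions, not the actual PeerManager.loadPeerList() code:

```typescript
// Sketch of parsing both peerlist formats: the old { pubkey: "http://url" }
// and the new { pubkey: { url, capabilities } }. Hypothetical helper.
type PeerEntry =
    | string
    | { url?: unknown; capabilities?: Record<string, unknown> }

function extractPeerUrl(entry: PeerEntry): string | null {
    // Old format: the value itself is the URL string
    if (typeof entry === "string") return entry
    // New format: the value is an object carrying url (+ capabilities)
    if (
        entry !== null &&
        typeof entry === "object" &&
        typeof entry.url === "string" &&
        entry.url.trim().length > 0
    ) {
        return entry.url
    }
    // Malformed entry: caller should warn and skip it instead of letting
    // URL parsing choke on "[object Object]"
    return null
}
```

Returning null for malformed entries lets the loader log a warning and continue, which is exactly the failure mode the fixed TypeError points at.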
- Add Prometheus + Grafana monitoring section to README.md
- Add Network Ports section with required/optional ports to both files
- Include TCP/UDP protocol requirements for OmniProtocol and WS proxy
- Add default ports note for users with custom configurations
- Add ufw firewall examples for quick setup

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
- Replace manual bun install with ./install-deps.sh
- Add note about Rust/Cargo requirement for wstcp
- Include Rust installation instructions in full guide

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
Caution: Review failed. The pull request is closed.

Walkthrough

Adds a Prometheus/Grafana monitoring stack and a new metrics subsystem (MetricsService, MetricsCollector, MetricsServer), integrates metrics startup/shutdown into the main app and run script, provisions multiple Grafana dashboards, updates docs/compose config, and replaces terminal-kit outputs with centralized logging.
Sequence Diagram(s)

```mermaid
sequenceDiagram
    participant App as Application
    participant Collector as MetricsCollector
    participant Service as MetricsService
    participant Server as MetricsServer
    participant Prom as Prometheus
    participant Graf as Grafana
    App->>Collector: start()
    Collector->>Service: register metrics
    Collector->>Collector: schedule collectAll()
    Collector->>Service: update metrics (gauges/counters/histograms)
    App->>Server: start() (expose /metrics)
    Prom->>Server: GET /metrics (scrape)
    Server->>Service: getMetrics()
    Service-->>Server: metrics payload
    Server-->>Prom: 200 + metrics
    Graf->>Prom: query data
    Prom-->>Graf: timeseries
    Graf->>Graf: render dashboards
```

Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~60 minutes
PR Code Suggestions ✨

Latest suggestions up to 52a2537

Previous suggestions: ✅ Suggestions up to commit 56b0df9
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
Actionable comments posted: 19
Caution: Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (5)
src/libs/blockchain/gcr/gcr.ts (1)
176-192: Don’t swallow the exception; the current message likely mislabels DB failures as “no balance”.
findOne(...) returning “no row” shouldn’t throw; the catch is more likely real DB/query/connection errors. Log the error object (at least at debug/warn) so incidents are diagnosable.

Proposed change:

```diff
 try {
     const response = await gcrRepository.findOne({
         select: ["details"],
         where: { publicKey: address },
     })
     return response ? response.details.content.balance : 0
 } catch (e) {
-    log.debug("[GET BALANCE] No balance for: " + address)
+    log.warn(
+        "[GCR] getGCRNativeBalance failed for " + address + ": " + String(e),
+    )
     return 0
 }
```

package.json (1)
55-116: Add lockfile to ensure reproducible installs for prom-client@^15.1.3.

prom-client 15.1.3 is compatible with Bun and ESM contexts (supports CommonJS-to-ESM interop; explicit Bun fixes in v15.1.2+), but without a committed lockfile, the caret version ^15.1.3 can drift to newer minor/patch releases unexpectedly. Generate and commit a lockfile (package-lock.json, bun.lockb, or equivalent) to pin exact versions. Verify that the project's Node.js runtime meets prom-client's requirement (Node.js ≥16, ≥18, or ≥20).

src/features/web2/dahr/DAHRFactory.ts (1)
48-55: Fix log prefix + consider whether sessionId should be logged.

The message says [DAHRManager] in DAHRFactory.createDAHR (Line 52); also verify sessionId isn’t a sensitive/token-like identifier before logging.

Proposed fix:

```diff
- log.info("DAHR", `[DAHRManager] Creating new DAHR instance with sessionId: ${sessionId}`)
+ log.info("DAHR", `[DAHRFactory] Creating new DAHR instance with sessionId: ${sessionId}`)
```

run (1)
350-367: Ctrl+C cleanup likely stops the wrong Postgres compose directory (leaves DB running).

This script now uses postgres_${PG_PORT} elsewhere, but ctrl_c() still does cd postgres (Line 353). On interrupt, that can fail to stop the running DB (and the cd is also fragile if the current directory isn’t repo root).

Proposed fix:

```diff
 function ctrl_c() {
     HAS_BEEN_INTERRUPTED=true
     if [ "$EXTERNAL_DB" = false ]; then
-        cd postgres
-        docker compose down
-        cd ..
+        PG_FOLDER="postgres_${PG_PORT}"
+        if [ -d "$PG_FOLDER" ]; then
+            (cd "$PG_FOLDER" && docker compose down) || true
+        elif [ -d "postgres" ]; then
+            (cd "postgres" && docker compose down) || true
+        fi
     fi
     # Stop TLSNotary container if running (enabled by default)
     if [ "$TLSNOTARY_DISABLED" != "true" ] && [ -d "tlsnotary" ]; then
         (cd tlsnotary && docker compose down --timeout 5 2>/dev/null) || true
         # Force kill if still running
         docker rm -f "tlsn-notary-${TLSNOTARY_PORT:-7047}" 2>/dev/null || true
     fi
     # Stop monitoring stack if running (enabled by default)
     if [ "$MONITORING_DISABLED" != "true" ] && [ -d "monitoring" ]; then
         (cd monitoring && docker compose down --timeout 5 2>/dev/null) || true
     fi
 }
```

src/libs/blockchain/routines/validateTransaction.ts (1)
162-183: BALANCE ERROR catch continues execution; likely returns success after a balance lookup failure.

After the catch, defineGas() can still proceed to compute gas and return [true, gasOperation] even though balance retrieval failed (and validityData.data.valid isn’t set to false here). If this is intended only for non-PROD, it should be explicit.

Proposed fix (fail fast):

```diff
 try {
     fromBalance = await GCR.getGCRNativeBalance(from)
 } catch (e) {
     log.error("TX", "[Native Tx Validation] [BALANCE ERROR] No balance found for this address: " + from)
     validityData.data.message =
         "[Native Tx Validation] [BALANCE ERROR] No balance found for this address: " + from + "\n"
-    // Hash the validation data
-    const hash = Hashing.sha256(JSON.stringify(validityData.data))
-    // Sign the hash
-    const signature = await ucrypto.sign(
-        getSharedState.signingAlgorithm,
-        new TextEncoder().encode(hash),
-    )
-    validityData.signature = {
-        type: getSharedState.signingAlgorithm,
-        data: uint8ArrayToHex(signature.signature),
-    }
+    validityData.data.valid = false
+    validityData = await signValidityData(validityData)
+    return [false, validityData]
 }
```
🤖 Fix all issues with AI agents
In @.env.example:
- Around line 35-39: The .env.example comment for Prometheus metrics can cause a
port conflict because METRICS_PORT defaults to 9090 (the Prometheus server
default); update the comment above METRICS_ENABLED / METRICS_PORT in
.env.example to warn users about the potential conflict and suggest using a
different default port (e.g., 9091 or 9092) or mapping Prometheus externally,
and optionally change METRICS_PORT default value to a non-conflicting port to
match the monitoring README.
In @INSTALL.md:
- Around line 427-444: The two Markdown tables under the "Required Ports" and
"Optional Ports" headings violate MD058 by lacking blank lines around them;
update INSTALL.md to add a blank line before each table and a blank line after
each table (i.e., ensure there is an empty line above the first "| Port |
Service | Description |" row and an empty line after the final table row for
both the Required Ports and Optional Ports sections) so markdownlint MD058
passes.
In @monitoring/docker-compose.yml:
- Around line 41-42: The docker-compose uses a weak default admin password via
GF_SECURITY_ADMIN_PASSWORD=${GRAFANA_ADMIN_PASSWORD:-demos}; remove the insecure
fallback and require explicit configuration or implement secure generation:
update the compose entry to reference
GF_SECURITY_ADMIN_PASSWORD=${GRAFANA_ADMIN_PASSWORD} (no default) or replace
with a startup mechanism that generates a random password, sets
GF_SECURITY_ADMIN_PASSWORD for the Grafana container and prints it to logs on
first boot; also add a prominent warning in the documentation instructing users
to set GRAFANA_ADMIN_PASSWORD before deployment and reference the
GF_SECURITY_ADMIN_USER/GF_SECURITY_ADMIN_PASSWORD variables in the docs.
In @monitoring/grafana/grafana.ini:
- Around line 52-55: The setting disable_sanitize_html under the [panels]
section currently disables HTML sanitization and permits XSS; change
disable_sanitize_html back to false to re-enable sanitization, and add a clear
inline comment near the [panels] section explaining the branding trade-off and
that any intentional override requires explicit justification and restricted
dashboard-edit permissions; alternatively, if branding HTML is required, keep
disable_sanitize_html = true only after adding documentation of the risk and
ensuring dashboard creation/edit rights are limited to a trusted admin role
(update any related dashboard-permission config and docs accordingly).
- Around line 61-64: The Grafana Live configuration currently uses a permissive
wildcard for WebSocket origins; update the [live] section's allowed_origins
setting (the allowed_origins key) to list only specific trusted origins (e.g.,
https://your-app.example.com, https://admin.example.com) instead of "*", or wire
it to a secure environment/config variable so production deployments do not
accept all origins.
In @monitoring/grafana/provisioning/dashboards/json/system-health.json:
- Around line 579-590: Update the Grafana Prometheus query for the TLSNotary
container to use the collector's label value by changing the metric selector in
any occurrences of service_docker_container_up{container="tlsn-server"} to
service_docker_container_up{container="tlsn"} (leave postgres and ipfs selectors
as-is); ensure the stat panel using the expr
"service_docker_container_up{container=\"tlsn\"}" is updated so it matches the
MetricsCollector label names.
- Around line 38-103: Update all Prometheus query expressions in this dashboard
to use the MetricsCollector prefix: replace bare metric names like
system_cpu_usage_percent, system_memory_usage_percent, system_load_average_*,
service_docker_container_up, and service_port_open with their prefixed forms
(prepend "demos_") so the targets' expr fields use
demos_system_cpu_usage_percent, demos_system_memory_usage_percent,
demos_system_load_average_*, demos_service_docker_container_up,
demos_service_port_open; reference the MetricsService.ts prefix configuration
(line 33) and mirror the approach used in demos-overview.json when editing the
target objects' "expr" values.
In @monitoring/prometheus/prometheus.yml:
- Around line 26-42: The README incorrectly instructs users to set
PROMETHEUS_PORT=3333 when it should instruct setting METRICS_PORT=3333 for the
node metrics exporter; update the README text in the "Enabling Metrics on Your
Node" section to reference METRICS_PORT (not PROMETHEUS_PORT) and clarify that
PROMETHEUS_PORT controls the Prometheus service external port (default 9091). In
prometheus.yml update the 'demos-node' job (job_name: 'demos-node') to either
document that the target host.docker.internal:9090 must match the node's
METRICS_PORT or make the target configurable via METRICS_PORT so Prometheus
scrapes the actual node metrics port; also update docs/diagram to explicitly
distinguish the Prometheus service port (PROMETHEUS_PORT, default 9091) from the
node metrics port (METRICS_PORT, default 9090).
In @monitoring/README.md:
- Around line 186-198: The troubleshooting step incorrectly instructs users to
curl http://localhost:3333/metrics; update the README's "Grafana shows 'No
Data'" section to use the actual default Prometheus metrics port by replacing
that URL with http://localhost:9090/metrics so the curl command checks the
correct endpoint.
- Around line 33-40: The README currently uses ENABLE_PROMETHEUS and
PROMETHEUS_PORT:3333 which conflicts with the actual config in .env.example and
MetricsServer.ts (which expect METRICS_ENABLED and METRICS_PORT with default
9090); update the README text and example env block to use METRICS_ENABLED=true
and METRICS_PORT=9090, change the example URL from http://localhost:3333/metrics
to http://localhost:9090/metrics, and also update the architecture diagram lines
that reference port 3333 to 9090 so documentation aligns with the
METRICS_ENABLED/METRICS_PORT variables used by MetricsServer.ts.
In @run:
- Around line 813-853: The Grafana health-check loop uses curl -sf which can
hang; update the curl invocation in the while condition (while ! curl -sf
"http://localhost:$GRAFANA_PORT/api/health" > /dev/null 2>&1;) to include
timeouts (e.g. --connect-timeout 1 and --max-time 2) so each attempt cannot
stall; also apply the same timeout flags to the TLSNotary health check curl call
elsewhere in the script to ensure those startup loops respect the intended time
bounds.
In @src/features/metrics/MetricsCollector.ts:
- Around line 636-664: The collectDockerHealth method interpolates PG_PORT and
TLSNOTARY_PORT directly into a shell command, risking command injection; add a
helper like sanitizePort(value, defaultValue) that validates the port with a
/^\d+$/ check (log a warning and return the default on invalid input), call it
to produce sanitized pgPort and tlsnPort values, and use those sanitized values
in the execAsync docker command and container name templates inside
collectDockerHealth (retain existing metric setting and error handling).
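The sanitizePort helper suggested above could look like this. The name and signature come from the review suggestion itself; the surrounding logging is simplified (the real collector would use its own logger):

```typescript
// Sketch of the suggested sanitizePort helper: accept only purely
// numeric port strings, otherwise warn and fall back to the default,
// so shell interpolation in the docker command cannot be abused.
function sanitizePort(value: string | undefined, defaultValue: string): string {
    if (value !== undefined && /^\d+$/.test(value)) {
        return value
    }
    // In the real collector this would go through the centralized logger
    console.warn(`[METRICS COLLECTOR] Invalid port "${value}", using ${defaultValue}`)
    return defaultValue
}

const pgPort = sanitizePort("5433", "5432")
```

Values like `"5432; rm -rf /"` fail the `/^\d+$/` check and are replaced with the safe default before ever reaching execAsync.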
In @src/features/metrics/MetricsServer.ts:
- Line 10: Change the runtime import of Server to a type-only import: replace
the existing `import { Server } from "bun"` with `import type { Server } from
"bun"` because `Server` is only used for type annotations (see `private server:
Server | null = null` in the `MetricsServer` class); update the import so the
type is erased at runtime and follows Bun conventions.
In @src/features/metrics/MetricsService.ts:
- Around line 153-158: The histogram registration in MetricsService using
createHistogram("peer_latency_seconds", ..., ["peer_id"], ...) risks cardinality
explosion because each unique peer_id creates a new time series; remove the raw
peer_id label and replace it with a lower-cardinality dimension (e.g.,
peer_group, region, or a fixed-size bucket), or restrict labels to only top-N
peers and aggregate the rest into an "other" bucket before calling
createHistogram; alternatively, stop using peer-specific labels and record
per-peer latency only via aggregated metrics or log-based sampling to avoid
unbounded cardinality.
- Around line 109-111: The log message in MetricsService ("[METRICS]
MetricsService initialized on port ...") is misleading because MetricsService
does not bind an HTTP port; update the log in the MetricsService initialization
(the log.info call inside the MetricsService constructor/initialize method) to
remove the implication that it is listening — either log that the service was
initialized with a configured port value (e.g. "[METRICS] MetricsService
initialized (configured port: ... )") or remove the port entirely and move/emit
the actual "listening on port" message from MetricsServer where the server
binds. Ensure you change only the string in the log.info call associated with
MetricsService and add the explicit listening log in MetricsServer's start/bind
function if not already present.
- Around line 519-522: The shutdown() method only clears this.initialized but
leaves the static MetricsService.instance intact, causing getInstance() to
return a stale instance that cannot reinitialize; update shutdown() to also
clear the singleton (reset MetricsService.instance to undefined/null) or
implement logic in getInstance()/initialize to recreate the instance when
initialized is false (refer to MetricsService.instance, shutdown(),
getInstance(), and the initialized guard) so subsequent initialization attempts
produce a fresh, usable MetricsService.
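The shutdown fix suggested above — clearing the static instance so a later getInstance() yields a fresh, initializable service — can be sketched as a generic resettable singleton (hypothetical simplification, not the MetricsService code):

```typescript
// Singleton whose shutdown() also drops the static reference, so the
// next getInstance() builds a fresh instance instead of returning a
// stale, de-initialized one.
class ResettableService {
    private static instance: ResettableService | null = null
    initialized = false

    static getInstance(): ResettableService {
        if (!ResettableService.instance) {
            ResettableService.instance = new ResettableService()
            ResettableService.instance.initialized = true
        }
        return ResettableService.instance
    }

    shutdown(): void {
        this.initialized = false
        // Key point from the review: clear the singleton reference too
        ResettableService.instance = null
    }
}
```

Without the `instance = null` line, callers after shutdown would keep receiving the old object with `initialized === false`, which is the stale-instance bug the comment describes.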
In @src/index.ts:
- Around line 582-590: The metricsCollector created via getMetricsCollector is
started but not saved to indexState and not stopped in gracefulShutdown; update
the startup sequence to store the instance (e.g., assign metricsCollector to
indexState.metricsCollector or a similar field) and modify gracefulShutdown to
check for indexState.metricsCollector and call its stop() (await if async)
before completing shutdown, ensuring any errors are caught/logged; reference the
metricsCollector variable, getMetricsCollector(), start(), stop(), indexState,
and gracefulShutdown when applying the changes.
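The startup/shutdown wiring suggested above can be sketched as follows. The names (indexState, gracefulShutdown, startMetrics) follow the review's wording but are assumptions about the real src/index.ts; stop() is shown synchronous for brevity, whereas the real collector's stop() may be async and should be awaited:

```typescript
// Sketch: keep a handle to the collector so graceful shutdown can stop it.
interface Collector {
    start(): void
    stop(): void
}

const indexState: { metricsCollector?: Collector } = {}

function startMetrics(collector: Collector): void {
    collector.start()
    // Store the instance so gracefulShutdown can reach it later
    indexState.metricsCollector = collector
}

function gracefulShutdown(): void {
    try {
        indexState.metricsCollector?.stop()
    } catch (e) {
        // Log and keep shutting down; a failing collector must not block exit
        console.error("metrics collector stop failed", e)
    }
}
```

The optional-chaining call keeps shutdown safe even when metrics were never started (e.g., METRICS_ENABLED=false).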
In @src/libs/peer/PeerManager.ts:
- Around line 81-83: The assignment to peerObject.connection.string uses
peerData.url without validating its type or content; update the branch in
PeerManager (the code handling peerData with "url") to first check that
peerData.url is a non-empty string (e.g., typeof peerData.url === "string" and
peerData.url.trim().length > 0) before assigning to
peerObject.connection.string, and if the check fails either skip the assignment
and log an error/warning or throw a clear validation error so downstream URL
parsing does not receive undefined/null/non-string values.
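The "top-N plus other" labeling suggested earlier for the peer_latency_seconds histogram can be sketched with a small bounded-label mapper (illustrative names; not part of the project's API):

```typescript
// Map raw peer IDs to a bounded label set: the first N distinct peers
// keep their ID, everything beyond that collapses into "other", capping
// the number of Prometheus time series the label can create.
class BoundedLabel {
    private seen = new Set<string>()
    constructor(private maxDistinct: number) {}

    labelFor(peerId: string): string {
        if (this.seen.has(peerId)) return peerId
        if (this.seen.size < this.maxDistinct) {
            this.seen.add(peerId)
            return peerId
        }
        return "other"
    }
}

const latencyLabels = new BoundedLabel(2)
```

A histogram observation would then use `labelFor(peerId)` instead of the raw peer_id, so cardinality stays at maxDistinct + 1 regardless of how many peers connect.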
🧹 Nitpick comments (14)
src/utilities/tui/CategorizedLogger.ts (1)
838-855: Approve with optional safety improvement.

The caching logic is correct and provides a solid performance optimization. Cache invalidation via entryCounter is sound, and the implementation handles all edge cases properly (buffer overflow, clearing, empty state).

🛡️ Optional: Prevent cache corruption from external mutations

The method returns the cached array reference directly. While internal usages are safe (they use slice()/filter()), external callers could mutate the returned array and corrupt the cache.

Option 1: Return a readonly type (zero-cost)

```diff
-getAllEntries(): LogEntry[] {
+getAllEntries(): readonly LogEntry[] {
```

Option 2: Defensive copy (small performance cost)

```diff
 // Return cached result if entry counter hasn't changed
 if (this.allEntriesCache !== null && this.allEntriesCacheLastCounter === this.entryCounter) {
-    return this.allEntriesCache
+    return [...this.allEntriesCache]
 }
 // ...existing rebuild logic...
-return this.allEntriesCache
+return [...this.allEntriesCache]
```

Option 1 is preferred as it provides type-level safety without runtime overhead.
src/features/web2/dahr/DAHRFactory.ts (2)
16-29: Cleanup logging looks good; consider logging failures from dahr.stopProxy().

If stopProxy() can throw, one failing DAHR may abort cleanup and leak the rest; consider try/catch per instance (and still delete).
63-73: “No DAHR found” at info level may be too chatty.

If this can happen on normal flows, consider debug or rate-limiting to avoid log spam.

src/libs/blockchain/routines/validateTransaction.ts (1)
145-161: FROM ERROR path: consider reusing signValidityData() to avoid duplicated signing code.

You already have signValidityData(); using it here reduces duplication and keeps signing behavior consistent.

src/features/metrics/MetricsCollector.ts (4)
61-68: Singleton ignores config updates after first instantiation.

Once getInstance() is called, subsequent calls with a different config parameter are silently ignored. If callers expect to update the configuration, this could lead to unexpected behavior.

Consider either:
- Documenting that config is only honored on first call
- Throwing if a different config is passed to an already-initialized instance
- Allowing config updates via a separate method
95-103: Set running = true before starting the interval to avoid a race window.

Currently running is set after setInterval, which creates a brief window where stop() would see running=false but the interval is active, potentially leaving the interval running.

Proposed fix:

```diff
 // Start periodic collection
+this.running = true
 this.collectionInterval = setInterval(
     async () => {
         await this.collectAll()
     },
     this.config.collectionIntervalMs,
 )
-this.running = true
 log.info("[METRICS COLLECTOR] Started")
```
442-458: Fallback sets metrics to zero, which may be misleading on non-Linux platforms.

When /proc/net/dev isn't available (macOS, Windows), the fallback sets network metrics to 0. This could be misinterpreted as "no network activity" rather than "metrics unavailable."

Consider logging a debug message indicating that detailed network I/O is unavailable on the current platform, or omitting the metrics entirely rather than reporting zeros.
541-572: Clear the timeout in the catch block to avoid leaving dangling timer references.

When an error occurs (e.g., network failure before timeout), the timeout continues running until it fires. While not a significant leak due to the short duration, using try/finally would be cleaner.

Proposed fix:

```diff
 private async checkEndpoint(
     baseUrl: string,
     path: string,
     name: string,
 ): Promise<boolean> {
     const startTime = Date.now()
+    const controller = new AbortController()
+    const timeout = setTimeout(() => controller.abort(), 5000)
     try {
-        const controller = new AbortController()
-        const timeout = setTimeout(() => controller.abort(), 5000)
-
         const response = await fetch(`${baseUrl}${path}`, {
             method: "GET",
             signal: controller.signal,
         })
-        clearTimeout(timeout)
-
         const responseTime = Date.now() - startTime
         const isHealthy = response.ok ? 1 : 0
         this.metricsService.setGauge("node_http_health", isHealthy, {
             endpoint: name,
         })
         this.metricsService.setGauge(
             "node_http_response_time_ms",
             responseTime,
             { endpoint: name },
         )
         return response.ok
     } catch {
         this.metricsService.setGauge("node_http_health", 0, {
             endpoint: name,
         })
         this.metricsService.setGauge("node_http_response_time_ms", 0, {
             endpoint: name,
         })
         return false
+    } finally {
+        clearTimeout(timeout)
     }
 }
```

monitoring/grafana/provisioning/dashboards/json/network-peers.json (1)
278-290: Potential division by zero in Peer Health % calculation.

The expression (peer_online_count / peers_total) * 100 will produce NaN or an error if peers_total is 0 (e.g., when the node first starts or has no known peers).

Suggested safer expression:

```diff
- "expr": "(peer_online_count / peers_total) * 100",
+ "expr": "(peer_online_count / (peers_total > 0 or vector(1))) * 100",
```

Or alternatively use clamp_min: (peer_online_count / clamp_min(peers_total, 1)) * 100

src/features/metrics/MetricsServer.ts (2)
159-166: Singleton factory ignores config on subsequent calls.

After the first instantiation, config passed to getMetricsServer() is silently ignored. If different configs are passed on subsequent calls, the caller may expect them to take effect. Consider logging a warning or documenting this behavior.

♻️ Optional: Add warning when config is ignored

```diff
 export const getMetricsServer = (
     config?: Partial<MetricsServerConfig>,
 ): MetricsServer => {
     if (!metricsServerInstance) {
         metricsServerInstance = new MetricsServer(config)
+    } else if (config) {
+        log.warning("[METRICS SERVER] Config ignored - server already instantiated")
     }
     return metricsServerInstance
 }
```
64-72: Consider adding error handling for Bun.serve().

If the port is already in use or binding fails, Bun.serve() will throw. Wrapping this in try/catch would provide clearer error messaging and graceful failure.

♻️ Proposed fix:

```diff
-this.server = Bun.serve({
-    port: this.config.port,
-    hostname: this.config.hostname,
-    fetch: async (req) => this.handleRequest(req),
-})
-
-log.info(
-    `[METRICS SERVER] Started on http://${this.config.hostname}:${this.config.port}/metrics`,
-)
+try {
+    this.server = Bun.serve({
+        port: this.config.port,
+        hostname: this.config.hostname,
+        fetch: async (req) => this.handleRequest(req),
+    })
+
+    log.info(
+        `[METRICS SERVER] Started on http://${this.config.hostname}:${this.config.port}/metrics`,
+    )
+} catch (error) {
+    log.error(
+        `[METRICS SERVER] Failed to start on port ${this.config.port}: ${error}`,
+    )
+    throw error
+}
```

monitoring/grafana/provisioning/dashboards/json/system-health.json (1)
1305-1306: Consider increasing refresh interval for production.

A 5-second refresh interval is quite aggressive and may increase load on Prometheus, especially if multiple users view the dashboard. Consider 10s or 30s for production use.
src/features/metrics/MetricsService.ts (2)
73-78: Config is ignored on subsequent getInstance() calls.

Same pattern issue as MetricsServer: if config differs on subsequent calls, it's silently ignored. Consider logging a warning or throwing if configs conflict.
325-340: Silent no-op when metric not found.

If a metric name is misspelled or not registered, these methods silently do nothing. This could make debugging difficult. Consider adding debug-level logging when a metric lookup fails.

♻️ Optional: Add debug logging for missing metrics

```diff
 public incrementCounter(
     name: string,
     labels?: Record<string, string>,
     value = 1,
 ): void {
     if (!this.config.enabled) return
     const fullName = this.config.prefix + name
     const counter = this.counters.get(fullName)
     if (counter) {
         if (labels) {
             counter.inc(labels, value)
         } else {
             counter.inc(value)
         }
+    } else {
+        log.debug(`[METRICS] Counter not found: ${fullName}`)
     }
 }
```
📜 Review details
Configuration used: Organization UI
Review profile: CHILL
Plan: Pro
Disabled knowledge base sources:
- Linear integration is disabled by default for public repositories
You can enable these sources in your CodeRabbit configuration.
⛔ Files ignored due to path filters (5)
- monitoring/grafana/branding/demos-icon.svg is excluded by !**/*.svg
- monitoring/grafana/branding/demos-logo-morph.svg is excluded by !**/*.svg
- monitoring/grafana/branding/demos-logo-white.svg is excluded by !**/*.svg
- monitoring/grafana/branding/favicon.png is excluded by !**/*.png
- monitoring/grafana/branding/logo.jpg is excluded by !**/*.jpg
📒 Files selected for processing (34)
- .beads/.local_version
- .env.example
- INSTALL.md
- README.md
- monitoring/README.md
- monitoring/docker-compose.yml
- monitoring/grafana/grafana.ini
- monitoring/grafana/provisioning/dashboards/dashboard.yml
- monitoring/grafana/provisioning/dashboards/json/consensus-blockchain.json
- monitoring/grafana/provisioning/dashboards/json/demos-overview.json
- monitoring/grafana/provisioning/dashboards/json/network-peers.json
- monitoring/grafana/provisioning/dashboards/json/system-health.json
- monitoring/grafana/provisioning/datasources/prometheus.yml
- monitoring/prometheus/prometheus.yml
- package.json
- run
- src/features/incentive/PointSystem.ts
- src/features/metrics/MetricsCollector.ts
- src/features/metrics/MetricsServer.ts
- src/features/metrics/MetricsService.ts
- src/features/metrics/index.ts
- src/features/web2/dahr/DAHRFactory.ts
- src/index.ts
- src/libs/blockchain/gcr/gcr.ts
- src/libs/blockchain/routines/validateTransaction.ts
- src/libs/crypto/cryptography.ts
- src/libs/identity/identity.ts
- src/libs/network/endpointHandlers.ts
- src/libs/network/manageAuth.ts
- src/libs/network/manageExecution.ts
- src/libs/peer/PeerManager.ts
- src/libs/utils/keyMaker.ts
- src/utilities/tui/CategorizedLogger.ts
- src/utilities/tui/TUIManager.ts
🧰 Additional context used
🧬 Code graph analysis (11)
src/libs/blockchain/gcr/gcr.ts (1)
src/utilities/tui/CategorizedLogger.ts (1)
log(349-380)
src/features/web2/dahr/DAHRFactory.ts (2)
src/utilities/tui/CategorizedLogger.ts (1)
log(349-380)
src/features/web2/dahr/DAHR.ts (1)
sessionId(57-59)
src/libs/network/manageAuth.ts (1)
src/utilities/tui/CategorizedLogger.ts (1)
log(349-380)
src/libs/network/endpointHandlers.ts (1)
src/utilities/tui/CategorizedLogger.ts (1)
log(349-380)
src/libs/utils/keyMaker.ts (2)
src/libs/identity/identity.ts (1)
ensureIdentity(62-83)
src/utilities/tui/CategorizedLogger.ts (1)
log(349-380)
src/libs/crypto/cryptography.ts (2)
src/utilities/tui/CategorizedLogger.ts (1)
log(349-380)
src/utilities/sharedState.ts (1)
getSharedState(349-351)
src/libs/network/manageExecution.ts (1)
src/utilities/tui/CategorizedLogger.ts (1)
log(349-380)
src/features/metrics/MetricsServer.ts (1)
src/features/metrics/MetricsService.ts (1)
MetricsService(46-523)
src/features/metrics/MetricsService.ts (1)
src/features/metrics/index.ts (3)
MetricsConfig(13-13)
MetricsService(11-11)
getMetricsService(12-12)
src/libs/blockchain/routines/validateTransaction.ts (1)
src/utilities/tui/CategorizedLogger.ts (1)
log(349-380)
src/index.ts (5)
src/features/metrics/index.ts (2)
getMetricsServer(18-18)
getMetricsCollector(24-24)
src/features/metrics/MetricsServer.ts (1)
getMetricsServer(159-166)
src/features/metrics/MetricsCollector.ts (1)
getMetricsCollector(718-720)
src/utilities/waiter.ts (1)
Waiter(25-150)
src/exceptions/index.ts (2)
TimeoutError(4-9)
AbortError(14-19)
🪛 dotenv-linter (4.0.0)
.env.example
[warning] 39-39: [UnorderedKey] The METRICS_HOST key should go before the METRICS_PORT key
(UnorderedKey)
🪛 LanguageTool
monitoring/README.md
[style] ~82-~82: Consider a different adjective to strengthen your wording.
Context: ...orter (optional) Host-level metrics for deeper system insights: ```bash docker compose...
(DEEP_PROFOUND)
🪛 markdownlint-cli2 (0.18.1)
INSTALL.md
430-430: Tables should be surrounded by blank lines
(MD058, blanks-around-tables)
438-438: Tables should be surrounded by blank lines
(MD058, blanks-around-tables)
README.md
111-111: Tables should be surrounded by blank lines
(MD058, blanks-around-tables)
120-120: Tables should be surrounded by blank lines
(MD058, blanks-around-tables)
132-132: Tables should be surrounded by blank lines
(MD058, blanks-around-tables)
🔇 Additional comments (45)
src/libs/peer/PeerManager.ts (1)
76-87: Code is defensively sound, but the new format is not yet deployed; capabilities field is undocumented.

The dual-format handling is well-implemented with appropriate fallback logic. However, verification shows that the new object format is not currently in use—peer list files only contain the old string format. The capabilities field mentioned in the comment is not implemented anywhere in the codebase; clarify whether this is reserved for future use or should be removed from the comment.

Consider:
- Removing or updating the capabilities reference in the comment if it's not intended for near-term implementation
- Documenting when the new format migration is planned, if applicable
- Ensuring peer list generation tools produce the new format before it's adopted
The warning log and continue statement appropriately handle malformed entries.

src/utilities/tui/CategorizedLogger.ts (2)
230-232: LGTM! Well-designed cache fields.

The cache fields are properly typed and initialized to ensure the cache is rebuilt on first access.

900-901: LGTM! Correct cache invalidation.

Setting the cache to null after clearing buffers ensures the cache is rebuilt on the next access. Not resetting entryCounter is the correct design, as it maintains the cache key across buffer clears.

.beads/.local_version (1)
1-1: Version bump looks fine.
No concerns in this file.

src/features/incentive/PointSystem.ts (1)
92-92: Good: `const linkedNomis` is the right semantic here.

You're mutating the array contents (via `push`), not reassigning the binding.

src/libs/identity/identity.ts (1)
67-67: LGTM! Clean logging migration.

The replacement of terminal-kit with centralized logging maintains the same informational intent while aligning with the new observability infrastructure.
Also applies to: 72-72
src/libs/crypto/cryptography.ts (1)
228-244: LGTM! Appropriate log levels for cryptography diagnostics.

The migration from terminal-kit to centralized logging uses appropriate severity levels: `log.debug` for normalization attempts, `log.warning` for fallback scenarios, and `log.error` for exceptions.

src/libs/network/endpointHandlers.ts (1)
85-85: LGTM! Consistent logging migration across transaction handlers.

All terminal-kit calls have been properly replaced with centralized logging at appropriate severity levels, maintaining the same diagnostic information flow.
Also applies to: 87-87, 141-141, 170-170, 224-224, 288-288
src/utilities/tui/TUIManager.ts (1)
928-939: Excellent performance optimization for TUI responsiveness!

The introduction of the `logsNeedUpdate` flag effectively addresses TUI unresponsiveness by deferring expensive log filtering and scroll updates to the render cycle (every 100ms) instead of executing them on every log entry. This prevents UI blocking when logs arrive rapidly.

The implementation correctly:
- Sets the flag in `handleLogEntry` without immediate processing
- Performs deferred updates in `render()` only when needed and not in CMD mode
- Resets the flag after updating to prevent redundant work
- Maintains auto-scroll behavior within the batched update
Also applies to: 1012-1020
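The batching described above reduces to a dirty-flag pattern. A minimal sketch, where only the names `logsNeedUpdate`, `handleLogEntry`, and `render` come from the review and the rest is illustrative:

```typescript
// Dirty-flag sketch: handleLogEntry() only marks state; render() does the
// expensive work at most once per render cycle.
class LogView {
    private logsNeedUpdate = false
    public expensiveUpdates = 0

    handleLogEntry(_entry: string): void {
        this.logsNeedUpdate = true        // cheap: defer the real work
    }

    render(): void {                      // invoked on a timer, e.g. every 100 ms
        if (!this.logsNeedUpdate) return  // idle frames cost nothing
        this.expensiveUpdates++           // stand-in for log filtering + scroll update
        this.logsNeedUpdate = false       // reset so redundant work is skipped
    }
}
```

However many log entries arrive between two render ticks, the expensive update runs once per tick, which is exactly why the UI stops blocking under rapid log traffic.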
src/libs/utils/keyMaker.ts (1)
4-4: LGTM! Clean logging migration for key generation utility.

All console.log statements have been replaced with centralized logging. The key material logging at lines 35-36 is intentional for this development utility.
Also applies to: 12-12, 17-17, 28-28, 35-37, 42-42
src/features/web2/dahr/DAHRFactory.ts (2)
3-3: Confirm `src/utilities/logger` supports default import + `log.info(category, message)` shape.

This file uses `import log from "src/utilities/logger"` and calls `log.info("DAHR", "...")`; please ensure this matches the logger's exported type (default vs named export, and arg order).
35-41: Singleton creation log is fine (avoid noisy logs if `instance` is accessed frequently).

src/libs/network/manageAuth.ts (2)
47-49: Readonly branch log message is clear and consistent.
11-18: > Likely an incorrect or invalid review comment.

run (2)
9-10: Monitoring flag plumbing (`-m`/`--no-monitoring`) looks consistent.

Also applies to: 61-65, 446-450, 455-472
943-953: Shutdown path for monitoring is fine (nice symmetry with startup).

INSTALL.md (1)
64-70: install-deps.sh + Rust/Cargo note is clear and actionable.

Also applies to: 201-211
src/libs/blockchain/routines/validateTransaction.ts (2)
29-39: Logging change is fine; ensure `log.info("TX", ...)` matches the logger API.
186-208: Insufficient gas log/message alignment looks good.

src/features/metrics/MetricsCollector.ts (1)
492-505: LGTM - Good cardinality limiting for peer metrics.

Capping to 20 peers prevents metric label explosion. The truncated `peer_id` also helps keep cardinality manageable while still being identifiable.

monitoring/grafana/provisioning/datasources/prometheus.yml (1)
1-23: LGTM - Datasource provisioning correctly configured.

The configuration properly uses the Docker service name `prometheus` for inter-container communication, sets POST method for better query handling, and marks it as the default non-editable datasource.

README.md (2)
154-183: LGTM - Comprehensive network ports documentation.

The ports table and firewall examples are well-documented. Good distinction between required and optional ports, with the security note about PostgreSQL being local-only.
109-117: The `demos_` metric prefix documentation is accurate.

MetricsService properly applies the `demos_` prefix (configured at line 33) to all registered metrics via the `fullName = this.config.prefix + name` pattern. The documentation correctly reflects the final metric names exposed to Prometheus.

monitoring/grafana/provisioning/dashboards/dashboard.yml (1)
1-19: LGTM - Dashboard provisioning correctly configured.

The configuration properly sets up the dashboard provider with reasonable defaults. Note that `allowUiUpdates: true` allows dashboard editing in the UI, but changes are ephemeral and will be lost on container restart unless exported back to the JSON files.

monitoring/grafana/grafana.ini (1)
66-69: Feature toggles are valid for Grafana 10.2.2 in use.

All three toggles (`publicDashboards`, `topnav`, `newPanelChromeUI`) are documented and available in Grafana 10.2.2. Note: `publicDashboards` is expected to graduate from feature toggle to GA (renamed as "Shared dashboards") in Grafana 11.x; plan to update this configuration if upgrading.

monitoring/docker-compose.yml (3)
13-34: Prometheus service configuration looks good.

The service is well-configured with appropriate retention settings, lifecycle API enabled for reloads, and proper volume mounts. The `host.docker.internal:host-gateway` extra host enables scraping the node running on the host machine.
35-100: Grafana configuration is comprehensive and well-structured.

Good use of environment variables for customization, appropriate security settings (disabled analytics, gravatar, sign-up), and proper provisioning volume mounts. The feature toggles and branding setup align with the PR's objectives.
101-130: Node Exporter and infrastructure configuration look correct.

Using Docker Compose profiles for the optional node-exporter is a good pattern. The volume mounts for host metrics access are correct, and the named volumes with explicit names aid in management.
src/index.ts (3)
768-828: TUI-aware stdin handling fix is well-implemented.

The separation of TUI vs non-TUI modes correctly addresses the terminal-kit stdin conflict. In TUI mode, stdin manipulation is avoided since terminal-kit controls it via `grabInput()`. In non-TUI mode, the Enter-key skip behavior is preserved with proper cleanup in the `finally` block.
559-599: Metrics startup flow is well-structured.

Good use of dynamic import for lazy loading, proper error handling with failsafe continuation, and consistent port allocation pattern matching other services (MCP, OmniProtocol).
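The failsafe pattern praised here reduces to: lazily load the module, start it, and contain any failure so the node keeps booting. A sketch with an injected loader standing in for the dynamic `import()` (function and module names here are invented for illustration):

```typescript
// Returns true only if the metrics module loaded and started; any failure
// is swallowed so the rest of the node continues without metrics.
async function startMetrics(
    load: () => Promise<{ start(): void }>,
): Promise<boolean> {
    try {
        const server = await load()   // stands in for: await import("...")
        server.start()
        return true
    } catch {
        return false                  // failsafe: node boots without metrics
    }
}
```

Injecting the loader keeps the failure path testable without touching the module system; the real code would pass `() => import("./features/metrics")` or similar.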
901-909: Metrics server shutdown is properly handled.

The graceful shutdown correctly stops the metrics server with error handling to prevent shutdown failures from propagating.
src/features/metrics/index.ts (1)
1-26: Clean barrel export module.

The module correctly aggregates and re-exports the metrics API surface. The JSDoc module documentation is helpful.
monitoring/prometheus/prometheus.yml (1)
44-52: Node Exporter job configuration is correct.

The job correctly targets the Docker service name `node-exporter:9100`, which resolves within the Docker network when the `--profile full` option is used.

monitoring/grafana/provisioning/dashboards/json/consensus-blockchain.json (2)
640-661: Block production rate calculation looks correct.

The rate calculations using `rate(block_height[5m]) * 60` and `rate(block_height[1m]) * 60` correctly compute blocks per minute from the counter increase rate.
81-92: The metric names in this dashboard are correct and consistent with the MetricsCollector implementation. The MetricsCollector creates metrics without the `demos_` prefix (`block_height`, `seconds_since_last_block`, `last_block_tx_count`), which matches what this dashboard queries. The dashboard is properly configured.

Likely an incorrect or invalid review comment.
monitoring/grafana/provisioning/dashboards/json/network-peers.json (1)
89-101: Same metric naming inconsistency as consensus-blockchain dashboard.

This dashboard uses `peer_online_count`, `peer_offline_count`, `peers_total` without the `demos_` prefix, while `demos-overview.json` uses `demos_peer_online_count`. Ensure consistency with the actual metric names exposed by the MetricsCollector.

monitoring/grafana/provisioning/dashboards/json/demos-overview.json (4)
1-112: Well-structured overview dashboard with clear branding.

The dashboard header with inline SVG logo and node version info provides good visual identity. The use of `liveNow: true` enables real-time streaming updates.
126-204: Blockchain status panels are well-configured.

Good use of value mappings for ONLINE/OFFLINE status, appropriate thresholds for block lag (30s yellow, 60s red), and consistent styling across panels.
906-938: Docker and port monitoring panels provide useful infrastructure visibility.

The container health (`demos_service_docker_container_up`) and port status (`demos_service_port_open`) metrics with UP/DOWN and OPEN/CLOSED mappings give operators quick visibility into service health.
1167-1187: Dashboard global settings are appropriate.

5-second refresh interval is suitable for real-time monitoring, and browser timezone with 1-hour default time range are sensible defaults.
src/features/metrics/MetricsServer.ts (2)
78-128: LGTM!

The request handling logic is well-structured with proper error handling on the `/metrics` endpoint and appropriate HTTP status codes for each route.
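A minimal sketch of that routing, with the three paths taken from the review and the response bodies invented (the real server would serve the prom-client registry output on `/metrics`):

```typescript
// Pure routing function: testable without opening a socket; an HTTP server
// would call this from its request handler and write status/body out.
function route(path: string): { status: number; body: string } {
    switch (path) {
        case "/metrics": return { status: 200, body: "# Prometheus exposition text" }
        case "/health":  return { status: 200, body: '{"status":"ok"}' }
        case "/":        return { status: 200, body: "Demos metrics server" }
        default:         return { status: 404, body: "Not Found" }
    }
}
```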
133-153: LGTM!

Lifecycle methods are clean and provide proper state management.
src/features/metrics/MetricsService.ts (3)
10-17: LGTM!

Imports from `prom-client` are correct and include all necessary metric types and utilities.
223-318: LGTM!

Metric creation methods are well-implemented with proper deduplication logic preventing duplicate registrations.
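Deduplication in a metrics registry typically means returning the already-registered metric instead of creating a second one under the same name. A sketch of that idea (the shape is assumed, not taken from MetricsService):

```typescript
// Map-backed get-or-create: a second registration under the same name
// returns the first metric instead of throwing or duplicating it.
class DedupRegistry {
    private metrics = new Map<string, object>()

    getOrCreate<T extends object>(name: string, create: () => T): T {
        const existing = this.metrics.get(name)
        if (existing) return existing as T
        const created = create()
        this.metrics.set(name, created)
        return created
    }
}
```

This matters with prom-client specifically because registering the same metric name twice on one registry is an error, so callers need an idempotent path.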
526-530: LGTM!

Clean export pattern providing both the class and a convenience factory function.
- Fix PromQL: use deriv() instead of rate() for gauge metrics
- Add MetricsCollector.stop() to graceful shutdown sequence
- Rename node_info to node_metadata to avoid metric collision
- Handle division by zero in peer health percentage query
- Add non-Linux fallback for network I/O metrics collection
- Use subshell pattern for monitoring stack shutdown in run script
- Clarify METRICS_PORT comment (node endpoint vs Prometheus server)
- Fix monitoring/README.md env var names and example ports
- Fix MD058 lint: add blank lines around tables in INSTALL.md

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
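Two of these fixes are PromQL idioms worth spelling out. The metric names below are taken from elsewhere in this PR and are not verified against the actual dashboards, so treat them as placeholders:

```promql
# rate() assumes a monotonically increasing counter; for a gauge such as
# the mempool size, deriv() gives the per-second trend instead
deriv(demos_mempool_size[5m])

# guard the peer health percentage against a zero denominator
100 * demos_peer_online_count / clamp_min(demos_peers_total, 1)
```

`clamp_min(x, 1)` keeps the denominator at least 1, so a node with zero known peers reports 0% rather than producing a NaN series that breaks the panel.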
Update all PromQL expressions in system-health.json to use the demos_ prefix that MetricsService automatically applies to all metric names: - demos_system_cpu_usage_percent - demos_system_memory_usage_percent - demos_system_memory_used_bytes - demos_system_memory_total_bytes - demos_system_load_average_1m/5m/15m - demos_service_docker_container_up - demos_service_port_open Also fix TLSNotary container label from "tlsn-server" to "tlsn" to match the displayName used in MetricsCollector.collectDockerHealth(). Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
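The prefixing pattern this commit relies on can be shown with a small sketch. Only the `fullName = this.config.prefix + name` expression comes from the review; the surrounding class is invented for illustration:

```typescript
// Illustrative only: a single configured prefix determines the final metric
// names that every dashboard query must use.
class PrefixedRegistry {
    constructor(private prefix: string) {}

    fullName(name: string): string {
        return this.prefix + name   // e.g. "demos_" + "system_cpu_usage_percent"
    }
}
```

Because the prefix is applied once at registration, dashboards that query the bare name (`block_height`) and dashboards that query the prefixed name (`demos_block_height`) cannot both be right, which is exactly the inconsistency the review comments flag.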
- Add header comment to prometheus.yml explaining port distinction - Document that node metrics target must match METRICS_PORT from main .env - Add "Important Port Distinction" section to README Configuration - Fix troubleshooting curl example port from 3333 to 9090 - Clarify PROMETHEUS_PORT table entry (server port, not node metrics) METRICS_PORT (9090) = Demos node metrics endpoint (main .env) PROMETHEUS_PORT (9091) = Prometheus server external port (monitoring/.env) Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
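As a sketch, the port distinction maps onto the two config files roughly like this (values are the defaults named above; adjust to your setup):

```yaml
# main .env: port the node's metrics endpoint binds to
#   METRICS_PORT=9090

# monitoring/prometheus/prometheus.yml: the scrape target must match METRICS_PORT
scrape_configs:
  - job_name: demos-node
    static_configs:
      - targets: ["host.docker.internal:9090"]

# monitoring/.env: external port of the Prometheus server container itself
#   PROMETHEUS_PORT=9091
```

Keeping the two numbers distinct (9090 vs 9091) avoids the otherwise easy mistake of pointing Prometheus at its own UI port instead of the node's metrics endpoint.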
Prevent cardinality explosion by removing the peer_id label from the peer_latency_seconds histogram. Each unique peer would create new time series, causing unbounded growth. Aggregated latency across all peers is sufficient for monitoring; individual peer debugging should use structured logging instead. Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
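The aggregation this commit moves to can be sketched without prom-client: a single unlabeled set of histogram buckets instead of one series per peer. Bucket bounds are invented for illustration, and unlike a real Prometheus histogram (whose buckets are cumulative), this simplified version counts each bucket independently:

```typescript
// One shared histogram: every peer's latency lands in the same buckets, so
// the series count stays constant no matter how many peers connect.
const bucketBounds = [0.05, 0.1, 0.25, 0.5, 1, 2.5]            // seconds; illustrative
const bucketCounts = new Array(bucketBounds.length + 1).fill(0) // last slot = +Inf

function observePeerLatency(seconds: number): void {
    const i = bucketBounds.findIndex(bound => seconds <= bound)
    bucketCounts[i === -1 ? bucketBounds.length : i]++
}
```

With a `peer_id` label, each new peer would have produced its own full set of bucket series, which is the unbounded growth the commit message describes.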
…tion

- Add curl timeout flags (--connect-timeout 1 --max-time 2) to health check loops for TLSNotary and Grafana to prevent hanging when services are slow
- Fix MetricsService log message to say "configured port" instead of "initialized on port" since the service doesn't bind to the port
- Add URL validation in PeerManager to ensure peerData.url is a non-empty string before assignment, logging a warning and skipping invalid entries

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
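The URL validation in the last bullet can be sketched like this; the exact checks in PeerManager are assumed, not copied:

```typescript
// Accept only non-empty strings that actually parse as URLs. An object that
// was accidentally stringified to "[object Object]" fails the URL constructor
// and is skipped, which is the TypeError this PR originally fixed.
function isValidPeerUrl(value: unknown): value is string {
    if (typeof value !== "string" || value.length === 0) return false
    try {
        new URL(value)
        return true
    } catch {
        return false
    }
}
```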
User description
Besides adding Prometheus + Grafana for observability, fixed a TUI unresponsiveness error and updated README.md and INSTALL.md to be more user friendly
PR Type
Enhancement, Documentation
Description
Added comprehensive Prometheus metrics collection and Grafana monitoring stack for observability
- `MetricsCollector` service actively gathers blockchain, system, network, and service health metrics
- `MetricsService` implements Prometheus registry with support for counters, gauges, histograms, and summaries
- `MetricsServer` exposes `/metrics` endpoint on dedicated port (default 9090) for Prometheus scraping

Created four production-ready Grafana dashboards for monitoring
Integrated monitoring stack into main application with Docker Compose configuration
- `run` script with health checks
- `METRICS_ENABLED` and `METRICS_PORT` configuration options

Fixed TUI unresponsiveness issue by preventing stdin manipulation when TUI is enabled
Replaced `terminal-kit` dependency with structured logging across 11 modules for consistency

Added support for new peer data format with backward compatibility
Optimized TUI performance with log caching and debounced updates
Updated documentation (README.md, INSTALL.md, monitoring/README.md) with comprehensive setup and configuration guides
Added `prom-client` v15.1.3 dependency and bumped version to 0.47.0

Diagram Walkthrough
File Walkthrough
12 files
MetricsCollector.ts
Active metrics collection from node subsystems
src/features/metrics/MetricsCollector.ts

- Collects metrics from blockchain, system, network, and service health subsystems
- Configurable collection interval (default 2.5 seconds)
- Gathers blockchain metrics (block height, timestamps), system metrics (CPU, memory, load average), network I/O rates, peer information, and service health checks
- Health checks for critical services
MetricsService.ts
Prometheus metrics registry and management service
src/features/metrics/MetricsService.ts

- Built on the `prom-client` library; supports counters, gauges, histograms, and summaries
- Registers metrics for system, consensus, network, transaction, API, IPFS, and GCR subsystems
- Exposes metrics in Prometheus text format
MetricsServer.ts
HTTP server for Prometheus metrics endpoint
src/features/metrics/MetricsServer.ts

- Listens on a dedicated port (default 9090)
- Serves `/metrics` for Prometheus scraping, `/health` for health checks, and `/` for service info

index.ts
Metrics module public API exports
src/features/metrics/index.ts

- Re-exports the `MetricsService`, `MetricsServer`, and `MetricsCollector` classes with their configurations
index.ts
Integrate Prometheus metrics and fix TUI stdin conflict
src/index.ts

- Extends `indexState` with `METRICS_ENABLED` and `METRICS_PORT` settings and port allocation
- Avoids stdin manipulation when the TUI is enabled
PeerManager.ts
Support new peer data format with backward compatibility
src/libs/peer/PeerManager.ts

- Handles both old (string URL) and new (object) peer data formats
- Maintains backward compatibility with existing configurations
CategorizedLogger.ts
Cache getAllEntries to improve log retrieval performance
src/utilities/tui/CategorizedLogger.ts

- Caches the result of `getAllEntries()` to avoid repeated sorting on every call
- Invalidates the cache on buffer changes for correct retrieval
TUIManager.ts
Debounce log updates for improved TUI performance
src/utilities/tui/TUIManager.ts

- Defers log filtering and scroll updates to the render cycle
- Uses a `logsNeedUpdate` flag to mark when logs need refreshing instead of updating on every entry
demos-overview.json
DEMOS Network Node Overview Grafana Dashboard
monitoring/grafana/provisioning/dashboards/json/demos-overview.json

- Overview dashboard for node monitoring with 1187 lines of JSON configuration
- Blockchain panels (block height, block lag, peer count, transaction count, RPC latency)
- System panels (CPU, memory, load average)
- Network panels (I/O)
network-peers.json
Network Peers and Connectivity Grafana Dashboard
monitoring/grafana/provisioning/dashboards/json/network-peers.json

- Peer connectivity dashboard (JSON configuration)
- Peer status panels (online/offline peers, peer health percentage)
- Network traffic transmitted/received
- Time-series visualizations
consensus-blockchain.json
Consensus and Blockchain Monitoring Grafana Dashboard
monitoring/grafana/provisioning/dashboards/json/consensus-blockchain.json

- Consensus dashboard (JSON configuration)
- Block panels (time since last block, transactions per block, last block timestamp)
- Color-coded block lag thresholds (green <30s, yellow 30-60s, red >60s)
- Visualizes transaction patterns
run
Monitoring Stack Integration in Main Run Script
run

- `MONITORING_DISABLED` flag to control Prometheus/Grafana stack startup (default: enabled)
- `-m` and `--no-monitoring` command-line options to disable monitoring
- Health checks for Grafana readiness
- Stops the monitoring stack on exit (`ctrl_c`) and starts it during startup
9 files
endpointHandlers.ts
Replace terminal-kit with structured logging
src/libs/network/endpointHandlers.ts

- Removed the `terminal-kit` dependency and replaced calls (`term.yellow`, `term.red`) with structured log calls with categories
- Updated logging in validation routines
validateTransaction.ts
Replace terminal-kit with structured logging
src/libs/blockchain/routines/validateTransaction.ts

- Removed the `terminal-kit` dependency and replaced it with logger calls

DAHRFactory.ts
Replace terminal-kit with structured logging
src/features/web2/dahr/DAHRFactory.ts

- Removed the `terminal-kit` dependency and replaced it with logger calls using a category
manageExecution.ts
Replace terminal-kit with structured logging
src/libs/network/manageExecution.ts

- Removed the `terminal-kit` dependency and replaced it with logger calls

keyMaker.ts
Replace terminal-kit with structured logging
src/libs/utils/keyMaker.ts

- Removed the `terminal-kit` dependency and replaced it with logger calls using a category
manageAuth.ts
Replace terminal-kit with structured logging
src/libs/network/manageAuth.ts

- Removed the `terminal-kit` dependency and replaced it with logger calls
cryptography.ts
Replace terminal-kit with structured logging
src/libs/crypto/cryptography.ts

- Removed the `terminal-kit` dependency; mapped `term.yellow` to `log.debug` and `term.red` to `log.error` appropriately
identity.ts
Replace terminal-kit with structured logging
src/libs/identity/identity.ts

- Removed the `terminal-kit` dependency and replaced it with logger calls using a category
gcr.ts
Replace terminal-kit with structured logging
src/libs/blockchain/gcr/gcr.ts

- Removed the `terminal-kit` dependency; mapped `term.yellow` to `log.debug` for balance lookup messages

1 files
PointSystem.ts
Use const for immutable variable declaration
src/features/incentive/PointSystem.ts

- Changed the `let linkedNomis` declaration to `const` for immutability

4 files
system-health.json
Grafana system health dashboard with resource and service monitoring
monitoring/grafana/provisioning/dashboards/json/system-health.json

- JSON dashboard configuration
- Panels for CPU, memory, load average, Docker container status, and port health checks
- Time-series visualizations
- Service checks for TLSNotary, IPFS, and critical ports
README.md
Monitoring stack documentation and setup guide
monitoring/README.md

- Covers Prometheus and Grafana setup
- Configuration options and troubleshooting
- Security considerations
README.md
Add monitoring and network ports documentation
README.md

- Monitoring stack integration
- Network ports table
- Firewall setup with examples
INSTALL.md
Updated Installation Guide with Dependencies and Port Documentation
INSTALL.md

- Use the `./install-deps.sh` script instead of direct `bun install`
- Rust/Cargo requirement for the `wstcp` tool installation
- Documented required and optional ports, including Prometheus, Grafana, and PostgreSQL
1 files
package.json
Add prom-client dependency for metrics
package.json

- Added the `prom-client` v15.1.3 dependency for Prometheus metrics collection

7 files
.local_version
Version bump for release
.beads/.local_version
docker-compose.yml
Docker Compose Configuration for Monitoring Stack
monitoring/docker-compose.yml

- Prometheus and Grafana v10.2.2 services
- Retention settings and volume persistence
- Dashboard and provisioning volumes
- Optional Node Exporter for host metrics
- `demos-monitoring` bridge network for service communication

grafana.ini
Grafana Configuration with DEMOS Branding
monitoring/grafana/grafana.ini

- Branding and security settings
- Feature toggles
prometheus.yml
Prometheus Configuration for Node Metrics Collection
monitoring/prometheus/prometheus.yml

- Scrape configuration for node monitoring
- Node metrics (5s interval) and optional Node Exporter
- Uses `host.docker.internal` for accessing host node metrics from the Docker container
prometheus.yml
Grafana Prometheus Datasource Provisioning
monitoring/grafana/provisioning/datasources/prometheus.yml

- Datasource configuration
- Query overlap settings
dashboard.yml
Grafana Dashboard Provisioning Configuration
monitoring/grafana/provisioning/dashboards/dashboard.yml

- Loads dashboards from the `/etc/grafana/provisioning/dashboards/json` directory
- Update interval and management settings
.env.example
Environment Variables for Prometheus Metrics Configuration
.env.example

- Metrics configuration
- `METRICS_ENABLED` flag (default: true) to control metrics exposure
- `METRICS_PORT` (default: 9090) and `METRICS_HOST` (default: 0.0.0.0) configuration options
- Metrics available at http://localhost:9090/metrics

Summary by CodeRabbit
New Features
Documentation
Chores
Style