
Conversation

@mkysel (Collaborator) commented Apr 18, 2025

Add automated metrics catalog generation to document application metrics in a markdown table

Implements a metrics documentation system through:

  • A new Go program in generator.go that parses metric declarations from the codebase and generates a markdown table documenting 18 metrics
  • A bash script generate-metrics-catalog that executes the generator program
  • A GitHub workflow step in lint-go.yml that runs the metrics catalog generation
  • The generated output in metrics_catalog.md containing the metrics documentation table

📍 Where to Start

Start with the main generator program in generator.go which contains the core logic for parsing metric declarations and generating the documentation.
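
As an orientation aid, here is a minimal, self-contained sketch of the technique, not the PR's actual code: parse a file with go/parser and walk its AST for composite literals carrying prometheus-style Name/Help fields. Treating every Name/Help literal as a metric, and the input path used in main, are simplifying assumptions.

package main

import (
	"fmt"
	"go/ast"
	"go/parser"
	"go/token"
	"strings"
)

type Metric struct {
	Name        string
	Description string
}

// parseFile walks one Go source file and collects every composite literal
// that carries string-literal Name/Help fields, prometheus-Opts style.
func parseFile(path string) ([]Metric, error) {
	fs := token.NewFileSet()
	node, err := parser.ParseFile(fs, path, nil, parser.AllErrors)
	if err != nil {
		return nil, err
	}

	var results []Metric
	ast.Inspect(node, func(n ast.Node) bool {
		lit, ok := n.(*ast.CompositeLit)
		if !ok {
			return true
		}
		var m Metric
		for _, elt := range lit.Elts {
			kv, ok := elt.(*ast.KeyValueExpr)
			if !ok {
				continue
			}
			val, ok := kv.Value.(*ast.BasicLit)
			if !ok {
				continue
			}
			switch fmt.Sprint(kv.Key) {
			case "Name":
				m.Name = strings.Trim(val.Value, `"`)
			case "Help":
				m.Description = strings.Trim(val.Value, `"`)
			}
		}
		if m.Name != "" {
			results = append(results, m)
		}
		return true
	})
	return results, nil
}

func main() {
	metrics, err := parseFile("pkg/metrics/metrics.go") // hypothetical input path
	if err != nil {
		panic(err)
	}
	for _, m := range metrics {
		fmt.Printf("| %s | %s |\n", m.Name, m.Description)
	}
}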


Macroscope summarized c0e7286.

Summary by CodeRabbit

  • New Features
    • Introduced automated generation of a metrics catalog, providing a comprehensive Markdown reference for all metrics used in the project.
  • Documentation
    • Added a new metrics catalog documentation file listing metric names, types, descriptions, and their source locations.
  • Chores
    • Updated the GitHub Actions workflow to include a step for generating the metrics catalog automatically.

@coderabbitai bot (Contributor) commented Apr 18, 2025

Walkthrough

This change introduces an automated process for generating and maintaining a catalog of metrics used within the codebase. A new Go program is added to parse metric definitions from source files and output a Markdown documentation file summarizing these metrics. A supporting Bash script is provided to run the generator from the repository root. The GitHub Actions workflow for Go linting is updated to include a step that executes this script, ensuring the metrics catalog is kept up to date. The resulting documentation file lists all metrics with their types, descriptions, and source locations.

Changes

File(s) and change summary:

  • .github/workflows/lint-go.yml: Updated workflow to add a "metrics catalog" step (see the sketch below) that runs a script to generate the metrics documentation after the "mockery" step.
  • dev/generate-metrics-catalog: New Bash script that changes to the repo root and executes the Go metrics documentation generator.
  • doc/metrics_catalog.md: New Markdown file cataloging all metrics, including their names, types, descriptions, and source files.
  • pkg/metrics/docs/generator.go: New Go program that scans Go source files for metric definitions, extracts their metadata, and generates a Markdown table documenting them.
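
For reference, the workflow step plausibly looks like the sketch below. It uses the nickcharlton/diff-check action named later in this thread, which re-runs a command and fails the job if the working tree is left with uncommitted changes; the step name and the action ref are assumptions here.

      - name: metrics catalog
        uses: nickcharlton/diff-check@main   # ref is an assumption; pin as appropriate
        with:
          command: ./dev/generate-metrics-catalog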

Sequence Diagram(s)

sequenceDiagram
    participant GitHub Actions
    participant Bash Script
    participant Go Generator
    participant Source Files
    participant Markdown Doc

    GitHub Actions->>Bash Script: Run ./dev/generate-metrics-catalog
    Bash Script->>Go Generator: Execute generator.go from repo root
    Go Generator->>Source Files: Parse Go files for metric definitions
    Source Files-->>Go Generator: Return metric metadata
    Go Generator->>Markdown Doc: Write metrics_catalog.md
    Go Generator->>Bash Script: Exit
    Bash Script->>GitHub Actions: Step complete

Warning

There were issues while running some tools. Please review the errors and either fix the tool's configuration or disable the tool if it's a critical failure.

🔧 golangci-lint (1.64.8)

Error: you are using a configuration file for golangci-lint v2 with golangci-lint v1: please use golangci-lint v2
Failed executing command with error: you are using a configuration file for golangci-lint v2 with golangci-lint v1: please use golangci-lint v2


📜 Recent review details

Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between c0e7286 and 5445458.

📒 Files selected for processing (1)
  • pkg/metrics/docs/generator.go (1 hunks)
🚧 Files skipped from review as they are similar to previous changes (1)
  • pkg/metrics/docs/generator.go
⏰ Context from checks skipped due to timeout of 90000ms (5)
  • GitHub Check: Push Docker Images to GitHub Packages (xmtpd-cli)
  • GitHub Check: Upgrade Tests
  • GitHub Check: Push Docker Images to GitHub Packages (xmtpd)
  • GitHub Check: Test (Node)
  • GitHub Check: Code Review


@coderabbitai bot (Contributor) left a comment

Actionable comments posted: 2

🧹 Nitpick comments (4)
.github/workflows/lint-go.yml (1)

46-46: Add a newline at the end of the file

The YAMLlint tool reports that the file is missing a newline at the end, which is a standard practice in most codebases.

          command: ./dev/generate-metrics-catalog
+
🧰 Tools
🪛 YAMLlint (1.35.1)

[error] 46-46: no new line character at the end of file

(new-line-at-end-of-file)

dev/generate-metrics-catalog (1)

1-10: LGTM: Script looks good but could use minor improvements

The script correctly sets up strict error handling and ensures it runs from the repository root. Two minor enhancements to consider (pipefail is best enabled via set rather than shebang options, since Linux passes everything after the shebang interpreter as a single argument):

 #!/bin/bash
 
-set -eu
+set -euo pipefail
 
-# Work always from the root directory
+# Generate the metrics catalog documentation by running the Go generator.
+# Always work from the repository root, regardless of the caller's directory.
 script_dir=$(dirname "$(realpath "$0")")
 repo_root=$(realpath "${script_dir}/../")
 cd "${repo_root}"
 
 go run pkg/metrics/docs/generator.go
pkg/metrics/docs/generator.go (2)

15-15: Follow Go naming conventions for constants

In Go, constants typically use camelCase or PascalCase naming, not SCREAMING_SNAKE_CASE.

-const MARKDOWN_OUTPUT = "doc/metrics_catalog.md"
+const markdownOutput = "doc/metrics_catalog.md"

Remember to update all references to this constant throughout the code.


86-135: Consider more robust metric parsing for future code changes

The current implementation assumes a specific structure for metric declarations (composite literals with Name and Help fields). This might be fragile if the metric declaration pattern changes.

Consider adding:

  1. Error logging when a metric is found but name or description can't be extracted
  2. Support for different patterns of metric declaration
  3. Unit tests to verify the parsing logic works with different metric patterns

Example enhancement (note that log.Printf requires the standard library log package to be imported):

 				for _, elt := range firstArg.Elts {
 					if kv, ok := elt.(*ast.KeyValueExpr); ok {
 						key := fmt.Sprint(kv.Key)
 						switch key {
 						case "Name":
 							if val, ok := kv.Value.(*ast.BasicLit); ok {
 								metric.Name = strings.Trim(val.Value, `"`)
+							} else {
+								log.Printf("Warning: Found Name field with unexpected value type in %s", path)
 							}
 						case "Help":
 							if val, ok := kv.Value.(*ast.BasicLit); ok {
 								metric.Description = strings.Trim(val.Value, `"`)
+							} else {
+								log.Printf("Warning: Found Help field with unexpected value type in %s", path)
 							}
 						}
 					}
 				}
 
 				if metric.Name != "" {
 					results = append(results, metric)
+				} else {
+					log.Printf("Warning: Found metric declaration without Name in %s", path)
 				}
📜 Review details

Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 3a7c641 and c0e7286.

📒 Files selected for processing (4)
  • .github/workflows/lint-go.yml (1 hunks)
  • dev/generate-metrics-catalog (1 hunks)
  • doc/metrics_catalog.md (1 hunks)
  • pkg/metrics/docs/generator.go (1 hunks)
🧰 Additional context used
🪛 YAMLlint (1.35.1)
.github/workflows/lint-go.yml

[error] 46-46: no new line character at the end of file

(new-line-at-end-of-file)

🪛 GitHub Check: Lint-Go
pkg/metrics/docs/generator.go

[failure] 79-79:
File is not properly formatted (gofumpt)


[failure] 44-44:
error is not nil (line 42) but it returns nil (nilerr)

🪛 GitHub Actions: Lint
pkg/metrics/docs/generator.go

[error] 79-79: File is not properly formatted (gofumpt)

⏰ Context from checks skipped due to timeout of 90000ms (5)
  • GitHub Check: Code Review
  • GitHub Check: Push Docker Images to GitHub Packages (xmtpd-cli)
  • GitHub Check: Push Docker Images to GitHub Packages (xmtpd)
  • GitHub Check: Test (Node)
  • GitHub Check: Upgrade Tests
🔇 Additional comments (2)
.github/workflows/lint-go.yml (1)

42-46: LGTM: Added metrics catalog generation step

The change follows the existing pattern in the workflow file, using the same diff-check action as other steps. This will ensure the metrics catalog documentation stays up-to-date.

🧰 Tools
🪛 YAMLlint (1.35.1)

[error] 46-46: no new line character at the end of file

(new-line-at-end-of-file)

doc/metrics_catalog.md (1)

1-21: LGTM: Well-structured auto-generated metrics catalog

The auto-generated metrics catalog provides a clear and organized table of metrics with their types, descriptions, and source locations. This will be valuable documentation for understanding and maintaining the metrics system.

@mkysel mkysel marked this pull request as ready for review April 18, 2025 15:36
@mkysel mkysel requested review from a team as code owners April 18, 2025 15:36
@macroscopeapp bot commented Apr 18, 2025

Add automated metrics catalog generation to document metrics usage across the codebase

  • Implements a new metrics documentation generator in generator.go that parses Go files in the pkg/metrics directory to extract metric declarations and their metadata
  • Creates a new executable script generate-metrics-catalog to run the generator from the repository root
  • Adds a 'metrics catalog' step to lint-go.yml GitHub workflow using the nickcharlton/diff-check action
  • Generates documentation output in metrics_catalog.md containing a table of all metrics with their names, types, descriptions, and source locations

📍 Where to Start

Start with the main generator implementation in generator.go which contains the core logic for parsing metric declarations and generating the documentation.


Macroscope summarized 5445458.

name string
}

var metricTypes = map[string]MetricType{

Contributor commented:

Maybe worth including this as a comment so it doesn't get lost
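
For readers following along, the excerpt above cuts off at the opening brace of metricTypes. A purely hypothetical completion for illustration; the real entries in generator.go are not shown in this thread:

type MetricType struct {
	name string
}

// Hypothetical entries mapping constructor names to display types.
var metricTypes = map[string]MetricType{
	"NewCounter":   {name: "counter"},
	"NewGauge":     {name: "gauge"},
	"NewHistogram": {name: "histogram"},
}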

@mkysel (Collaborator, Author) commented Apr 18, 2025

@coderabbitai please re-review

@coderabbitai bot (Contributor) commented Apr 18, 2025

Tip

For best results, initiate chat on the files or code changes.

🧩 Analysis chain

I'll conduct a fresh review of your Pull Request that introduces an automated metrics catalog generation. Let me first gather the necessary information to understand the implementation details.


🏁 Script executed:

#!/bin/bash
# Examine the content of the key files in the PR

echo "=== Main Implementation (generator.go) ==="
cat pkg/metrics/docs/generator.go

echo -e "\n\n=== Bash Script (generate-metrics-catalog) ==="
cat dev/generate-metrics-catalog

echo -e "\n\n=== Generated Output (metrics_catalog.md) ==="
cat doc/metrics_catalog.md

echo -e "\n\n=== GitHub Workflow Modification (lint-go.yml) ==="
grep -A 10 -B 10 "metrics catalog" .github/workflows/lint-go.yml

Length of output: 6833


@mkysel I've completed a fresh review of your "Generate metrics catalog" PR. This is a well-structured implementation that automates documentation of application metrics.

The PR introduces:

  1. A Go program (generator.go) that parses metric declarations from source files using Go's AST package
  2. A shell script (generate-metrics-catalog) to run the generator from the repository root
  3. Integration into the CI pipeline via the GitHub workflow
  4. The generated documentation (metrics_catalog.md) with 18 metrics properly organized in a table

The implementation is clean and follows good practices like proper error handling, modular code organization, and appropriate use of Go's AST for parsing declarations.

A few minor suggestions that could enhance this further in the future:

  • Consider adding a header/introduction section to the markdown output
  • You might want to group metrics by category or source file for improved readability
  • Adding a timestamp or generation date to the output could be helpful

Overall, this is a valuable addition that will help keep metrics documentation in sync with code. The automated workflow integration ensures it stays updated.
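
The first two suggestions are straightforward to sketch. The following assumes hypothetical Type and File fields on the generator's Metric struct (only Name and Description appear in the excerpts above) and a hypothetical writeCatalog helper:

package main

import (
	"fmt"
	"io"
	"os"
	"sort"
)

// Metric mirrors the generator's record; Type and File are assumed field names.
type Metric struct {
	Name        string
	Type        string
	Description string
	File        string
}

// writeCatalog emits a titled catalog with deterministic ordering: metrics
// are sorted by source file, then by name, so regeneration is stable.
func writeCatalog(w io.Writer, metrics []Metric) {
	fmt.Fprintln(w, "# Metrics Catalog")
	fmt.Fprintln(w)
	fmt.Fprintln(w, "Generated by pkg/metrics/docs/generator.go; do not edit by hand.")
	fmt.Fprintln(w)

	sort.Slice(metrics, func(i, j int) bool {
		if metrics[i].File != metrics[j].File {
			return metrics[i].File < metrics[j].File
		}
		return metrics[i].Name < metrics[j].Name
	})

	fmt.Fprintln(w, "| Name | Type | Description | Source |")
	fmt.Fprintln(w, "|------|------|-------------|--------|")
	for _, m := range metrics {
		fmt.Fprintf(w, "| %s | %s | %s | %s |\n", m.Name, m.Type, m.Description, m.File)
	}
}

func main() {
	writeCatalog(os.Stdout, []Metric{
		{Name: "example_total", Type: "counter", Description: "Example metric.", File: "pkg/metrics/example.go"},
	})
}

A generation timestamp is deliberately left out of this sketch: the diff-check CI step flags the file whenever regeneration changes it, so non-deterministic output would fail the job on every run.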

func parseFile(path string) ([]Metric, error) {
var results []Metric
fs := token.NewFileSet()
node, err := parser.ParseFile(fs, path, nil, parser.AllErrors)
Contributor commented:

Neat. I've only done this kind of thing via reflect, but this is actually pretty tidy

@@ -0,0 +1,139 @@
package main
Contributor commented:

Small nit, but I think this belongs in the cmd folder

@mkysel mkysel merged commit 1163f75 into main Apr 21, 2025
9 checks passed
@mkysel mkysel deleted the mkysel/generate-metrics-catalog branch April 21, 2025 13:08