
Refactor main into library crate #45

Closed
leynos wants to merge 2 commits into main from
codex/create-library-structure-for-graphql-client

Conversation

@leynos
Owner

@leynos leynos commented Jul 30, 2025

Summary

  • expose lib modules and re-exports via src/lib.rs
  • split GraphQL and printing helpers into new modules
  • move command logic into commands.rs
  • simplify main.rs to argument parsing and dispatch

Testing

  • make fmt
  • make lint
  • make test

https://chatgpt.com/codex/tasks/task_e_6888f71e6d3883228498414491bf22ab

Summary by Sourcery

Refactor the CLI tool into a reusable library crate by splitting main logic into dedicated modules and exposing core functions via src/lib.rs

Enhancements:

  • Introduce library crate structure with modules: api, cli_args, commands, models, printer, references, and reviews
  • Simplify src/main.rs to only parse CLI arguments and invoke run_pr/run_issue
  • Extract GraphQL client and query helpers into api module and response data types into models module
  • Move repository reference parsing into references module and pagination into api::paginate
  • Relocate output formatting and printing logic into printer module
  • Consolidate command handlers into commands module and CLI definitions into cli_args module

@sourcery-ai
Contributor

sourcery-ai Bot commented Jul 30, 2025

Reviewer's Guide

This PR transforms the binary into a library crate by moving core functionality into well-scoped modules, introducing a lib.rs entry point with public exports, and simplifying main.rs to a minimal CLI parser and dispatcher.

Class diagram for new library crate structure

classDiagram
    class api {
        +GraphQLClient
        +VkError
        +build_graphql_client()
        +fetch_issue()
        +fetch_review_threads()
    }
    class cli_args {
        +Cli
        +Commands
        +GlobalArgs
        +IssueArgs
        +PrArgs
    }
    class commands {
        +run_pr()
        +run_issue()
        +locale_is_utf8()
        +load_with_reference_fallback()
    }
    class html {
        +collapse_details()
    }
    class models {
        +ThreadData
        +Repository
        +PullRequest
        +IssueData
        +IssueRepository
        +Issue
        +ReviewThreadConnection
        +ReviewThread
        +CommentConnection
        +ReviewComment
        +PageInfo
        +User
        +CommentNodeWrapper
        +CommentNode
    }
    class printer {
        +format_comment_diff()
        +write_comment_body()
        +write_comment()
        +write_thread()
        +print_thread()
        +summarize_files()
        +write_summary()
        +print_summary()
        +print_end_banner()
    }
    class references {
        +RepoInfo
        +parse_issue_reference()
        +parse_pr_reference()
        +repo_from_fetch_head()
        +repo_from_str()
    }
    class reviews {
        +fetch_reviews()
        +latest_reviews()
        +print_reviews()
        +write_review()
    }
    class lib {
        +api
        +cli_args
        +commands
        +html
        +models
        +printer
        +references
        +reviews
        +exports (re-exports public API)
    }
    lib --> api
    lib --> cli_args
    lib --> commands
    lib --> html
    lib --> models
    lib --> printer
    lib --> references
    lib --> reviews
    main ..> lib : uses
    commands --> api
    commands --> cli_args
    commands --> printer
    commands --> references
    commands --> reviews
    api --> models
    api --> references
    reviews --> api
    reviews --> models
    printer --> html
    printer --> models
    references --> api

Class diagram for simplified main.rs and its dependencies

classDiagram
    class main {
        +main()
    }
    class Cli
    class Commands
    class GlobalArgs
    class IssueArgs
    class PrArgs
    class run_pr
    class run_issue
    class load_with_reference_fallback
    main --> Cli : parses
    main --> Commands : dispatches
    main --> GlobalArgs : loads
    main --> run_pr : calls
    main --> run_issue : calls
    main --> load_with_reference_fallback : calls
    Cli --> Commands
    Cli --> GlobalArgs

File-Level Changes

Change Details Files
Simplify main.rs to a thin CLI wrapper and dispatch
  • Removed all in-file business logic and helper functions
  • Imported Cli, Commands, run_pr, run_issue, and load_with_reference_fallback from the library
  • Initialized logger and delegated subcommand handling to library code
src/main.rs
Introduce lib.rs as the library entry point
  • Added src/lib.rs declaring public modules
  • Re-exported core types, functions, and CLI definitions
src/lib.rs
Extract GraphQL client into api module
  • Moved GraphQLClient, VkError, run_query, and related helpers
  • Defined build_graphql_client, build_headers, pagination, and query constants
src/api.rs
Centralize data structures in models module
  • Relocated all GraphQL response and domain structs
  • Declared ThreadData, ReviewThread, ReviewComment, PageInfo, and related types
src/models.rs
Encapsulate CLI argument definitions
  • Relocated Cli, Commands, GlobalArgs, PrArgs, IssueArgs
  • Retained and adjusted CLI parsing tests
src/cli_args.rs
Isolate repository reference parsing
  • Moved parse_pr_reference and parse_issue_reference logic
  • Implemented repo_from_fetch_head and repo_from_str helpers
src/references.rs
Refactor output formatting into printer module
  • Moved diff formatting and comment/thread printing functions
  • Bundled summary and end-banner helpers
src/printer.rs
Move command handlers to commands module
  • Extracted run_pr and run_issue into commands.rs
  • Relocated locale detection and config-loading fallback logic
src/commands.rs
Update reviews module to use new api paginate
  • Changed crate::paginate import to crate::api::paginate
src/reviews.rs

Tips and commands

Interacting with Sourcery

  • Trigger a new review: Comment @sourcery-ai review on the pull request.
  • Continue discussions: Reply directly to Sourcery's review comments.
  • Generate a GitHub issue from a review comment: Ask Sourcery to create an
    issue from a review comment by replying to it. You can also reply to a
    review comment with @sourcery-ai issue to create an issue from it.
  • Generate a pull request title: Write @sourcery-ai anywhere in the pull
    request title to generate a title at any time. You can also comment
    @sourcery-ai title on the pull request to (re-)generate the title at any time.
  • Generate a pull request summary: Write @sourcery-ai summary anywhere in
    the pull request body to generate a PR summary at any time exactly where you
    want it. You can also comment @sourcery-ai summary on the pull request to
    (re-)generate the summary at any time.
  • Generate reviewer's guide: Comment @sourcery-ai guide on the pull
    request to (re-)generate the reviewer's guide at any time.
  • Resolve all Sourcery comments: Comment @sourcery-ai resolve on the
    pull request to resolve all Sourcery comments. Useful if you've already
    addressed all the comments and don't want to see them anymore.
  • Dismiss all Sourcery reviews: Comment @sourcery-ai dismiss on the pull
    request to dismiss all existing Sourcery reviews. Especially useful if you
    want to start fresh with a new review - don't forget to comment
    @sourcery-ai review to trigger a new review!

Customizing Your Experience

Access your dashboard to:

  • Enable or disable review features such as the Sourcery-generated pull request
    summary, the reviewer's guide, and others.
  • Change the review language.
  • Add, remove or edit custom review instructions.
  • Adjust other review settings.

Getting Help

@coderabbitai

coderabbitai Bot commented Jul 30, 2025

Summary by CodeRabbit

New Features

  • Introduced a command-line interface for interacting with GitHub pull requests and issues, including subcommands for each.
  • Added support for fetching and displaying pull request review threads, comments, and issue details from GitHub.
  • Implemented terminal-friendly formatting for code review comments, diffs, and summaries with markdown styling.
  • Enhanced repository and reference parsing, supporting both URLs and numeric references.

Bug Fixes

  • Improved error handling and validation for invalid references and configuration issues.

Documentation

  • Updated linting attributes to omit certain documentation warnings.

Refactor

  • Centralised core logic into a library for easier maintenance and reuse.
  • Streamlined the application entry point to delegate functionality to the library.

Tests

  • Added tests for CLI parsing, locale detection, and reference parsing.

Walkthrough

Move all core logic, data models, and command handling out of main.rs into a modular library structure. Introduce new modules for API interaction, CLI parsing, command execution, data models, output formatting, and reference parsing. Refactor the main entry point to delegate all logic to the library, exposing a unified public API.

Changes

Cohort / File(s) Change Summary
API Client & Error Handling
src/api.rs
Add a GraphQL client for GitHub API with token authentication, pagination, error handling, and async query execution. Expose error enum VkError and helper functions for client instantiation and header building.
CLI Argument Parsing
src/cli_args.rs
Introduce Cli struct and Commands enum for subcommand-based CLI parsing using clap. Add tests for argument parsing and subcommand requirements.
Command Execution
src/commands.rs
Implement async handlers for pull request and issue commands, including reference parsing, API calls, locale checks, and output formatting. Add locale detection and config fallback utilities with tests.
Data Models
src/models.rs
Define structs for deserialising GraphQL responses: review threads, issues, comments, users, and pagination metadata.
Output Formatting
src/printer.rs
Provide functions for formatting and printing review comments, diffs, threads, and summaries with markdown and terminal styling. Include tests for formatting logic.
Reference Parsing
src/references.rs
Add utilities for parsing GitHub issue/PR references from URLs, numeric inputs, or local git state. Expose RepoInfo and parsing functions with comprehensive tests.
Library Entrypoint & Re-exports
src/lib.rs
Create a library entrypoint exposing all major modules and re-exporting key types and functions for unified access.
Main Entrypoint Refactor
src/main.rs
Remove all logic, types, and data from main. Replace with a minimal async entrypoint that parses CLI arguments and dispatches to library functions.
Linting & Imports
src/html.rs, src/reviews.rs
Add crate-level Clippy allow attributes to suppress missing documentation warnings. Adjust imports and function call qualification in src/reviews.rs.

Sequence Diagram(s)

sequenceDiagram
    participant User
    participant Main
    participant CLI
    participant Commands
    participant API
    participant Printer

    User->>Main: Run binary with CLI args
    Main->>CLI: Parse arguments
    CLI-->>Main: Return parsed command and options
    Main->>Commands: Dispatch to run_pr or run_issue
    Commands->>API: Fetch data via GraphQLClient
    API-->>Commands: Return data or VkError
    Commands->>Printer: Format and print output
    Printer-->>Commands: Output rendered
    Commands-->>Main: Complete
    Main-->>User: Exit

Estimated code review effort

🎯 4 (Complex) | ⏱️ ~40 minutes

Possibly related PRs

Poem

A library blooms from main’s retreat,
With models, commands, and parsing neat.
The API now talks with grace,
While printers style each code embrace.
CLI commands march in line,
Review threads and issues intertwine.
Refactor done—let logic shine!


🪧 Tips

Chat

There are 3 ways to chat with CodeRabbit:

  • Review comments: Directly reply to a review comment made by CodeRabbit. Example:
    • I pushed a fix in commit <commit_id>, please review it.
    • Explain this complex logic.
    • Open a follow-up GitHub issue for this discussion.
  • Files and specific lines of code (under the "Files changed" tab): Tag @coderabbitai in a new review comment at the desired location with your query. Examples:
    • @coderabbitai explain this code block.
    • @coderabbitai modularize this function.
  • PR comments: Tag @coderabbitai in a new PR comment to ask questions about the PR branch. For the best results, please provide a very specific query, as very limited context is provided in this mode. Examples:
    • @coderabbitai gather interesting stats about this repository and render them as a table. Additionally, render a pie chart showing the language distribution in the codebase.
    • @coderabbitai read src/utils.ts and explain its main purpose.
    • @coderabbitai read the files in the src/scheduler package and generate a class diagram using mermaid and a README in the markdown format.
    • @coderabbitai help me debug CodeRabbit configuration file.

Support

Need help? Create a ticket on our support page for assistance with any issues or questions.

Note: Be mindful of the bot's finite context window. It's strongly recommended to break down tasks such as reading entire modules into smaller chunks. For a focused discussion, use review comments to chat about specific files and their changes, instead of using the PR comments.

CodeRabbit Commands (Invoked using PR comments)

  • @coderabbitai pause to pause the reviews on a PR.
  • @coderabbitai resume to resume the paused reviews.
  • @coderabbitai review to trigger an incremental review. This is useful when automatic reviews are disabled for the repository.
  • @coderabbitai full review to do a full review from scratch and review all the files again.
  • @coderabbitai summary to regenerate the summary of the PR.
  • @coderabbitai generate docstrings to generate docstrings for this PR.
  • @coderabbitai generate sequence diagram to generate a sequence diagram of the changes in this PR.
  • @coderabbitai generate unit tests to generate unit tests for this PR.
  • @coderabbitai resolve resolve all the CodeRabbit review comments.
  • @coderabbitai configuration to show the current CodeRabbit configuration for the repository.
  • @coderabbitai help to get help.

Other keywords and placeholders

  • Add @coderabbitai ignore anywhere in the PR description to prevent this PR from being reviewed.
  • Add @coderabbitai summary to generate the high-level summary at a specific location in the PR description.
  • Add @coderabbitai anywhere in the PR title to generate the title automatically.

CodeRabbit Configuration File (.coderabbit.yaml)

  • You can programmatically configure CodeRabbit by adding a .coderabbit.yaml file to the root of your repository.
  • Please see the configuration documentation for more information.
  • If your editor has YAML language server enabled, you can add the path at the top of this file to enable auto-completion and validation: # yaml-language-server: $schema=https://coderabbit.ai/integrations/schema.v2.json

Documentation and Community

  • Visit our Documentation for detailed information on how to use CodeRabbit.
  • Join our Discord Community to get help, request features, and share feedback.
  • Follow us on X/Twitter for updates and announcements.

Contributor

@sourcery-ai sourcery-ai Bot left a comment


Hey @leynos - I've reviewed your changes - here's some feedback:

  • Move the in-function Regex constructors in format_comment_diff and locale_is_utf8 into static LazyLock/once_cell values to avoid recompiling the regex on every call.
  • Consider merging or removing the placeholder html module (which only re-exports collapse_details) to keep the crate structure minimal.
  • Centralize the paginate function import so reviews.rs doesn’t have to reference crate::api::paginate directly, improving module boundaries.
Prompt for AI Agents
Please address the comments from this code review:
## Overall Comments
- Move the in-function Regex constructors in format_comment_diff and locale_is_utf8 into static LazyLock/once_cell fields to avoid compiling the regex on every call.
- Consider merging or removing the placeholder html module (with only collapse_details re-exports) to keep the crate structure minimal.
- Centralize the paginate function import so reviews.rs doesn’t have to reference crate::api::paginate directly, improving module boundaries.

## Individual Comments

### Comment 1
<location> `src/main.rs:13` </location>
<code_context>
-async fn main() -> Result<(), VkError> {
+async fn main() -> Result<(), vk::VkError> {
     env_logger::init();
     let cli = Cli::parse();
     let mut global = GlobalArgs::load_from_iter(std::env::args_os().take(1))?;
</code_context>

<issue_to_address>
GlobalArgs is loaded from only the first CLI argument, which may not be robust.

Currently, only the binary name is passed to GlobalArgs::load_from_iter, so global flags may not be parsed. Consider using all arguments or another method to ensure proper flag parsing.
</issue_to_address>

### Comment 2
<location> `src/references.rs:43` </location>
<code_context>
-    clippy::result_large_err,
-    reason = "VkError has many variants but they are small"
-)]
-fn parse_reference(
-    input: &str,
-    default_repo: Option<&str>,
</code_context>

<issue_to_address>
Consider extracting the GitHub URL parsing logic in `parse_reference` into a helper function to flatten control flow and clarify each phase.

You can collapse the big nested `if let` in `parse_reference` into two clear phases—first try parsing a GitHub URL, then fall back to a bare number—by extracting a small helper. This both flattens the logic and keeps all existing behavior:

```rust
fn parse_github_ref(
    input: &str,
    resource_type: ResourceType,
) -> Result<(RepoInfo, u64), VkError> {
    let url = url::Url::parse(input).map_err(|_| VkError::InvalidRef)?;
    if url.host_str() != Some("github.com") {
        return Err(VkError::InvalidRef);
    }

    let segments: Vec<_> = url
        .path_segments()
        .ok_or(VkError::InvalidRef)?
        .collect();

    if segments.len() < 4 {
        return Err(VkError::InvalidRef);
    }

    let owner = segments[0].to_owned();
    let name = segments[1]
        .strip_suffix(".git")
        .unwrap_or(segments[1])
        .to_owned();

    let seg = segments[2];
    let allowed = resource_type.allowed_segments();
    if !allowed.contains(&seg) {
        return Err(VkError::WrongResourceType {
            expected: allowed,
            found: seg.to_string(),
        });
    }

    let number = segments[3].parse().map_err(|_| VkError::InvalidRef)?;
    Ok((RepoInfo { owner, name }, number))
}

fn parse_reference(
    input: &str,
    default_repo: Option<&str>,
    resource_type: ResourceType,
) -> Result<(RepoInfo, u64), VkError> {
    // 1) Try URL form
    match parse_github_ref(input, resource_type) {
        Ok(res) => return Ok(res),
        Err(VkError::InvalidRef) => { /* fall through */ }
        Err(e) => return Err(e),
    }

    // 2) Try numeric fallback
    let number = input.parse::<u64>().map_err(|_| VkError::InvalidRef)?;
    let repo = default_repo
        .and_then(repo_from_str)
        .or_else(repo_from_fetch_head)
        .ok_or(VkError::RepoNotFound)?;
    Ok((repo, number))
}
```

This keeps all the same tests green but removes the deep nesting and makes each step’s responsibility crystal-clear.
</issue_to_address>

Sourcery is free for open source - if you like our reviews please consider sharing them ✨
Help me be more useful! Please click 👍 or 👎 on each comment and I'll use the feedback to improve your reviews.

Comment thread src/main.rs
Comment on lines 13 to 14
let cli = Cli::parse();
let mut global = GlobalArgs::load_from_iter(std::env::args_os().take(1))?;
Contributor

@sourcery-ai sourcery-ai Bot Jul 30, 2025


issue (bug_risk): GlobalArgs is loaded from only the first CLI argument, which may not be robust.

Currently, only the binary name is passed to GlobalArgs::load_from_iter, so global flags may not be parsed. Use all arguments to ensure proper flag parsing.

Comment thread src/references.rs
clippy::result_large_err,
reason = "VkError has many variants but they are small"
)]
fn parse_reference(
Contributor

@sourcery-ai sourcery-ai Bot Jul 30, 2025


issue (complexity): Extract the GitHub URL parsing logic in parse_reference into a helper function to flatten control flow and clarify each phase.

You can collapse the big nested if let in parse_reference into two clear phases—first try parsing a GitHub URL, then fall back to a bare number—by extracting a small helper. This both flattens the logic and keeps all existing behavior:

fn parse_github_ref(
    input: &str,
    resource_type: ResourceType,
) -> Result<(RepoInfo, u64), VkError> {
    let url = url::Url::parse(input).map_err(|_| VkError::InvalidRef)?;
    if url.host_str() != Some("github.com") {
        return Err(VkError::InvalidRef);
    }

    let segments: Vec<_> = url
        .path_segments()
        .ok_or(VkError::InvalidRef)?
        .collect();

    if segments.len() < 4 {
        return Err(VkError::InvalidRef);
    }

    let owner = segments[0].to_owned();
    let name = segments[1]
        .strip_suffix(".git")
        .unwrap_or(segments[1])
        .to_owned();

    let seg = segments[2];
    let allowed = resource_type.allowed_segments();
    if !allowed.contains(&seg) {
        return Err(VkError::WrongResourceType {
            expected: allowed,
            found: seg.to_string(),
        });
    }

    let number = segments[3].parse().map_err(|_| VkError::InvalidRef)?;
    Ok((RepoInfo { owner, name }, number))
}

fn parse_reference(
    input: &str,
    default_repo: Option<&str>,
    resource_type: ResourceType,
) -> Result<(RepoInfo, u64), VkError> {
    // 1) Try URL form
    match parse_github_ref(input, resource_type) {
        Ok(res) => return Ok(res),
        Err(VkError::InvalidRef) => { /* fall through */ }
        Err(e) => return Err(e),
    }

    // 2) Try numeric fallback
    let number = input.parse::<u64>().map_err(|_| VkError::InvalidRef)?;
    let repo = default_repo
        .and_then(repo_from_str)
        .or_else(repo_from_fetch_head)
        .ok_or(VkError::RepoNotFound)?;
    Ok((repo, number))
}

This keeps all the same tests green but removes the deep nesting and makes each step’s responsibility crystal-clear.


@coderabbitai coderabbitai Bot left a comment


Actionable comments posted: 14

📜 Review details

Configuration used: CodeRabbit UI
Review profile: ASSERTIVE
Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 88b07f2 and 7af200b.

📒 Files selected for processing (10)
  • src/api.rs (1 hunks)
  • src/cli_args.rs (2 hunks)
  • src/commands.rs (1 hunks)
  • src/html.rs (1 hunks)
  • src/lib.rs (1 hunks)
  • src/main.rs (1 hunks)
  • src/models.rs (1 hunks)
  • src/printer.rs (1 hunks)
  • src/references.rs (1 hunks)
  • src/reviews.rs (2 hunks)
🧰 Additional context used
📓 Path-based instructions (1)
**/*.rs

📄 CodeRabbit Inference Engine (AGENTS.md)

**/*.rs: Every module must begin with a module level (//!) comment explaining the module's purpose and utility.
Document public APIs using Rustdoc comments (///) so documentation can be generated with cargo doc.
Place function attributes after doc comments.
Do not use return in single-line functions.
Use predicate functions for conditional criteria with more than two branches.
Prefer immutable data and avoid unnecessary mut bindings.
Handle errors with the Result type instead of panicking where feasible.
Prefer semantic error enums: Derive std::error::Error (via the thiserror crate) for any condition the caller might inspect, retry, or map to an HTTP status.
Use an opaque error only at the app boundary: Use eyre::Report for human-readable logs; these should not be exposed in public APIs.
Never export the opaque type from a library: Convert to domain enums at API boundaries, and to eyre only in the main main() entrypoint or top-level async task.
Clippy warnings MUST be disallowed.
Fix any warnings emitted during tests in the code itself rather than silencing them.
Where a function is too long, extract meaningfully named helper functions adhering to separation of concerns and CQRS.
Where a function has too many parameters, group related parameters in meaningfully named structs.
Where a function is returning a large error consider using Arc to reduce the amount of data returned.
Write unit and behavioural tests for new functionality. Run both before and after making any change.
Avoid unsafe code unless absolutely necessary and document any usage clearly.
Lints must not be silenced except as a last resort.
Lint rule suppressions must be tightly scoped and include a clear reason.
Prefer expect over allow.
Prefer .expect() over .unwrap().
Use concat!() to combine long string literals rather than escaping newlines with a backslash.

Files:

  • src/html.rs
  • src/cli_args.rs
  • src/reviews.rs
  • src/commands.rs
  • src/lib.rs
  • src/main.rs
  • src/references.rs
  • src/models.rs
  • src/printer.rs
  • src/api.rs

⚙️ CodeRabbit Configuration File

**/*.rs: * Seek to keep the cyclomatic complexity of functions no more than 12.

  • Adhere to single responsibility and CQRS

  • Place function attributes after doc comments.

  • Do not use return in single-line functions.

  • Move conditionals with >2 branches into a predicate function.

  • Avoid unsafe unless absolutely necessary.

  • Every module must begin with a //! doc comment that explains the module's purpose and utility.

  • Comments and docs must follow en-GB-oxendict (-ize / -our) spelling and grammar

  • Lints must not be silenced except as a last resort.

    • #[allow] is forbidden.
    • Only narrowly scoped #[expect(lint, reason = "...")] is allowed.
    • No lint groups, no blanket or file-wide suppression.
    • Include FIXME: with link if a fix is expected.
  • Use rstest fixtures for shared setup and to avoid repetition between tests.

  • Replace duplicated tests with #[rstest(...)] parameterised cases.

  • Prefer mockall for mocks/stubs.

  • Prefer .expect() over .unwrap()

  • Ensure that any API or behavioural changes are reflected in the documentation in docs/

  • Ensure that any completed roadmap steps are recorded in the appropriate roadmap in docs/

  • Files must not exceed 400 lines in length

    • Large modules must be decomposed
    • Long match statements or dispatch tables should be decomposed by domain and collocated with targets
    • Large blocks of inline data (e.g., test fixtures, constants or templates) must be moved to external files and inlined at compile-time or loaded at run-time.

Files:

  • src/html.rs
  • src/cli_args.rs
  • src/reviews.rs
  • src/commands.rs
  • src/lib.rs
  • src/main.rs
  • src/references.rs
  • src/models.rs
  • src/printer.rs
  • src/api.rs
🧬 Code Graph Analysis (3)
src/reviews.rs (2)
src/html.rs (1)
  • collapse_details (28-35)
src/api.rs (1)
  • paginate (263-279)
src/commands.rs (6)
src/api.rs (4)
  • build_graphql_client (190-201)
  • fetch_issue (302-318)
  • fetch_review_threads (341-373)
  • new (82-87)
src/printer.rs (4)
  • print_end_banner (178-180)
  • print_summary (174-176)
  • print_thread (142-144)
  • summarize_files (147-156)
src/references.rs (2)
  • parse_issue_reference (91-96)
  • parse_pr_reference (102-107)
src/reviews.rs (3)
  • fetch_reviews (87-93)
  • latest_reviews (95-115)
  • print_reviews (135-141)
src/cli_args.rs (1)
  • default (68-70)
src/main.rs (2)
  • load_with_reference_fallback (18-18)
  • load_with_reference_fallback (22-22)
src/main.rs (1)
src/commands.rs (3)
  • load_with_reference_fallback (70-85)
  • run_issue (86-103)
  • run_pr (19-48)
🔇 Additional comments (9)
src/reviews.rs (1)

9-12: LGTM! Import reorganisation aligns with modular structure.

The explicit imports from crate::api and crate::models modules and the qualified call to crate::api::paginate correctly reflect the new modular architecture.

Also applies to: 92-92

src/cli_args.rs (1)

1-128: Excellent adherence to coding guidelines.

This module demonstrates proper:

  • Module-level documentation explaining purpose
  • Correct use of #[expect] with clear reasoning instead of forbidden #[allow]
  • Comprehensive test coverage
  • Clean separation of CLI argument concerns
src/lib.rs (1)

1-23: Clean library entry point with well-organised public API.

The module-level documentation clearly explains its purpose, and the re-exports are logically grouped by source module, creating a coherent public interface.

src/commands.rs (1)

19-48: Command handlers demonstrate good separation of concerns.

Both run_pr and run_issue follow clean patterns:

  • Parse references early with proper error handling
  • Handle authentication gracefully with warnings
  • Use appropriate terminal locale detection
  • Delegate to specialised modules for API and printing logic

Also applies to: 86-103

src/main.rs (1)

1-26: Clean refactoring into library structure.

The simplified main function properly delegates all logic to the library modules, achieving good separation of concerns.

src/references.rs (1)

53-73: Good error handling for URL path validation.

The code properly validates URL segments and provides specific error messages for wrong resource types.

src/models.rs (1)

1-99: Well-structured data models for GraphQL responses.

The structs properly model the GraphQL response structure with appropriate serde attributes for field renaming.

src/printer.rs (1)

106-112: Good handling of author display and markdown rendering.

The code properly handles optional author information and uses the markdown skin for formatting.

src/api.rs (1)

347-371: Efficient handling of pagination for nested comments.

The implementation correctly handles pagination at both the thread and comment levels, ensuring all data is fetched.

Comment thread src/api.rs Outdated
Comment on lines +2 to +6
#![allow(
clippy::missing_errors_doc,
clippy::missing_panics_doc,
reason = "docs omitted for brevity"
)]

⚠️ Potential issue

Remove module-level #[allow] attributes.

Module-level lint suppressions violate the coding guidelines. Add documentation to public functions instead.

-#![allow(
-    clippy::missing_errors_doc,
-    clippy::missing_panics_doc,
-    reason = "docs omitted for brevity"
-)]
🤖 Prompt for AI Agents
In src/api.rs around lines 2 to 6, remove the module-level #[allow] attributes
that suppress clippy warnings. Instead, add appropriate documentation comments
to each public function to address missing error and panic documentation,
complying with coding guidelines that prohibit module-wide lint suppressions.
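The documentation the bot asks for takes the shape of a `# Errors` section on each fallible public function. A minimal, hypothetical illustration (`parse_ident` and `ParseError` are not from this crate):

```rust
#[derive(Debug, PartialEq)]
pub enum ParseError {
    Empty,
}

/// Parses a non-empty identifier from `input`.
///
/// # Errors
///
/// Returns `ParseError::Empty` if `input` is empty — exactly the
/// section `clippy::missing_errors_doc` wants present.
pub fn parse_ident(input: &str) -> Result<String, ParseError> {
    if input.is_empty() {
        return Err(ParseError::Empty);
    }
    Ok(input.to_owned())
}

fn main() {
    assert_eq!(parse_ident("abc"), Ok("abc".to_owned()));
    assert_eq!(parse_ident(""), Err(ParseError::Empty));
}
```

With every public `Result`-returning function documented this way, the module-level suppression becomes unnecessary rather than merely hidden.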

Comment thread src/api.rs Outdated
Comment on lines +94 to +100
let transcript = match transcript {
Some(p) => match std::fs::File::create(p) {
Ok(file) => Some(std::sync::Mutex::new(std::io::BufWriter::new(file))),
Err(e) => return Err(e),
},
None => None,
};

🧹 Nitpick (assertive)

Simplify the transcript file creation.

The current pattern returns early on error but can be expressed more concisely.

-        let transcript = match transcript {
-            Some(p) => match std::fs::File::create(p) {
-                Ok(file) => Some(std::sync::Mutex::new(std::io::BufWriter::new(file))),
-                Err(e) => return Err(e),
-            },
-            None => None,
-        };
+        let transcript = transcript
+            .map(|p| std::fs::File::create(p)
+                .map(|file| std::sync::Mutex::new(std::io::BufWriter::new(file))))
+            .transpose()?;
🤖 Prompt for AI Agents
In src/api.rs around lines 94 to 100, the error handling for creating the
transcript file uses nested matches and returns early on error, which can be
simplified. Refactor this code to use a more concise approach such as the `map`
and `map_err` combinators or the `?` operator to handle the Result, reducing
nesting and improving readability while preserving the error propagation
behavior.
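The `transpose` combinator the suggestion relies on flips `Option<Result<T, E>>` into `Result<Option<T>, E>`, which is then ready for `?`. A standalone illustration, with `open_stub` as a hypothetical stand-in for `std::fs::File::create`:

```rust
/// Stand-in for `std::fs::File::create`: fails on an empty path.
fn open_stub(path: &str) -> Result<String, String> {
    if path.is_empty() {
        Err("empty path".to_owned())
    } else {
        Ok(format!("handle:{path}"))
    }
}

fn main() {
    // `None` stays `Ok(None)`: no path means no transcript, not an error.
    let absent: Option<&str> = None;
    assert_eq!(absent.map(open_stub).transpose(), Ok(None));

    // `Some(Ok(..))` becomes `Ok(Some(..))`.
    let present = Some("a.log").map(open_stub).transpose();
    assert_eq!(present, Ok(Some("handle:a.log".to_owned())));

    // `Some(Err(..))` becomes `Err(..)`, which `?` would propagate.
    let failed = Some("").map(open_stub).transpose();
    assert_eq!(failed, Err("empty path".to_owned()));
}
```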

Comment thread src/api.rs
Comment on lines +109 to +166
pub async fn run_query<V, T>(&self, query: &str, variables: V) -> Result<T, VkError>
where
V: serde::Serialize,
T: DeserializeOwned,
{
let payload = json!({ "query": query, "variables": &variables });
let ctx = serde_json::to_string(&payload).unwrap_or_default();
let response = self
.client
.post(&self.endpoint)
.headers(self.headers.clone())
.json(&payload)
.send()
.await
.map_err(|e| VkError::RequestContext {
context: ctx.clone(),
source: e,
})?;
let body = response.text().await.map_err(|e| VkError::RequestContext {
context: ctx.clone(),
source: e,
})?;
if let Some(t) = &self.transcript {
use std::io::Write as _;
match t.lock() {
Ok(mut f) => {
if let Err(e) = writeln!(
f,
"{}",
serde_json::to_string(&json!({ "request": payload, "response": body }))
.unwrap_or_default(),
) {
warn!("failed to write transcript: {e}");
}
}
Err(_) => warn!("failed to lock transcript"),
}
}
let resp: GraphQlResponse<serde_json::Value> =
serde_json::from_str(&body).map_err(|e| {
let snippet = if body.len() > 500 {
let preview: String = body.chars().take(500).collect();
format!("{preview}...")
} else {
body.clone()
};
VkError::BadResponseSerde(format!("{e} | response body snippet:{snippet}"))
})?;
let resp_debug = format!("{resp:?}");
if let Some(errs) = resp.errors {
return Err(handle_graphql_errors(errs));
}
let value = resp.data.ok_or_else(|| {
VkError::BadResponse(format!("Missing data in response: {resp_debug}"))
})?;
serde_json::from_value(value).map_err(|e| VkError::BadResponseSerde(e.to_string()))
}
}

🛠️ Refactor suggestion

Add timeout configuration for GraphQL requests.

Long-running queries could hang indefinitely. Add configurable timeouts to the HTTP client.

In the GraphQLClient::new or with_endpoint methods, configure the client with timeouts:

use std::time::Duration;

// In with_endpoint method:
let client = reqwest::Client::builder()
    .timeout(Duration::from_secs(30))
    .connect_timeout(Duration::from_secs(10))
    .build()
    .map_err(|e| std::io::Error::new(std::io::ErrorKind::Other, e))?;
🤖 Prompt for AI Agents
In src/api.rs between lines 109 and 166, the HTTP client used for GraphQL
requests lacks timeout settings, which can cause indefinite hangs on
long-running queries. To fix this, update the GraphQLClient::new or
with_endpoint method to build the reqwest client with timeout configurations by
setting a request timeout (e.g., 30 seconds) and a connection timeout (e.g., 10
seconds) using
reqwest::Client::builder().timeout(Duration::from_secs(30)).connect_timeout(Duration::from_secs(10)).build().
Handle any build errors appropriately.

Comment thread src/commands.rs Outdated
Comment on lines +2 to +6
#![allow(
clippy::missing_errors_doc,
clippy::missing_panics_doc,
reason = "docs omitted"
)]

⚠️ Potential issue

Replace forbidden #[allow] with narrowly scoped #[expect] attributes.

The coding guidelines explicitly forbid #[allow] and require narrowly scoped #[expect(lint, reason = "...")] instead.

🤖 Prompt for AI Agents
In src/commands.rs at lines 2 to 6, replace the broad #[allow(...)] attribute
with narrowly scoped #[expect(..., reason = "...")] attributes for each lint.
Change each clippy lint from #[allow(clippy::lint_name)] to
#[expect(clippy::lint_name, reason = "docs omitted")] to comply with coding
guidelines that forbid #[allow] and require #[expect] with specific reasons.
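For a concrete picture of the requested change, a narrowly scoped `#[expect]` on a single item might look like this (`#[expect]` stabilised in Rust 1.81; `checked_div` is a hypothetical function, not one from this crate):

```rust
// `#[expect]` documents *why* the lint is tolerated and, unlike
// `#[allow]`, warns if the expected lint ever stops firing.
#[expect(clippy::missing_errors_doc, reason = "error case is evident from the name")]
pub fn checked_div(a: i32, b: i32) -> Result<i32, String> {
    if b == 0 {
        return Err("division by zero".to_owned());
    }
    Ok(a / b)
}

fn main() {
    assert_eq!(checked_div(10, 2), Ok(5));
    assert!(checked_div(1, 0).is_err());
}
```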

Comment thread src/commands.rs
Comment on lines +51 to +58
pub fn locale_is_utf8() -> bool {
let re: regex::Regex = regex::Regex::new("(?i)\\bUTF-?8\\b").expect("valid regex");
env::var("LC_ALL")
.or_else(|_| env::var("LC_CTYPE"))
.or_else(|_| env::var("LANG"))
.map(|v| re.is_match(&v))
.unwrap_or(false)
}

🛠️ Refactor suggestion

Optimise regex compilation for better performance.

The regex is compiled on every function call. Move it to a static or lazy initialisation for better performance:

+use std::sync::LazyLock;
+
+static UTF8_REGEX: LazyLock<regex::Regex> = LazyLock::new(|| {
+    regex::Regex::new("(?i)\\bUTF-?8\\b").expect("valid regex")
+});
+
 #[must_use]
 pub fn locale_is_utf8() -> bool {
-    let re: regex::Regex = regex::Regex::new("(?i)\\bUTF-?8\\b").expect("valid regex");
     env::var("LC_ALL")
         .or_else(|_| env::var("LC_CTYPE"))
         .or_else(|_| env::var("LANG"))
-        .map(|v| re.is_match(&v))
+        .map(|v| UTF8_REGEX.is_match(&v))
         .unwrap_or(false)
 }
🤖 Prompt for AI Agents
In src/commands.rs around lines 51 to 58, the regex is compiled every time
locale_is_utf8() is called, which is inefficient. Fix this by moving the regex
compilation to a static or lazy initialization using a suitable crate like
once_cell or lazy_static, so the regex is compiled only once and reused on
subsequent calls.

Comment thread src/printer.rs Outdated
Comment on lines +2 to +6
#![allow(
clippy::missing_errors_doc,
clippy::missing_panics_doc,
reason = "docs omitted"
)]

⚠️ Potential issue

Remove module-level #[allow] attributes.

Module-level lint suppressions violate the coding guidelines. Add proper documentation to public functions instead.

-#![allow(
-    clippy::missing_errors_doc,
-    clippy::missing_panics_doc,
-    reason = "docs omitted"
-)]
🤖 Prompt for AI Agents
In src/printer.rs around lines 2 to 6, remove the module-level #[allow]
attributes that suppress clippy warnings for missing error and panic
documentation. Instead, add appropriate documentation comments to each public
function to explain their error handling and panic conditions, complying with
the coding guidelines.

Comment thread src/printer.rs Outdated
Comment on lines +17 to +50
fn parse_diff_lines<'a, I>(
lines: I,
mut old_line: Option<i32>,
mut new_line: Option<i32>,
) -> Vec<(Option<i32>, Option<i32>, String)>
where
I: Iterator<Item = &'a str>,
{
let mut parsed = Vec::new();
for l in lines {
if l.starts_with('+') {
parsed.push((None, new_line, l.to_owned()));
if let Some(ref mut n) = new_line {
*n += 1;
}
} else if l.starts_with('-') {
parsed.push((old_line, None, l.to_owned()));
if let Some(ref mut o) = old_line {
*o += 1;
}
} else {
let text = l.strip_prefix(' ').unwrap_or(l);
parsed.push((old_line, new_line, format!(" {text}")));
if let Some(ref mut o) = old_line {
*o += 1;
}
if let Some(ref mut n) = new_line {
*n += 1;
}
}
}
parsed
}


🛠️ Refactor suggestion

Extract the nested parse_diff_lines function.

The nested function is complex and would benefit from being a separate private function at module level for better testability and readability.

-    fn parse_diff_lines<'a, I>(
+fn parse_diff_lines<'a, I>(
     lines: I,
     mut old_line: Option<i32>,
     mut new_line: Option<i32>,
 ) -> Vec<(Option<i32>, Option<i32>, String)>
 where
     I: Iterator<Item = &'a str>,
 {
     // ... implementation ...
 }

 pub fn format_comment_diff(comment: &ReviewComment) -> Result<String, std::fmt::Error> {
     // ... use parse_diff_lines here ...

Committable suggestion skipped: line range outside the PR's diff.

🤖 Prompt for AI Agents
In src/printer.rs between lines 17 and 50, the parse_diff_lines function is
currently nested and complex, which reduces readability and testability. Extract
this function from its current nested scope and move it to a private
module-level function. Ensure it retains the same signature and logic, and
update any call sites accordingly to use the new module-level function.

Comment thread src/references.rs Outdated
Comment on lines +2 to +6
#![allow(
clippy::missing_errors_doc,
clippy::missing_panics_doc,
reason = "docs omitted"
)]

⚠️ Potential issue

Remove module-level #[allow] attributes.

Module-level lint suppressions violate the coding guidelines. Remove these and add doc comments to public functions instead.

-#![allow(
-    clippy::missing_errors_doc,
-    clippy::missing_panics_doc,
-    reason = "docs omitted"
-)]

Add proper documentation to public functions:

/// Parses an issue reference from various formats.
/// 
/// # Errors
/// 
/// Returns `VkError::InvalidRef` if the reference cannot be parsed.
/// Returns `VkError::RepoNotFound` if repository cannot be determined.
pub fn parse_issue_reference(
🤖 Prompt for AI Agents
In src/references.rs around lines 2 to 6, remove the module-level #[allow]
attributes that suppress clippy lint warnings, as these violate coding
guidelines. Instead, add proper doc comments to each public function, including
descriptions and error documentation, for example, documenting the
parse_issue_reference function with details on possible errors like
VkError::InvalidRef and VkError::RepoNotFound.

Comment thread src/references.rs
Comment on lines +109 to +121
pub fn repo_from_fetch_head() -> Option<RepoInfo> {
let path = Path::new(".git/FETCH_HEAD");
let content = std::fs::read_to_string(path).ok()?;
for line in content.lines() {
if let Some(caps) = GITHUB_RE.captures(line) {
let owner = caps.name("owner")?.as_str().to_owned();
let name_str = caps.name("repo")?.as_str();
let name = name_str.strip_suffix(".git").unwrap_or(name_str).to_owned();
return Some(RepoInfo { owner, name });
}
}
None
}

🛠️ Refactor suggestion

Document the failure cases of .git/FETCH_HEAD parsing.

The function correctly returns None on failure, but the conditions under which it can fail (e.g. not running inside a git repository) should be documented.

Add documentation:

/// Extracts repository information from the local git FETCH_HEAD file.
/// 
/// Returns `None` if:
/// - The `.git/FETCH_HEAD` file doesn't exist
/// - The file cannot be read
/// - No GitHub repository URL is found in the file
pub fn repo_from_fetch_head() -> Option<RepoInfo> {
🤖 Prompt for AI Agents
In src/references.rs around lines 109 to 121, add a doc comment above the
repo_from_fetch_head function to explain its behavior and failure cases.
Document that it extracts repository info from the local .git/FETCH_HEAD file
and returns None if the file doesn't exist, cannot be read, or no GitHub repo
URL is found. This improves code clarity and informs users about when the
function might fail.

Comment thread src/reviews.rs Outdated
Comment on lines +2 to +7
#![allow(
clippy::missing_errors_doc,
clippy::missing_panics_doc,
clippy::must_use_candidate,
reason = "docs omitted"
)]

⚠️ Potential issue

Replace forbidden #[allow] with narrowly scoped #[expect] attributes.

The coding guidelines explicitly forbid #[allow] and require narrowly scoped #[expect(lint, reason = "...")] instead.

🤖 Prompt for AI Agents
In src/reviews.rs lines 2 to 7, replace the broad #[allow(...)] attribute with
narrowly scoped #[expect(..., reason = "...")] attributes for each lint. Remove
the #[allow] block and add individual #[expect] attributes directly above the
specific items or code blocks they apply to, specifying the lint name and a
clear reason for each expectation according to the coding guidelines.

@leynos
Owner Author

leynos commented Jul 31, 2025

Stale

@leynos leynos closed this Jul 31, 2025