adjust error handling for zero edges and handling of issue column #50

Merged
tonywu1999 merged 1 commit into devel from simple-fixes
Jul 25, 2025

Conversation

@tonywu1999
Contributor

@tonywu1999 tonywu1999 commented Jul 25, 2025

Summary by CodeRabbit

  • Bug Fixes
    • Improved handling of missing columns to prevent errors when filtering data.
    • Stricter error handling when no edges remain after filtering, ensuring clearer feedback for invalid input.

PR Type

Bug fix


Description

  • Check for issue column before filter

  • Filter only NA issues when present

  • Stop with error if no edges remain
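
The two fixes can be condensed into a minimal, self-contained sketch; the toy `input` and `edges` frames below are illustrative stand-ins, not data from the PR:

```r
# Toy stand-ins for the groupComparison result and the INDRA edge table
input <- data.frame(Protein = c("P04637", "P38398"), issue = c(NA, "ambiguous"))
edges <- data.frame(source = "P04637", target = "P38398", paperCount = 3)

# Fix 1: only filter on `issue` when the column actually exists
if ("issue" %in% colnames(input)) {
    input <- input[is.na(input$issue), ]
}

# Fix 2: stop with an error (previously a warning) when no edges remain
if (nrow(edges) == 0) {
    stop("No edges remain after applying filters. Consider relaxing filters")
}
```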


File Walkthrough

Relevant files

Error handling: R/utils_getSubnetworkFromIndra.R (+4/-2)
  • Wrap the input$issue filter in a column-existence check
  • Change the zero-edge warning to a stopping error

@coderabbitai

coderabbitai Bot commented Jul 25, 2025

Walkthrough

The edge filtering logic in the utility script was updated to check for the existence of the "issue" column before filtering out rows with missing values. Additionally, the script now raises an error rather than a warning if no edges remain after filtering, enforcing stricter failure handling.

Changes

File(s): R/utils_getSubnetworkFromIndra.R
Change summary: Added a conditional check for the "issue" column before filtering; changed the zero-edge warning to an error on an empty result.

Estimated code review effort

🎯 1 (Trivial) | ⏱️ ~3 minutes

Poem

A hop and a skip through the code we go,
Now errors will stop us, not just say hello.
If "issue" is missing, we check with care,
No silent warnings—rabbits play fair!
🐇✨



📜 Recent review details

Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR, between commits 4fdc717 and 6678c78.

📒 Files selected for processing (1)
  • R/utils_getSubnetworkFromIndra.R (2 hunks)
🔇 Additional comments (2)
R/utils_getSubnetworkFromIndra.R (2)

119-121: LGTM! Good defensive programming practice.

The conditional check prevents runtime errors when the "issue" column is absent from the input data, while preserving the existing filtering behavior when the column exists.


300-300: Good change for data integrity, but note this is a breaking change.

Converting from warning to error when no edges remain is appropriate as it prevents downstream processing of empty datasets and forces users to address filtering issues. The error message provides clear guidance.

Note that this is a breaking change - existing code that previously continued with warnings will now fail.
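
Because the warning has become an error, any caller that previously proceeded past an empty result will now abort. A hedged sketch of caller-side handling with tryCatch, using a local stand-in for the internal function (since .filterEdgesDataFrame is not exported):

```r
# Local stand-in mirroring the new strict behavior of .filterEdgesDataFrame
filter_edges <- function(edges, paper_count_cutoff) {
    edges <- edges[which(edges$paperCount >= paper_count_cutoff), ]
    if (nrow(edges) == 0) {
        stop("No edges remain after applying filters. Consider relaxing filters")
    }
    edges
}

edges <- data.frame(source = "A", target = "B", paperCount = 1)

# Callers that relied on the old warning-and-continue behavior can restore a
# soft failure by catching the error and substituting NULL (or an empty frame).
result <- tryCatch(
    filter_edges(edges, paper_count_cutoff = 5),
    error = function(e) NULL
)
```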

@github-actions
Contributor

PR Reviewer Guide 🔍

Here are some key observations to aid the review process:

⏱️ Estimated effort to review: 2 🔵🔵⚪⚪⚪
🧪 No relevant tests
🔒 No security concerns identified
⚡ Recommended focus areas for review

Filter Behavior

Ensure that when the issue column is present and all rows are filtered out, the function handles the empty input appropriately or surfaces a clear error.

if ("issue" %in% colnames(input)) {
    input <- input[is.na(input$issue), ]
}
Error Handling Consistency

Verify that converting the warning to a stop in .filterEdgesDataFrame aligns with overall error-handling strategy and that callers properly catch or handle this error.

if (nrow(edges) == 0) {
    stop("No edges remain after applying filters. Consider relaxing filters")
}
Test Coverage

Add tests for cases with and without the issue column as well as for empty edge sets to cover the new filtering and error branches.

    if ("issue" %in% colnames(input)) {
        input <- input[is.na(input$issue), ]
    }

    # Combine filtered data with exempt proteins and remove duplicates
    if (!is.null(exempt_proteins) && nrow(exempt_proteins) > 0) {
        combined_input <- rbind(exempt_proteins, input)
        # Remove duplicates based on Protein column, keeping first occurrence
        input <- combined_input[!duplicated(combined_input$Protein), ]
    }

    input$Protein <- as.character(input$Protein)
    return(input)
}
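
For the test coverage the reviewer asks for, a testthat sketch covering both branches might look like the following; the enclosing function's name is not visible in this excerpt, so the guarded filter logic is copied into a local helper rather than called through the package:

```r
library(testthat)

# Local copy of the guarded issue-filter logic from this PR
filter_issues <- function(input) {
    if ("issue" %in% colnames(input)) {
        input <- input[is.na(input$issue), ]
    }
    input
}

test_that("rows with a non-NA issue are dropped when the column exists", {
    input <- data.frame(Protein = c("P1", "P2"), issue = c(NA, "flagged"))
    expect_equal(filter_issues(input)$Protein, "P1")
})

test_that("input is untouched when the issue column is absent", {
    input <- data.frame(Protein = c("P1", "P2"))
    expect_equal(nrow(filter_issues(input)), 2)
})
```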
#' Add additional metadata to an edge
#' @param edge object representation of an INDRA statement
#' @param input filtered groupComparison result
#' @param source_namespace namespace of the source for evidence URL
#' @return edge with additional metadata
#' @keywords internal
#' @noRd
.addAdditionalMetadataToIndraEdge <- function(edge, input, source_namespace = "@HGNC") {
    edge$evidence_list <- paste(
        "https://db.indra.bio/statements/from_agents?subject=",
        edge$source_id, source_namespace, "&object=",
        edge$target_id, "@HGNC&format=html",
        sep = ""
    )

    # Convert back to uniprot IDs
    if (source_namespace == "@HGNC") {
        matched_rows_source <- input[which(input$HgncId == edge$source_id), ]
    }
    matched_rows_target <- input[which(input$HgncId == edge$target_id), ]

    if ((source_namespace == "@HGNC" && nrow(matched_rows_source) != 1) || nrow(matched_rows_target) != 1) {
        stop(paste0(
            "INDRA Exception: Unexpected number of matches for the following HGNC IDs in the input data: ", 
            edge$source_id, 
            " or ", 
            edge$target_id, 
            ". Each ID must match exactly one entry in the input data, but 0 or multiple matches were found. Please check the input data for duplicates or missing entries."
        ))
    } 

    if (source_namespace == "@HGNC") {
        edge$source_uniprot_id <- matched_rows_source$Protein
    } else {
        edge$source_uniprot_id <- edge$source_id
    }
    edge$target_uniprot_id <- matched_rows_target$Protein

    return(edge)
}


#' Collapse duplicate INDRA statements into a mapping of edge to metadata
#' @param res INDRA response
#' @param input filtered groupComparison result
#' @importFrom r2r hashmap keys
#' @return processed edge to metadata mapping
#' @keywords internal
#' @noRd
.collapseDuplicateEdgesIntoEdgeToMetadataMapping <- function(res, input) {
    edgeToMetadataMapping <- hashmap()

    for (edge in res) {
        key <- paste(edge$source_id, edge$target_id, sep = "_")
        if (key %in% keys(edgeToMetadataMapping)) {
            edgeToMetadataMapping[[key]]$data$evidence_count <-
                edgeToMetadataMapping[[key]]$data$evidence_count +
                edge$data$evidence_count
            edgeToMetadataMapping[[key]]$data$stmt_type <- unique(c(
                edgeToMetadataMapping[[key]]$data$stmt_type,
                edge$data$stmt_type))
            edgeToMetadataMapping[[key]]$data$paper_count <-
                edgeToMetadataMapping[[key]]$data$paper_count + 1
        } else {
            edge <- .addAdditionalMetadataToIndraEdge(edge, input)
            edge$data$paper_count <- 1
            edgeToMetadataMapping[[key]] <- edge
        }
    }

    for (key in keys(edgeToMetadataMapping)) {
        edgeToMetadataMapping[[key]]$data$stmt_type <-
            paste(unique(edgeToMetadataMapping[[key]]$data$stmt_type), 
                  collapse = ", ")
    }

    return(edgeToMetadataMapping)
}
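
The mapping above leans on the r2r package's hashmap. A tiny demo of the pieces used here (hashmap(), keys(), and [[-style access with compound assignment), with an illustrative key and fields mirroring the loop:

```r
library(r2r)

# String keys map to arbitrary R objects; compound assignment through [[ is
# what lets the loop above accumulate counts and statement types in place.
m <- hashmap()
m[["A_B"]] <- list(evidence_count = 2, stmt_type = "Phosphorylation")
m[["A_B"]]$evidence_count <- m[["A_B"]]$evidence_count + 3
m[["A_B"]]$stmt_type <- unique(c(m[["A_B"]]$stmt_type, "Activation"))
```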

#' Construct edges data.frame from INDRA response
#' @param res INDRA response
#' @param input filtered groupComparison result
#' @param protein_level_data output of dataProcess
#' @importFrom r2r query keys
#' @return edge data.frame
#' @keywords internal
#' @noRd
.constructEdgesDataFrame <- function(res, input, protein_level_data) {
    res <- .collapseDuplicateEdgesIntoEdgeToMetadataMapping(res, input)
    edges <- data.frame(
        source = vapply(keys(res), function(x) {
            query(res, x)$source_uniprot_id
        }, ""),
        target = vapply(keys(res), function(x) {
            query(res, x)$target_uniprot_id
        }, ""),
        interaction = vapply(keys(res), function(x) {
            query(res, x)$data$stmt_type
        }, ""),
        evidenceCount = vapply(keys(res), function(x) {
            query(res, x)$data$evidence_count
        }, 1),
        paperCount = vapply(keys(res), function(x) {
            query(res, x)$data$paper_count
        }, 1),
        evidenceLink = vapply(keys(res), function(x) {
            query(res, x)$evidence_list
        }, ""),
        sourceCounts = vapply(keys(res), function(x) {
            query(res, x)$data$source_counts
        }, ""),
        stringsAsFactors = FALSE
    )
    # add correlation - maybe create a separate function
    if (!is.null(protein_level_data)) {
        protein_level_data <- protein_level_data[
            protein_level_data$Protein %in% edges$source | 
                protein_level_data$Protein %in% edges$target, ]
        correlations <- .getCorrelationMatrixFromProteinLevelData(protein_level_data)
        edges$correlation <- apply(edges, 1, function(edge) {
            if (edge["source"] %in% rownames(correlations) && edge["target"] %in% colnames(correlations)) {
                return(correlations[edge["source"], edge["target"]])
            } else {
                return(NA)
            }
        })
    }
    return(edges)
}
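
A side note on the vapply calls above: the third argument is a template that fixes the type and length of each result, so a mismatched field fails fast instead of silently coercing. A base-R illustration with a toy keyed list of records:

```r
# Mimic extracting one field per key from a keyed list of records
records <- list(a = list(n = 2), b = list(n = 5))
counts <- vapply(names(records), function(k) records[[k]]$n, 1)   # template: double
labels <- vapply(names(records), function(k) toupper(k), "")      # template: character
```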

#' Construct nodes data.frame from groupComparison output
#' @param input filtered groupComparison result
#' @param edges edges data frame
#' @return nodes data.frame
#' @keywords internal
#' @noRd
.constructNodesDataFrame <- function(input, edges) {
    nodes <- data.frame(
        id = input$Protein,
        logFC = input$log2FC,
        pvalue = input$adj.pvalue,
        hgncName = if ("HgncName" %in% colnames(input) && is.character(input$HgncName)) input$HgncName else NA,
        stringsAsFactors = FALSE
    )
    nodes <- nodes[which(nodes$id %in% edges$source | nodes$id %in% edges$target),]
    return(nodes)
}

#' Filter Edges Data Frame
#' @param edges response from INDRA
#' @param paper_count_cutoff cutoff for number of papers
#' @param correlation_cutoff if protein_level_abundance is not NULL, apply a 
#' cutoff for edges with correlation less than a specified cutoff.
#' @return filtered edges data frame
#' @keywords internal
#' @noRd
.filterEdgesDataFrame <- function(edges, 
                                  paper_count_cutoff,
                                  correlation_cutoff) {
    edges <- edges[which(edges$paperCount >= paper_count_cutoff), ]
    if ("correlation" %in% colnames(edges)) {
        edges <- edges[which(abs(edges$correlation) >= correlation_cutoff), ]
    }
    if (nrow(edges) == 0) {
        stop("No edges remain after applying filters. Consider relaxing filters")
    }
    return(edges)
}

@github-actions
Contributor

PR Code Suggestions ✨

Explore these optional code suggestions:

Category: General
Warn when issue column is missing

If the issue column is required for downstream logic, silently skipping the filter
when it’s missing may hide data issues. Consider emitting a warning when the column
is absent to alert users.

R/utils_getSubnetworkFromIndra.R [119-121]

 if ("issue" %in% colnames(input)) {
     input <- input[is.na(input$issue), ]
+} else {
+    warning("`issue` column not found; no issue filtering applied")
 }
Suggestion importance [1-10]: 5

Why: Emitting a warning when the issue column is absent improves user awareness of skipped filtering without altering core logic.

Impact: Low
Suppress call stack in error

By default, stop() shows the call stack, which can clutter error output; adding call. = FALSE makes the message cleaner.

R/utils_getSubnetworkFromIndra.R [300]

-stop("No edges remain after applying filters. Consider relaxing filters")
+stop("No edges remain after applying filters. Consider relaxing filters", call. = FALSE)
Suggestion importance [1-10]: 4

Why: Adding call. = FALSE cleans up the error message output, a minor stylistic enhancement that doesn't affect the function's behavior.

Impact: Low

@codecov-commenter

Codecov Report

Attention: Patch coverage is 66.66667% with 1 line in your changes missing coverage. Please review.

Project coverage is 65.29%. Comparing base (4fdc717) to head (6678c78).

Files with missing lines: R/utils_getSubnetworkFromIndra.R (patch 66.66%, 1 line missing ⚠️)
Additional details and impacted files
@@            Coverage Diff             @@
##            devel      #50      +/-   ##
==========================================
+ Coverage   65.23%   65.29%   +0.05%     
==========================================
  Files           7        7              
  Lines         607      608       +1     
==========================================
+ Hits          396      397       +1     
  Misses        211      211              

☔ View full report in Codecov by Sentry.
📢 Have feedback on the report? Share it here.


tonywu1999 merged commit d465bd0 into devel on Jul 25, 2025
5 checks passed
tonywu1999 deleted the simple-fixes branch on July 25, 2025 at 14:29
