Merge the dynamic update of the metadata table and the options panel within the Statistical Inference page into Development (#182)
Conversation
No actionable comments were generated in the recent review. 🎉

ℹ️ Recent review info
⚙️ Run configuration
Configuration used: Organization UI
Review profile: CHILL
Plan: Pro
Run ID:
📒 Files selected for processing (2)
📝 Walkthrough

The PR makes the modeling section header dynamic by comparison mode, restricts visualization choices in response-curve mode, removes the response-curve "Setup Metadata" submit button, and adds reset logic to auto-build a response-curve contrast matrix from conditions (enabling/disabling modeling and surfacing errors).

Changes
Sequence Diagram(s)

```mermaid
sequenceDiagram
    participant User
    participant UI as UI Layer
    participant Server as Server Logic
    participant Data as Condition Data
    User->>UI: Select comparison_mode = response_curve
    UI->>Server: notify comparison_mode change
    Server->>Server: Restrict visualization_plot_type choices
    Server->>UI: Update visualization dropdown options
    User->>UI: Click Reset (or proceed)
    UI->>Server: trigger reset/proceed observer
    Server->>Data: Read condition_list()
    Data-->>Server: Return condition data
    Server->>Server: Call build_response_curve_matrix(condition data)
    alt success
        Server->>Server: Set contrast$matrix
        Server->>Server: Enable modeling_start
        Server->>UI: Render matrix table with "Group Metadata" header
    else failure
        Server->>Server: Clear contrast$matrix
        Server->>Server: Disable modeling_start
        Server->>UI: Emit error notification (message)
    end
```
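The flow in the sequence diagram above can be sketched as a minimal Shiny observer. `build_response_curve_matrix()`, `condition_list()`, `contrast`, and the `modeling_start` id come from this PR's review excerpts; the exact wiring below is an assumption, not the PR's actual code.

```r
# Minimal sketch of the reset/auto-build observer shown in the diagram.
# Function and id names mirror the review excerpts; the wiring is assumed.
library(shiny)
library(shinyjs)

statmodel_reset_observer <- function(input, contrast, condition_list) {
  observeEvent(input$comparison_mode, {
    contrast$matrix <- NULL
    shinyjs::disable("modeling_start")          # keep Start off during reset
    if (identical(input$comparison_mode, "response_curve")) {
      tryCatch({
        rc_matrix <- build_response_curve_matrix(condition_list())
        if (is.null(rc_matrix) || nrow(rc_matrix) == 0) {
          stop("Unable to auto-build group metadata from the current conditions.")
        }
        contrast$matrix <- rc_matrix            # success: store and enable Start
        shinyjs::enable("modeling_start")
      }, error = function(e) {
        contrast$matrix <- NULL                 # failure: clear and surface error
        showNotification(conditionMessage(e), type = "error")
      })
    }
  })
}
```

The key design point the diagram captures is that the same observer handles both the success path (store matrix, enable Start) and the failure path (clear matrix, disable Start, notify).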
Estimated code review effort: 🎯 3 (Moderate) | ⏱️ ~20 minutes

Possibly related PRs
Suggested reviewers
Poem
🚥 Pre-merge checks | ✅ 2 | ❌ 1

❌ Failed checks (1 warning)
✅ Passed checks (2 passed)
Actionable comments posted: 1
🧹 Nitpick comments (1)
tests/testthat/test-statmodel-ui-options-contrasts.R (1)
73-84: This test doesn't validate runtime conditionality yet.

Right now it only checks that both text blocks exist in the HTML. Since `conditionalPanel` renders both branches in markup, this can pass even if conditions are wired incorrectly. Please assert the expected `data-display-if` condition strings too.

✅ Suggested test hardening
```diff
 test_that("modeling section shows conditional headings in UI", {
   ui <- MSstatsShiny::statmodelUI("statmodel")
   ui_html <- htmltools::renderTags(ui)$html
+  ns <- function(id) paste0("statmodel-", id)

   expect_true(grepl("Dose response analysis", ui_html),
               info = "Dose response heading should be in conditional panel")
   expect_true(grepl("Group comparison", ui_html),
               info = "Group comparison heading should be in conditional panel")
   expect_true(grepl("configure the mapping", ui_html),
               info = "Dose response description should be present")
   expect_true(grepl("add a comparison matrix", ui_html),
               info = "Group comparison description should be present")
+
+  expect_true(
+    grepl(
+      paste0("input['", ns(NAMESPACE_STATMODEL$comparison_mode), "'] == '",
+             CONSTANTS_STATMODEL$comparison_mode_response_curve, "'"),
+      ui_html,
+      fixed = TRUE
+    ),
+    info = "Dose-response conditional should be bound to comparison_mode"
+  )
+  expect_true(
+    grepl(
+      paste0("input['", ns(NAMESPACE_STATMODEL$comparison_mode), "'] != '",
+             CONSTANTS_STATMODEL$comparison_mode_response_curve, "'"),
+      ui_html,
+      fixed = TRUE
+    ),
+    info = "Group-comparison conditional should be bound to comparison_mode"
+  )
 })
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@tests/testthat/test-statmodel-ui-options-contrasts.R` around lines 73 - 84, The test currently only checks for presence of text in the rendered HTML but doesn’t assert the conditional expressions, so it can pass even if conditions are wrong; update the test for statmodelUI by locating the conditionalPanel outputs in the rendered HTML (from MSstatsShiny::statmodelUI("statmodel") / ui_html) and assert that the expected data-display-if (or equivalent attribute used by conditionalPanel) strings are present for the "Dose response analysis" and "Group comparison" blocks (e.g., check that the element wrapping the dose response text contains the specific condition string and likewise for the group comparison), ensuring you reference the conditionalPanel outputs rather than only the visible text.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@R/module-statmodel-server.R`:
- Around line 91-100: The code currently enables
NAMESPACE_STATMODEL$modeling_start immediately after assigning contrast$matrix
from build_response_curve_matrix(condition_list()), but that function may return
an empty or invalid matrix without throwing; update the tryCatch block around
build_response_curve_matrix so that after assigning contrast$matrix you validate
it (e.g., check it's a matrix/data.frame and has nrow(contrast$matrix) > 0 and
ncol(...) > 0 and contains no all-NA rows/cols) and only call
enable(NAMESPACE_STATMODEL$modeling_start) when the matrix passes validation; if
validation fails showNotification with a clear error and do not enable the Start
button.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Organization UI
Review profile: CHILL
Plan: Pro
Run ID: 9a26b816-dd11-4a06-90d1-88406660284a
📒 Files selected for processing (4)
- R/module-statmodel-server.R
- R/statmodel-ui-comparisons.R
- R/statmodel-ui-options-modeling.R
- tests/testthat/test-statmodel-ui-options-contrasts.R
💤 Files with no reviewable changes (1)
- R/statmodel-ui-comparisons.R
♻️ Duplicate comments (1)
R/module-statmodel-server.R (1)
86-106: ⚠️ Potential issue | 🟠 Major

Keep Start disabled until the rebuilt response metadata is actually usable.

This observer clears `contrast$matrix` on every mode/proceed reset, but it only disables `modeling_start` in the error branch. That leaves Start clickable after switching away from response-curve mode, and the current `nrow()` guard still accepts partially parsed metadata from `build_response_curve_matrix()`. Please disable first, then re-enable only after confirming the auto-built frame contains at least one parsed measurement column with non-NA treatment values.

🛠️ Suggested safeguard
```diff
 observeEvent(c(input[[NAMESPACE_STATMODEL$comparison_mode]], loadpage_input()$proceed1), {
+  disable(NAMESPACE_STATMODEL$modeling_start)
   contrast$matrix = NULL
   comp_list$dList = NULL
   significant$result = NULL

   # Auto-build response curve metadata when dose response mode is selected
   if (isTRUE(input[[NAMESPACE_STATMODEL$comparison_mode]] == CONSTANTS_STATMODEL$comparison_mode_response_curve)) {
     tryCatch({
       rc_matrix <- build_response_curve_matrix(condition_list())
       if (is.null(rc_matrix) || nrow(rc_matrix) == 0) {
         stop("Unable to auto-build group metadata from the current conditions.")
       }
+      value_cols <- grep("_value$", names(rc_matrix), value = TRUE)
+      treatment_rows <- !toupper(rc_matrix$GROUP) %in% c("DMSO", "CONTROL", "VEHICLE")
+      has_values <- length(value_cols) > 0 &&
+        any(vapply(
+          rc_matrix[treatment_rows, value_cols, drop = FALSE],
+          function(col) any(!is.na(col)),
+          logical(1)
+        ))
+      if (!has_values) {
+        stop("Unable to auto-build group metadata from the current conditions.")
+      }
       contrast$matrix <- rc_matrix
       enable(NAMESPACE_STATMODEL$modeling_start)
     }, error = function(e) {
       contrast$matrix <- NULL
       disable(NAMESPACE_STATMODEL$modeling_start)
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@R/module-statmodel-server.R` around lines 86 - 106, The observer currently clears contrast$matrix but only disables modeling_start in the error branch; change it so modeling_start is disabled immediately when the observer runs (before attempting to auto-build), and only re-enable it after verifying the auto-built rc_matrix from build_response_curve_matrix(condition_list()) is non-NULL, has nrow(rc_matrix) > 0 and contains at least one parsed measurement column with non-NA treatment values; if any of these checks fail set contrast$matrix <- NULL, keep modeling_start disabled, and showNotification with the error/message. Locate this logic in the observeEvent handling input[[NAMESPACE_STATMODEL$comparison_mode]] (and loadpage_input()$proceed1) and update the tryCatch flow around build_response_curve_matrix and the enable/disable calls so that enable(NAMESPACE_STATMODEL$modeling_start) is called only after the validated checks pass.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Organization UI
Review profile: CHILL
Plan: Pro
Run ID: 0eabaa54-2696-4b9b-aeb1-8cee7dec6b53
📒 Files selected for processing (2)
- R/module-statmodel-server.R
- tests/testthat/test-statmodel-ui-options-contrasts.R
🚧 Files skipped from review as they are similar to previous changes (1)
- tests/testthat/test-statmodel-ui-options-contrasts.R
Response on coderabbitai review: Start is disabled at the top of the observer and only re-enabled after validation confirms the matrix has parsed measurement columns with non-NA treatment values.
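As a self-contained illustration of the validation this response describes, a helper along these lines could gate the Start button. The `_value` column suffix and the DMSO/CONTROL/VEHICLE group labels are assumptions carried over from the review diffs, and `validate_response_curve_matrix` is a hypothetical name, not a function in the package.

```r
# Hypothetical validator for the auto-built response-curve metadata.
# Assumes (per the review diffs) that measurement columns end in "_value"
# and that control rows are labeled DMSO/CONTROL/VEHICLE in a GROUP column.
validate_response_curve_matrix <- function(rc_matrix) {
  if (!is.data.frame(rc_matrix) || nrow(rc_matrix) == 0) return(FALSE)
  value_cols <- grep("_value$", names(rc_matrix), value = TRUE)
  if (length(value_cols) == 0) return(FALSE)
  treatment_rows <- !toupper(rc_matrix$GROUP) %in% c("DMSO", "CONTROL", "VEHICLE")
  # Usable only if at least one treatment row carries a non-NA measurement
  any(!is.na(rc_matrix[treatment_rows, value_cols, drop = FALSE]))
}

# Example: one control row and one treatment row with a parsed value passes
df <- data.frame(GROUP = c("DMSO", "10uM"), dose_value = c(NA, 1.5))
validate_response_curve_matrix(df)  # TRUE
```

Factoring the checks into one predicate keeps the observer's `tryCatch` flow readable: enable Start only when the predicate returns `TRUE`, otherwise clear the matrix and notify.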
… dropdown in Statistical Inference tab
♻️ Duplicate comments (1)
R/module-statmodel-server.R (1)
89-109: ⚠️ Potential issue | 🟠 Major

Disable Start on reset first, then re-enable only after strict response-curve metadata validation.

At line 89, reset clears `contrast$matrix` but does not immediately disable Start. Also, line 99's validation is too permissive: non-empty matrices can still lack usable parsed treatment values, which can enable Start in a broken state.

Suggested safeguard
```diff
 observeEvent(c(input[[NAMESPACE_STATMODEL$comparison_mode]], loadpage_input()$proceed1), {
   contrast$matrix = NULL
   comp_list$dList = NULL
   significant$result = NULL
+  disable(NAMESPACE_STATMODEL$modeling_start)

   # Auto-build response curve metadata when dose response mode is selected
   if (isTRUE(input[[NAMESPACE_STATMODEL$comparison_mode]] == CONSTANTS_STATMODEL$comparison_mode_response_curve)) {
     tryCatch({
       rc_matrix <- build_response_curve_matrix(condition_list())
-      if (is.null(rc_matrix) || nrow(rc_matrix) == 0) {
+      value_cols <- grep("_value$", colnames(rc_matrix), value = TRUE)
+      treatment_rows <- !(toupper(rc_matrix$GROUP) %in% c("DMSO", "CONTROL", "VEHICLE"))
+      has_non_na_treatment_values <- length(value_cols) > 0 &&
+        any(rowSums(!is.na(rc_matrix[treatment_rows, value_cols, drop = FALSE])) > 0)
+
+      if (!is.data.frame(rc_matrix) || nrow(rc_matrix) == 0 || !has_non_na_treatment_values) {
         stop("Unable to auto-build group metadata from the current conditions.")
       }
       contrast$matrix <- rc_matrix
       enable(NAMESPACE_STATMODEL$modeling_start)
     }, error = function(e) {
       contrast$matrix <- NULL
       disable(NAMESPACE_STATMODEL$modeling_start)
       showNotification(conditionMessage(e), type = "error", duration = 6)
     })
   }
 })
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@R/module-statmodel-server.R` around lines 89 - 109, Reset currently leaves the Start button enabled; immediately call disable(NAMESPACE_STATMODEL$modeling_start) when you clear contrast$matrix in the observeEvent (the block handling input[[NAMESPACE_STATMODEL$comparison_mode]] and loadpage_input()$proceed1) so Start is always off during reset, then after calling build_response_curve_matrix(condition_list()) perform a strict validation of rc_matrix (e.g. check nrow>0 and that parsed treatment/dose columns are present and non-empty or call/implement a helper like validate_response_curve_matrix(rc_matrix) to ensure usable parsed treatment values); only if that strict validation passes set contrast$matrix <- rc_matrix and enable(NAMESPACE_STATMODEL$modeling_start), otherwise set contrast$matrix <- NULL, keep Start disabled and showNotification with the error message.
🧹 Nitpick comments (1)
tests/testthat/test-module-statmodel-ui.R (1)
15-21: Harden section-order assertions against false positives.

If one locator is missing, `regexpr()` can return `-1` and still make the relative ordering checks misleading. Assert presence first, then assert ordering.

Proposed test hardening
```diff
 pos_contrast <- regexpr("1\\. Define comparisons", ui_html)
 pos_modeling <- regexpr(NAMESPACE_STATMODEL$modeling_section_header, ui_html)
 pos_viz <- regexpr("3\\. Visualization", ui_html)
+
+expect_gt(pos_contrast, 0, info = "Contrast section anchor should be present")
+expect_gt(pos_modeling, 0, info = "Modeling section anchor should be present")
+expect_gt(pos_viz, 0, info = "Visualization section anchor should be present")

 expect_true(pos_contrast < pos_modeling,
             info = "Contrast matrix section should appear before modeling section")
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@tests/testthat/test-module-statmodel-ui.R` around lines 15 - 21, The current relative-order checks using regexpr (pos_contrast, pos_modeling, pos_viz) can mislead if a locator is missing (regexpr returns -1); first assert each locator is found (e.g., expect_true(pos_contrast != -1, info=...), expect_true(pos_modeling != -1, ...), expect_true(pos_viz != -1, ...)) and only then perform the ordering assertions (expect_true(pos_contrast < pos_modeling, ...) and expect_true(pos_modeling < pos_viz, ...)); update the test that references NAMESPACE_STATMODEL$modeling_section_header and the regexpr results to follow this two-step presence-then-order pattern.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Organization UI
Review profile: CHILL
Plan: Pro
Run ID: 226bcb67-7a07-47cc-8aa1-dcbd0ea206f7
📒 Files selected for processing (6)
- R/constants.R
- R/module-statmodel-server.R
- R/statmodel-server-options-modeling.R
- R/statmodel-ui-options-modeling.R
- tests/testthat/test-module-statmodel-ui.R
- tests/testthat/test-statmodel-ui-options-contrasts.R
✅ Files skipped from review due to trivial changes (1)
- R/constants.R
🚧 Files skipped from review as they are similar to previous changes (1)
- tests/testthat/test-statmodel-ui-options-contrasts.R
Motivation and Context
The Statistical Inference page must adapt its UI and controls depending on the selected comparison mode (group-comparison modes vs. dose-response mode). Previously the side-panel headings, metadata/contrast matrix affordances, visualization choices, and modeling guidance were static and not tailored to dose-response workflows. This PR makes the metadata table header, modeling section header/description, visualization plot-type choices, and the Start button behavior dynamically reflect the selected comparison mode so users receive appropriate controls and guidance for dose-response analysis.
Solution Summary
Server-side observers, a small UI change, and a helper were added so that the modeling section header adapts to the selected comparison mode, visualization plot-type choices are restricted in response-curve mode, and a response-curve contrast matrix is auto-built from the conditions on reset, enabling or disabling modeling and surfacing errors as appropriate.
Detailed Changes
R/module-statmodel-server.R
R/statmodel-ui-comparisons.R
R/statmodel-server-options-modeling.R
R/constants.R
Unit Tests Added / Modified
tests/testthat/test-statmodel-ui-options-contrasts.R
tests/testthat/test-module-statmodel-ui.R
tests/testthat/test-utils-statmodel-server.R
Coding Guidelines / Violations