
Merge the dynamic update of the metadata table and the options panel within Statistical Inference page in Development#182

Merged
tonywu1999 merged 4 commits into devel from feat/response-curve-grooming
Apr 2, 2026

Conversation

@swaraj-neu
Contributor

@swaraj-neu swaraj-neu commented Mar 29, 2026

  • Dynamically update metadata table and options panel based on selected comparison
[Screenshots: updated metadata table and options panel]

Motivation and Context

The Statistical Inference page must adapt its UI and controls depending on the selected comparison mode (group-comparison modes vs. dose-response mode). Previously the side-panel headings, metadata/contrast matrix affordances, visualization choices, and modeling guidance were static and not tailored to dose-response workflows. This PR makes the metadata table header, modeling section header/description, visualization plot-type choices, and the Start button behavior dynamically reflect the selected comparison mode so users receive appropriate controls and guidance for dose-response analysis.

Solution Summary

Server-side observers, a small UI change, and a helper were added so that:

  • Selecting dose-response mode replaces the matrix header text with "Group Metadata" and shows dose-response-specific modeling guidance.
  • The visualization plot-type select is restricted to "Dose Response Curve" when in dose-response mode and restored to Volcano/Heatmap/Comparison Plot for other modes.
  • When dose-response mode is selected, the server attempts to auto-build group metadata (a response-curve matrix) from the experimental conditions. The Start button is disabled at the top of the observer and re-enabled only after validation confirms the matrix has parsed measurement columns with non-NA treatment values; on failure the matrix is cleared, Start stays disabled, and an error notification is shown.
  • The response-curve configuration panel was simplified by removing the "Setup Metadata" submit control; tests were updated to assert the new dynamic header and panel shape.
  • A new helper get_modeling_section_header(mode) centralizes the dynamic header/content for the modeling section.
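
As a rough sketch of that helper (not the merged code; the constant names, instruction strings, and mode handling here are assumptions based on the summary above):

```r
library(shiny)

# Hypothetical sketch of get_modeling_section_header(); exact wording,
# constant names, and mode values are assumptions, not the merged code.
get_modeling_section_header <- function(mode = NULL) {
  is_response_curve <- !is.null(mode) &&
    identical(mode, CONSTANTS_STATMODEL$comparison_mode_response_curve)
  if (is_response_curve) {
    tagList(
      h4("2. Dose response analysis"),
      p("Review the auto-built group metadata and configure the mapping",
        " between conditions and doses before starting the analysis.")
    )
  } else {
    # NULL mode deliberately falls through to the group-comparison default
    tagList(
      h4("2. Group comparison"),
      p("Please add a comparison matrix to define the contrasts to test.")
    )
  }
}
```

Centralizing the header in one pure function keeps the renderUI() call in the server a one-liner and makes the mode branching directly unit-testable.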

Detailed Changes

  • R/module-statmodel-server.R

    • Rendered the modeling section header via output[[modeling_section_header]] <- renderUI(get_modeling_section_header(...)).
    • Added observeEvent on comparison_mode (ignoreInit = TRUE) to update visualization_plot_type choices:
      • response-curve mode → choices = only "Dose Response Curve".
      • other modes → choices = "Volcano Plot", "Heatmap", "Comparison Plot".
    • When switching to response-curve mode, auto-builds a response-curve contrast/metadata matrix via build_response_curve_matrix(condition_list()):
      • On success: sets contrast$matrix and enables NAMESPACE_STATMODEL$modeling_start.
      • On failure: clears contrast$matrix, disables modeling_start, and shows an error notification with the failure message.
    • The matrix_build reactive and the comparisons_clear flow were updated to handle the response-curve branch and to keep modeling_start enable/disable behavior consistent.
    • Changed matrix header text to be conditional on comparison_mode (switches "Comparison matrix" → "Group Metadata" for response-curve).
  • R/statmodel-ui-comparisons.R

    • build_response_curve_panel(): removed the header h5("Set up response curve configuration:") and the "Setup Metadata" actionButton (comparisons_submit); left the "Reset" action button (comparisons_clear) as the primary control in the panel.
  • R/statmodel-server-options-modeling.R

    • Added get_modeling_section_header(mode) returning a tagList with:
      • If response-curve mode: h4("2. Dose response analysis") plus mapping/configuration instruction text.
      • Else: h4("2. Group comparison") plus the standard instruction to add a comparison matrix.
    • Modeling options UI now uses the modeling_section_header output placeholder.
  • R/constants.R

    • Added NAMESPACE_STATMODEL$modeling_section_header constant to wire the UI output placeholder.
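
The plot-type restriction described above can be sketched as follows (input IDs, the session variable, and constant names are illustrative assumptions based on the change summary, not the merged code):

```r
library(shiny)

# Hypothetical sketch of the comparison_mode observer that restricts
# visualization choices; identifiers are illustrative assumptions.
observeEvent(input[[NAMESPACE_STATMODEL$comparison_mode]], {
  if (identical(input[[NAMESPACE_STATMODEL$comparison_mode]],
                CONSTANTS_STATMODEL$comparison_mode_response_curve)) {
    # Dose-response mode supports only one plot type
    updateSelectInput(session, NAMESPACE_STATMODEL$visualization_plot_type,
                      choices = "Dose Response Curve")
  } else {
    # Restore the full set of choices for group-comparison modes
    updateSelectInput(session, NAMESPACE_STATMODEL$visualization_plot_type,
                      choices = c("Volcano Plot", "Heatmap", "Comparison Plot"))
  }
}, ignoreInit = TRUE)
```

ignoreInit = TRUE keeps the observer from firing on module startup, so the default choices set in the UI definition are not immediately overwritten.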

Unit Tests Added / Modified

  • tests/testthat/test-statmodel-ui-options-contrasts.R

    • Adjusted build_response_curve_panel() expectations: removed expectation for comparisons_submit and reduced expected tag count; asserts comparisons_clear remains.
    • Added a test to render statmodelUI and assert presence of the modeling_section_header placeholder.
  • tests/testthat/test-module-statmodel-ui.R

    • Updated tests to look for the modeling_section_header placeholder instead of hard-coded "2. Group comparison" text and adjusted ordering assertions to reflect the new dynamic header.
  • tests/testthat/test-utils-statmodel-server.R

    • Added three unit tests for get_modeling_section_header():
      • Response-curve mode returns "Dose response analysis" and mapping/configuration guidance and excludes "Group comparison".
      • Non-response-curve modes return "Group comparison" and exclude "Dose response".
      • NULL mode defaults to "Group comparison".
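
The three cases above could be covered with a sketch like this (the mode strings and the helper's output text are assumptions; the merged tests may differ):

```r
library(testthat)

test_that("get_modeling_section_header adapts to comparison mode", {
  # Response-curve mode: dose-response heading, no group-comparison text
  rc_html <- as.character(get_modeling_section_header("response_curve"))
  expect_match(rc_html, "Dose response analysis")
  expect_no_match(rc_html, "Group comparison")

  # Any other mode: group-comparison heading, no dose-response text
  gc_html <- as.character(get_modeling_section_header("group_comparison"))
  expect_match(gc_html, "Group comparison")
  expect_no_match(gc_html, "Dose response")

  # NULL mode defaults to group comparison
  expect_match(as.character(get_modeling_section_header(NULL)),
               "Group comparison")
})
```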

Coding Guidelines / Violations

  • No significant coding-guideline violations were detected in the changed files:
    • Reactive constructs (observeEvent, eventReactive, renderUI) are used appropriately with req/ignoreInit where applicable.
    • Helper extraction (get_modeling_section_header) and constants usage follow existing patterns.
    • Error handling for auto-build uses notifications and control disabling, consistent with app patterns.

@swaraj-neu swaraj-neu requested a review from tonywu1999 March 29, 2026 19:25
@swaraj-neu swaraj-neu self-assigned this Mar 29, 2026
@swaraj-neu swaraj-neu added the enhancement New feature or request label Mar 29, 2026
@coderabbitai

coderabbitai Bot commented Mar 29, 2026

No actionable comments were generated in the recent review. 🎉

ℹ️ Recent review info
⚙️ Run configuration

Configuration used: Organization UI

Review profile: CHILL

Plan: Pro

Run ID: dbabe65b-3ed0-4219-b916-a925a93e1e9c

📥 Commits

Reviewing files that changed from the base of the PR and between 0bf4d88 and db756e3.

📒 Files selected for processing (2)
  • tests/testthat/test-statmodel-ui-options-contrasts.R
  • tests/testthat/test-utils-statmodel-server.R

📝 Walkthrough

The PR makes the modeling section header dynamic by comparison mode, restricts visualization choices in response-curve mode, removes the response-curve "Setup Metadata" submit button, and adds reset logic to auto-build a response-curve contrast matrix from conditions (enabling/disabling modeling and surfacing errors).

Changes

  • Server logic (R/module-statmodel-server.R): Render the dynamic modeling header; observe comparison_mode to restrict visualization_plot_type in response-curve mode; on reset, try build_response_curve_matrix(condition_list()), set/clear contrast$matrix, toggle modeling_start, and show error notifications; switch the matrix header text based on mode.
  • Modeling UI helpers & constants (R/statmodel-server-options-modeling.R, R/constants.R): Add get_modeling_section_header(mode) and NAMESPACE_STATMODEL$modeling_section_header; header text/description is conditional on response-curve vs. other modes.
  • Comparisons UI (R/statmodel-ui-comparisons.R): Remove the response-curve panel header and the comparisons_submit ("Setup Metadata") button; leave only comparisons_clear ("Reset").
  • Modeling options UI (R/statmodel-ui-options-modeling.R): Replace the hardcoded section heading with uiOutput(NAMESPACE_STATMODEL$modeling_section_header) (dynamic header).
  • Tests (tests/testthat/test-statmodel-ui-options-contrasts.R, tests/testthat/test-module-statmodel-ui.R, tests/testthat/test-utils-statmodel-server.R): Update tests to reflect removal of the submit button and the presence of the new modeling_section_header output; add unit tests for get_modeling_section_header() across modes; adjust UI locator expectations and tag counts accordingly.

Sequence Diagram(s)

sequenceDiagram
    participant User
    participant UI as UI Layer
    participant Server as Server Logic
    participant Data as Condition Data

    User->>UI: Select comparison_mode = response_curve
    UI->>Server: notify comparison_mode change
    Server->>Server: Restrict visualization_plot_type choices
    Server->>UI: Update visualization dropdown options

    User->>UI: Click Reset (or proceed)
    UI->>Server: trigger reset/proceed observer
    Server->>Data: Read condition_list()
    Data-->>Server: Return condition data
    Server->>Server: Call build_response_curve_matrix(condition data)
    alt success
        Server->>Server: Set contrast$matrix
        Server->>Server: Enable modeling_start
        Server->>UI: Render matrix table with "Group Metadata" header
    else failure
        Server->>Server: Clear contrast$matrix
        Server->>Server: Disable modeling_start
        Server->>UI: Emit error notification (message)
    end

Estimated code review effort

🎯 3 (Moderate) | ⏱️ ~20 minutes

Suggested reviewers

  • sszvetecz

Poem

🐰 I hopped through modes with cheerful cheer,
When curves are chosen, options clear,
A matrix grows from listed names,
Reset then model — no extra games,
Tiny paws applaud this tidy sphere! 🎉

🚥 Pre-merge checks | ✅ 2 | ❌ 1

❌ Failed checks (1 warning)

  • Title check (⚠️ Warning): The title describes a dynamic update mechanism but doesn't clearly convey that the primary change is implementing response-curve mode-specific UI behavior and conditional logic. Resolution: consider a more specific title such as "Add dynamic response-curve mode UI updates" or "Implement mode-dependent section headers and visualization options" to better reflect the core changes.

✅ Passed checks (2 passed)

  • Description Check (✅ Passed): Check skipped because CodeRabbit's high-level summary is enabled.
  • Docstring Coverage (✅ Passed): No functions found in the changed files to evaluate docstring coverage; check skipped.


@coderabbitai coderabbitai Bot left a comment


Actionable comments posted: 1

🧹 Nitpick comments (1)
tests/testthat/test-statmodel-ui-options-contrasts.R (1)

73-84: This test doesn’t validate runtime conditionality yet.

Right now it only checks that both text blocks exist in HTML. Since conditionalPanel renders both branches in markup, this can pass even if conditions are wired incorrectly. Please assert the expected data-display-if condition strings too.

✅ Suggested test hardening
 test_that("modeling section shows conditional headings in UI", {
   ui <- MSstatsShiny::statmodelUI("statmodel")
   ui_html <- htmltools::renderTags(ui)$html
+  ns <- function(id) paste0("statmodel-", id)

   expect_true(grepl("Dose response analysis", ui_html),
               info = "Dose response heading should be in conditional panel")
   expect_true(grepl("Group comparison", ui_html),
               info = "Group comparison heading should be in conditional panel")
   expect_true(grepl("configure the mapping", ui_html),
               info = "Dose response description should be present")
   expect_true(grepl("add a comparison matrix", ui_html),
               info = "Group comparison description should be present")
+
+  expect_true(
+    grepl(
+      paste0("input['", ns(NAMESPACE_STATMODEL$comparison_mode), "'] == '",
+             CONSTANTS_STATMODEL$comparison_mode_response_curve, "'"),
+      ui_html,
+      fixed = TRUE
+    ),
+    info = "Dose-response conditional should be bound to comparison_mode"
+  )
+  expect_true(
+    grepl(
+      paste0("input['", ns(NAMESPACE_STATMODEL$comparison_mode), "'] != '",
+             CONSTANTS_STATMODEL$comparison_mode_response_curve, "'"),
+      ui_html,
+      fixed = TRUE
+    ),
+    info = "Group-comparison conditional should be bound to comparison_mode"
+  )
 })
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@tests/testthat/test-statmodel-ui-options-contrasts.R` around lines 73 - 84,
The test currently only checks for presence of text in the rendered HTML but
doesn’t assert the conditional expressions, so it can pass even if conditions
are wrong; update the test for statmodelUI by locating the conditionalPanel
outputs in the rendered HTML (from MSstatsShiny::statmodelUI("statmodel") /
ui_html) and assert that the expected data-display-if (or equivalent attribute
used by conditionalPanel) strings are present for the "Dose response analysis"
and "Group comparison" blocks (e.g., check that the element wrapping the dose
response text contains the specific condition string and likewise for the group
comparison), ensuring you reference the conditionalPanel outputs rather than
only the visible text.

ℹ️ Review info
⚙️ Run configuration

Configuration used: Organization UI

Review profile: CHILL

Plan: Pro

Run ID: 9a26b816-dd11-4a06-90d1-88406660284a

📥 Commits

Reviewing files that changed from the base of the PR and between c2a30aa and 58c1642.

📒 Files selected for processing (4)
  • R/module-statmodel-server.R
  • R/statmodel-ui-comparisons.R
  • R/statmodel-ui-options-modeling.R
  • tests/testthat/test-statmodel-ui-options-contrasts.R
💤 Files with no reviewable changes (1)
  • R/statmodel-ui-comparisons.R

Comment thread R/module-statmodel-server.R

@coderabbitai coderabbitai Bot left a comment


♻️ Duplicate comments (1)
R/module-statmodel-server.R (1)

86-106: ⚠️ Potential issue | 🟠 Major

Keep Start disabled until the rebuilt response metadata is actually usable.

This observer clears contrast$matrix on every mode/proceed reset, but it only disables modeling_start in the error branch. That leaves Start clickable after switching away from response-curve mode, and the current nrow() guard still accepts partially parsed metadata from build_response_curve_matrix(). Please disable first, then re-enable only after confirming the auto-built frame contains at least one parsed measurement column with non-NA treatment values.

🛠️ Suggested safeguard
 observeEvent(c(input[[NAMESPACE_STATMODEL$comparison_mode]], loadpage_input()$proceed1), {
+  disable(NAMESPACE_STATMODEL$modeling_start)
   contrast$matrix = NULL
   comp_list$dList = NULL
   significant$result = NULL

   # Auto-build response curve metadata when dose response mode is selected
   if (isTRUE(input[[NAMESPACE_STATMODEL$comparison_mode]] == 
       CONSTANTS_STATMODEL$comparison_mode_response_curve)) {
     tryCatch({
       rc_matrix <- build_response_curve_matrix(condition_list())
       if (is.null(rc_matrix) || nrow(rc_matrix) == 0) {
         stop("Unable to auto-build group metadata from the current conditions.")
       }
+      value_cols <- grep("_value$", names(rc_matrix), value = TRUE)
+      treatment_rows <- !toupper(rc_matrix$GROUP) %in% c("DMSO", "CONTROL", "VEHICLE")
+      has_values <- length(value_cols) > 0 &&
+        any(vapply(
+          rc_matrix[treatment_rows, value_cols, drop = FALSE],
+          function(col) any(!is.na(col)),
+          logical(1)
+        ))
+      if (!has_values) {
+        stop("Unable to auto-build group metadata from the current conditions.")
+      }
       contrast$matrix <- rc_matrix
       enable(NAMESPACE_STATMODEL$modeling_start)
     }, error = function(e) {
       contrast$matrix <- NULL
       disable(NAMESPACE_STATMODEL$modeling_start)
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@R/module-statmodel-server.R` around lines 86 - 106, The observer currently
clears contrast$matrix but only disables modeling_start in the error branch;
change it so modeling_start is disabled immediately when the observer runs
(before attempting to auto-build), and only re-enable it after verifying the
auto-built rc_matrix from build_response_curve_matrix(condition_list()) is
non-NULL, has nrow(rc_matrix) > 0 and contains at least one parsed measurement
column with non-NA treatment values; if any of these checks fail set
contrast$matrix <- NULL, keep modeling_start disabled, and showNotification with
the error/message. Locate this logic in the observeEvent handling
input[[NAMESPACE_STATMODEL$comparison_mode]] (and loadpage_input()$proceed1) and
update the tryCatch flow around build_response_curve_matrix and the
enable/disable calls so that enable(NAMESPACE_STATMODEL$modeling_start) is
called only after the validated checks pass.

ℹ️ Review info
⚙️ Run configuration

Configuration used: Organization UI

Review profile: CHILL

Plan: Pro

Run ID: 0eabaa54-2696-4b9b-aeb1-8cee7dec6b53

📥 Commits

Reviewing files that changed from the base of the PR and between 58c1642 and e6fb8dc.

📒 Files selected for processing (2)
  • R/module-statmodel-server.R
  • tests/testthat/test-statmodel-ui-options-contrasts.R
🚧 Files skipped from review as they are similar to previous changes (1)
  • tests/testthat/test-statmodel-ui-options-contrasts.R

@swaraj-neu
Contributor Author

Response on coderabbitai review: Start is disabled at the top of the observer and only re-enabled after validation confirms the matrix has parsed measurement columns with non-NA treatment values.

Comment thread R/statmodel-ui-options-modeling.R Outdated
Comment thread tests/testthat/test-statmodel-ui-options-contrasts.R
Comment thread tests/testthat/test-statmodel-ui-options-contrasts.R

@coderabbitai coderabbitai Bot left a comment


♻️ Duplicate comments (1)
R/module-statmodel-server.R (1)

89-109: ⚠️ Potential issue | 🟠 Major

Disable Start on reset first, then re-enable only after strict response-curve metadata validation.

At Line 89, reset clears contrast$matrix but does not immediately disable Start. Also, Line 99’s validation is too permissive: non-empty matrices can still lack usable parsed treatment values, which can enable Start in a broken state.

Suggested safeguard
 observeEvent(c(input[[NAMESPACE_STATMODEL$comparison_mode]], loadpage_input()$proceed1), {
-  contrast$matrix = NULL
-  comp_list$dList = NULL
-  significant$result = NULL
+  contrast$matrix = NULL
+  comp_list$dList = NULL
+  significant$result = NULL
+  disable(NAMESPACE_STATMODEL$modeling_start)
 
   # Auto-build response curve metadata when dose response mode is selected
   if (isTRUE(input[[NAMESPACE_STATMODEL$comparison_mode]] == 
       CONSTANTS_STATMODEL$comparison_mode_response_curve)) {
     tryCatch({
       rc_matrix <- build_response_curve_matrix(condition_list())
-      if (is.null(rc_matrix) || nrow(rc_matrix) == 0) {
+      value_cols <- grep("_value$", colnames(rc_matrix), value = TRUE)
+      treatment_rows <- !(toupper(rc_matrix$GROUP) %in% c("DMSO", "CONTROL", "VEHICLE"))
+      has_non_na_treatment_values <- length(value_cols) > 0 &&
+        any(rowSums(!is.na(rc_matrix[treatment_rows, value_cols, drop = FALSE])) > 0)
+
+      if (!is.data.frame(rc_matrix) || nrow(rc_matrix) == 0 || !has_non_na_treatment_values) {
         stop("Unable to auto-build group metadata from the current conditions.")
       }
       contrast$matrix <- rc_matrix
       enable(NAMESPACE_STATMODEL$modeling_start)
     }, error = function(e) {
       contrast$matrix <- NULL
       disable(NAMESPACE_STATMODEL$modeling_start)
       showNotification(conditionMessage(e), type = "error", duration = 6)
     })
   }
 })
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@R/module-statmodel-server.R` around lines 89 - 109, Reset currently leaves
the Start button enabled; immediately call
disable(NAMESPACE_STATMODEL$modeling_start) when you clear contrast$matrix in
the observeEvent (the block handling
input[[NAMESPACE_STATMODEL$comparison_mode]] and loadpage_input()$proceed1) so
Start is always off during reset, then after calling
build_response_curve_matrix(condition_list()) perform a strict validation of
rc_matrix (e.g. check nrow>0 and that parsed treatment/dose columns are present
and non-empty or call/implement a helper like
validate_response_curve_matrix(rc_matrix) to ensure usable parsed treatment
values); only if that strict validation passes set contrast$matrix <- rc_matrix
and enable(NAMESPACE_STATMODEL$modeling_start), otherwise set contrast$matrix <-
NULL, keep Start disabled and showNotification with the error message.
🧹 Nitpick comments (1)
tests/testthat/test-module-statmodel-ui.R (1)

15-21: Harden section-order assertions against false positives.

If one locator is missing, regexpr() can return -1 and still make relative ordering checks misleading. Assert presence first, then assert ordering.

Proposed test hardening
   pos_contrast <- regexpr("1\\. Define comparisons", ui_html)
   pos_modeling <- regexpr(NAMESPACE_STATMODEL$modeling_section_header, ui_html)
   pos_viz <- regexpr("3\\. Visualization", ui_html)
+
+  expect_gt(pos_contrast, 0, info = "Contrast section anchor should be present")
+  expect_gt(pos_modeling, 0, info = "Modeling section anchor should be present")
+  expect_gt(pos_viz, 0, info = "Visualization section anchor should be present")
   
   expect_true(pos_contrast < pos_modeling,
               info = "Contrast matrix section should appear before modeling section")
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@tests/testthat/test-module-statmodel-ui.R` around lines 15 - 21, The current
relative-order checks using regexpr (pos_contrast, pos_modeling, pos_viz) can
mislead if a locator is missing (regexpr returns -1); first assert each locator
is found (e.g., expect_true(pos_contrast != -1, info=...),
expect_true(pos_modeling != -1, ...), expect_true(pos_viz != -1, ...)) and only
then perform the ordering assertions (expect_true(pos_contrast < pos_modeling,
...) and expect_true(pos_modeling < pos_viz, ...)); update the test that
references NAMESPACE_STATMODEL$modeling_section_header and the regexpr results
to follow this two-step presence-then-order pattern.

ℹ️ Review info
⚙️ Run configuration

Configuration used: Organization UI

Review profile: CHILL

Plan: Pro

Run ID: 226bcb67-7a07-47cc-8aa1-dcbd0ea206f7

📥 Commits

Reviewing files that changed from the base of the PR and between e6fb8dc and 0bf4d88.

📒 Files selected for processing (6)
  • R/constants.R
  • R/module-statmodel-server.R
  • R/statmodel-server-options-modeling.R
  • R/statmodel-ui-options-modeling.R
  • tests/testthat/test-module-statmodel-ui.R
  • tests/testthat/test-statmodel-ui-options-contrasts.R
✅ Files skipped from review due to trivial changes (1)
  • R/constants.R
🚧 Files skipped from review as they are similar to previous changes (1)
  • tests/testthat/test-statmodel-ui-options-contrasts.R

Comment thread tests/testthat/test-statmodel-ui-options-contrasts.R Outdated
@tonywu1999 tonywu1999 merged commit 1c29944 into devel Apr 2, 2026
2 checks passed

Labels

enhancement New feature or request

Projects

None yet

2 participants