Merge ratio response for visualization of dose response in Development #171

tonywu1999 merged 8 commits into devel from
Conversation
…logic in Statistical modeling and inference
Note: Reviews paused. It looks like this branch is under active development. To avoid overwhelming you with review comments due to an influx of new commits, CodeRabbit has automatically paused this review. You can configure this behavior in the CodeRabbit settings.
📝 Walkthrough

Adds a new exported UI checkbox.

Changes
Sequence Diagram(s)

```mermaid
sequenceDiagram
    participant User
    participant UI as Browser/UI
    participant Server as ShinyServer
    participant Visual as visualizeResponseProtein
    participant Data as DataStore
    User->>UI: view or toggle checkbox (visualization_response_curve_ratio_scale)
    UI->>Server: reactive input value
    Server->>Data: request protein response data
    Data-->>Server: return data
    Server->>Visual: call visualizeResponseProtein(data, ratio_response=isTRUE(input$visualization_response_curve_ratio_scale))
    Visual-->>Server: return rendered plot
    Server->>UI: send plot for display
    UI->>User: display response-curve visualization
```
Estimated code review effort: 🎯 3 (Moderate) | ⏱️ ~20 minutes
🚥 Pre-merge checks: ✅ Passed checks (3 passed)
🧹 Nitpick comments (4)
tests/testthat/test-module-statmodel-ui.R (1)
247: `grepl("value = TRUE", ui_html)` is a dead branch.

Rendered HTML never contains the R source string `"value = TRUE"` — this condition is always `FALSE`, and the `||` means the assertion silently relies entirely on the second condition. Remove it to avoid misleading future readers.

♻️ Suggested fix

```diff
- expect_true(grepl("value = TRUE", ui_html) || grepl('checked["\']', ui_html),
-             info = "Ratio scale checkbox should be checked by default")
+ expect_true(grepl('checked["\']', ui_html),
+             info = "Ratio scale checkbox should be checked by default")
```

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In `@tests/testthat/test-module-statmodel-ui.R` at line 247: the assertion uses a dead branch grepl("value = TRUE", ui_html) which never matches rendered HTML; remove that branch and simplify the expect_true to only check the meaningful condition grepl('checked["\\\']', ui_html) (keep the existing fallback structure around the checked regex so the test asserts presence of a checked attribute).

tests/testthat/test-module-statmodel-server.R (3)
593-603: Test name misrepresents the assertion.

The test is named `"Ratio scale checkbox value is passed to visualizeResponseProtein"` but only verifies that the constant key exists in `NAMESPACE_STATMODEL`. Consider renaming it to something like `"visualization_response_curve_ratio_scale is present in NAMESPACE_STATMODEL"` to match what is actually tested.

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In `@tests/testthat/test-module-statmodel-server.R` around lines 593-603: rename the test description to reflect what it actually asserts: change the test_that name from "Ratio scale checkbox value is passed to visualizeResponseProtein" to something like "visualization_response_curve_ratio_scale is present in NAMESPACE_STATMODEL" so the test accurately documents that it checks the presence and value of the visualization_response_curve_ratio_scale key in MSstatsShiny:::NAMESPACE_STATMODEL; keep the existing assertions and variable names (namespace, checkbox_id, NAMESPACE_STATMODEL, visualization_response_curve_ratio_scale) unchanged.
615-622: Misleading test name and near-duplicate of the test at line 593.

`"Ratio scale checkbox can be toggled between TRUE and FALSE"` does not test toggling — it only re-checks the constant string definition, identical to the assertions in the first test. Either remove this test or replace it with a `testServer` block that explicitly sets the input to `FALSE` and verifies `isTRUE(input[[...]])` returns `FALSE`.

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In `@tests/testthat/test-module-statmodel-server.R` around lines 615-622: the test named "Ratio scale checkbox can be toggled between TRUE and FALSE" is a near-duplicate that only re-checks the constant MSstatsShiny:::NAMESPACE_STATMODEL$visualization_response_curve_ratio_scale; either delete this duplicate test or replace it with a proper testServer block that mounts the server, sets input[[MSstatsShiny:::NAMESPACE_STATMODEL$visualization_response_curve_ratio_scale]] to FALSE, runs the server reactive(s), and asserts the server's input value (or the reactive/output that depends on it) reflects FALSE (e.g., using testServer to set input and expect_false/isFALSE on the observed value).
605-613: Overly broad regex can produce a false positive.

The pattern `"checked=\"checked\"|checked"` has a right-hand alternative `checked` that matches any occurrence of the substring `checked` in the HTML — including tooltip text, "unchecked" labels, etc. Prefer an anchored pattern that targets only the checkbox element.

♻️ Suggested tighter assertion

```diff
- expect_true(grepl("checked=\"checked\"|checked", ui_html),
-             info = "Ratio scale checkbox should be checked by default")
+ # Target the actual input element; "checked" on a checkbox in htmltools renders as checked="checked"
+ expect_true(grepl('type="checkbox"[^>]*checked|checked[^>]*type="checkbox"', ui_html),
+             info = "Ratio scale checkbox should be checked by default")
```

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In `@tests/testthat/test-module-statmodel-server.R` around lines 605-613: the current test uses a too-broad regex (grepl("checked=\"checked\"|checked", ui_html)) which can match unrelated occurrences; update the assertion to look specifically for an input element of type checkbox that has the checked attribute (e.g. a regex matching an <input ... type="checkbox" ... checked ...>), running grepl on ui_html (in the test_that block that calls MSstatsShiny:::create_response_curve_options and stores ui_html) so it only passes when the checkbox element itself is marked checked.
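To make the false-positive failure mode concrete, here is a small self-contained R check using throwaway HTML strings (the strings are illustrative only, not the real rendered output of the package's UI):

```r
# Two hypothetical HTML fragments: one genuinely checked checkbox, and one
# unchecked checkbox whose surrounding text happens to contain "unchecked".
html_checked   <- '<input id="test-ratio" type="checkbox" checked="checked"/>'
html_unchecked <- '<span>unchecked by default</span><input type="checkbox"/>'

broad  <- "checked=\"checked\"|checked"  # right-hand alternative matches any "checked" substring
narrow <- 'type="checkbox"[^>]*checked|checked[^>]*type="checkbox"'

# The broad pattern passes for BOTH strings, so it cannot catch a regression:
stopifnot(grepl(broad, html_checked), grepl(broad, html_unchecked))

# The anchored pattern only matches when the checkbox element itself is checked:
stopifnot(grepl(narrow, html_checked), !grepl(narrow, html_unchecked))
```

The `[^>]*` wildcard keeps the match inside a single tag, which is what anchors the `checked` attribute to the checkbox element.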
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Nitpick comments:
In `@tests/testthat/test-module-statmodel-server.R`:
- Around line 593-603: Rename the test description to reflect what it actually
asserts: change the test_that name from "Ratio scale checkbox value is passed to
visualizeResponseProtein" to something like
"visualization_response_curve_ratio_scale is present in NAMESPACE_STATMODEL" so
the test accurately documents that it checks the presence and value of the
visualization_response_curve_ratio_scale key in
MSstatsShiny:::NAMESPACE_STATMODEL; keep the existing assertions and variable
names (namespace, checkbox_id, NAMESPACE_STATMODEL,
visualization_response_curve_ratio_scale) unchanged.
- Around line 615-622: The test named "Ratio scale checkbox can be toggled
between TRUE and FALSE" is a near-duplicate that only re-checks the constant
MSstatsShiny:::NAMESPACE_STATMODEL$visualization_response_curve_ratio_scale;
either delete this duplicate test or replace it with a proper testServer block
that mounts the server, sets
input[[MSstatsShiny:::NAMESPACE_STATMODEL$visualization_response_curve_ratio_scale]]
to FALSE, runs the server reactive(s), and asserts the server's input value (or
the reactive/output that depends on it) reflects FALSE (e.g., using testServer
to set input and expect_false/isFALSE on the observed value).
- Around line 605-613: The current test uses a too-broad regex
(grepl("checked=\"checked\"|checked", ui_html)) which can match unrelated
occurrences; update the assertion to look specifically for an input element of
type checkbox that has the checked attribute (e.g. a regex matching an <input
... type="checkbox" ... checked ...>), running grepl on ui_html (in the
test_that block that calls MSstatsShiny:::create_response_curve_options and
stores ui_html) so it only passes when the checkbox element itself is marked
checked.
In `@tests/testthat/test-module-statmodel-ui.R`:
- Line 247: The assertion in tests/testthat/test-module-statmodel-ui.R uses a
dead branch grepl("value = TRUE", ui_html) which never matches rendered HTML;
remove that branch and simplify the expect_true to only check the meaningful
condition grepl('checked["\\\']', ui_html) (keep the existing fallback structure
around the checked regex so the test asserts presence of a checked attribute).
Actionable comments posted: 2
🧹 Nitpick comments (1)
tests/testthat/test-module-statmodel-server.R (1)
603-609: UI test misplaced in the server test file; partially duplicates test-module-statmodel-ui.R lines 242–249.

This test exercises `create_response_curve_options` (a UI rendering function) and has no server-side reactive logic. Consider moving it to test-module-statmodel-ui.R to keep the split clean, or deleting it in favour of the equivalent UI test.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@tests/testthat/test-module-statmodel-server.R` around lines 603 - 609, The test "Ratio scale checkbox defaults to checked in UI" exercises the UI helper create_response_curve_options and duplicates an existing UI test; remove or relocate it out of server tests: either delete this test from test-module-statmodel-server.R, or move the test block (the expect_true assertion rendering create_response_curve_options and checking the 'checked' attribute) into the UI test file alongside the existing UI tests for create_response_curve_options so server tests only contain server/reactive specs.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@tests/testthat/test-module-statmodel-server.R`:
- Around line 605-609: The test currently checks for any checked checkbox in
ui_html which can give false positives; update the assertion to scope the regex
to the specific namespaced checkbox id
"visualization_response_curve_ratio_scale" (or its namespaced form in the
rendered HTML) and ensure the pattern requires that this element has
type="checkbox" and contains the checked attribute; modify the grepl call that
inspects ui_html to target id="...visualization_response_curve_ratio_scale"
combined with the checked/type attributes so the test only passes when that
specific checkbox is checked by default.
In `@tests/testthat/test-module-statmodel-ui.R`:
- Around line 242-249: The current regex 'checked["\']' is unanchored and can
match any checked checkbox; update the test in
tests/testthat/test-module-statmodel-ui.R that calls
create_response_curve_options to assert the checked state specifically for the
namespaced input ID visualization_response_curve_ratio_scale (remember
NS("test") will prefix it to "test-visualization_response_curve_ratio_scale").
Change the grepl to scope to that element, e.g. match the id and checked
attribute together (for example:
grepl('id=["\']test-visualization_response_curve_ratio_scale["\'][^>]*checked',
ui_html)), so the assertion only passes when that specific checkbox is checked
by default.
---
Nitpick comments:
In `@tests/testthat/test-module-statmodel-server.R`:
- Around line 603-609: The test "Ratio scale checkbox defaults to checked in UI"
exercises the UI helper create_response_curve_options and duplicates an existing
UI test; remove or relocate it out of server tests: either delete this test from
test-module-statmodel-server.R, or move the test block (the expect_true
assertion rendering create_response_curve_options and checking the 'checked'
attribute) into the UI test file alongside the existing UI tests for
create_response_curve_options so server tests only contain server/reactive
specs.
🧹 Nitpick comments (1)
tests/testthat/test-module-statmodel-ui.R (1)
234-235: Consider scoping the `icon-tooltip` assertion to the ratio scale element.

`grepl("icon-tooltip", ui_html)` passes for any tooltip in `create_response_curve_options`. If the component already contains other tooltips (e.g., for existing controls), this assertion carries little signal. The tooltip-text check on line 238 provides the real specificity, but tying the `icon-tooltip` class to the ratio scale ID would make test 2 more precise.

♻️ Proposed tighter assertion

```diff
+ ratio_scale_id <- paste0("test-",
+     MSstatsShiny:::NAMESPACE_STATMODEL$visualization_response_curve_ratio_scale)
- expect_true(grepl("icon-tooltip", ui_html),
-             info = "Tooltip class should be present")
+ # Verify an icon-tooltip appears in the vicinity of the ratio scale checkbox
+ expect_true(
+   grepl(
+     paste0(ratio_scale_id, '.*icon-tooltip|icon-tooltip.*', ratio_scale_id),
+     ui_html
+   ),
+   info = "Tooltip class should be present near ratio scale checkbox"
+ )
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@tests/testthat/test-module-statmodel-ui.R` around lines 234 - 235, The current grepl("icon-tooltip", ui_html) check is too broad; restrict it to the ratio-scale control so it only matches the tooltip for that specific element (use the ratio scale element's unique ID or attribute present in create_response_curve_options, e.g., match the ID string used for the ratio scale control together with "icon-tooltip"). Update the assertion that reads grepl("icon-tooltip", ui_html) to a regex that ensures the ratio-scale element's identifier (the ratio scale ID used in create_response_curve_options) appears in the same tag or nearby markup as "icon-tooltip" so the test verifies the tooltip belongs to the ratio-scale control.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Duplicate comments:
In `@tests/testthat/test-module-statmodel-ui.R`:
- Around line 242-256: Test uses an anchored regex to check the checkbox is
checked by default and the previous unanchored pattern was fixed; no code
changes needed—leave the test in tests/testthat/test-module-statmodel-ui.R as
is, which uses MSstatsShiny:::create_response_curve_options and
MSstatsShiny:::NAMESPACE_STATMODEL$visualization_response_curve_ratio_scale to
build the anchored pattern that matches either attribute ordering.
---
Nitpick comments:
In `@tests/testthat/test-module-statmodel-ui.R`:
- Around line 234-235: The current grepl("icon-tooltip", ui_html) check is too
broad; restrict it to the ratio-scale control so it only matches the tooltip for
that specific element (use the ratio scale element's unique ID or attribute
present in create_response_curve_options, e.g., match the ID string used for the
ratio scale control together with "icon-tooltip"). Update the assertion that
reads grepl("icon-tooltip", ui_html) to a regex that ensures the ratio-scale
element's identifier (the ratio scale ID used in create_response_curve_options)
appears in the same tag or nearby markup as "icon-tooltip" so the test verifies
the tooltip belongs to the ratio-scale control.
From the diff under review:

```r
"Use ratio scale",
class = "icon-wrapper",
icon("question-circle", lib = "font-awesome"),
div("When enabled, protein abundances are shown relative to the control (control = 1.0). Useful for drug experiments to display fold-change vs. untreated samples. Disable for non-drug experiments (e.g., time courses) where absolute abundances are more meaningful.",
```
Small nitpick: change "drug" to "chemoproteomic", and mention "DMSO", which is the common term for the control in chemoproteomic experiments.

Here's the new string:

"When enabled, protein abundances are shown relative to the control (control = 1.0). Useful for chemoproteomic experiments to display fold-change vs. DMSO. Disable for non-chemoproteomic experiments (e.g., time courses) where absolute abundances are more meaningful."
From the diff under review:

```r
# RESPONSE CURVE RATIO SCALE CHECKBOX TESTS
# ============================================================================

test_that("visualization_response_curve_ratio_scale is present in NAMESPACE_STATMODEL", {
```
I'm not sure this test is necessary, but if it is, would it make more sense to create a test_constants file instead of lumping it in with the stat model server tests?
Maybe it makes more sense to have a test that verifies the value passed into the ratio_response parameter of visualizeResponseProteins is set to the value of the ratio scale checkbox (while mocking visualizeResponseProteins)?
Some similar unit tests for reference where you set input[[NAMESPACE$ID]] ahead of time - https://github.com/Vitek-Lab/MSstatsShiny/blob/devel/tests/testthat/test-utils-statmodel-server.R
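A self-contained sketch of that suggestion, using a toy module rather than the real MSstatsShiny server (whose internals may differ), could look like the following. The plotting helper is injected as `visualize_fn` purely for illustration; in the real test one would instead stub `visualizeResponseProtein` (e.g., with `mockery::stub()`), since it is called internally rather than injected.

```r
library(shiny)

# Toy module standing in for the statmodel server: the plotting helper is
# injected so the test can capture the arguments it receives.
toy_server <- function(id, visualize_fn) {
  moduleServer(id, function(input, output, session) {
    reactive({
      visualize_fn(
        ratio_response = isTRUE(input$visualization_response_curve_ratio_scale)
      )
    })
  })
}

captured <- NULL
mock_visualize <- function(...) captured <<- list(...)

testServer(toy_server, args = list(visualize_fn = mock_visualize), {
  plot_reactive <- session$returned
  session$setInputs(visualization_response_curve_ratio_scale = FALSE)
  plot_reactive()  # force the reactive that would render the plot
  stopifnot(identical(captured$ratio_response, FALSE))

  session$setInputs(visualization_response_curve_ratio_scale = TRUE)
  plot_reactive()
  stopifnot(isTRUE(captured$ratio_response))
})
```

Unlike the current test, this actually exercises the pass-through: the assertion is on the argument the mocked plot function received, not on the input value that was just set.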
…pecify chemoproteomic experiments
Actionable comments posted: 1
🧹 Nitpick comments (1)
tests/testthat/test-module-statmodel-ui.R (1)
236-242: Optional: proximity regex is effectively a co-presence check, not a nearness check.

Because `htmltools::renderTags(ui)$html` returns a single flat string, the `.*` in `paste0(ratio_scale_id, '.*icon-tooltip|icon-tooltip.*', ratio_scale_id)` matches across the entire document — this passes as long as both strings appear anywhere in the output, in either order. Since the other assertions on lines 243-248 already confirm the tooltip exists, this is low risk, but if you want a genuine adjacency check you could add `perl = TRUE` and limit the wildcard with a character-count bound:

```diff
- grepl(
-   paste0(ratio_scale_id, '.*icon-tooltip|icon-tooltip.*', ratio_scale_id),
-   ui_html
- ),
+ grepl(
+   paste0(ratio_scale_id, '.{0,200}icon-tooltip|icon-tooltip.{0,200}', ratio_scale_id),
+   ui_html,
+   perl = TRUE
+ ),
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@tests/testthat/test-module-statmodel-ui.R` around lines 236 - 242, The proximity regex currently uses grepl(..., ui_html) with paste0(ratio_scale_id, '.*icon-tooltip|icon-tooltip.*', ratio_scale_id) which only checks co-presence; update the expectation to perform a true adjacency check by using grepl with perl = TRUE and a bounded wildcard (e.g. '.{0,100}') between ratio_scale_id and 'icon-tooltip' (or vice versa) so the match limits distance, or alternatively parse ui_html with xml2/htmltools and assert that the tooltip node is a near sibling of the element identified by ratio_scale_id; target the test symbols ratio_scale_id and ui_html and replace the existing grepl call accordingly.
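The effect of the bounded wildcard can be checked in isolation with synthetic strings (illustrative only, not real rendered HTML):

```r
# A tooltip adjacent to the target id, and one pushed far away by filler text.
near <- '<input id="test-ratio"/> <i class="icon-tooltip"></i>'
far  <- paste0('<input id="test-ratio"/>', strrep("x", 500),
               '<i class="icon-tooltip"></i>')

copresence <- 'test-ratio.*icon-tooltip'        # passes whenever both appear anywhere
bounded    <- 'test-ratio.{0,200}icon-tooltip'  # requires them within 200 characters

# Co-presence matches both, so it cannot detect the tooltip drifting away:
stopifnot(grepl(copresence, near), grepl(copresence, far))

# The bounded pattern only matches when the two are actually close:
stopifnot(grepl(bounded, near), !grepl(bounded, far))
```

Base R's default (TRE) regex engine already supports the `{0,200}` bound, so `perl = TRUE` is not strictly required here, though adding it as suggested is harmless.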
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@tests/testthat/test-module-statmodel-server.R`:
- Around line 596-600: The test defines mock_visualize and captured_args but
never injects it into visualizeResponseProtein/visualizeResponseProteins nor
asserts captured_args, so the test name is misleading; either remove
mock_visualize and rename the test to something like "ratio scale checkbox input
can be toggled" (updating any assertions to only check session$setInputs
behavior), or properly inject the mock into the server call (e.g., use
mockery::stub() or with_mock() to replace
visualizeResponseProtein/visualizeResponseProteins with mock_visualize), then
call the server, trigger session$setInputs, and add assertions that
captured_args$ratio_response (and other expected args) are passed through to the
mocked function; update the second occurrence (lines 638–652) the same way.
---
Nitpick comments:
In `@tests/testthat/test-module-statmodel-ui.R`:
- Around line 236-242: The proximity regex currently uses grepl(..., ui_html)
with paste0(ratio_scale_id, '.*icon-tooltip|icon-tooltip.*', ratio_scale_id)
which only checks co-presence; update the expectation to perform a true
adjacency check by using grepl with perl = TRUE and a bounded wildcard (e.g.
'.{0,100}') between ratio_scale_id and 'icon-tooltip' (or vice versa) so the
match limits distance, or alternatively parse ui_html with xml2/htmltools and
assert that the tooltip node is a near sibling of the element identified by
ratio_scale_id; target the test symbols ratio_scale_id and ui_html and replace
the existing grepl call accordingly.
ℹ️ Review info
Configuration used: Organization UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (3)
- R/statmodel-ui-options-visualization.R
- tests/testthat/test-module-statmodel-server.R
- tests/testthat/test-module-statmodel-ui.R
🚧 Files skipped from review as they are similar to previous changes (1)
- R/statmodel-ui-options-visualization.R
From the diff under review:

```r
mock_visualize <- function(...) {
  captured_args <<- list(...)
  # Return a minimal ggplot to avoid rendering errors
  ggplot2::ggplot() + ggplot2::theme_void()
}
```
mock_visualize is dead code and the test doesn't verify what its name claims.
mock_visualize (lines 596–600) captures arguments via <<-, but it is never injected into the server logic—visualizeResponseProtein/visualizeResponseProteins is never stubbed, and captured_args is never asserted on. The test only verifies that isTRUE(input[[...]]) returns the expected value after calling session$setInputs, which is trivially true (you set the input, then read it back). The test name "is passed to visualizeResponseProtein" is therefore misleading and the actual coverage gap noted in review (verifying ratio_response reaches the function call) remains unaddressed.
Either:
- Remove `mock_visualize` and rename the test to reflect what it actually tests ("ratio scale checkbox input can be toggled"), or
- Properly inject the mock (e.g., via `mockery::stub()` or `with_mock()`) and assert on `captured_args$ratio_response`.
Also applies to: 638-652
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@tests/testthat/test-module-statmodel-server.R` around lines 596 - 600, The
test defines mock_visualize and captured_args but never injects it into
visualizeResponseProtein/visualizeResponseProteins nor asserts captured_args, so
the test name is misleading; either remove mock_visualize and rename the test to
something like "ratio scale checkbox input can be toggled" (updating any
assertions to only check session$setInputs behavior), or properly inject the
mock into the server call (e.g., use mockery::stub() or with_mock() to replace
visualizeResponseProtein/visualizeResponseProteins with mock_visualize), then
call the server, trigger session$setInputs, and add assertions that
captured_args$ratio_response (and other expected args) are passed through to the
mocked function; update the second occurrence (lines 638–652) the same way.
Updated tooltip text for ratio scale option to clarify meaning of protein abundance values.

Checkbox:

Tooltip:

Checked response:

Unchecked response:

Motivation and Context
The Statistical modeling module previously always displayed dose-response curves on a ratio scale (protein abundances relative to control = 1.0). This prevented users from viewing absolute intensity trends that are more appropriate for non-drug experiments (e.g., time courses). This PR adds a user-configurable "Use ratio scale" option so response curves can be plotted either as ratio (default) or absolute intensities, enabling correct visualization for both drug/chemoproteomic experiments and non-drug experiments.
Summary of the solution
Detailed changes
R/constants.R
R/statmodel-ui-options-visualization.R
R/module-statmodel-server.R
```r
ratio_response = isTRUE(input[[NAMESPACE_STATMODEL$visualization_response_curve_ratio_scale]])
```
ensuring plotting switches between ratio and absolute intensity based on user input. No other plotting parameters were changed.
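A minimal sketch of the wiring described above, with `NAMESPACE_STATMODEL` mocked as a plain list (the real package constant and helper signatures may differ):

```r
library(shiny)

# Hypothetical stand-in for the package constant.
NAMESPACE_STATMODEL <- list(
  visualization_response_curve_ratio_scale = "visualization_response_curve_ratio_scale"
)

# UI side: the checkbox defaults to TRUE, so the ratio scale stays the default.
ratio_checkbox <- checkboxInput(
  NAMESPACE_STATMODEL$visualization_response_curve_ratio_scale,
  label = "Use ratio scale",
  value = TRUE
)

# Server side (sketch): the value is forwarded unchanged as `ratio_response`:
#   visualizeResponseProtein(data,
#     ratio_response = isTRUE(input[[NAMESPACE_STATMODEL$visualization_response_curve_ratio_scale]]))

# A checkbox with value = TRUE renders with a checked attribute on the input tag,
# which is what the anchored test regexes discussed above look for.
html <- as.character(ratio_checkbox)
stopifnot(grepl('type="checkbox"[^>]*checked', html))
```

Because `isTRUE()` maps `NULL` (input not yet set) to `FALSE`, the server-side expression degrades safely before the UI initialises.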
Tests
Unit tests
UI tests
Server tests
Coding guidelines violations