fix: stabilize model toggle and refresh Agent dropdown after provider changes #13113
Conversation
Two related regressions in the Model Provider management flow on release-1.9.3:

1. **Toggle bounce** — rapid clicks on a model Switch flickered between on and off before settling. ``handleModelToggle`` performed an optimistic ``setQueryData`` on ``useGetEnabledModels`` but never cancelled in-flight refetches. With React Query defaults (``staleTime: 0``, ``refetchOnWindowFocus: true``), a background refetch could land mid-debounce and overwrite the optimistic state with stale server data, then the deferred mutation eventually corrected it. Fix: ``cancelQueries`` before each optimistic update, per the canonical TanStack Query optimistic-updates pattern.
2. **Agent dropdown stayed stale after the provider modal closed** — disabled models remained listed in the Agent's Language Model dropdown. The post-close ``isRefreshingAfterClose`` loading gate only waited for ``useGetModelProviders`` to settle, not ``useGetEnabledModels``. When the providers refetch finished first, the loading state cleared and ``groupedOptions`` ran against the still-stale enabled-models cache. Fix: track both queries' ``isFetching`` flags in the gate's effect.

No behavioral change to the sticky-default UX — the user's previously selected model continues to show in the dropdown with the wrench affordance when globally disabled, as designed.

Also narrows the pre-existing ``catch (error: any)`` patterns in the edited hook file to ``unknown`` with a small shared ``getErrorMessage`` helper, satisfying the staged-file no-explicit-any pre-commit lint.

Tests:
- New ``useProviderConfiguration`` unit test asserts ``cancelQueries`` runs before ``setQueryData`` on toggle.
- Extended ``ModelInputComponent`` test covers the dual-query loading gate (providers settles first, enabled-models still in flight → loading persists; then enabled-models settles → loading clears).
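The cancel-before-write ordering described in fix 1 can be illustrated with a minimal stand-in for the query cache. Everything below is a sketch: ``FakeQueryCache`` and its methods are illustrative stand-ins, not the @tanstack/react-query API; only the name and ordering of ``handleModelToggle`` mirror the PR.

```typescript
type EnabledModels = Record<string, boolean>;

// Illustrative stand-in for the React Query cache, modeling only the race.
class FakeQueryCache {
  private data: EnabledModels = {};
  private inFlight: Array<() => void> = [];

  getQueryData(): EnabledModels {
    return { ...this.data };
  }

  setQueryData(next: EnabledModels): void {
    this.data = { ...next };
  }

  // A background refetch that will overwrite the cache when it settles.
  scheduleRefetch(serverSnapshot: EnabledModels): void {
    this.inFlight.push(() => {
      this.data = { ...serverSnapshot };
    });
  }

  cancelQueries(): void {
    this.inFlight = []; // drop pending refetches so they can never land
  }

  settleRefetches(): void {
    for (const land of this.inFlight) land();
    this.inFlight = [];
  }
}

function handleModelToggle(cache: FakeQueryCache, model: string, enabled: boolean): void {
  cache.cancelQueries(); // the previously missing call: cancel BEFORE the optimistic write
  cache.setQueryData({ ...cache.getQueryData(), [model]: enabled });
}
```

Without the ``cancelQueries`` line, a stale snapshot already in flight at click time would land after the optimistic write and snap the toggle back, which is the bounce the PR describes.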
**Walkthrough**: This PR coordinates model loading state across two query lifecycles and prevents race conditions during model toggles. The hook cancels in-flight enabled-models queries at the start of each optimistic toggle.

**Changes**: Model Refetch Coordination and Toggle Safety
Complexity: 🎯 3 (Moderate) | ⏱️ ~25 minutes | 🚥 Pre-merge checks: ✅ 9 passed
**Codecov Report**: ❌ Patch coverage is
Additional details and impacted files

```
@@            Coverage Diff             @@
##   release-1.9.3    #13113     +/-   ##
================================================
  Coverage        ?    53.26%
================================================
  Files           ?      2033
  Lines           ?    184872
  Branches        ?     27404
================================================
  Hits            ?     98471
  Misses          ?     85291
  Partials        ?      1110
```
Flags with carried forward coverage won't be shown.
|
🧹 Nitpick comments (1)
src/frontend/src/components/core/parameterRenderComponent/components/modelInputComponent/__tests__/ModelInputComponent.test.tsx (1)
395-398: ⚡ Quick win: Avoid fixed sleeps in async UI tests.
Line 397 uses a hardcoded timeout, which can make this test flaky in slower CI runs. Prefer assertion-driven waiting.
Suggested change
```diff
- await new Promise((r) => setTimeout(r, 30));
- expect(screen.getByText("Loading models")).toBeInTheDocument();
+ await waitFor(() => {
+   expect(screen.getByText("Loading models")).toBeInTheDocument();
+ });
```

🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the rest with a brief reason, keep changes minimal, and validate. In `@src/frontend/src/components/core/parameterRenderComponent/components/modelInputComponent/__tests__/ModelInputComponent.test.tsx` around lines 395 - 398, The test uses a fixed sleep to wait for async UI state; replace the hardcoded await new Promise((r) => setTimeout(r, 30)) with an assertion-driven wait (e.g., use waitFor or findBy* from testing-library) so the test waits until the "Loading models" text appears after you call rerenderWithProvider(<ModelInputComponent {...defaultProps} />) with providersFetching = false; target the assertion using screen.findByText or await waitFor(() => expect(screen.getByText("Loading models")).toBeInTheDocument()) to avoid flakiness.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Path: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
Run ID: 0450bdd4-991f-41e7-9937-154c9eba6a7f
📒 Files selected for processing (4)
- src/frontend/src/components/core/parameterRenderComponent/components/modelInputComponent/__tests__/ModelInputComponent.test.tsx
- src/frontend/src/components/core/parameterRenderComponent/components/modelInputComponent/index.tsx
- src/frontend/src/modals/modelProviderModal/__tests__/useProviderConfiguration.test.tsx
- src/frontend/src/modals/modelProviderModal/hooks/useProviderConfiguration.ts
Addresses review feedback on #13113: ``cancelQueries`` only cancels refetches that are already in flight at click time. A new ``useGetEnabledModels`` refetch can still start during the 1s debounce window (or while the mutation is in flight) and overwrite the optimistic cache with stale server state, producing the same bounce we just fixed. Two coordinated changes keep the optimistic state protected for the entire pending-toggle window:

1. ``pendingModelToggles`` is no longer cleared upfront when the debounced flush sends a mutation. Instead, ``clearSentToggles`` removes only the entries whose value still matches what we sent on ``onSettled`` / ``onError``. This preserves the overlay across the in-flight mutation window AND correctly handles the case where the user re-toggled the same model mid-flight (now a fresh intent that survives clearing).
2. A new ``useEffect`` subscribes to ``useGetEnabledModels`` and re-applies the pending overlay whenever the data emission drifts from the user's pending intent. This catches any refetch (window focus, mount, reconnect, stale-time expiry) that lands between click and ``onSettled``, regardless of when it started.

Tests:
- New ``re-applies the pending overlay when a refetch surfaces stale data`` test simulates a stale refetch and asserts the effect re-applies the optimistic overlay.
- New ``does not re-overlay when no toggles are pending`` test guards against spurious overlay calls on mount.
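The drift check at the heart of change 2 can be modeled as a pure function. This is a sketch under the assumption that the overlay is a flat model-to-enabled map; ``reapplyOverlay`` is a hypothetical name, and the real code lives inside a ``useEffect``.

```typescript
type Toggles = Record<string, boolean>;

// Returns the corrected cache value when the emission drifted from the user's
// pending intent, or null when no re-apply is needed. The null return on the
// second pass is what terminates the setQueryData -> re-emit -> effect loop.
function reapplyOverlay(emitted: Toggles, overlay: Toggles): Toggles | null {
  const drifted = Object.entries(overlay).some(
    ([model, enabled]) => emitted[model] !== enabled,
  );
  if (!drifted) return null; // overlay already applied: no-op, loop ends
  return { ...emitted, ...overlay }; // pending intent wins over stale data
}
```

A stale refetch emission that disagrees with the overlay gets corrected once; the corrected emission then produces no drift, so the effect settles.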
Addressed the review feedback in 0ba3881: The reviewer correctly pointed out that
Two new unit tests:
All 89 tests pass locally; biome and the staged-file no-any pre-commit pass.
…e mutations
Addresses a follow-up race introduced by the previous commit: keeping
``pendingModelToggles`` populated through ``onSettled`` (so the
re-overlay effect could repel mid-flight refetches) also caused the
next flush to snapshot the in-flight entries again, sending duplicate
requests with non-deterministic success/failure ordering. The same
risk applied to ``flushPendingChanges`` on modal close.
Split the single buffer into two refs with single responsibilities:
- ``overlayToggles``: the union of every toggle still protecting the
UI. The re-overlay effect re-applies this whenever
``useGetEnabledModels`` emits new data. Drained per-key on
``onSettled``/``onError`` only when the overlay value still matches
what we sent (a user re-toggle mid-flight becomes a fresh intent
and must survive the clear).
- ``unsentToggles``: the strict subset that has NOT been sent in a
mutation yet, or was re-toggled since the last send. Drained
immediately at flush time so subsequent flushes never resend an
in-flight payload.
``handleModelToggle`` populates both buffers; flushes drain only
``unsentToggles`` and use ``overlayToggles`` for cache protection.
Two new unit tests:
- ``does not resend in-flight toggles when a new toggle is flushed``
asserts that after toggling A then B (with A's mutation in flight),
the second flush carries ONLY B.
- ``re-sends a model when the user re-toggles it after the previous
flush fired`` asserts that re-toggling the same model produces a
second mutation (the re-toggle is a fresh intent, not a duplicate).
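The split-buffer bookkeeping described above can be sketched as plain module state. Names mirror the commit message; the bodies are illustrative, and the real code keeps both maps in React refs.

```typescript
type Toggles = Record<string, boolean>;

const overlayToggles: Toggles = {}; // every toggle still protecting the UI
const unsentToggles: Toggles = {};  // strict subset awaiting the next flush

function handleModelToggle(model: string, enabled: boolean): void {
  overlayToggles[model] = enabled; // protect the cache across refetches
  unsentToggles[model] = enabled;  // queue for the next flush
}

// Flush drains ONLY unsentToggles, so an in-flight payload is never resent.
function flush(): Toggles {
  const togglesToSend = { ...unsentToggles };
  for (const model of Object.keys(unsentToggles)) delete unsentToggles[model];
  return togglesToSend;
}

// Drain overlay entries on onSettled/onError only when the value still matches
// what was sent: a mid-flight re-toggle is a fresh intent and must survive.
function clearSentToggles(sent: Toggles): void {
  for (const [model, enabled] of Object.entries(sent)) {
    if (overlayToggles[model] === enabled) delete overlayToggles[model];
  }
}
```

With a single buffer, either the second flush would resend A's in-flight payload, or draining at flush time would drop A's UI protection mid-flight; the split avoids both.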
Addressed the follow-up race in 4c8d782. The previous commit's choice to keep
Two new unit tests directly cover the duplicate-send concern:
91/91 tests pass; biome + staged-file no-any pre-commit pass.
…ther
Removes the sticky-default carve-out so a globally-disabled model is
never shown in the Agent's Language Model dropdown, regardless of
whether it's the node's saved selection. Drops the Wrench/Configure
affordance — there's no per-trigger "this model isn't enabled for
your user" UX anymore; the dropdown is the single source of truth.
Changes in ``modelInputComponent``:
- ``groupedOptions``: drop the ``isStickyNotEnabled`` filter
bypass and the zero-provider import fallback. Disabled models
never pass the filter, even when tagged ``not_enabled_locally``.
- ``selectedModel``: drop the saved-value preservation branch.
If the saved name isn't in ``flatOptions``, fall through to the
first available option (or ``null`` if nothing is available).
- Auto-select effect: fires whenever the saved value isn't in
``flatOptions``, not just when the value is empty. This
realigns the node's stored value with the trigger so the
rendered selection and the run-time selection don't diverge.
- ``hasEnabledProviders``: redefined as "at least one model of
this component's type is enabled across any provider"
(derived from ``groupedOptions``). Routes a configured-but-
all-disabled provider to the Setup Provider CTA — same UX as
the never-configured case.
- Remove the Wrench JSX, the ``showConfigureAffordance``
derivation, and the now-orphaned ``hasProcessedEmptyRef``.
Tests:
- ``renders the Setup Provider CTA when no models are enabled``
(replaces the old disabled-combobox expectation).
- ``auto-selects an available model when the saved value is
globally disabled``.
- ``renders the Setup Provider CTA when a provider is configured
but all its models are disabled``.
- ``never renders the Configure wrench affordance``.
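The ``selectedModel`` fall-through described above reduces to a small selection rule. A minimal sketch, assuming a flat list of enabled option names; ``resolveSelection`` is a hypothetical helper, not the component's actual code.

```typescript
// If the saved name isn't among the enabled options, fall through to the
// first available option, or null when nothing is available.
function resolveSelection(saved: string | null, flatOptions: string[]): string | null {
  if (saved && flatOptions.includes(saved)) return saved;
  return flatOptions[0] ?? null; // globally-disabled saved value falls through
}
```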
Per discussion: dropped the sticky-default UX entirely in a154693.
Test changes:
91/91 tests still pass; biome + staged-file no-any pre-commit pass.
Cristhianzl
left a comment
Code Review Summary
This PR fixes two related regressions in the Model Provider management flow on release-1.9.3 — a toggle "bounce" caused by an optimistic update without cancelQueries, and a stale Agent dropdown caused by a loading gate that watched only one of two refetched queries. Both fixes apply the canonical TanStack Query optimistic-update pattern, are well-commented with Why: rationale on every non-obvious block, and ship with focused adversarial unit tests (87 tests pass per the PR description). The split-buffer (overlayToggles / unsentToggles) design is correctly motivated by the duplicate-payload bug that a single buffer would create, and the re-overlay effect's loop-prevention invariant is defensible. The only structural concern is that useProviderConfiguration.ts already exceeded the 500-line hard limit before this PR, and this PR pushes it from ~681 to 821 lines — the toggle-queue logic should be extracted to a dedicated module as a follow-up.
Verdict: Approve with comments
Findings: 0 blockers, 1 important, 4 recommended
⚠️ Important (preferably this PR, otherwise tracked as follow-up)
I1 — useProviderConfiguration.ts is now 821 lines, exceeds the 500-line hard limit
File: src/frontend/src/modals/modelProviderModal/hooks/useProviderConfiguration.ts:1-821
Issue: The reviewer rule sets a hard limit of 500 lines per file, with up to 600–700 acceptable only if all other structural rules pass. The file is now 821 lines after this PR (~140 lines added net), well past even the flex cap. The hook also mixes multiple responsibilities — secret-variable CRUD, provider activation, provider disconnect, model toggle queue, debounced flush, async flush, optimistic cache management, re-overlay effect — which is a validate* AND format*-style mixed-responsibility violation extended to several categories.
Why it matters: Reviewer rule explicitly states "A 650-line file that passes all SRP and separation checks is acceptable. A 400-line file with mixed responsibilities is NOT." This file is over 800 lines AND has mixed responsibilities, so both gates are failing. The model-toggle queue (overlay buffer, unsent buffer, clearSentOverlay, flushModelToggles, flushPendingChanges, handleModelToggle, re-overlay effect) is the obvious extraction candidate — it's a self-contained unit with one responsibility and zero coupling to the variable-CRUD code above it.
Suggested fix: Extract into a dedicated hook in this PR if reviewer agrees; otherwise track as a follow-up cleanup ticket before the next change touches this file. The extracted shape would be:
modelProviderModal/hooks/
├── useProviderConfiguration.ts (variables / save / activate / disconnect)
└── useModelToggleQueue.ts (overlay + unsent buffers, flush, re-overlay)
Note: the existing legacy-code exemption in the rule allows up to CC 15 only if the change does NOT increase complexity. The PR does increase total file size and adds new functions, so the legacy exemption does not fully apply here.
Line-count evidence
$ wc -l src/frontend/src/modals/modelProviderModal/hooks/useProviderConfiguration.ts
821 useProviderConfiguration.ts

The pre-PR file was 681 lines; with 54 deletions, that is roughly 194 additions for a net +140 lines.
💡 Recommended (can ship as a follow-up)
R1 — Duplicate "flush" logic between flushModelToggles and flushPendingChanges
File: src/frontend/src/modals/modelProviderModal/hooks/useProviderConfiguration.ts:608-700
Issue: Both functions share the same skeleton:
- Guard on ``syncedSelectedProvider?.provider``.
- Snapshot ``unsentToggles.current`` into ``togglesToSend``.
- Bail when empty.
- Build the ``updates`` array.
- Capture ``previousData`` from ``fallbackModelData.current``.
- Drain ``unsentToggles``.
- Call the mutation (sync vs async variant).
- On error: ``clearSentOverlay``, restore ``previousData``, surface an error toast.

The two variants exist for legitimate reasons (debounced vs awaitable), but the shared core is now copied twice — every future change must be made in two places. The PR makes this slightly worse by adding the new ``clearSentOverlay`` call site to both.
Why it matters: DRY violation on a security-relevant path (incorrect rollback would leak a forged-looking optimistic state). The cost of a code drift here is "the two paths fall out of sync and one of them retries forever / fails to rollback / fails to drain the overlay."
Suggested fix: Extract a buildAndConsumeToggleBatch() helper that returns { updates, previousData, providerName } | null and a rollbackToggleBatch(togglesToSend, previousData) helper for the error path. Both flush variants call those, then differ only in mutate(...) vs await mutateAsync(...). This eliminates the most error-prone copy/paste.
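The two proposed helpers could be shaped as below. Only the names come from the review; the bodies are illustrative, under the assumption that toggles are a flat model-to-enabled map and the cache restore is injected.

```typescript
type Toggles = Record<string, boolean>;
type Batch = {
  updates: Array<{ model: string; enabled: boolean }>;
  previousData: Toggles;
};

// Snapshot the unsent buffer into a mutation payload, capture the pre-toggle
// cache for rollback, and drain the unsent buffer atomically.
function buildAndConsumeToggleBatch(unsent: Toggles, currentCache: Toggles): Batch | null {
  const entries = Object.entries(unsent);
  if (entries.length === 0) return null; // bail when empty
  const updates = entries.map(([model, enabled]) => ({ model, enabled }));
  const previousData = { ...currentCache }; // rollback snapshot
  for (const [model] of entries) delete unsent[model]; // drain
  return { updates, previousData };
}

function rollbackToggleBatch(
  overlay: Toggles,
  sent: Batch["updates"],
  cache: { data: Toggles },
  previousData: Toggles,
): void {
  // Load-bearing order: drain the overlay BEFORE restoring previousData,
  // otherwise the re-overlay effect would re-apply the stale overlay on top
  // of the rollback. Skip entries the user re-toggled mid-flight.
  for (const { model, enabled } of sent) {
    if (overlay[model] === enabled) delete overlay[model];
  }
  cache.data = { ...previousData };
}
```

Both flush variants would call these and differ only in ``mutate`` vs ``await mutateAsync`` plumbing.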
R2 — Re-overlay effect's loop-prevention invariant deserves an explicit Why: line
File: src/frontend/src/modals/modelProviderModal/hooks/useProviderConfiguration.ts:755-784
Issue: The effect calls ``queryClient.setQueryData(...)``, which causes ``useGetEnabledModels`` to emit new data, which re-runs the effect with the just-updated ``enabledModelsData``. The infinite-loop guard is the ``if (!drifted) return;`` short-circuit on the next pass — once the overlay has been applied, ``current[model] === enabled`` for every overlay entry, drift is false, and the effect no-ops. The existing comment block describes the purpose well but does not explicitly call out the loop-prevention role of the ``drifted`` check.
Why it matters: A future refactor that "simplifies" the drift check (e.g., always re-apply on data change) would silently introduce an infinite re-render loop. The invariant should be load-bearing in the comment, not implicit.
Suggested fix: Add one line above the ``drifted`` computation:

```typescript
// Loop guard: the setQueryData below re-emits enabledModelsData and re-runs
// this effect; the drift check must return false on the second pass for the
// recursion to terminate. Don't replace this with an unconditional re-apply.
const drifted = Object.entries(overlay).some(...);
```

R3 — Test gap: no adversarial coverage for the mutation error path
File: src/frontend/src/modals/modelProviderModal/__tests__/useProviderConfiguration.test.tsx:1-329
Issue: The new test file covers six positive-direction scenarios (cancel-before-set, no-op on null provider, re-overlay on stale refetch, no resend in-flight, re-toggle resend, no-op on mount). Per REVIEWER_RULE.md → TESTING → "Tests MUST Also Challenge the Code", both happy path AND adversarial tests must exist. The error path (mutation fails → rollback to previousData → overlay drained → re-overlay effect sees no drift) is exactly the subtle path most likely to regress, and it is not exercised.
Why it matters: The clearSentOverlay drain-before-rollback ordering in the onError branch is load-bearing (without it, the re-overlay effect would re-apply a stale overlay onto the just-rolled-back cache and undo the rollback). A regression that reordered those two lines would currently pass CI.
Suggested fix: Add tests that trigger the error callback:

```typescript
it("rolls back to previousData when the toggle mutation fails", () => { /* ... */ });
it("does not re-apply the overlay after a failed mutation drains it", () => { /* ... */ });
it("preserves a mid-flight re-toggle when the original mutation fails", () => { /* ... */ });
```

The existing mock infra (the ``mutationCallbacks`` capture array) is already wired to do this — call ``mutationCallbacks[0].onError?.(new Error("boom"))`` from inside ``act()``.
R4 — flushPendingChanges success path leaves overlay drained but skips local invalidation
File: src/frontend/src/modals/modelProviderModal/hooks/useProviderConfiguration.ts:677-682
Issue: On success, ``flushPendingChanges`` drains the overlay via ``clearSentOverlay`` but does not itself invalidate the ``useGetEnabledModels`` query. The inline comment defers this to "``refreshAllModelInputs`` which runs after this promise resolves." That happens in src/frontend/src/modals/modelProviderModal/index.tsx:27-28:

```typescript
const flushPromise = flushRef.current?.();
onClose();
await flushPromise;
refreshAllModelInputs({ silent: true }); // ← this is the implicit invalidation
```

The coupling is correct for the only current call site, but it's invisible: a future caller that uses ``flushPendingChanges`` without immediately following with ``refreshAllModelInputs`` will silently leave the cache in optimistic-state-without-server-confirmation.
Why it matters: Action-at-a-distance pattern. The reviewer rule's comprehension audit asks "what would fail if a given block were removed?" — removing refreshAllModelInputs in index.tsx:28 would break flushPendingChanges's contract but no test or type declares the dependency.
Suggested fix (low-cost): Either invalidate ["useGetEnabledModels"] inside flushPendingChanges after success (matches flushModelToggles.onSettled), or rename to flushPendingChangesWithoutInvalidation and add a Why: comment naming the caller that owns the invalidation. The former is preferable — invalidations are idempotent and cheap.
✅ Action checklist for the author
Important (preferably this PR):
- I1 — Extract toggle-queue logic out of ``useProviderConfiguration.ts`` (currently 821 lines, hard limit 500). Suggested module: ``useModelToggleQueue.ts``. Acceptable as a same-day follow-up if reviewer agrees.
Recommended (can ship as follow-up PR):
- R1 — DRY the ``flushModelToggles`` / ``flushPendingChanges`` shared skeleton behind two small helpers (``buildAndConsumeToggleBatch``, ``rollbackToggleBatch``).
- R2 — Add a one-line ``Why:`` callout on the ``drifted`` check explicitly naming it as the re-render loop guard.
- R3 — Add adversarial tests for the mutation error path (rollback to ``previousData``, no re-overlay after drain, mid-flight re-toggle survival).
- R4 — Either invalidate ``["useGetEnabledModels"]`` inside ``flushPendingChanges`` on success, or document the implicit dependency on the caller's ``refreshAllModelInputs``.
Test suggestions (post-R3):
- ``it("rolls back to previousData when the toggle mutation fails")``
- ``it("does not re-apply the overlay after a failed mutation drains it")``
- ``it("preserves a mid-flight re-toggle when the original mutation fails")``
- ``it("invalidates useGetEnabledModels after a successful flushPendingChanges")`` (only if R4 is taken in this PR)
What was checked and passed
For the record — the following gates from REVIEWER_RULE.md were verified and passed:
- ✅ PII in logs — no ``email``, ``first_name``, ``last_name``, ``user.email`` patterns introduced anywhere. Error toasts surface ``error.response.data.detail`` to the UI; nothing is logged.
- ✅ Security mindset (the Five Questions) — the PR's threat surface is a UI cache layer, not authentication/authorization/payments. No trust-without-verify boundary is introduced; the optimistic cache update is purely client-side and is reconciled by ``invalidateQueries`` on ``onSettled``.
- ✅ Comprehension audit — every non-obvious block has a ``Why:`` comment block explaining the design choice (split buffers, drain-before-rollback ordering, re-overlay loop guard, dual-fetch loading gate). The author's PR description articulates the root cause and the fix in standalone prose.
- ✅ AI-generated-code lens — N/A (no security-critical path is in this diff; the code is UI cache management).
- ✅ AI runtime resilience — N/A (no LLM call introduced).
- ✅ DRY — ``getErrorMessage`` is a net DRY improvement, replacing four duplicated ``error?.response?.data?.detail || error?.message`` chains. R1 calls out the remaining duplication.
- ✅ Cyclomatic complexity — every new function is ≤ CC 6 (``clearSentOverlay`` ≈ 3, ``flushModelToggles`` ≈ 5, ``handleModelToggle`` ≈ 4, re-overlay effect ≈ 6).
- ✅ Nesting depth — max 3 levels in any new function.
- ✅ Strong typing — ``any`` → ``unknown`` migration across four ``catch`` blocks via ``getErrorMessage``. The remaining ``as { response?: ... }`` cast inside ``getErrorMessage`` is acceptable: an Axios error shape is the only realistic input at those sites, and the cast is type-narrowed and immediately defended by optional chaining.
- ✅ Tests — coverage on both fixes; happy + structural paths covered. (R3 flags the missing error-path coverage.)
- ✅ Loading-gate fix — the ``ModelInputComponent`` change correctly adds ``isFetchingEnabledModels`` to both the predicate and the dependency array; the new test simulates the exact race (providers settles first while enabled-models is still in flight) and asserts the loading button persists until both settle.
- ✅ No ``console.log``, ``print()``, or ``eval`` introduced anywhere in the diff.
- ✅ No ``Fixes #N`` / ``Closes #N`` / ``Resolves #N`` keywords in this document.
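The ``getErrorMessage`` narrowing covered in the strong-typing check might look like the sketch below. The field names follow the review text (``error.response.data.detail``); the exact helper in the PR may differ.

```typescript
// Narrow a caught `unknown` to the Axios-like error shape the hook deals with.
// The cast is defended by optional chaining, so any non-conforming input
// falls through to the generic message.
function getErrorMessage(error: unknown): string {
  const axiosLike = error as {
    response?: { data?: { detail?: string } };
    message?: string;
  };
  return axiosLike?.response?.data?.detail ?? axiosLike?.message ?? "Unknown error";
}
```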
Reviewer's notes
- The split between ``overlayToggles`` (the union of every un-confirmed toggle, for UI protection across refetches) and ``unsentToggles`` (the strict subset awaiting the next flush) is the correct shape. A single buffer would either resend in-flight payloads on each new toggle, or lose UI protection mid-flight. Both alternatives are documented in the block comment. This is exemplary ``Why:`` documentation.
- The ``cancelQueries`` placement at the top of ``handleModelToggle`` matches the canonical TanStack Query optimistic-update pattern. Pairing it with the re-overlay effect (to cover the window AFTER click, when ``cancelQueries`` no longer protects) is the correct two-layer defense.
- The dual ``clearSentOverlay`` call (both ``onError`` and ``onSettled``) initially looked redundant, but the ``onError`` call is load-bearing: it must run BEFORE the ``setQueryData(previousData)`` rollback so the re-overlay effect can't re-apply the stale overlay over the rollback. The current code orders these correctly; R2's loop-guard comment partially covers this, but a small parallel comment in the ``onError`` block would also help.
Addresses review I1 (file size + mixed responsibilities) and R1, R2,
R3, R4 in one pass.
I1 — Extract toggle-queue logic out of useProviderConfiguration. The
parent hook drops from 821 lines to 601 and now owns a single
responsibility (variable CRUD + provider lifecycle). The toggle queue
— overlay/unsent buffers, debounced flush, awaitable flush, optimistic
cache management, re-overlay effect — lives in a new 292-line
useModelToggleQueue.ts with one clear responsibility.
R1 — Inside the new hook, DRY the two flush variants behind shared
helpers:
- ``buildAndConsumeToggleBatch()`` snapshots unsentToggles into the
mutation payload, captures the pre-toggle cache for rollback, and
drains unsentToggles atomically.
- ``rollbackToggleBatch()`` drains overlay BEFORE restoring
previousData (load-bearing order — otherwise the re-overlay effect
would re-apply the stale overlay over the rollback).
The two flush callers now differ only in ``mutate`` vs awaitable
``mutateAsync`` plumbing.
R2 — Explicit loop-guard comment on the drift check inside the
re-overlay effect. Names the ``drifted`` short-circuit as the
termination condition so a future refactor that "simplifies" to an
unconditional re-apply doesn't silently introduce a render loop.
R3 — Adversarial coverage for the mutation error path:
- ``rolls back to previousData when the toggle mutation fails``
- ``does not re-apply the overlay after a failed mutation drains it``
(asserts the drain-before-rollback ordering)
- ``preserves a mid-flight re-toggle when the original mutation
fails`` (asserts a user re-toggle survives the originating
mutation's failure)
R4 — ``flushPendingChanges`` now invalidates the affected queries
inline on success, instead of relying on the caller's
``refreshAllModelInputs`` for the load-bearing invalidation. The
caller's refresh remains as an additive per-node template refresh.
Test infra: moved toggle-queue tests from
``useProviderConfiguration.test.tsx`` to a dedicated
``useModelToggleQueue.test.tsx`` that exercises the extracted hook
directly. Debounce mock changed from synchronous pass-through to
explicit ``runDebounced()`` so individual tests can choose whether to
exercise the debounced path or the awaitable ``flushPendingChanges``
path.
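The explicit-release debounce mock described above could be shaped like this. ``runDebounced`` is the name from the commit message; the wiring is a framework-free sketch, and the real version would be installed through the test framework's module mocking.

```typescript
let pending: (() => void) | null = null;

// Stands in for the real debounce util in tests: capture the latest call
// instead of scheduling it on a timer.
function mockDebounce<A extends unknown[]>(fn: (...args: A) => void): (...args: A) => void {
  return (...args: A) => {
    pending = () => fn(...args);
  };
}

// Each test decides when (and whether) the captured flush fires.
function runDebounced(): void {
  pending?.();
  pending = null;
}
```

This lets one test exercise the debounced flush path explicitly while another ignores it and drives the awaitable ``flushPendingChanges`` path instead.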
95/95 tests pass (87 prior + new R3 coverage + R4 success-path
invalidation).
Addressed I1 + R1, R2, R3, R4 in 9c3b0e6:
- I1 — file size + mixed responsibilities
- R1 — DRY the flush helpers
- R2 — explicit loop-guard comment
- R3 — adversarial error-path coverage
- R4 — invalidate inside ``flushPendingChanges``

Test infra: moved toggle tests from ``useProviderConfiguration.test.tsx`` to a dedicated ``useModelToggleQueue.test.tsx`` (exercising the extracted hook directly). Debounce mock changed from synchronous pass-through to explicit ``runDebounced()`` so each test chooses whether to exercise the debounced path or the awaitable ``flushPendingChanges`` path.

95/95 tests pass; biome + staged-file no-any pre-commit pass.
Summary
Two related regressions on ``release-1.9.3`` in the Model Provider management flow:
1. The model ``Switch`` in the Google Generative AI (and other provider) sections flickered between on/off before settling.
2. The Agent's Language Model dropdown stayed stale after the provider modal closed.

Both have a common shape: optimistic/cached state being overwritten or read while still stale.
Root cause
Toggle bounce
``handleModelToggle`` performed an optimistic ``queryClient.setQueryData`` on ``["useGetEnabledModels"]`` but did not call ``cancelQueries`` first. With the project's default ``QueryClient`` (created with no ``defaultOptions`` in ``contexts/index.tsx``), React Query falls back to ``staleTime: 0``, ``refetchOnWindowFocus: true``, ``refetchOnMount: true``. The flush mutation is debounced 1s. During that window any background refetch would land with stale server data and overwrite the optimistic state, snapping the ``Switch`` back to its prior value — the visible bounce. When the deferred mutation finally landed, ``onSettled`` invalidated again and the cache returned to the correct state.
When the Manage Model Providers modal closes, the consuming
ModelInputComponentsetsisRefreshingAfterClose=trueand waits foruseGetModelProviders.isFetchingto cycle before clearing the loading button. It did not observeuseGetEnabledModels.isFetching. Both queries are invalidated together byrefreshAllModelInputs, but they refetch concurrently — if providers settles first, the loading state clears andgroupedOptionsfilters against still-staleenabledModelsData, leaking the just-disabled models back into the dropdown until the next interaction.Fix
- ``queryClient.cancelQueries({ queryKey: ["useGetEnabledModels"] })`` at the top of ``handleModelToggle``, before the fallback capture and ``setQueryData``. This is the canonical TanStack Query optimistic-updates pattern.
- ``isFetching`` from ``useGetEnabledModels`` is now included in the post-close loading-gate effect's predicate and dependency array. The loading state now persists until both queries are fresh.

The sticky-default UX is preserved — the user's previously-selected model continues to show in the dropdown with the wrench/Configure affordance even when globally disabled (by design via the ``not_enabled_locally`` tag).

Also narrowed the pre-existing ``catch (error: any)`` patterns in the edited hook to ``unknown`` via a tiny shared ``getErrorMessage`` helper, to satisfy the staged-file ``no-explicit-any`` pre-commit lint that fires whenever the file is touched.

Test plan
- ``npx jest src/modals/modelProviderModal src/components/core/parameterRenderComponent/components/modelInputComponent`` — 87/87 pass
- ``useProviderConfiguration.test.tsx``: asserts ``cancelQueries`` is invoked before ``setQueryData`` on toggle
- ``ModelInputComponent.test.tsx``: simulates providers settling first while enabled-models is still in flight — loading button persists until both settle

Summary by CodeRabbit

Bug Fixes