ECHO-527 ongoing conversation tag issue resolved (#368)
Walkthrough

Adds chunk sorting by timestamp descending to conversation queries across multiple components. Reduces polling interval from 40 to 30 seconds. Refactors OngoingConversationsSummaryCard to use aggregate-based countDistinct for conversation counting with cache invalidation logic on state transitions.
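The newest-first chunk sorting the walkthrough describes can be sketched as a query-options builder. This is an illustration only: the field names (`deep.chunks`, `_sort`, `_limit`) follow the review comments further down, but the exact shape in `hooks/index.ts` may differ, and the `_limit` values here are assumptions.

```typescript
// Hedged sketch of the newest-first deep chunk sort. Field names mirror
// the review comments; the _limit values are illustrative assumptions.
type ConversationQuery = {
  deep: {
    chunks: {
      _sort: string[]; // newest first
      _limit: number;
    };
  };
};

const buildConversationQuery = (loadChunks: boolean): ConversationQuery => ({
  deep: {
    chunks: {
      // Sort by timestamp descending (falling back to created_at) so the
      // "ongoing"/live check always sees the most recent chunk first
      // instead of relying on the backend's default ordering.
      _sort: ["-timestamp", "-created_at"],
      // Pull a single chunk when we only need the live flag.
      _limit: loadChunks ? 100 : 1,
    },
  },
});

console.log(buildConversationQuery(false).deep.chunks._limit); // 1
```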
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~60 minutes
Pre-merge checks and finishing touches

❌ Failed checks (1 inconclusive)
✅ Passed checks (4 passed)
Actionable comments posted: 2
📜 Review details
Configuration used: CodeRabbit UI
Review profile: ASSERTIVE
Plan: Pro
Disabled knowledge base sources:
- Linear integration is disabled by default for public repositories
You can enable these sources in your CodeRabbit configuration.
📒 Files selected for processing (3)
- echo/frontend/src/components/conversation/ConversationAccordion.tsx (1 hunks)
- echo/frontend/src/components/conversation/OngoingConversationsSummaryCard.tsx (1 hunks)
- echo/frontend/src/components/conversation/hooks/index.ts (4 hunks)
🧰 Additional context used
🧠 Learnings (5)
📓 Common learnings
Learnt from: spashii
Repo: Dembrane/echo PR: 142
File: echo/frontend/src/lib/query.ts:680-696
Timestamp: 2025-05-13T15:20:25.165Z
Learning: In the Echo project, the conversation tags feature is designed with an implicit limit of 100 tags per conversation. Having more than 100 tags indicates a misuse of the feature rather than a limitation that needs to be addressed in the code. Errors for exceeding this limit should be handled at a logical level above the implementation.
Learnt from: ussaama
Repo: Dembrane/echo PR: 336
File: echo/server/dembrane/api/chat.py:593-630
Timestamp: 2025-10-15T11:06:42.397Z
Learning: In `echo/server/dembrane/api/chat.py`, the auto-select conversation flow (lines 589-631) deliberately uses incremental system message generation with `create_system_messages_for_chat` and `token_counter` for each candidate conversation to ensure accurate token count estimation before adding conversations. The user (ussaama) prioritizes accuracy over the O(n²) performance cost to stay within the 80% context threshold precisely.
Learnt from: ussaama
Repo: Dembrane/echo PR: 205
File: echo/frontend/src/lib/query.ts:1444-1506
Timestamp: 2025-07-10T12:48:20.683Z
Learning: ussaama prefers string concatenation over template literals for simple cases where readability is clearer, even when linting tools suggest template literals. Human readability takes precedence over strict linting rules in straightforward concatenation scenarios.
📚 Learning: 2025-08-19T10:22:55.323Z
Learnt from: ussaama
Repo: Dembrane/echo PR: 266
File: echo/frontend/src/components/conversation/ConversationAccordion.tsx:675-678
Timestamp: 2025-08-19T10:22:55.323Z
Learning: In echo/frontend/src/components/conversation/hooks/index.ts, the useConversationsCountByProjectId hook uses useSuspenseQuery with Directus aggregate, which always returns string numbers like "0", "1", "2" and suspends during loading instead of returning undefined. Therefore, Number(conversationsCountQuery.data) ?? 0 is safe and the Number() conversion is necessary for type conversion from string to number.
Applied to files:
- echo/frontend/src/components/conversation/hooks/index.ts
- echo/frontend/src/components/conversation/OngoingConversationsSummaryCard.tsx
📚 Learning: 2025-08-19T10:22:55.323Z
Learnt from: ussaama
Repo: Dembrane/echo PR: 266
File: echo/frontend/src/components/conversation/ConversationAccordion.tsx:675-678
Timestamp: 2025-08-19T10:22:55.323Z
Learning: In echo/frontend/src/components/conversation/hooks/index.ts, the useConversationsCountByProjectId hook uses regular useQuery (not useSuspenseQuery), which means conversationsCountQuery.data can be undefined during loading states. When using Number(conversationsCountQuery.data) ?? 0, this creates NaN because Number(undefined) = NaN and NaN is not nullish, so the fallback doesn't apply. The correct pattern is Number(conversationsCountQuery.data ?? 0) to ensure the fallback happens before type conversion.
Applied to files:
- echo/frontend/src/components/conversation/hooks/index.ts
- echo/frontend/src/components/conversation/OngoingConversationsSummaryCard.tsx
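The pitfall this learning describes is easy to reproduce in isolation. A minimal sketch, with plain functions standing in for the query-hook access (the names `buggy` and `safe` are illustrative):

```typescript
// Directus aggregates return counts as strings ("0", "1", ...), and a
// non-suspense useQuery can yield undefined while loading.

// Buggy pattern: Number(undefined) is NaN, and NaN is not nullish,
// so the ?? fallback never fires.
const buggy = (data: string | undefined): number => Number(data) ?? 0;

// Correct pattern: apply the fallback BEFORE the numeric conversion.
const safe = (data: string | undefined): number => Number(data ?? 0);

console.log(buggy(undefined)); // NaN
console.log(safe(undefined)); // 0
console.log(safe("2")); // 2
```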
📚 Learning: 2025-10-15T11:06:42.397Z
Learnt from: ussaama
Repo: Dembrane/echo PR: 336
File: echo/server/dembrane/api/chat.py:593-630
Timestamp: 2025-10-15T11:06:42.397Z
Learning: In `echo/server/dembrane/api/chat.py`, the auto-select conversation flow (lines 589-631) deliberately uses incremental system message generation with `create_system_messages_for_chat` and `token_counter` for each candidate conversation to ensure accurate token count estimation before adding conversations. The user (ussaama) prioritizes accuracy over the O(n²) performance cost to stay within the 80% context threshold precisely.
Applied to files:
echo/frontend/src/components/conversation/OngoingConversationsSummaryCard.tsx
📚 Learning: 2025-10-24T08:08:23.128Z
Learnt from: ussaama
Repo: Dembrane/echo PR: 349
File: echo/frontend/src/components/report/hooks/index.ts:51-71
Timestamp: 2025-10-24T08:08:23.128Z
Learning: In echo/frontend/src/components/report/hooks/index.ts, the useGetProjectParticipants hook uses Directus aggregate which always returns count as a string (e.g., "0", "1", "2"). The pattern `Number.parseInt(result[0]?.count ?? "0", 10) || 0` is the correct way to handle this, with the string fallback "0" ensuring parseInt receives a valid string input.
Applied to files:
echo/frontend/src/components/conversation/OngoingConversationsSummaryCard.tsx
🧬 Code graph analysis (1)
echo/frontend/src/components/conversation/OngoingConversationsSummaryCard.tsx (1)
echo/frontend/src/lib/directus.ts (1)
directus(6-14)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (1)
- GitHub Check: ci-check-server
🔇 Additional comments (3)
echo/frontend/src/components/conversation/ConversationAccordion.tsx (1)
695-704: Deep chunk sort matches live-detection semantics

Sorting `deep.chunks` by `["-timestamp", "-created_at"]` while keeping the `_limit` is exactly what you want here: you're now always looking at the most recent chunks when evaluating things like "ongoing" state instead of relying on Directus' default ordering. Clean, deterministic, and consistent with the hooks layer.

echo/frontend/src/components/conversation/hooks/index.ts (1)
661-671: Newest-first deep chunk sort fixes live flag correctness

The `_sort: ["-timestamp", "-created_at"]` addition under `deep.chunks` is spot on. With `_limit` depending on `loadChunks`, you now reliably pull the most recent chunk(s) when computing the `live` flag instead of gambling on backend ordering. This should directly stabilize the "ongoing" tag behavior.

echo/frontend/src/components/conversation/OngoingConversationsSummaryCard.tsx (1)
17-20: State machine is genuinely over-invalidating and doing double writes. Fix confirmed.

The review is spot on. I traced through the actual logic:

Over-invalidation: `if (currentCount > 0 || hasOngoingConversations)` will be true for every poll as long as there's ever been any ongoing conversation. Poll 1 (count=2, prev=false) → invalidates, sets true. Poll 2 (count=1, prev=true) → still true, invalidates again. Poll N (count=0, prev=true) → false || true still invalidates. You're hammering that cache unnecessarily.

Double write: Lines 56 and 59 set `hasOngoingConversations` twice: first to `false`, then immediately to `currentCount > 0`. That's noisy and defeats the point of tracking state.

Hardcoded interval confirmed: Line 64 has `refetchInterval: 30000` while `TIME_INTERVAL_SECONDS = 30` at line 10. Should be `TIME_INTERVAL_SECONDS * 1000` to stay DRY.

Parsing: The current `Number((result[0]?.countDistinct?.conversation_id as string) ?? "0")` is actually safe here, but `parseInt(..., 10) || 0` is slightly more robust (handles malformed input gracefully) and aligns with patterns elsewhere.

The edge-detection fix (using `setHasOngoingConversations((previous) => { ... })`) nails it: only invalidate on the 0 → >0 transition, not on every poll.

⛔ Skipped due to learnings
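The edge-detection pattern endorsed in the comment above can be factored into a pure transition function for illustration. The names (`nextTransition`, `shouldInvalidate`) are hypothetical; in the actual component this logic would sit inside a functional `setHasOngoingConversations` updater, with `queryClient.invalidateQueries` called only when the flag flips on.

```typescript
// Pure sketch of the 0 -> >0 edge detection: invalidate the cache only
// when ongoing conversations first appear, not on every poll.
type Transition = {
  hasOngoing: boolean; // next state for the flag
  shouldInvalidate: boolean; // true only on the 0 -> >0 edge
};

const nextTransition = (
  previousHasOngoing: boolean,
  currentCount: number,
): Transition => {
  const hasOngoing = currentCount > 0;
  return {
    hasOngoing,
    // Rising edge only: previously no ongoing conversations, now some.
    shouldInvalidate: hasOngoing && !previousHasOngoing,
  };
};

console.log(nextTransition(false, 2).shouldInvalidate); // true
console.log(nextTransition(true, 1).shouldInvalidate); // false
```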
```
const TIME_INTERVAL_SECONDS = 30;
```
🧹 Nitpick | 🔵 Trivial
Wire refetchInterval to TIME_INTERVAL_SECONDS to avoid future drift
Right now you’ve got TIME_INTERVAL_SECONDS = 30 in both hooks but still hard-code refetchInterval: 30000. That’s fine today, but it’s an easy place for things to fall out of sync the next time someone tweaks the window.
I’d tie the polling to the same constant:
```diff
 export const useConversationsByProjectId = (
@@
   const TIME_INTERVAL_SECONDS = 30;
@@
-  refetchInterval: 30000,
+  refetchInterval: TIME_INTERVAL_SECONDS * 1000,
@@
 export const useInfiniteConversationsByProjectId = (
@@
   const TIME_INTERVAL_SECONDS = 30;
@@
-  refetchInterval: 30000,
+  refetchInterval: TIME_INTERVAL_SECONDS * 1000,
```

Keeps the "live window" and polling cadence locked together.
Also applies to: 732-733, 872-873, 955-956
🤖 Prompt for AI Agents
In echo/frontend/src/components/conversation/hooks/index.ts around lines 661-662
(also apply same change at 732-733, 872-873, 955-956): the refetchInterval is
hard-coded as 30000 while TIME_INTERVAL_SECONDS is defined as 30, which can
drift when one is changed; update the refetchInterval to derive from
TIME_INTERVAL_SECONDS (multiply by 1000) so polling cadence is tied to the
single constant, and ensure any other hard-coded 30000 occurrences in the listed
line ranges are replaced similarly.
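A minimal sketch of the pattern the diff above suggests, with illustrative query options (the real `useQuery` configuration in the hooks has more fields):

```typescript
// Derive the polling cadence from the single constant so the "live
// window" and the refetch interval cannot drift apart when one is tweaked.
const TIME_INTERVAL_SECONDS = 30;

const queryOptions = {
  // 30 * 1000 = 30000 ms; changing the constant updates both uses.
  refetchInterval: TIME_INTERVAL_SECONDS * 1000,
};

console.log(queryOptions.refetchInterval); // 30000
```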
```
filter: {
  conversation_id: {
    project_id: projectId,
  },
```
🧹 Nitpick | 🔵 Trivial
Make the nested project_id filter explicit under conversation_id
Minor clarity/readability nit: you’re currently relying on implicit equality sugar for the nested project filter:
```
conversation_id: {
  project_id: projectId,
},
```

I'd make this explicit so nobody has to mentally expand the Directus filter rules and so TypeScript tooling doesn't get confused by the nesting:
```diff
-conversation_id: {
-  project_id: projectId,
-},
+conversation_id: {
+  project_id: {
+    _eq: projectId,
+  },
+},
```

Functionally the same intent, but clearer to future readers and less magical.
🤖 Prompt for AI Agents
In echo/frontend/src/components/conversation/OngoingConversationsSummaryCard.tsx
around lines 30 to 33, the Directus nested filter for conversation_id uses
implicit equality shorthand for project_id which is unclear; update the filter
to explicitly use the equality operator (e.g., set project_id: { _eq: projectId
} or the Directus explicit comparator your codebase uses) so the intent is
explicit, TypeScript tooling can infer types, and readers don’t need to expand
the implicit sugar.
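Putting the two nitpicks together, a hedged sketch of the explicit `_eq` filter plus the string-count parsing discussed earlier. Names follow the review comments (`countDistinct`, `conversation_id`, `project_id`); the helper names are hypothetical, and the real aggregate call goes through the Directus SDK.

```typescript
// Shape of one row returned by a Directus countDistinct aggregate;
// counts arrive as strings ("0", "1", ...).
type AggregateRow = { countDistinct?: { conversation_id?: string } };

const buildOngoingFilter = (projectId: string) => ({
  filter: {
    conversation_id: {
      // Explicit comparator instead of the implicit equality sugar.
      project_id: { _eq: projectId },
    },
  },
});

// String fallback "0" guarantees parseInt gets a valid string, and the
// trailing || 0 absorbs malformed input (NaN).
const parseDistinctCount = (rows: AggregateRow[]): number =>
  Number.parseInt(rows[0]?.countDistinct?.conversation_id ?? "0", 10) || 0;

console.log(parseDistinctCount([{ countDistinct: { conversation_id: "3" } }])); // 3
console.log(parseDistinctCount([])); // 0
```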
Summary by CodeRabbit
Release Notes