
ECHO-527 ongoing conversation tag issue resolved #368

Merged
spashii merged 2 commits into main from ongoing-conversation-tag-issue
Nov 21, 2025

Conversation


@ussaama ussaama commented Nov 21, 2025

Summary by CodeRabbit

Release Notes

  • Performance & Bug Fixes
    • Conversations now display sorted by most recent activity first.
    • Accelerated data refresh rate for faster updates to ongoing conversations.
    • Optimized query efficiency for improved responsiveness.



linear bot commented Nov 21, 2025


coderabbitai bot commented Nov 21, 2025

Walkthrough

Adds chunk sorting by timestamp descending to conversation queries across multiple components. Reduces polling interval from 40 to 30 seconds. Refactors OngoingConversationsSummaryCard to use aggregate-based countDistinct for conversation counting with cache invalidation logic on state transitions.

Changes

  • Chunk retrieval sorting (echo/frontend/src/components/conversation/ConversationAccordion.tsx, echo/frontend/src/components/conversation/hooks/index.ts)
    Adds a _sort: ["-timestamp", "-created_at"] directive to chunk retrievals in the deep.chunks queries of the conversation-by-project and infinite-conversations hooks, so results are ordered by timestamp descending.
  • Polling interval reduction & aggregate optimization (echo/frontend/src/components/conversation/OngoingConversationsSummaryCard.tsx, echo/frontend/src/components/conversation/hooks/index.ts)
    Reduces TIME_INTERVAL_SECONDS from 40 to 30 in both hooks; replaces readItems-based counting with countDistinct aggregation; introduces queryClient and local state to track ongoing conversations and invalidate the related project-conversations cache on state transitions.
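As a rough illustration of what the new directive asks Directus for, the sketch below mirrors _sort: ["-timestamp", "-created_at"] with a client-side comparator. The Chunk shape and sample data here are assumptions for demonstration only, not the project's actual types.

```typescript
// Illustrative only: a client-side equivalent of the Directus directive
// _sort: ["-timestamp", "-created_at"] added in this PR.
// The Chunk interface is a simplified assumption, not the real project type.
interface Chunk {
  timestamp: string;   // ISO 8601
  created_at: string;  // ISO 8601
}

// Newest timestamp first; created_at descending breaks ties.
function byNewestFirst(a: Chunk, b: Chunk): number {
  const byTimestamp = Date.parse(b.timestamp) - Date.parse(a.timestamp);
  return byTimestamp !== 0
    ? byTimestamp
    : Date.parse(b.created_at) - Date.parse(a.created_at);
}

const chunks: Chunk[] = [
  { timestamp: "2025-11-21T10:00:00Z", created_at: "2025-11-21T10:00:05Z" },
  { timestamp: "2025-11-21T12:00:00Z", created_at: "2025-11-21T12:00:05Z" },
  { timestamp: "2025-11-21T12:00:00Z", created_at: "2025-11-21T12:00:09Z" },
];

const sorted = [...chunks].sort(byNewestFirst);
// sorted[0] is the most recent chunk, with created_at breaking the tie.
```

With this ordering guaranteed server-side, a query with a small _limit reliably returns the most recent chunks rather than whatever the backend's default ordering happens to yield.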

Estimated code review effort

🎯 4 (Complex) | ⏱️ ~60 minutes

  • OngoingConversationsSummaryCard: Requires careful inspection of new state management logic, cache invalidation patterns, and aggregate query transition behavior
  • Aggregate query shift: Verify countDistinct semantics match previous readItems behavior for conversation uniqueness
  • Cache invalidation timing: Confirm the logic for triggering invalidateQueries on conversation count transitions doesn't cause stale data or excessive refetches

Suggested labels

bug

Pre-merge checks and finishing touches

❌ Failed checks (1 inconclusive)

  • Linked Issues check: ❓ Inconclusive. The linked issue ECHO-527 lacks specific technical requirements or acceptance criteria, making it impossible to validate whether the code changes meet the stated objectives. Resolution: add detailed technical requirements and acceptance criteria to ECHO-527 so the sorting, polling-interval, and cache-invalidation changes can be validated against the stated issue.
✅ Passed checks (4 passed)

  • Description check: ✅ Passed. Check skipped because CodeRabbit's high-level summary is enabled.
  • Title check: ✅ Passed. The title clearly references ECHO-527 and describes the fix applied to the ongoing conversation tag issue, accurately summarizing the main changes across the three modified files.
  • Out of Scope Changes check: ✅ Passed. All changes are focused on the ongoing conversation tag issue: sorting chunks, adjusting polling intervals, and adding cache invalidation logic are cohesive modifications targeting this problem area.
  • Docstring Coverage: ✅ Passed. No functions were found in the changed files, so the docstring coverage check was skipped.




@coderabbitai coderabbitai bot added the bug (Something isn't working) label on Nov 21, 2025
Copy link
Copy Markdown
Contributor

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 2

📜 Review details

Configuration used: CodeRabbit UI

Review profile: ASSERTIVE

Plan: Pro

Disabled knowledge base sources:

  • Linear integration is disabled by default for public repositories

You can enable these sources in your CodeRabbit configuration.

📥 Commits

Reviewing files that changed from the base of the PR and between 956df04 and 936ad9d.

📒 Files selected for processing (3)
  • echo/frontend/src/components/conversation/ConversationAccordion.tsx (1 hunks)
  • echo/frontend/src/components/conversation/OngoingConversationsSummaryCard.tsx (1 hunks)
  • echo/frontend/src/components/conversation/hooks/index.ts (4 hunks)
🧰 Additional context used
🧠 Learnings (5)
📓 Common learnings
Learnt from: spashii
Repo: Dembrane/echo PR: 142
File: echo/frontend/src/lib/query.ts:680-696
Timestamp: 2025-05-13T15:20:25.165Z
Learning: In the Echo project, the conversation tags feature is designed with an implicit limit of 100 tags per conversation. Having more than 100 tags indicates a misuse of the feature rather than a limitation that needs to be addressed in the code. Errors for exceeding this limit should be handled at a logical level above the implementation.
Learnt from: ussaama
Repo: Dembrane/echo PR: 336
File: echo/server/dembrane/api/chat.py:593-630
Timestamp: 2025-10-15T11:06:42.397Z
Learning: In `echo/server/dembrane/api/chat.py`, the auto-select conversation flow (lines 589-631) deliberately uses incremental system message generation with `create_system_messages_for_chat` and `token_counter` for each candidate conversation to ensure accurate token count estimation before adding conversations. The user (ussaama) prioritizes accuracy over the O(n²) performance cost to stay within the 80% context threshold precisely.
Learnt from: ussaama
Repo: Dembrane/echo PR: 205
File: echo/frontend/src/lib/query.ts:1444-1506
Timestamp: 2025-07-10T12:48:20.683Z
Learning: ussaama prefers string concatenation over template literals for simple cases where readability is clearer, even when linting tools suggest template literals. Human readability takes precedence over strict linting rules in straightforward concatenation scenarios.
📚 Learning: 2025-08-19T10:22:55.323Z
Learnt from: ussaama
Repo: Dembrane/echo PR: 266
File: echo/frontend/src/components/conversation/ConversationAccordion.tsx:675-678
Timestamp: 2025-08-19T10:22:55.323Z
Learning: In echo/frontend/src/components/conversation/hooks/index.ts, the useConversationsCountByProjectId hook uses useSuspenseQuery with Directus aggregate, which always returns string numbers like "0", "1", "2" and suspends during loading instead of returning undefined. Therefore, Number(conversationsCountQuery.data) ?? 0 is safe and the Number() conversion is necessary for type conversion from string to number.

Applied to files:

  • echo/frontend/src/components/conversation/hooks/index.ts
  • echo/frontend/src/components/conversation/OngoingConversationsSummaryCard.tsx
📚 Learning: 2025-08-19T10:22:55.323Z
Learnt from: ussaama
Repo: Dembrane/echo PR: 266
File: echo/frontend/src/components/conversation/ConversationAccordion.tsx:675-678
Timestamp: 2025-08-19T10:22:55.323Z
Learning: In echo/frontend/src/components/conversation/hooks/index.ts, the useConversationsCountByProjectId hook uses regular useQuery (not useSuspenseQuery), which means conversationsCountQuery.data can be undefined during loading states. When using Number(conversationsCountQuery.data) ?? 0, this creates NaN because Number(undefined) = NaN and NaN is not nullish, so the fallback doesn't apply. The correct pattern is Number(conversationsCountQuery.data ?? 0) to ensure the fallback happens before type conversion.

Applied to files:

  • echo/frontend/src/components/conversation/hooks/index.ts
  • echo/frontend/src/components/conversation/OngoingConversationsSummaryCard.tsx
📚 Learning: 2025-10-15T11:06:42.397Z
Learnt from: ussaama
Repo: Dembrane/echo PR: 336
File: echo/server/dembrane/api/chat.py:593-630
Timestamp: 2025-10-15T11:06:42.397Z
Learning: In `echo/server/dembrane/api/chat.py`, the auto-select conversation flow (lines 589-631) deliberately uses incremental system message generation with `create_system_messages_for_chat` and `token_counter` for each candidate conversation to ensure accurate token count estimation before adding conversations. The user (ussaama) prioritizes accuracy over the O(n²) performance cost to stay within the 80% context threshold precisely.

Applied to files:

  • echo/frontend/src/components/conversation/OngoingConversationsSummaryCard.tsx
📚 Learning: 2025-10-24T08:08:23.128Z
Learnt from: ussaama
Repo: Dembrane/echo PR: 349
File: echo/frontend/src/components/report/hooks/index.ts:51-71
Timestamp: 2025-10-24T08:08:23.128Z
Learning: In echo/frontend/src/components/report/hooks/index.ts, the useGetProjectParticipants hook uses Directus aggregate which always returns count as a string (e.g., "0", "1", "2"). The pattern `Number.parseInt(result[0]?.count ?? "0", 10) || 0` is the correct way to handle this, with the string fallback "0" ensuring parseInt receives a valid string input.

Applied to files:

  • echo/frontend/src/components/conversation/OngoingConversationsSummaryCard.tsx
🧬 Code graph analysis (1)
echo/frontend/src/components/conversation/OngoingConversationsSummaryCard.tsx (1)
echo/frontend/src/lib/directus.ts (1)
  • directus (6-14)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (1)
  • GitHub Check: ci-check-server
🔇 Additional comments (3)
echo/frontend/src/components/conversation/ConversationAccordion.tsx (1)

695-704: Deep chunk sort matches live-detection semantics

Sorting deep.chunks by ["-timestamp", "-created_at"] while keeping the _limit is exactly what you want here: you’re now always looking at the most recent chunks when evaluating things like “ongoing” state instead of relying on Directus’ default ordering. Clean, deterministic, and consistent with the hooks layer.

echo/frontend/src/components/conversation/hooks/index.ts (1)

661-671: Newest-first deep chunk sort fixes live flag correctness

The _sort: ["-timestamp", "-created_at"] addition under deep.chunks is spot on. With _limit depending on loadChunks, you now reliably pull the most recent chunk(s) when computing the live flag instead of gambling on backend ordering. This should directly stabilize the “ongoing” tag behavior.

echo/frontend/src/components/conversation/OngoingConversationsSummaryCard.tsx (1)

17-20: State machine is genuinely over-invalidating and doing double writes—fix confirmed.

The review is spot on. I traced through the actual logic:

  • Over-invalidation: if (currentCount > 0 || hasOngoingConversations) will be true for every poll as long as there's ever been any ongoing conversation. Poll 1 (count=2, prev=false) → invalidates, sets true. Poll 2 (count=1, prev=true) → still true, invalidates again. Poll N (count=0, prev=true) → false || true still invalidates. You're hammering that cache unnecessarily.

  • Double write: Lines 56 and 59 set hasOngoingConversations twice—first to false, then immediately to currentCount > 0. That's noisy and defeats the point of tracking state.

  • Hardcoded interval confirmed: Line 64 has refetchInterval: 30000 while TIME_INTERVAL_SECONDS = 30 at line 10. Should be TIME_INTERVAL_SECONDS * 1000 to stay DRY.

  • Parsing: Current Number((result[0]?.countDistinct?.conversation_id as string) ?? "0") is actually safe here, but parseInt(..., 10) || 0 is slightly more robust (handles malformed input gracefully) and aligns with patterns elsewhere.

The edge-detection fix (using setHasOngoingConversations((previous) => { ... })) nails it—only invalidate on the 0→>0 transition, not on every poll.
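The transition logic described above can be sketched as a pure function. The names below (parseCount, onPoll) are illustrative helpers for this sketch, not the component's actual identifiers.

```typescript
// Sketch of the reviewer's edge-detection pattern: invalidate the cache only
// on the 0 -> >0 transition, not on every poll. Names are hypothetical.

// Directus aggregate counts arrive as strings ("0", "2", ...); parse defensively,
// falling back to 0 for undefined or malformed input.
function parseCount(raw: string | undefined): number {
  return Number.parseInt(raw ?? "0", 10) || 0;
}

// Given the previous flag and the freshly polled distinct-conversation count,
// returns [shouldInvalidate, nextHasOngoing].
function onPoll(
  previousHasOngoing: boolean,
  currentCount: number,
): [boolean, boolean] {
  const hasOngoing = currentCount > 0;
  const shouldInvalidate = hasOngoing && !previousHasOngoing; // rising edge only
  return [shouldInvalidate, hasOngoing];
}
```

In the component, this logic would sit inside the functional updater, e.g. setHasOngoingConversations((previous) => ...), with queryClient.invalidateQueries fired only when the rising edge is detected, so repeated polls with an unchanged ongoing state no longer hammer the cache.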

⛔ Skipped due to learnings

Comment on lines +661 to 662
const TIME_INTERVAL_SECONDS = 30;


🧹 Nitpick | 🔵 Trivial

Wire refetchInterval to TIME_INTERVAL_SECONDS to avoid future drift

Right now you’ve got TIME_INTERVAL_SECONDS = 30 in both hooks but still hard-code refetchInterval: 30000. That’s fine today, but it’s an easy place for things to fall out of sync the next time someone tweaks the window.

I’d tie the polling to the same constant:

@@ useConversationsByProjectId
-		refetchInterval: 30000,
+		refetchInterval: TIME_INTERVAL_SECONDS * 1000,
@@ useInfiniteConversationsByProjectId
-		refetchInterval: 30000,
+		refetchInterval: TIME_INTERVAL_SECONDS * 1000,

Keeps the “live window” and polling cadence locked together.

Also applies to: 732-733, 872-873, 955-956


Comment on lines +30 to +33
filter: {
conversation_id: {
project_id: projectId,
},

🧹 Nitpick | 🔵 Trivial

Make the nested project_id filter explicit under conversation_id

Minor clarity/readability nit: you’re currently relying on implicit equality sugar for the nested project filter:

conversation_id: {
  project_id: projectId,
},

I’d make this explicit so nobody has to mentally expand the Directus filter rules and so TypeScript tooling doesn’t get confused by the nesting:

-            conversation_id: {
-              project_id: projectId,
-            },
+            conversation_id: {
+              project_id: {
+                _eq: projectId,
+              },
+            },

Functionally the same intent, but clearer to future readers and less magical.


@ussaama ussaama requested a review from spashii November 21, 2025 13:23
@spashii spashii merged commit 2535955 into main Nov 21, 2025
11 checks passed
@spashii spashii deleted the ongoing-conversation-tag-issue branch November 21, 2025 13:32