Conversation
…upgrade - Pin/unpin up to 3 projects to top of list - Project cards display language - Admin: project creator name visible + filterable - Upload, crop, remove profile picture - Self-service display name + password change - Settings reorganized: Account & Security, Appearance, Project Defaults - Multiple reports per project - Guided reports with custom instructions - Scheduled reports with auto conversation inclusion - Background processing with visible status - Create custom templates by duplicating/editing built-in ones - Flat "All templates" view with search, drag-and-drop reorder - Quick access bar (top 3-5 pinned templates) - Settings view for contextual suggestions toggle - Backend: eliminated prompt_template_star table, replaced author_display_name with is_anonymous - Backend: user_created ownership fix for non-admin users - Backend: fields=["*"] fix for missing Directus fields - Upgraded to Gemini 2.5 across all features
WalkthroughAdds a prompt-template system and community features, two‑phase report generation with SSE progress and scheduling, extensive Frontend UI/hooks for templates/reports/settings, new Directus schema snapshots (prompt template collections, report fields, avatars), backend APIs and tasks for templates/reports, avatar upload/crop, and assorted UX/LLM model routing updates. Changes
Estimated code review effort🎯 5 (Critical) | ⏱️ ~120 minutes Possibly related PRs
LGTM. ✨ Finishing Touches📝 Generate docstrings
🧪 Generate unit tests (beta)
|
| ) | ||
| if not isinstance(items, list): | ||
| return [] | ||
| return [item["prompt_template_id"] for item in items] |
Check warning
Code scanning / CodeQL
Information exposure through an exception Medium
Show autofix suggestion
Hide autofix suggestion
Copilot Autofix
AI 15 days ago
General fix approach: do not propagate raw exception or stack-trace-like information into values that may be returned to clients. Instead, log the detailed error on the server and return either a generic, non-sensitive message or raise a custom exception that higher layers convert into a generic HTTP error.
Best fix here: change directus.search so that when parsing response.json()["data"] fails, it does not put exc into the returned dict. Instead, it should log the exception (using the module’s logging facilities) and return a generic error object that contains no internal details, for example {"error": "No data found for this request."}. This keeps the behavior (the method still returns some error structure instead of raising) while removing the dataflow of exception text to API code, which in turn ensures endpoints like get_my_community_stars can no longer return exception-derived text to users.
Concretely, in echo/server/dembrane/directus.py:
- In the
searchmethod’s innertry/except(aroundresponse.json()["data"]), replacereturn {"error": f"No data found for this request : {exc}"}with:- a logging call recording the exception (using an existing logger if present, or adding one using
logging.getLogger(__name__)), and - a return of a generic, static error dict, e.g.
{"error": "No data found for this request."}.
This keeps the public API ofsearch/get_itemscompatible (still returns something JSON-serializable) while eliminating exposure of exception details.
- a logging call recording the exception (using an existing logger if present, or adding one using
echo/server/dembrane/api/template.py can remain as-is; the offending sink [item["prompt_template_id"] for item in items] will no longer be able to carry stack-trace/exception text because such data is no longer embedded into items in the first place.
| @@ -10,9 +10,12 @@ | ||
| import urllib3 | ||
| import requests | ||
| from urllib3.exceptions import InsecureRequestWarning | ||
| import logging | ||
|
|
||
| from dembrane.settings import get_settings | ||
|
|
||
| logger = logging.getLogger(__name__) | ||
|
|
||
| # HTTP status codes that are typically recoverable | ||
| RECOVERABLE_STATUS_CODES = { | ||
| 401, # Unauthorized (token expired) | ||
| @@ -377,7 +377,8 @@ | ||
| try: | ||
| return response.json()["data"] | ||
| except Exception as exc: # noqa: BLE001 - want best-effort fallback | ||
| return {"error": f"No data found for this request : {exc}"} | ||
| logger.error("Failed to parse Directus SEARCH response JSON", exc_info=exc) | ||
| return {"error": "No data found for this request."} | ||
| except requests.exceptions.ConnectionError as exc: | ||
| raise DirectusServerError(exc) from exc | ||
| except AssertionError as exc: |
There was a problem hiding this comment.
Actionable comments posted: 90
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (6)
echo/frontend/src/components/common/ExponentialProgress.tsx (1)
45-51:⚠️ Potential issue | 🟡 MinorPotential state inconsistency: reset ignores
startFromShip it with one callout:
useState(startFrom)initializes correctly on mount, but the effect resets to hardcoded0whenisLoadingflips false. If the component stays mounted andisLoadingcyclestrue → false → true, the second loading phase starts from 0 regardless ofstartFrom.If
startFromshould only apply on fresh mount (navigation resume case), this is fine. If you want each loading cycle to honor it, the reset should usestartFromtoo.🔧 Optional fix if per-cycle reset is intended
useEffect(() => { if (!isLoading) { - setProgress(0); + setProgress(startFrom); return; }And add
startFromto deps:- }, [isLoading, expectedDuration]); + }, [isLoading, expectedDuration, startFrom]);🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@echo/frontend/src/components/common/ExponentialProgress.tsx` around lines 45 - 51, The effect in ExponentialProgress resets progress to a hardcoded 0 when isLoading becomes false, causing later loading cycles to start from 0 instead of the provided startFrom; update the useEffect to call setProgress(startFrom) (instead of setProgress(0)) and, if you intend per-cycle resets, include startFrom in the effect dependency array so the effect reacts to changes in startFrom as well; look for symbols useEffect, setProgress, progress, startFrom, and isLoading in ExponentialProgress to make the change.echo/frontend/src/components/report/ReportEditor.tsx (1)
58-67: 🧹 Nitpick | 🔵 TrivialLGTM, but tighten the type guard. The
projectId as stringcast works, but ifreport.project_idis somehownullon both branches, you're castingnulltostringwhich will blow up downstream.Consider a guard clause for cleaner DX:
Optional defensive tweak
const projectId = typeof report.project_id === "object" && report.project_id?.id ? report.project_id.id : report.project_id; + if (!projectId) { + console.error("Report missing project_id"); + return; + } + await updateReportMutation.mutateAsync({ - projectId: projectId as string, + projectId, payload: values, reportId: report.id, });Not blocking—current code ships if your data model guarantees project_id is always populated.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@echo/frontend/src/components/report/ReportEditor.tsx` around lines 58 - 67, The current extraction of projectId from report.project_id can yield null/undefined and is then force-cast to string before calling updateReportMutation.mutateAsync; add a defensive guard: validate that projectId is a non-empty string (derived from report.project_id?.id or report.project_id) and if it isn’t, return early or throw a descriptive error instead of casting; update the call site (updateReportMutation.mutateAsync({ projectId, payload: values, reportId: report.id })) to only run when projectId is confirmed a string to avoid passing null into mutateAsync.echo/frontend/src/routes/settings/UserSettingsRoute.tsx (1)
67-79: 🧹 Nitpick | 🔵 TrivialConsider adding breadcrumb for navigation consistency.
Per coding guidelines for settings routes: "Provide ergonomic navigation in settings-like routes: breadcrumb + back action." Back action is present, but breadcrumb is missing.
As per coding guidelines: "Provide ergonomic navigation in settings-like routes: breadcrumb + back action (ActionIcon + navigate(-1)) with relevant iconography."
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@echo/frontend/src/routes/settings/UserSettingsRoute.tsx` around lines 67 - 79, Add a breadcrumb row before the Title in the Group to meet the settings-route guideline: insert a Breadcrumbs (or similar Breadcrumb) component with items representing the parent/settings path, keep the existing ActionIcon (ActionIcon with IconArrowLeft and onClick calling navigate("..") or navigate(-1)) as the back action, and ensure the breadcrumb visually precedes the Title (Title remains as is) and is accessible (aria-labels/Trans for labels). Update imports to include the Breadcrumbs component and ensure styling/spacing matches the existing Group usage.echo/server/dembrane/async_helpers.py (1)
249-260: 🧹 Nitpick | 🔵 TrivialContextVar is set but never cleared - verify this is intentional.
_worker_loop.set(loop)is called but never reset in thefinallyblock. Since_get_worker_loop()checksis_closed(), this is functionally safe, but the ContextVar will hold a reference to the closed loop until the context is garbage collected.If you want to be extra clean:
♻️ Optional: reset ContextVar in finally
try: result = loop.run_until_complete(coro) logger.debug("Completed async coroutine: %s", coro) return result finally: + _worker_loop.set(None) loop.close()🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@echo/server/dembrane/async_helpers.py` around lines 249 - 260, The ContextVar _worker_loop is set to the new loop via _worker_loop.set(loop) but never reset, leaving a reference to a closed loop; update the finally block in the function that creates the fresh event loop to reset the ContextVar (use the token returned by _worker_loop.set(loop) or call _worker_loop.reset(token)) after loop.close(), ensuring you obtain and store the token when setting _worker_loop and then reset it in the finally so _get_worker_loop() no longer holds a closed loop reference; keep nest_asyncio.apply(loop), logger.debug lines unchanged.echo/frontend/src/routes/project/chat/ProjectChatRoute.tsx (1)
448-458:⚠️ Potential issue | 🟠 MajorThe hide toggle doesn't actually stop suggestion fetches.
enabled: shouldFetchSuggestionscovers the automatic query, but the later effects still callsuggestionsQuery.refetch()after context/message changes. WithhideAiSuggestions=true, the UI hides suggestions while the endpoint keeps getting hit.🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@echo/frontend/src/routes/project/chat/ProjectChatRoute.tsx` around lines 448 - 458, The code hides suggestions visually but still triggers network calls because effects call suggestionsQuery.refetch() unconditionally; update those effects to respect the same gate used for the initial query by checking shouldFetchSuggestions (or suggestionsQuery.options.enabled) before calling suggestionsQuery.refetch(), e.g. wrap any refetch calls in a guard like if (!shouldFetchSuggestions) return or if (!suggestionsQuery.options.enabled) return so when hideAiSuggestions is true the refetch is skipped; references to change: shouldFetchSuggestions, hideAiSuggestions, and suggestionsQuery.refetch().echo/frontend/src/components/chat/TemplatesModal.tsx (1)
749-753:⚠️ Potential issue | 🟡 MinorLocalize the clear-search label.
aria-label="Clear search"bypasses Lingui extraction, so assistive copy stays English even when the rest of the modal is translated. Usearia-label={t\Clear search`}here. As per coding guidelines "Translations must usecomponent orttemplate literal in frontend React code, then extracted withpnpm messages:extractand compiled withpnpm messages:compile`"🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@echo/frontend/src/components/chat/TemplatesModal.tsx` around lines 749 - 753, The ActionIcon component's static aria-label="Clear search" is not localized; update the ActionIcon in TemplatesModal.tsx to use the lingui t template literal for the assistive label (aria-label={t`Clear search`}) so the string is extracted and compiled; locate the ActionIcon element around TemplatesModal and replace the literal aria-label value with the t`` template literal to follow the Translations guideline.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Organization UI
Review profile: ASSERTIVE
Plan: Pro
Run ID: 1b6da19e-0ed1-4361-a87a-b7f446b0ee8b
⛔ Files ignored due to path filters (4)
echo/cypress/cypress/downloads/merged-c73ccb37-8d3b-42e0-a51e-3edf8e20469b-eaa71516-0fcf-49e5-b9a7-3d45f4b5c4a6.mp3is excluded by!**/*.mp3echo/cypress/cypress/screenshots/04-create-edit-delete-project.cy.js/Project Create, Edit, and Delete Flow -- should create a project, edit its name and portal settings, verify changes, and delete it (failed).pngis excluded by!**/*.pngecho/cypress/cypress/screenshots/30-report-lifecycle.cy.js/Report Lifecycle Flow -- creates a project and generates a report draft (failed).pngis excluded by!**/*.pngecho/frontend/pnpm-lock.yamlis excluded by!**/pnpm-lock.yaml
📒 Files selected for processing (143)
echo/.vscode/sessions.jsonecho/AGENTS.mdecho/cookies.txtecho/cypress/cypress/downloads/transcript-1771916490192echo/cypress/support/functions/report/index.jsecho/directus/sync/collections/folders.jsonecho/directus/sync/collections/operations.jsonecho/directus/sync/collections/policies.jsonecho/directus/sync/snapshot/collections/prompt_template.jsonecho/directus/sync/snapshot/collections/prompt_template_preference.jsonecho/directus/sync/snapshot/collections/prompt_template_rating.jsonecho/directus/sync/snapshot/fields/directus_users/hide_ai_suggestions.jsonecho/directus/sync/snapshot/fields/project/pin_order.jsonecho/directus/sync/snapshot/fields/project_report/scheduled_at.jsonecho/directus/sync/snapshot/fields/project_report/status.jsonecho/directus/sync/snapshot/fields/project_report/user_instructions.jsonecho/directus/sync/snapshot/fields/prompt_template/content.jsonecho/directus/sync/snapshot/fields/prompt_template/date_created.jsonecho/directus/sync/snapshot/fields/prompt_template/date_updated.jsonecho/directus/sync/snapshot/fields/prompt_template/description.jsonecho/directus/sync/snapshot/fields/prompt_template/icon.jsonecho/directus/sync/snapshot/fields/prompt_template/id.jsonecho/directus/sync/snapshot/fields/prompt_template/is_anonymous.jsonecho/directus/sync/snapshot/fields/prompt_template/is_public.jsonecho/directus/sync/snapshot/fields/prompt_template/language.jsonecho/directus/sync/snapshot/fields/prompt_template/sort.jsonecho/directus/sync/snapshot/fields/prompt_template/tags.jsonecho/directus/sync/snapshot/fields/prompt_template/title.jsonecho/directus/sync/snapshot/fields/prompt_template/user_created.jsonecho/directus/sync/snapshot/fields/prompt_template_preference/date_created.jsonecho/directus/sync/snapshot/fields/prompt_template_preference/id.jsonecho/directus/sync/snapshot/fields/prompt_template_preference/prompt_template_id.jsonecho/directus/sync/snapshot/fields/prompt_template_preference/sort.jsonecho/directus/sync/snapshot/fields/prompt_template_preference/static_template_id.jsonecho/directus/sync/snapshot/fields/prompt_template_preference/template_type.jsonecho/directus/sync/snapshot/fields/prompt_template_preference/user_created.jsonecho/directus/sync/snapshot/fields/prompt_template_rating/chat_message_id.jsonecho/directus/sync/snapshot/fields/prompt_template_rating/date_created.jsonecho/directus/sync/snapshot/fields/prompt_template_rating/date_updated.jsonecho/directus/sync/snapshot/fields/prompt_template_rating/id.jsonecho/directus/sync/snapshot/fields/prompt_template_rating/prompt_template_id.jsonecho/directus/sync/snapshot/fields/prompt_template_rating/rating.jsonecho/directus/sync/snapshot/fields/prompt_template_rating/user_created.jsonecho/directus/sync/snapshot/fields/prompt_template_rating/user_updated.jsonecho/directus/sync/snapshot/relations/prompt_template/user_created.jsonecho/directus/sync/snapshot/relations/prompt_template_preference/prompt_template_id.jsonecho/directus/sync/snapshot/relations/prompt_template_preference/user_created.jsonecho/directus/sync/snapshot/relations/prompt_template_rating/chat_message_id.jsonecho/directus/sync/snapshot/relations/prompt_template_rating/prompt_template_id.jsonecho/directus/sync/snapshot/relations/prompt_template_rating/user_created.jsonecho/directus/sync/snapshot/relations/prompt_template_rating/user_updated.jsonecho/docs/branching_and_releases.mdecho/frontend/AGENTS.mdecho/frontend/index.htmlecho/frontend/package.jsonecho/frontend/src/App.tsxecho/frontend/src/components/announcement/AnnouncementDrawerHeader.tsxecho/frontend/src/components/announcement/AnnouncementIcon.tsxecho/frontend/src/components/announcement/AnnouncementItem.tsxecho/frontend/src/components/announcement/Announcements.tsxecho/frontend/src/components/announcement/TopAnnouncementBar.tsxecho/frontend/src/components/announcement/WhatsNewItem.tsxecho/frontend/src/components/announcement/hooks/index.tsecho/frontend/src/components/announcement/hooks/useProcessedAnnouncements.tsecho/frontend/src/components/auth/hooks/index.tsecho/frontend/src/components/chat/ChatHistoryMessage.tsxecho/frontend/src/components/chat/ChatTemplatesMenu.tsxecho/frontend/src/components/chat/CommunityTab.tsxecho/frontend/src/components/chat/CommunityTemplateCard.tsxecho/frontend/src/components/chat/PublishTemplateForm.tsxecho/frontend/src/components/chat/QuickAccessConfigurator.tsxecho/frontend/src/components/chat/TemplateRatingPills.tsxecho/frontend/src/components/chat/TemplatesModal.tsxecho/frontend/src/components/chat/hooks/useCommunityTemplates.tsecho/frontend/src/components/chat/hooks/useUserTemplates.tsecho/frontend/src/components/common/ClosableAlert.tsxecho/frontend/src/components/common/CopyRichTextIconButton.tsxecho/frontend/src/components/common/ExponentialProgress.tsxecho/frontend/src/components/common/ImageCropModal.tsxecho/frontend/src/components/common/UserAvatar.tsxecho/frontend/src/components/layout/Header.tsxecho/frontend/src/components/project/PinnedProjectCard.tsxecho/frontend/src/components/project/ProjectListItem.tsxecho/frontend/src/components/project/hooks/index.tsecho/frontend/src/components/project/webhooks/WebhookSettingsCard.tsxecho/frontend/src/components/report/CreateReportForm.tsxecho/frontend/src/components/report/ReportEditor.tsxecho/frontend/src/components/report/ReportFocusSelector.tsxecho/frontend/src/components/report/ReportRenderer.tsxecho/frontend/src/components/report/UpdateReportModalButton.tsxecho/frontend/src/components/report/hooks/index.tsecho/frontend/src/components/report/hooks/useReportProgress.tsecho/frontend/src/components/settings/AccountSettingsCard.tsxecho/frontend/src/components/settings/ChangePasswordCard.tsxecho/frontend/src/components/settings/WhitelabelLogoCard.tsxecho/frontend/src/components/settings/hooks/index.tsecho/frontend/src/config.tsecho/frontend/src/data/reportFocusOptions.jsonecho/frontend/src/lib/api.tsecho/frontend/src/lib/typesDirectus.d.tsecho/frontend/src/locales/de-DE.poecho/frontend/src/locales/de-DE.tsecho/frontend/src/locales/en-US.poecho/frontend/src/locales/en-US.tsecho/frontend/src/locales/es-ES.poecho/frontend/src/locales/es-ES.tsecho/frontend/src/locales/fr-FR.poecho/frontend/src/locales/fr-FR.tsecho/frontend/src/locales/it-IT.poecho/frontend/src/locales/it-IT.tsecho/frontend/src/locales/nl-NL.poecho/frontend/src/locales/nl-NL.tsecho/frontend/src/routes/participant/ParticipantReport.tsxecho/frontend/src/routes/project/ProjectsHome.tsxecho/frontend/src/routes/project/chat/ProjectChatRoute.tsxecho/frontend/src/routes/project/report/ProjectReportRoute.tsxecho/frontend/src/routes/settings/UserSettingsRoute.tsxecho/frontend/vite.config.tsecho/server/AGENTS.mdecho/server/dembrane/api/api.pyecho/server/dembrane/api/chat.pyecho/server/dembrane/api/project.pyecho/server/dembrane/api/stateless.pyecho/server/dembrane/api/template.pyecho/server/dembrane/api/user_settings.pyecho/server/dembrane/api/verify.pyecho/server/dembrane/async_helpers.pyecho/server/dembrane/chat_utils.pyecho/server/dembrane/reply_utils.pyecho/server/dembrane/report_events.pyecho/server/dembrane/report_generation.pyecho/server/dembrane/report_utils.pyecho/server/dembrane/scheduler.pyecho/server/dembrane/service/webhook.pyecho/server/dembrane/suggestion_utils.pyecho/server/dembrane/tasks.pyecho/server/dembrane/transcribe.pyecho/server/prompt_templates/system_report.de.jinjaecho/server/prompt_templates/system_report.en.jinjaecho/server/prompt_templates/system_report.es.jinjaecho/server/prompt_templates/system_report.fr.jinjaecho/server/prompt_templates/system_report.it.jinjaecho/server/prompt_templates/system_report.nl.jinja
💤 Files with no reviewable changes (1)
- echo/cypress/cypress/downloads/transcript-1771916490192
| ``` | ||
| Frontend (React/Vite/Mantine) → Backend API (FastAPI) → Directus (headless CMS/DB) | ||
| ↕ ↕ | ||
| Dramatiq Workers PostgreSQL | ||
| (gevent + standard) | ||
| ↕ | ||
| Redis (pub/sub, task broker, caching) | ||
| ↕ | ||
| Agent Service (LangGraph, port 8001) | ||
| ``` |
There was a problem hiding this comment.
Add a language identifier to the fenced block.
The architecture ASCII block is missing a fence language and triggers markdown linting.
✅ Suggested markdown fix
-```
+```text
Frontend (React/Vite/Mantine) → Backend API (FastAPI) → Directus (headless CMS/DB)
↕ ↕
Dramatiq Workers PostgreSQL
(gevent + standard)
↕
Redis (pub/sub, task broker, caching)
↕
Agent Service (LangGraph, port 8001)</details>
<details>
<summary>🧰 Tools</summary>
<details>
<summary>🪛 markdownlint-cli2 (0.22.0)</summary>
[warning] 185-185: Fenced code blocks should have a language specified
(MD040, fenced-code-language)
</details>
</details>
<details>
<summary>🤖 Prompt for AI Agents</summary>
Verify each finding against the current code and only fix it if needed.
In @echo/AGENTS.md around lines 185 - 194, The fenced ASCII architecture block
that begins with "Frontend (React/Vite/Mantine) → Backend API (FastAPI) →
Directus (headless CMS/DB)" is missing a fence language which triggers markdown
linting; add a language identifier (e.g., "text") immediately after the opening
triple backticks so the block starts with "```text" and leave the closing triple
backticks unchanged to satisfy the linter and preserve formatting.
</details>
<!-- fingerprinting:phantom:poseidon:hawk:ecb4e5f5-abbe-41ba-9503-68dfaf572288 -->
<!-- This is an auto-generated comment by CodeRabbit -->
| ## Tech Debt / Known Issues | ||
| - Some mypy errors in `llm_router.py` and `settings.py` (pre-existing, non-blocking) |
There was a problem hiding this comment.
Add a blank line after the heading for markdownlint compliance.
## Tech Debt / Known Issues should be followed by a blank line before the list.
✅ Suggested markdown fix
## Tech Debt / Known Issues
+
- Some mypy errors in `llm_router.py` and `settings.py` (pre-existing, non-blocking)📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
| ## Tech Debt / Known Issues | |
| - Some mypy errors in `llm_router.py` and `settings.py` (pre-existing, non-blocking) | |
| ## Tech Debt / Known Issues | |
| - Some mypy errors in `llm_router.py` and `settings.py` (pre-existing, non-blocking) |
🧰 Tools
🪛 markdownlint-cli2 (0.22.0)
[warning] 237-237: Headings should be surrounded by blank lines
Expected: 1; Actual: 0; Below
(MD022, blanks-around-headings)
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@echo/AGENTS.md` around lines 237 - 238, Add a blank line immediately after
the markdown heading "## Tech Debt / Known Issues" so the subsequent list "-
Some mypy errors in `llm_router.py` and `settings.py`..." is separated by an
empty line; update the AGENTS.md section containing that heading (look for the
exact heading text) to satisfy markdownlint by inserting one newline between the
heading and the list.
echo/cookies.txt
Outdated
| # Netscape HTTP Cookie File | ||
| # https://curl.se/docs/http-cookies.html | ||
| # This file was generated by libcurl! Edit at your own risk. |
There was a problem hiding this comment.
Blocker: don’t track a live cookie jar.
If any local auth/debug flow writes to this file, real session cookies become git-tracked and easy to commit by mistake. Please move the runtime jar to an ignored path or keep only a .example placeholder in the repo.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@echo/cookies.txt` around lines 1 - 3, The repository currently includes a
live cookie jar file named cookies.txt which risks committing real session
cookies; replace the tracked runtime cookie jar with a placeholder (e.g.,
cookies.txt.example) and update the repo to ignore the runtime file by adding
cookies.txt to .gitignore (or move runtime writes to a persistent ignored path),
ensuring any code that writes/read the jar (search for references to
"cookies.txt" in your codebase) uses the ignored runtime location and not the
tracked file.
| "default_value": false, | ||
| "max_length": null, | ||
| "numeric_precision": null, | ||
| "numeric_scale": null, | ||
| "is_nullable": true, | ||
| "is_unique": false, |
There was a problem hiding this comment.
Avoid tri-state user preference values.
Line 32 defaults to false, but Line 36 still allows null. For a binary toggle, nullable schema creates ambiguity in API/UI logic. Make this non-nullable.
✅ Tighten field contract
- "is_nullable": true,
+ "is_nullable": false,📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
| "default_value": false, | |
| "max_length": null, | |
| "numeric_precision": null, | |
| "numeric_scale": null, | |
| "is_nullable": true, | |
| "is_unique": false, | |
| "default_value": false, | |
| "max_length": null, | |
| "numeric_precision": null, | |
| "numeric_scale": null, | |
| "is_nullable": false, | |
| "is_unique": false, |
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@echo/directus/sync/snapshot/fields/directus_users/hide_ai_suggestions.json`
around lines 32 - 37, The field "hide_ai_suggestions" currently sets
"default_value": false but leaves "is_nullable": true, creating a tri-state;
update the field schema so "is_nullable" is false (making the preference
strictly boolean) and keep "default_value": false to ensure existing records
resolve to a defined false value; verify the change in the JSON for the
"hide_ai_suggestions" field by adjusting the "is_nullable" property from true to
false.
| "is_nullable": true, | ||
| "is_unique": false, | ||
| "is_indexed": false, | ||
| "is_primary_key": false, |
There was a problem hiding this comment.
🧹 Nitpick | 🔵 Trivial
Add an index strategy for scheduler queries.
Line 38 sets scheduled_at as non-indexed. For background report runners, this becomes a bottleneck fast. Add at least an index on scheduled_at (ideally composite with status in the migration path).
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@echo/directus/sync/snapshot/fields/project_report/scheduled_at.json` around
lines 36 - 39, The scheduled_at field is marked non-indexed which will slow
scheduler queries; change the schema and migration to add an index on
scheduled_at (update scheduled_at.json to set is_indexed: true) and add a
database migration that creates an index on scheduled_at (preferably a composite
index with status, e.g., CREATE INDEX ON project_report (status, scheduled_at))
so background report runners can efficiently query by status and scheduled_at;
ensure the migration uses the same table/field names (project_report,
scheduled_at, status) and is reversible.
| <user_instructions> | ||
| The report creator has provided the following guidance. Pay special attention to these instructions when analyzing the transcripts and structuring your article: | ||
|
|
||
| {{ user_instructions }} | ||
| </user_instructions> |
There was a problem hiding this comment.
Escape user_instructions before injecting it into pseudo-XML blocks.
Raw interpolation lets user text close tags or inject new blocks, which weakens prompt-structure guarantees. Escape the value before render.
⚙️ Proposed fix
-{{ user_instructions }}
+{{ user_instructions | e }}📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
| <user_instructions> | |
| The report creator has provided the following guidance. Pay special attention to these instructions when analyzing the transcripts and structuring your article: | |
| {{ user_instructions }} | |
| </user_instructions> | |
| <user_instructions> | |
| The report creator has provided the following guidance. Pay special attention to these instructions when analyzing the transcripts and structuring your article: | |
| {{ user_instructions | e }} | |
| </user_instructions> |
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@echo/server/prompt_templates/system_report.fr.jinja` around lines 18 - 22,
The template injects raw {{ user_instructions }} into a pseudo-XML
<user_instructions> block which allows tag-closing or injection; escape the
value before render by applying an XML/HTML escape to the user_instructions
variable so characters like <, >, &, " and ' are encoded and cannot break or
inject new blocks; update the template where the <user_instructions> block is
rendered (the user_instructions interpolation) to use the Jinja escape/xml-safe
filter or an explicit XML-escape helper so the original text is preserved but
cannot close or create tags.
| {% if user_instructions %} | ||
|
|
||
| <user_instructions> | ||
| The report creator has provided the following guidance. Pay special attention to these instructions when analyzing the transcripts and structuring your article: | ||
|
|
||
| {{ user_instructions }} | ||
| </user_instructions> | ||
| {% endif %} |
There was a problem hiding this comment.
Blocker: escape user_instructions before embedding in tagged prompt blocks.
Line 21 injects raw user content inside <user_instructions>. A crafted value containing tag-like text can break section boundaries and weaken prompt guardrails.
Proposed hardening
{% if user_instructions %}
<user_instructions>
-The report creator has provided the following guidance. Pay special attention to these instructions when analyzing the transcripts and structuring your article:
+Il creatore del report ha fornito le seguenti indicazioni. Presta particolare attenzione a queste istruzioni durante l'analisi delle trascrizioni e la strutturazione dell'articolo:
-{{ user_instructions }}
+{{ user_instructions | replace('<', '<') | replace('>', '>') }}
</user_instructions>
{% endif %}📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
| {% if user_instructions %} | |
| <user_instructions> | |
| The report creator has provided the following guidance. Pay special attention to these instructions when analyzing the transcripts and structuring your article: | |
| {{ user_instructions }} | |
| </user_instructions> | |
| {% endif %} | |
| {% if user_instructions %} | |
| <user_instructions> | |
| Il creatore del report ha fornito le seguenti indicazioni. Presta particolare attenzione a queste istruzioni durante l'analisi delle trascrizioni e la strutturazione dell'articolo: | |
| {{ user_instructions | replace('<', '<') | replace('>', '>') }} | |
| </user_instructions> | |
| {% endif %} |
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@echo/server/prompt_templates/system_report.it.jinja` around lines 16 - 23,
The template injects raw user_instructions into the <user_instructions> tagged
block which allows tag-like payloads to break prompt boundaries; update the
echo/server/prompt_templates system_report.it.jinja to escape or encode the
user_instructions variable before embedding (e.g., HTML/XML-escape & replace <,
>, &, " and ' or use Jinja's escape/filter) so that {{ user_instructions }}
cannot inject tags or close the <user_instructions> section—locate the usage of
the user_instructions variable inside the <user_instructions> block and apply
the sanitization/escaping filter consistently.
| <user_instructions> | ||
| The report creator has provided the following guidance. Pay special attention to these instructions when analyzing the transcripts and structuring your article: | ||
|
|
There was a problem hiding this comment.
Minor i18n drift: localize the guidance sentence in this Italian template.
Line 19 is in English while surrounding system instructions are Italian; keep prompt language consistent for stronger model steering.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@echo/server/prompt_templates/system_report.it.jinja` around lines 18 - 20,
The user-facing guidance sentence inside the <user_instructions> block of the
system_report.it.jinja template is in English while the rest is Italian; replace
that English sentence with an Italian translation that preserves the original
meaning and tone so the entire prompt is consistently localized and the model
steering remains in Italian.
| {% if user_instructions %} | ||
|
|
||
| <user_instructions> | ||
| The report creator has provided the following guidance. Pay special attention to these instructions when analyzing the transcripts and structuring your article: | ||
|
|
||
| {{ user_instructions }} | ||
| </user_instructions> | ||
| {% endif %} |
There was a problem hiding this comment.
Guardrail gap: raw user_instructions can break prompt section boundaries.
Line 21 renders unsanitized user input inside XML-like tags. Escape delimiters before interpolation to preserve prompt structure.
Proposed hardening
{% if user_instructions %}
<user_instructions>
-The report creator has provided the following guidance. Pay special attention to these instructions when analyzing the transcripts and structuring your article:
+De maker van het rapport heeft de volgende richtlijnen gegeven. Besteed extra aandacht aan deze instructies bij het analyseren van de transcripties en het structureren van uw artikel:
-{{ user_instructions }}
+{{ user_instructions | replace('<', '<') | replace('>', '>') }}
</user_instructions>
{% endif %}🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@echo/server/prompt_templates/system_report.nl.jinja` around lines 16 - 23,
The template injects raw user_instructions into an XML-like <user_instructions>
section which can break prompt boundaries; update the template
(system_report.nl.jinja) to sanitize/escape the user_instructions variable
before interpolation (e.g., apply Jinja's escape filter or explicitly
replace/encode angle brackets and XML delimiters) so any "<", ">", or closing
tags in user input cannot terminate or inject new tags; ensure you target the {{
user_instructions }} usage and preserve the surrounding <user_instructions> tags
while returning an escaped-safe string.
| <user_instructions> | ||
| The report creator has provided the following guidance. Pay special attention to these instructions when analyzing the transcripts and structuring your article: | ||
|
|
There was a problem hiding this comment.
Minor i18n issue: localize the preface line to Dutch.
Line 19 is English in a Dutch template; align language for clearer model conditioning.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@echo/server/prompt_templates/system_report.nl.jinja` around lines 18 - 20,
The user_instructions section in the system_report.nl.jinja Dutch template
contains English text that needs to be translated to Dutch. Locate the preface
line that begins with "The report creator has provided the following
guidance..." and translate it to Dutch to maintain consistency with the
template's language and ensure proper model conditioning for Dutch language
processing.
| "is_nullable": true, | ||
| "is_unique": false, | ||
| "is_indexed": false, | ||
| "is_primary_key": false, |
There was a problem hiding this comment.
🧹 Nitpick | 🔵 Trivial
Consider indexing the foreign key column.
is_indexed: false on a UUID FK that'll be joined/filtered frequently. Might want an index here for perf at scale — Directus won't auto-create one. Not blocking, but worth a look.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In
`@echo/directus/sync/snapshot/fields/prompt_template_rating/prompt_template_id.json`
around lines 39 - 42, The foreign-key field "prompt_template_id" currently has
"is_indexed": false which will hurt join/filter performance at scale; update the
field definition in prompt_template_rating/prompt_template_id.json to set
is_indexed to true (and add an appropriate index configuration if your schema
requires a named/index object), ensuring the UUID FK is indexed for queries that
join or filter on prompt_template_id.
| "stepInterval": 1 | ||
| }, | ||
| "readonly": false, | ||
| "required": true, |
There was a problem hiding this comment.
Required/nullable mismatch - potential data integrity gap.
Line 21 declares "required": true (UI level), but Line 38 has "is_nullable": true (schema level). This means the admin UI enforces a rating, but direct API writes or DB inserts could bypass it and insert NULL.
If ratings must always exist, align the schema:
🔧 Proposed fix: make schema non-nullable
- "is_nullable": true,
+ "is_nullable": false,Alternatively, if null ratings are valid (e.g., "not yet rated"), flip required to false in meta. Pick your semantics and align both layers.
Also applies to: 38-38
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@echo/directus/sync/snapshot/fields/prompt_template_rating/rating.json` at
line 21, There is a mismatch between the UI-level "required": true and the
schema-level "is_nullable": true for the rating field; decide which semantics
you want and make both layers consistent: either set "is_nullable": false in the
schema (so DB disallows NULL) to match "required": true, or set "required":
false in the UI/meta to allow NULL values; update the rating field's JSON so the
"required" and "is_nullable" keys for the rating field are the same.
| const ALLOWED_TAGS = [ | ||
| "Workshop", | ||
| "Interview", | ||
| "Focus Group", | ||
| "Meeting", | ||
| "Research", | ||
| "Community", | ||
| "Education", | ||
| "Analysis", | ||
| ]; |
There was a problem hiding this comment.
🧹 Nitpick | 🔵 Trivial
Hardcoded English tag names - consider i18n.
The ALLOWED_TAGS array and tag badge text at line 89 are hardcoded English strings. If these tags are user-facing and need localization, wrap them with t.
🌐 Option: i18n for tags
-const ALLOWED_TAGS = [
- "Workshop",
- "Interview",
- "Focus Group",
- "Meeting",
- "Research",
- "Community",
- "Education",
- "Analysis",
-];
+const ALLOWED_TAGS = () => [
+ { value: "Workshop", label: t`Workshop` },
+ { value: "Interview", label: t`Interview` },
+ { value: "Focus Group", label: t`Focus Group` },
+ { value: "Meeting", label: t`Meeting` },
+ { value: "Research", label: t`Research` },
+ { value: "Community", label: t`Community` },
+ { value: "Education", label: t`Education` },
+ { value: "Analysis", label: t`Analysis` },
+];Then use tag.value for filtering and tag.label for display.
If these are intentionally treated as identifiers/proper nouns (not translated), you can ignore this. As per coding guidelines: "Translations must use <Trans> component or t template literal".
Also applies to: 78-91
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@echo/frontend/src/components/chat/CommunityTab.tsx` around lines 21 - 30,
ALLOWED_TAGS is currently a hardcoded English string array and the tag badge
rendering in CommunityTab (badge at render path using tag value/display) should
be made i18n-safe: change ALLOWED_TAGS to objects like { value: "Workshop",
label: t`Workshop` } (or use <Trans> where appropriate), update any filtering
logic to compare against tag.value, and update badge display to show tag.label
(or pass the translated string) so UI text is localized while identifiers remain
stable; update the code paths in CommunityTab that reference ALLOWED_TAGS and
the badge render (the tag filter and the badge display code) accordingly.
| <Paper | ||
| p="md" | ||
| withBorder | ||
| className={`cursor-pointer transition-all ${ | ||
| isExpanded | ||
| ? "border-blue-300 bg-blue-50/30" | ||
| : "hover:border-gray-300 hover:bg-gray-50" | ||
| }`} | ||
| onClick={onToggleExpand} | ||
| > |
There was a problem hiding this comment.
Keyboard users can't expand this card.
Paper is clickable here, but it is not focusable and has no keyboard interaction or expanded state. The preview toggle needs a real button-like control.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@echo/frontend/src/components/chat/CommunityTemplateCard.tsx` around lines 47
- 56, The Paper element used in CommunityTemplateCard is clickable but not
keyboard-accessible; update the component so the preview toggle is a real
button-like control (or a semantic button) that is focusable and responds to
Enter/Space: replace or wrap the clickable Paper with a button or add
tabIndex={0}, onKeyDown handler that calls onToggleExpand for Enter/Space, set
aria-expanded={isExpanded} and an appropriate aria-label, and keep the existing
onClick handler and visual classes to preserve styles; ensure all changes
reference the Paper usage, the isExpanded prop and the onToggleExpand handler.
| className={`cursor-pointer transition-all ${ | ||
| isExpanded | ||
| ? "border-blue-300 bg-blue-50/30" | ||
| : "hover:border-gray-300 hover:bg-gray-50" | ||
| }`} |
There was a problem hiding this comment.
🛠️ Refactor suggestion | 🟠 Major
Move the state colors off hardcoded utility classes.
The expanded/hover border and background colors are baked into Tailwind palette classes, so theme changes will drift here. Keep the layout utilities, but drive those colors from Mantine props or CSS variables.
As per coding guidelines: "Keep static utility classes (borders, spacing, layout) in Tailwind; move theme-dependent colors to CSS variables".
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@echo/frontend/src/components/chat/CommunityTemplateCard.tsx` around lines 50
- 54, The current className in CommunityTemplateCard (the conditional using
isExpanded) hardcodes Tailwind palette classes for border and bg; refactor so
layout/touch utilities remain in Tailwind but color values come from CSS
variables or Mantine theme props: remove "border-blue-300 bg-blue-50/30" and
"hover:border-gray-300 hover:bg-gray-50" and replace them with classes that
reference CSS variables (e.g., border-[var(--ct-border)] and bg-[var(--ct-bg)])
or use Mantine theming values injected into the component, then set those
variables (or props) using Mantine's theme or a wrapping style attribute where
the expanded and hover color values are assigned; keep the isExpanded
conditional logic but only toggle variable-backed classes (refer to
CommunityTemplateCard and the isExpanded condition to locate the change).
| try: | ||
| with directus_client_context() as client: | ||
| chunks = client.get_items( | ||
| "conversation_chunk", | ||
| { | ||
| "query": { | ||
| "filter": {"conversation_id": {"_eq": conversation_id}}, | ||
| "fields": ["id", "transcript", "error"], | ||
| "sort": "timestamp", | ||
| "limit": 1500, | ||
| }, |
There was a problem hiding this comment.
Don't silently truncate transcripts at 1500 chunks.
This new path hard-caps every conversation transcript at 1500 rows. Long conversations will be cut off with no warning, so report content can miss the tail of the discussion even when the model context still has room. Reuse the shared transcript helper or paginate until exhaustion instead of hard-stopping here.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@echo/server/dembrane/report_generation.py` around lines 177 - 187, The code
is truncating conversation transcripts by passing "limit": 1500 to
client.get_items when fetching "conversation_chunk"; remove the hard limit and
instead reuse the project's shared transcript helper (or implement pagination
via directus_client_context + client.get_items loop) to fetch all chunks for the
given conversation_id until exhausted; update the call site that assigns chunks
(and any logic relying on the 1500 cap) to use the helper or paginated iterator
so full transcripts are returned rather than silently truncated.
| language_to_locale = { | ||
| "en": "en-US", | ||
| "nl": "nl-NL", | ||
| "de": "de-DE", | ||
| "fr": "fr-FR", | ||
| "es": "es-ES", | ||
| "it": "it-IT", | ||
| } |
There was a problem hiding this comment.
🧹 Nitpick | 🔵 Trivial
DRY violation: language_to_locale mapping is duplicated.
This exact mapping exists at lines 299-306. Extract to a module-level constant to avoid drift.
♻️ Extract to module constant
+# Map short language code to full locale
+LANGUAGE_TO_LOCALE = {
+ "en": "en-US",
+ "nl": "nl-NL",
+ "de": "de-DE",
+ "fr": "fr-FR",
+ "es": "es-ES",
+ "it": "it-IT",
+}
+
class WebhookService:Then use LANGUAGE_TO_LOCALE in both build_report_payload and build_payload.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@echo/server/dembrane/service/webhook.py` around lines 141 - 148, Extract the
duplicated language_to_locale dict into a module-level constant named
LANGUAGE_TO_LOCALE and replace the local mappings in both build_report_payload
and build_payload with references to LANGUAGE_TO_LOCALE; locate the mapping
currently defined in build_report_payload and the identical one in
build_payload, remove those local definitions, add a single top-level
LANGUAGE_TO_LOCALE dict, and update both functions to read from that constant.
| @dramatiq.actor(queue_name="network", priority=50) | ||
| def task_create_report(project_id: str, report_id: int, language: str, user_instructions: str = "") -> None: | ||
| """ | ||
| Generate a report in a Dramatiq worker (fully synchronous). | ||
|
|
||
| Uses generate_report_content() which orchestrates: | ||
| - dramatiq.group() for summarization fan-out | ||
| - gevent.pool.Pool for concurrent transcript fetching | ||
| - router_completion() (sync litellm) for the LLM call | ||
|
|
||
| No asyncio event loops — safe under dramatiq-gevent. | ||
| """ | ||
| logger = getLogger("dembrane.tasks.task_create_report") | ||
| logger.info(f"Starting report generation for project {project_id}, report {report_id}") | ||
|
|
||
| from dembrane.report_utils import ReportGenerationError | ||
| from dembrane.report_events import publish_report_progress | ||
| from dembrane.report_generation import generate_report_content | ||
|
|
||
| with ProcessingStatusContext( | ||
| project_id=project_id, | ||
| event_prefix="task_create_report", | ||
| message=f"for report {report_id}", | ||
| ): | ||
| report_id_str = str(report_id) | ||
|
|
||
| # Idempotency guard: check report is still draft (or transitioning from scheduled) | ||
| try: | ||
| with directus_client_context() as client: | ||
| report = client.get_item("project_report", report_id_str) | ||
| if not report or report.get("status") not in ("draft", "scheduled"): | ||
| logger.info( | ||
| f"Report {report_id} is not draft/scheduled (status={report.get('status') if report else 'missing'}), skipping" | ||
| ) | ||
| return | ||
| # If report was scheduled, transition to draft before generating | ||
| if report.get("status") == "scheduled": | ||
| client.update_item("project_report", report_id_str, {"status": "draft"}) | ||
| except Exception as e: | ||
| logger.error(f"Failed to check report status: {e}") | ||
| raise | ||
|
|
||
| def progress_callback(event_type: str, message: str, detail: Optional[dict] = None) -> None: | ||
| try: | ||
| publish_report_progress(report_id, event_type, message, detail) | ||
| except Exception as e: | ||
| logger.warning(f"Failed to publish progress event: {e}") | ||
|
|
||
| try: | ||
| content = generate_report_content( | ||
| project_id, | ||
| language, | ||
| report_id=report_id, | ||
| progress_callback=progress_callback, | ||
| user_instructions=user_instructions, | ||
| ) |
There was a problem hiding this comment.
Don't block the same network queue you're fan-outing onto.
task_create_report runs on network, then generate_report_content() dispatches task_summarize_conversation onto network and waits for completion. With one worker, or with N concurrent reports on N workers, every worker can end up sleeping in _wait_for_summaries() while the summarizers never start.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@echo/server/dembrane/tasks.py` around lines 1095 - 1150, task_create_report
is running on the same Dramatiq queue ("network") that generate_report_content
fans out task_summarize_conversation onto and then waits in _wait_for_summaries,
causing deadlocks when workers are occupied; fix by ensuring the orchestrator
and workers use different queues: change task_create_report (or the actor that
calls generate_report_content) to a separate queue name (e.g.,
"report_generation") or alter
generate_report_content/task_summarize_conversation to dispatch summarizer
actors to a different queue than "network" so summarizers can run while
task_create_report is blocked waiting in _wait_for_summaries. Ensure you update
the actor definitions for task_create_report and/or task_summarize_conversation
accordingly.
| // Community features disabled until Directus fields are created | ||
| // (author_display_name, use_count, star_count, copied_from) | ||
| const showCommunity = false; |
There was a problem hiding this comment.
showCommunity = false hard-disables the marketplace.
With this constant pinned off, community rows never render and the publish/unpublish/star/copy flows are unreachable even though the hooks and API paths are wired. That ships the marketplace part of this PR behind a permanent kill switch.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@echo/frontend/src/components/chat/TemplatesModal.tsx` around lines 221 - 223,
The file hardcodes showCommunity = false which permanently disables the
marketplace UI; remove the hard-coded constant and instead derive showCommunity
from a runtime feature flag or prop (e.g., a useFeatureFlag hook, settings
context, or a TemplatesModal prop) so the community rows and
publish/unpublish/star/copy flows can render when enabled; update the
TemplatesModal component to accept or read that flag (replace references to
showCommunity) and default to false only when the Directus fields are actually
missing so the marketplace can be toggled without changing code.
| const handleSaveCreate = async () => { | ||
| if (!formTitle.trim() || !formContent.trim()) return; | ||
| try { | ||
| await onCreateUserTemplate?.({ | ||
| title: formTitle.trim(), | ||
| content: formContent.trim(), | ||
| }); | ||
| setView("browse"); | ||
| } catch { | ||
| // stay on form so user can retry | ||
| } | ||
| }; | ||
|
|
||
| const handleSaveEdit = async () => { | ||
| if (!editingId || !formTitle.trim() || !formContent.trim()) return; | ||
| try { | ||
| await onUpdateUserTemplate?.({ | ||
| id: editingId, | ||
| title: formTitle.trim(), | ||
| content: formContent.trim(), | ||
| }); | ||
| setView("browse"); | ||
| } catch { | ||
| // stay on form so user can retry | ||
| } |
There was a problem hiding this comment.
Don't swallow form failures inside the modal.
Create/edit catches stay silent, and the delete path bounces back to browse immediately. If the mutation fails, the user is left on a stale screen with no inline error state. Keep the form open with a local error message and render a Mantine Alert, with any toast as secondary feedback. As per coding guidelines "Surface UI mutations with inline feedback: pair toasts with contextual Mantine Alert components inside modals/forms for errors or warnings"
Also applies to: 446-495
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@echo/frontend/src/components/chat/TemplatesModal.tsx` around lines 316 - 340,
The create/edit handlers (handleSaveCreate, handleSaveEdit) currently swallow
errors; update both to catch errors into a local state (e.g., setLocalError or
formError) instead of silently returning or just staying on the form, keep the
modal view unchanged on failure, and render a Mantine Alert in the form using
that state to show the inline error; also ensure any existing toast behavior is
kept as secondary feedback. Do the same for the delete flow referenced around
the other block (the delete handler) so failures set the same inline error state
and do not immediately navigate back to "browse". Ensure you reference
handleSaveCreate, handleSaveEdit and the delete handler when making the changes
so the modal shows contextual inline errors while toasts remain optional
secondary notifications.
| @ProjectRouter.patch("/{project_id}/pin") | ||
| async def toggle_project_pin( | ||
| project_id: str, | ||
| body: PinProjectRequest, | ||
| auth: DependencyDirectusSession, | ||
| ) -> dict: | ||
| """Pin or unpin a project. Admins can only pin projects they own.""" | ||
| if body.pin_order is not None and body.pin_order not in (1, 2, 3): | ||
| raise HTTPException(status_code=400, detail="pin_order must be 1, 2, or 3") | ||
|
|
||
| client = auth.client | ||
|
|
||
| # Ownership check: admins can only pin/unpin projects they own | ||
| if auth.is_admin: | ||
| project_service = ProjectService(directus_client=client) | ||
| try: | ||
| project = await run_in_thread_pool(project_service.get_by_id_or_raise, project_id) | ||
| except ProjectNotFoundException as e: | ||
| raise HTTPException(status_code=404, detail="Project not found") from e | ||
|
|
||
| if project.get("directus_user_id") != auth.user_id: | ||
| raise HTTPException( | ||
| status_code=403, | ||
| detail="Admins can only pin projects they own", | ||
| ) | ||
|
|
||
| await run_in_thread_pool( | ||
| client.update_item, | ||
| "project", | ||
| project_id, | ||
| {"pin_order": body.pin_order}, | ||
| ) | ||
| return {"success": True, "pin_order": body.pin_order} |
There was a problem hiding this comment.
Pinning doesn't enforce the 3-slot invariant.
This only writes the current project's pin_order. It never clears the current occupant of that slot and never caps the user's pinned set at three, so multiple projects can share slot 1/2/3 and /home will arbitrarily hide the extras with limit: 3.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@echo/server/dembrane/api/project.py` around lines 196 - 228, The
toggle_project_pin handler currently only updates the target project's pin_order
and doesn't clear any existing project occupying that same slot or enforce slot
uniqueness per user; update toggle_project_pin so that when body.pin_order is
not None you: 1) query for any existing project owned by auth.user_id with the
same pin_order (use ProjectService or client.items query via ProjectService
methods) and, if found, clear its pin_order (set to None) via
client.update_item/run_in_thread_pool before updating the target project; 2)
perform both updates (clear old slot then set new slot) inside the same flow to
ensure each pin slot (1,2,3) is unique per user and thus the user's pinned set
remains capped at three; keep usage of ProjectService, client.update_item and
run_in_thread_pool to locate and update records.
| # Total views across all reports for the project | ||
| all_metrics = await run_in_thread_pool( | ||
| directus.get_items, | ||
| "project_report_metric", | ||
| { | ||
| "query": { | ||
| "filter": { | ||
| "project_report_id": { | ||
| "project_id": {"_eq": project_id}, | ||
| }, | ||
| }, | ||
| "aggregate": {"count": "*"}, | ||
| } | ||
| }, | ||
| ) | ||
| total = 0 | ||
| if all_metrics and len(all_metrics) > 0: | ||
| total = int(all_metrics[0].get("count", 0)) | ||
|
|
||
| # Recent views (last 10 minutes) | ||
| from datetime import datetime, timedelta, timezone | ||
| ten_mins_ago = (datetime.now(timezone.utc) - timedelta(minutes=10)).isoformat() | ||
| recent_metrics = await run_in_thread_pool( | ||
| directus.get_items, | ||
| "project_report_metric", | ||
| { | ||
| "query": { | ||
| "filter": { | ||
| "date_created": {"_gte": ten_mins_ago}, | ||
| "project_report_id": { | ||
| "project_id": {"_eq": project_id}, | ||
| }, | ||
| }, | ||
| "aggregate": {"count": "*"}, | ||
| } | ||
| }, | ||
| ) | ||
| recent = 0 | ||
| if recent_metrics and len(recent_metrics) > 0: | ||
| recent = int(recent_metrics[0].get("count", 0)) | ||
|
|
There was a problem hiding this comment.
/views is returning project-wide counts, not report-specific counts.
Both aggregates filter only by project_report_id.project_id = project_id. As written, every report in the same project gets the same total and recent values, and report_id is ignored entirely. Add the concrete report filter to both queries.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@echo/server/dembrane/api/project.py` around lines 837 - 877, The aggregate
queries for total and recent use directus.get_items on "project_report_metric"
but only filter by project_report_id.project_id = project_id, so report_id is
ignored; update both queries (the ones building all_metrics and recent_metrics
used to compute total and recent) to also filter the project_report_id to the
concrete report (e.g., add project_report_id.id/_eq: report_id or the
appropriate project_report_id.report_id/_eq: report_id depending on the Directus
relation field name) so the aggregates count metrics for that specific report;
apply the same added filter to both the "all_metrics" aggregate and the
"recent_metrics" aggregate invoked via run_in_thread_pool(directus.get_items,
...).
echo/server/dembrane/api/template.py
Outdated
| @TemplateRouter.get("/prompt-templates") | ||
| async def list_prompt_templates( | ||
| auth: DependencyDirectusSession, | ||
| ) -> List[PromptTemplateOut]: | ||
| try: | ||
| items = directus.get_items( | ||
| "prompt_template", | ||
| { | ||
| "query": { | ||
| "filter": {"user_created": {"_eq": auth.user_id}}, | ||
| "sort": ["sort"], | ||
| "fields": ["*", "user_created.id", "user_created.first_name"], | ||
| } | ||
| }, | ||
| ) | ||
| if not isinstance(items, list): | ||
| items = [] | ||
|
|
||
| results = [] | ||
| for item in items: | ||
| user_created = item.get("user_created") or {} | ||
| is_anonymous = item.get("is_anonymous", False) | ||
| # For own templates: compute author_display_name dynamically if public, | ||
| # but keep stored author_display_name for copied_from attribution | ||
| if item.get("is_public"): | ||
| if is_anonymous: | ||
| resolved_name = None | ||
| else: | ||
| resolved_name = user_created.get("first_name") if isinstance(user_created, dict) else None | ||
| else: | ||
| # Private template: use stored author_display_name (for "from {author}" on copies) | ||
| resolved_name = item.get("author_display_name") | ||
|
|
||
| item_data = {**item} | ||
| item_data["author_display_name"] = resolved_name | ||
| item_data["user_created"] = user_created.get("id") if isinstance(user_created, dict) else user_created | ||
| item_data.pop("is_anonymous", None) | ||
| results.append(PromptTemplateOut(**item_data)) | ||
| return results | ||
| except Exception as e: | ||
| logger.exception(f"Failed to list prompt templates: {e}") | ||
| raise HTTPException(status_code=500, detail="Failed to list templates") from e | ||
|
|
||
|
|
||
| @TemplateRouter.post("/prompt-templates") | ||
| async def create_prompt_template( | ||
| body: PromptTemplateCreateIn, | ||
| auth: DependencyDirectusSession, | ||
| ) -> PromptTemplateOut: | ||
| try: | ||
| result = directus.create_item( | ||
| "prompt_template", | ||
| { | ||
| "title": body.title, | ||
| "content": body.content, | ||
| "icon": body.icon, | ||
| }, | ||
| )["data"] | ||
| # Set ownership to the authenticated user (admin client sets user_created to admin) | ||
| directus.update_item("prompt_template", result["id"], {"user_created": auth.user_id}) | ||
| result["user_created"] = auth.user_id | ||
| return PromptTemplateOut(**result) | ||
| except Exception as e: | ||
| logger.error(f"Failed to create prompt template: {e}") | ||
| raise HTTPException(status_code=500, detail="Failed to create template") from e | ||
|
|
||
|
|
||
| @TemplateRouter.patch("/prompt-templates/{template_id}") | ||
| async def update_prompt_template( | ||
| template_id: str, | ||
| body: PromptTemplateUpdateIn, | ||
| auth: DependencyDirectusSession, | ||
| ) -> PromptTemplateOut: | ||
| # Verify ownership | ||
| try: | ||
| existing = directus.get_item("prompt_template", template_id) | ||
| if not existing or existing.get("user_created") != auth.user_id: | ||
| raise HTTPException(status_code=404, detail="Template not found") | ||
| except HTTPException: | ||
| raise | ||
| except Exception as e: | ||
| logger.error(f"Failed to verify template ownership: {e}") | ||
| raise HTTPException(status_code=500, detail="Failed to update template") from e | ||
|
|
||
| update_data = {k: v for k, v in body.model_dump().items() if v is not None} | ||
| if not update_data: | ||
| raise HTTPException(status_code=400, detail="No fields to update") | ||
|
|
||
| try: | ||
| result = directus.update_item("prompt_template", template_id, update_data)["data"] | ||
| return PromptTemplateOut(**result) | ||
| except Exception as e: | ||
| logger.error(f"Failed to update prompt template: {e}") | ||
| raise HTTPException(status_code=500, detail="Failed to update template") from e | ||
|
|
||
|
|
||
| @TemplateRouter.delete("/prompt-templates/{template_id}") | ||
| async def delete_prompt_template( | ||
| template_id: str, | ||
| auth: DependencyDirectusSession, | ||
| ) -> dict: | ||
| # Verify ownership | ||
| try: | ||
| existing = directus.get_item("prompt_template", template_id) | ||
| if not existing or existing.get("user_created") != auth.user_id: | ||
| raise HTTPException(status_code=404, detail="Template not found") | ||
| except HTTPException: | ||
| raise | ||
| except Exception as e: | ||
| logger.error(f"Failed to verify template ownership: {e}") | ||
| raise HTTPException(status_code=500, detail="Failed to delete template") from e | ||
|
|
||
| try: | ||
| directus.delete_item("prompt_template", template_id) | ||
| return {"status": "ok"} | ||
| except Exception as e: | ||
| logger.error(f"Failed to delete prompt template: {e}") | ||
| raise HTTPException(status_code=500, detail="Failed to delete template") from e | ||
|
|
||
|
|
||
| # ── Community Marketplace ── | ||
|
|
||
|
|
||
| @TemplateRouter.get("/community") | ||
| async def list_community_templates( | ||
| auth: DependencyDirectusSession, | ||
| search: Optional[str] = None, | ||
| tag: Optional[str] = None, | ||
| language: Optional[str] = None, | ||
| sort: str = "newest", # "newest", "most_starred", "most_used" | ||
| page: int = 1, | ||
| limit: int = 20, | ||
| ) -> List[CommunityTemplateOut]: | ||
| try: | ||
| filter_query: dict = {"is_public": {"_eq": True}} | ||
|
|
||
| if search: | ||
| filter_query["_or"] = [ | ||
| {"title": {"_contains": search}}, | ||
| {"description": {"_contains": search}}, | ||
| ] | ||
|
|
||
| if tag: | ||
| filter_query["tags"] = {"_contains": tag} | ||
|
|
||
| if language: | ||
| filter_query["language"] = {"_eq": language} | ||
|
|
||
| sort_mapping = { | ||
| "newest": "-date_created", | ||
| "most_starred": "-star_count", | ||
| "most_used": "-use_count", | ||
| } | ||
| sort_field = sort_mapping.get(sort, "-date_created") | ||
| offset = (page - 1) * limit | ||
|
|
||
| items = directus.get_items( | ||
| "prompt_template", | ||
| { | ||
| "query": { | ||
| "filter": filter_query, | ||
| "sort": [sort_field], | ||
| "fields": ["*", "user_created.id", "user_created.first_name"], | ||
| "limit": limit, | ||
| "offset": offset, | ||
| } | ||
| }, | ||
| ) | ||
| if not isinstance(items, list): | ||
| items = [] | ||
|
|
||
| results = [] | ||
| for item in items: | ||
| user_created = item.get("user_created") or {} | ||
| user_id = user_created.get("id") if isinstance(user_created, dict) else user_created | ||
| is_anonymous = item.get("is_anonymous", False) | ||
| if is_anonymous: | ||
| author_name = None | ||
| else: | ||
| author_name = user_created.get("first_name") if isinstance(user_created, dict) else None | ||
|
|
||
| results.append( | ||
| CommunityTemplateOut( | ||
| id=item["id"], | ||
| title=item["title"], | ||
| description=item.get("description"), | ||
| content=item["content"], | ||
| tags=item.get("tags"), | ||
| language=item.get("language"), | ||
| author_display_name=author_name, | ||
| star_count=item.get("star_count", 0), | ||
| use_count=item.get("use_count", 0), | ||
| date_created=item.get("date_created"), | ||
| is_own=(user_id == auth.user_id), | ||
| ) | ||
| ) | ||
| return results | ||
| except Exception as e: | ||
| logger.error(f"Failed to list community templates: {e}") | ||
| return [] | ||
|
|
||
|
|
||
| @TemplateRouter.get("/community/my-stars") | ||
| async def get_my_community_stars( | ||
| auth: DependencyDirectusSession, | ||
| ) -> List[str]: | ||
| try: | ||
| items = directus.get_items( | ||
| "prompt_template_rating", | ||
| { | ||
| "query": { | ||
| "filter": { | ||
| "user_created": {"_eq": auth.user_id}, | ||
| "chat_message_id": {"_null": True}, | ||
| "rating": {"_eq": 2}, | ||
| }, | ||
| "fields": ["prompt_template_id"], | ||
| } | ||
| }, | ||
| ) | ||
| if not isinstance(items, list): | ||
| return [] | ||
| return [item["prompt_template_id"] for item in items] | ||
| except Exception as e: | ||
| logger.error(f"Failed to get community stars: {e}") | ||
| return [] | ||
|
|
||
|
|
||
| @TemplateRouter.post("/prompt-templates/{template_id}/publish") | ||
| async def publish_template( | ||
| template_id: str, | ||
| body: PublishTemplateIn, | ||
| auth: DependencyDirectusSession, | ||
| ) -> PromptTemplateOut: | ||
| try: | ||
| existing = directus.get_item("prompt_template", template_id) | ||
| if not existing or existing.get("user_created") != auth.user_id: | ||
| raise HTTPException(status_code=404, detail="Template not found") | ||
| except HTTPException: | ||
| raise | ||
| except Exception as e: | ||
| logger.error(f"Failed to verify template ownership: {e}") | ||
| raise HTTPException(status_code=500, detail="Failed to publish template") from e | ||
|
|
||
| if body.tags: | ||
| if len(body.tags) > 3: | ||
| raise HTTPException(status_code=400, detail="Maximum 3 tags allowed") | ||
| for t in body.tags: | ||
| if t not in ALLOWED_TAGS: | ||
| raise HTTPException(status_code=400, detail=f"Invalid tag: {t}") | ||
|
|
||
| try: | ||
| update_data: dict = {"is_public": True, "is_anonymous": body.is_anonymous} | ||
| if body.description is not None: | ||
| update_data["description"] = body.description | ||
| if body.tags is not None: | ||
| update_data["tags"] = body.tags | ||
| if body.language is not None: | ||
| update_data["language"] = body.language | ||
|
|
||
| result = directus.update_item("prompt_template", template_id, update_data)["data"] | ||
| return PromptTemplateOut(**result) | ||
| except Exception as e: | ||
| logger.error(f"Failed to publish template: {e}") | ||
| raise HTTPException(status_code=500, detail="Failed to publish template") from e | ||
|
|
||
|
|
||
| @TemplateRouter.post("/prompt-templates/{template_id}/unpublish") | ||
| async def unpublish_template( | ||
| template_id: str, | ||
| auth: DependencyDirectusSession, | ||
| ) -> PromptTemplateOut: | ||
| try: | ||
| existing = directus.get_item("prompt_template", template_id) | ||
| if not existing or existing.get("user_created") != auth.user_id: | ||
| raise HTTPException(status_code=404, detail="Template not found") | ||
| except HTTPException: | ||
| raise | ||
| except Exception as e: | ||
| logger.error(f"Failed to verify template ownership: {e}") | ||
| raise HTTPException(status_code=500, detail="Failed to unpublish template") from e | ||
|
|
||
| try: | ||
| result = directus.update_item("prompt_template", template_id, {"is_public": False})["data"] | ||
| return PromptTemplateOut(**result) | ||
| except Exception as e: | ||
| logger.error(f"Failed to unpublish template: {e}") | ||
| raise HTTPException(status_code=500, detail="Failed to unpublish template") from e | ||
|
|
||
|
|
||
| @TemplateRouter.post("/prompt-templates/{template_id}/star") | ||
| async def toggle_star( | ||
| template_id: str, | ||
| auth: DependencyDirectusSession, | ||
| ) -> dict: | ||
| try: | ||
| existing_ratings = directus.get_items( | ||
| "prompt_template_rating", | ||
| { | ||
| "query": { | ||
| "filter": { | ||
| "user_created": {"_eq": auth.user_id}, | ||
| "prompt_template_id": {"_eq": template_id}, | ||
| "chat_message_id": {"_null": True}, | ||
| "rating": {"_eq": 2}, | ||
| }, | ||
| "fields": ["id"], | ||
| "limit": 1, | ||
| } | ||
| }, | ||
| ) | ||
|
|
||
| template = directus.get_item("prompt_template", template_id) | ||
| if not template: | ||
| raise HTTPException(status_code=404, detail="Template not found") | ||
|
|
||
| current_count = template.get("star_count", 0) or 0 | ||
|
|
||
| if isinstance(existing_ratings, list) and len(existing_ratings) > 0: | ||
| # Remove star | ||
| directus.delete_item("prompt_template_rating", existing_ratings[0]["id"]) | ||
| new_count = max(0, current_count - 1) | ||
| directus.update_item("prompt_template", template_id, {"star_count": new_count})["data"] | ||
| return {"starred": False, "star_count": new_count} | ||
| else: | ||
| # Add star | ||
| star_result = directus.create_item( | ||
| "prompt_template_rating", | ||
| {"prompt_template_id": template_id, "rating": 2}, | ||
| )["data"] | ||
| directus.update_item("prompt_template_rating", star_result["id"], {"user_created": auth.user_id}) | ||
| new_count = current_count + 1 | ||
| directus.update_item("prompt_template", template_id, {"star_count": new_count})["data"] | ||
| return {"starred": True, "star_count": new_count} | ||
| except HTTPException: | ||
| raise | ||
| except Exception as e: | ||
| logger.error(f"Failed to toggle star: {e}") | ||
| raise HTTPException(status_code=500, detail="Failed to toggle star") from e | ||
|
|
||
|
|
||
| @TemplateRouter.post("/prompt-templates/{template_id}/copy") | ||
| async def copy_template( | ||
| template_id: str, | ||
| auth: DependencyDirectusSession, | ||
| ) -> PromptTemplateOut: | ||
| try: | ||
| # Fetch source with nested user_created to resolve author name | ||
| source_items = directus.get_items( | ||
| "prompt_template", | ||
| { | ||
| "query": { | ||
| "filter": {"id": {"_eq": template_id}}, | ||
| "fields": ["*", "user_created.id", "user_created.first_name"], | ||
| "limit": 1, | ||
| } | ||
| }, | ||
| ) | ||
| if not isinstance(source_items, list) or len(source_items) == 0: | ||
| raise HTTPException(status_code=404, detail="Template not found") | ||
| source = source_items[0] | ||
|
|
||
| user_created = source.get("user_created") or {} | ||
| source_user_id = user_created.get("id") if isinstance(user_created, dict) else user_created | ||
| is_public = source.get("is_public", False) | ||
| is_own = source_user_id == auth.user_id | ||
| if not is_public and not is_own: | ||
| raise HTTPException(status_code=404, detail="Template not found") | ||
|
|
||
| # Snapshot author name at copy time | ||
| is_anonymous = source.get("is_anonymous", False) | ||
| if is_anonymous: | ||
| copied_author_name = None | ||
| else: | ||
| copied_author_name = user_created.get("first_name") if isinstance(user_created, dict) else None | ||
|
|
||
| new_template = directus.create_item( | ||
| "prompt_template", | ||
| { | ||
| "title": source["title"], | ||
| "content": source["content"], | ||
| "copied_from": template_id, | ||
| "author_display_name": copied_author_name, | ||
| }, | ||
| )["data"] | ||
| directus.update_item("prompt_template", new_template["id"], {"user_created": auth.user_id}) | ||
| new_template["user_created"] = auth.user_id | ||
|
|
||
| # Increment use_count on source | ||
| current_use_count = source.get("use_count", 0) or 0 | ||
| directus.update_item("prompt_template", template_id, {"use_count": current_use_count + 1})["data"] | ||
|
|
||
| return PromptTemplateOut(**new_template) | ||
| except HTTPException: | ||
| raise | ||
| except Exception as e: | ||
| logger.error(f"Failed to copy template: {e}") | ||
| raise HTTPException(status_code=500, detail="Failed to copy template") from e | ||
|
|
||
|
|
||
| # ── Quick-Access Preferences ── | ||
|
|
||
|
|
||
| @TemplateRouter.get("/quick-access") | ||
| async def list_quick_access( | ||
| auth: DependencyDirectusSession, | ||
| ) -> List[PromptTemplatePreferenceOut]: | ||
| try: | ||
| items = directus.get_items( | ||
| "prompt_template_preference", | ||
| { | ||
| "query": { | ||
| "filter": {"user_created": {"_eq": auth.user_id}}, | ||
| "sort": ["sort"], | ||
| "fields": [ | ||
| "id", | ||
| "template_type", | ||
| "static_template_id", | ||
| "prompt_template_id", | ||
| "sort", | ||
| ], | ||
| } | ||
| }, | ||
| ) | ||
| if not isinstance(items, list): | ||
| items = [] | ||
| return [PromptTemplatePreferenceOut(**item) for item in items] | ||
| except Exception as e: | ||
| logger.error(f"Failed to list quick access preferences: {e}") | ||
| raise HTTPException(status_code=500, detail="Failed to list preferences") from e | ||
|
|
||
|
|
||
| @TemplateRouter.put("/quick-access") | ||
| async def save_quick_access( | ||
| body: List[QuickAccessPreferenceIn], | ||
| auth: DependencyDirectusSession, | ||
| ) -> List[PromptTemplatePreferenceOut]: | ||
| if len(body) > 5: | ||
| raise HTTPException(status_code=400, detail="Maximum 5 quick access items") | ||
|
|
||
| try: | ||
| # Delete existing preferences | ||
| existing = directus.get_items( | ||
| "prompt_template_preference", | ||
| { | ||
| "query": { | ||
| "filter": {"user_created": {"_eq": auth.user_id}}, | ||
| "fields": ["id"], | ||
| } | ||
| }, | ||
| ) | ||
| if isinstance(existing, list): | ||
| for pref in existing: | ||
| directus.delete_item("prompt_template_preference", pref["id"]) | ||
|
|
||
| # Create new preferences | ||
| results = [] | ||
| for pref in body: | ||
| result = directus.create_item( | ||
| "prompt_template_preference", | ||
| { | ||
| "template_type": pref.template_type, | ||
| "static_template_id": pref.static_template_id, | ||
| "prompt_template_id": pref.prompt_template_id, | ||
| "sort": pref.sort, | ||
| }, | ||
| )["data"] | ||
| directus.update_item("prompt_template_preference", result["id"], {"user_created": auth.user_id}) | ||
| results.append(PromptTemplatePreferenceOut(**result)) | ||
|
|
||
| return results | ||
| except Exception as e: | ||
| logger.error(f"Failed to save quick access preferences: {e}") | ||
| raise HTTPException(status_code=500, detail="Failed to save preferences") from e | ||
|
|
||
|
|
||
| # ── Ratings ── | ||
|
|
||
|
|
||
| @TemplateRouter.post("/ratings") | ||
| async def rate_prompt_template( | ||
| body: PromptTemplateRatingIn, | ||
| auth: DependencyDirectusSession, | ||
| ) -> PromptTemplateRatingOut: | ||
| try: | ||
| # Check if user already rated this template (upsert pattern) | ||
| existing = directus.get_items( | ||
| "prompt_template_rating", | ||
| { | ||
| "query": { | ||
| "filter": { | ||
| "user_created": {"_eq": auth.user_id}, | ||
| "prompt_template_id": {"_eq": body.prompt_template_id}, | ||
| # If chat_message_id is provided, rate that specific use | ||
| # Otherwise, rate the template overall (favorite) | ||
| **({"chat_message_id": {"_eq": body.chat_message_id}} if body.chat_message_id else {"chat_message_id": {"_null": True}}), | ||
| }, | ||
| "fields": ["id"], | ||
| "limit": 1, | ||
| } | ||
| }, | ||
| ) | ||
|
|
||
| if isinstance(existing, list) and len(existing) > 0: | ||
| # Update existing rating | ||
| result = directus.update_item( | ||
| "prompt_template_rating", | ||
| existing[0]["id"], | ||
| {"rating": body.rating}, | ||
| )["data"] | ||
| return PromptTemplateRatingOut(**result) | ||
|
|
||
| # Create new rating | ||
| payload: dict = { | ||
| "prompt_template_id": body.prompt_template_id, | ||
| "rating": body.rating, | ||
| } | ||
| if body.chat_message_id: | ||
| payload["chat_message_id"] = body.chat_message_id | ||
|
|
||
| result = directus.create_item("prompt_template_rating", payload)["data"] | ||
| directus.update_item("prompt_template_rating", result["id"], {"user_created": auth.user_id}) | ||
| return PromptTemplateRatingOut(**result) | ||
| except Exception as e: | ||
| logger.error(f"Failed to rate prompt template: {e}") | ||
| raise HTTPException(status_code=500, detail="Failed to rate template") from e | ||
|
|
||
|
|
||
| @TemplateRouter.delete("/ratings/{rating_id}") | ||
| async def delete_rating( | ||
| rating_id: str, | ||
| auth: DependencyDirectusSession, | ||
| ) -> dict: | ||
| try: | ||
| existing = directus.get_item("prompt_template_rating", rating_id) | ||
| if not existing or existing.get("user_created") != auth.user_id: | ||
| raise HTTPException(status_code=404, detail="Rating not found") | ||
| directus.delete_item("prompt_template_rating", rating_id) | ||
| return {"status": "ok"} | ||
| except HTTPException: | ||
| raise | ||
| except Exception as e: | ||
| logger.error(f"Failed to delete rating: {e}") | ||
| raise HTTPException(status_code=500, detail="Failed to delete rating") from e | ||
|
|
||
|
|
||
| @TemplateRouter.get("/ratings") | ||
| async def list_my_ratings( | ||
| auth: DependencyDirectusSession, | ||
| ) -> List[PromptTemplateRatingOut]: | ||
| """List all ratings by the current user (for favorites and history).""" | ||
| try: | ||
| items = directus.get_items( | ||
| "prompt_template_rating", | ||
| { | ||
| "query": { | ||
| "filter": {"user_created": {"_eq": auth.user_id}}, | ||
| "sort": ["-date_created"], | ||
| "fields": [ | ||
| "id", | ||
| "prompt_template_id", | ||
| "rating", | ||
| "chat_message_id", | ||
| "date_created", | ||
| ], | ||
| } | ||
| }, | ||
| ) | ||
| if not isinstance(items, list): | ||
| items = [] | ||
| return [PromptTemplateRatingOut(**item) for item in items] | ||
| except Exception as e: | ||
| logger.error(f"Failed to list ratings: {e}") | ||
| raise HTTPException(status_code=500, detail="Failed to list ratings") from e | ||
|
|
||
|
|
||
| # ── AI Suggestions Toggle ── | ||
|
|
||
|
|
||
| @TemplateRouter.patch("/ai-suggestions") | ||
| async def toggle_ai_suggestions( | ||
| body: AiSuggestionsToggleIn, | ||
| auth: DependencyDirectusSession, | ||
| ) -> dict: | ||
| try: | ||
| directus.update_user(auth.user_id, {"hide_ai_suggestions": body.hide_ai_suggestions}) | ||
| return {"status": "ok", "hide_ai_suggestions": body.hide_ai_suggestions} | ||
| except Exception as e: | ||
| logger.error(f"Failed to toggle AI suggestions: {e}") | ||
| raise HTTPException(status_code=500, detail="Failed to update setting") from e |
There was a problem hiding this comment.
This router still does sync Directus I/O on the event loop.
Every directus.get_*/create_*/update_*/delete_* call here runs inside async def without run_in_thread_pool. One slow Directus round-trip will block unrelated requests on the same worker. Wrap these calls or move them behind async service helpers before shipping. As per coding guidelines "Always wrap blocking I/O calls using run_in_thread_pool from dembrane.async_helpers in backend code. Wrap calls to directus.*, conversation_service.*, project_service.*, S3 helpers, and CPU-heavy utilities like token counting or summary generation if they are sync."
🧰 Tools
🪛 GitHub Check: CodeQL
[warning] 330-330: Information exposure through an exception
Stack trace information flows to this location and may be exposed to an external user.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@echo/server/dembrane/api/template.py` around lines 108 - 697, The handlers in
this file (e.g., list_prompt_templates, create_prompt_template,
update_prompt_template, delete_prompt_template, list_community_templates,
publish_template, unpublish_template, toggle_star, copy_template,
list_quick_access, save_quick_access, rate_prompt_template, delete_rating,
list_my_ratings, toggle_ai_suggestions) call blocking directus.* methods
directly inside async functions; wrap every synchronous Directus call
(directus.get_items, get_item, create_item, update_item, delete_item,
update_user, etc.) with run_in_thread_pool from dembrane.async_helpers (or
delegate to async service helpers that already use run_in_thread_pool) so the
I/O runs off the event loop, and add the needed import; ensure each call
awaiting run_in_thread_pool returns the same shape used downstream and preserve
error handling and returned data types.
| @TemplateRouter.post("/prompt-templates/{template_id}/star") | ||
| async def toggle_star( | ||
| template_id: str, | ||
| auth: DependencyDirectusSession, | ||
| ) -> dict: | ||
| try: | ||
| existing_ratings = directus.get_items( | ||
| "prompt_template_rating", | ||
| { | ||
| "query": { | ||
| "filter": { | ||
| "user_created": {"_eq": auth.user_id}, | ||
| "prompt_template_id": {"_eq": template_id}, | ||
| "chat_message_id": {"_null": True}, | ||
| "rating": {"_eq": 2}, | ||
| }, | ||
| "fields": ["id"], | ||
| "limit": 1, | ||
| } | ||
| }, | ||
| ) | ||
|
|
||
| template = directus.get_item("prompt_template", template_id) | ||
| if not template: | ||
| raise HTTPException(status_code=404, detail="Template not found") | ||
|
|
||
| current_count = template.get("star_count", 0) or 0 | ||
|
|
||
| if isinstance(existing_ratings, list) and len(existing_ratings) > 0: | ||
| # Remove star | ||
| directus.delete_item("prompt_template_rating", existing_ratings[0]["id"]) | ||
| new_count = max(0, current_count - 1) | ||
| directus.update_item("prompt_template", template_id, {"star_count": new_count})["data"] | ||
| return {"starred": False, "star_count": new_count} | ||
| else: | ||
| # Add star | ||
| star_result = directus.create_item( | ||
| "prompt_template_rating", | ||
| {"prompt_template_id": template_id, "rating": 2}, | ||
| )["data"] | ||
| directus.update_item("prompt_template_rating", star_result["id"], {"user_created": auth.user_id}) | ||
| new_count = current_count + 1 | ||
| directus.update_item("prompt_template", template_id, {"star_count": new_count})["data"] | ||
| return {"starred": True, "star_count": new_count} |
There was a problem hiding this comment.
Star toggling bypasses template visibility checks.
Unlike copy_template, this path only checks that the template exists. Any authenticated user who knows a private template ID can create/delete a star on it and mutate star_count. Gate the operation on template.is_public or ownership before touching ratings.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@echo/server/dembrane/api/template.py` around lines 398 - 441, The toggle_star
handler currently allows any authenticated user to add/remove stars for any
template; update the toggle_star function to enforce visibility/ownership by
fetching the template (directus.get_item("prompt_template", template_id)) and
checking template.get("is_public") or that template.get("user_created") ==
auth.user_id before proceeding to create/delete ratings or update star_count; if
the check fails, raise HTTPException(403) (or similar) and do not call
directus.create_item/delete_item/update_item for ratings or star_count.
Follow-up fixes for templates, report overhaul, settings redesign, and LLM upgrade.
| # Never expose the raw tfa_secret — only expose a boolean flag | ||
| user["tfa_enabled"] = bool(user.get("tfa_secret")) | ||
| user.pop("tfa_secret", None) | ||
| return user |
Check warning
Code scanning / CodeQL
Information exposure through an exception Medium
Show autofix suggestion
Hide autofix suggestion
Copilot Autofix
AI 12 days ago
In general, the fix is to ensure that exception details are not embedded in data structures that will be returned to API clients. Instead, detailed exception information should be logged on the server, and only generic, non‑sensitive error messages or appropriate HTTP error codes should be exposed externally.
The most direct, low‑impact fix is to modify the search method in echo/server/dembrane/directus.py so that when parsing response.json()["data"] fails, it does not include exc in the returned data. We can still log exc for diagnostics, but the returned value should be either {"error": "No data found for this request"} (without the exception text) or, better, raise an appropriate custom exception that upstream code will catch and convert into a generic HTTP error. Since we must not change behavior more than necessary, we will preserve the structure of returning a dict with an "error" field but remove the exception message, and add server‑side logging. To log, we can use the standard library logging module, adding a getLogger call at the top of the file (without altering existing imports). No changes are needed in user_settings.py; once search no longer incorporates exc into its returned value, the taint source that CodeQL tracks disappears, and user returned from /me will not include stack trace information.
Concretely:
- In
echo/server/dembrane/directus.py, add a module‑level logger (e.g.,logger = getLogger(__name__)). - In the
searchmethod’s innerexcept Exception as exc:block, log the exception withlogger.error(...)and change the returned dict to a generic message without interpolatingexc.
| @@ -6,6 +6,7 @@ | ||
| from contextlib import contextmanager | ||
| from dataclasses import dataclass | ||
| from urllib.parse import urljoin | ||
| from logging import getLogger | ||
|
|
||
| import urllib3 | ||
| import requests | ||
| @@ -13,6 +14,8 @@ | ||
|
|
||
| from dembrane.settings import get_settings | ||
|
|
||
| logger = getLogger(__name__) | ||
|
|
||
| # HTTP status codes that are typically recoverable | ||
| RECOVERABLE_STATUS_CODES = { | ||
| 401, # Unauthorized (token expired) | ||
| @@ -377,7 +380,8 @@ | ||
| try: | ||
| return response.json()["data"] | ||
| except Exception as exc: # noqa: BLE001 - want best-effort fallback | ||
| return {"error": f"No data found for this request : {exc}"} | ||
| logger.error("Failed to parse Directus SEARCH response: %s", exc) | ||
| return {"error": "No data found for this request"} | ||
| except requests.exceptions.ConnectionError as exc: | ||
| raise DirectusServerError(exc) from exc | ||
| except AssertionError as exc: |
There was a problem hiding this comment.
Actionable comments posted: 12
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
echo/server/dembrane/chat_utils.py (1)
606-614:⚠️ Potential issue | 🟠 MajorActually leave headroom before sending the batch.
Line 612 sets the limit to the full context window, so near-limit batches bypass the split logic and then fail later once JSON output and router-added tokens are counted. That turns a recoverable split case into a dropped batch.
🛠️ Proposed fix
- MAX_BATCH_CONTEXT = MAX_AUTO_SELECT_CONTEXT_LENGTH # Leave headroom for response + response_headroom = min( + 1024, + max(256, MAX_AUTO_SELECT_CONTEXT_LENGTH // 10), + ) + MAX_BATCH_CONTEXT = MAX_AUTO_SELECT_CONTEXT_LENGTH - response_headroom🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@echo/server/dembrane/chat_utils.py` around lines 606 - 614, The prompt-size check uses the full context window (MAX_AUTO_SELECT_CONTEXT_LENGTH) so near-limit inputs bypass splitting and later fail after router/JSON/response tokens are added; change the MAX_BATCH_CONTEXT calculation in the block around token_counter/get_completion_kwargs (where MAX_BATCH_CONTEXT is set) to subtract a reserved headroom (e.g., a constant like RESERVED_HEADROOM or computed response/router token estimate) from MAX_AUTO_SELECT_CONTEXT_LENGTH so batches trigger splitting earlier (ensure the reserved value is large enough to cover JSON/router tokens and expected response tokens).
♻️ Duplicate comments (27)
echo/frontend/src/components/announcement/Announcements.tsx (1)
211-226:⚠️ Potential issue | 🟡 MinorKeep the release-notes accent on theme vars.
Lines 214-226 hardcode Royal Blue four times. That bypasses the app theme and will drift as soon as the primary palette changes; use Mantine
primary/var(--mantine-color-primary-6)here instead.As per coding guidelines: "When adding new Mantine components, prefer referencing CSS variables in styles over hardcoded hex values".🎨 Suggested fix
<Sparkle size={16} weight="fill" - color="#4169e1" + color="var(--mantine-color-primary-6)" /> <Text size="sm" fw={500} - c="#4169e1" + c="primary" > <Trans>Release notes</Trans> </Text> {whatsNewExpanded ? ( - <CaretUp size={14} color="#4169e1" /> + <CaretUp size={14} color="var(--mantine-color-primary-6)" /> ) : ( - <CaretDown size={14} color="#4169e1" /> + <CaretDown size={14} color="var(--mantine-color-primary-6)" /> )}🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@echo/frontend/src/components/announcement/Announcements.tsx` around lines 211 - 226, The components Sparkle, Text, CaretUp and CaretDown are using the hardcoded hex "#4169e1" (tied to whatsNewExpanded block); change those color props to use the Mantine primary CSS variable instead (e.g., "var(--mantine-color-primary-6)") or use Mantine's semantic color prop (for Text use c="primary") so the release-notes accent follows the app theme; update all occurrences of "#4169e1" in this JSX (Sparkle, Text, CaretUp, CaretDown) to the chosen theme variable.echo/frontend/src/components/layout/Header.tsx (2)
63-74:⚠️ Potential issue | 🟠 MajorWrap Sentry calls in try/catch to ensure fallback fires.
If
createForm()oropen()throws, the fallback modal is skipped. Users get nothing.🔧 Proposed fix
const handleClick = async () => { + try { const feedback = Sentry.getFeedback(); if (feedback) { const form = await feedback.createForm(); if (form) { form.appendToDom(); form.open(); return; } } + } catch (err) { + console.warn("Sentry feedback failed:", err); + } onFallback(); };🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@echo/frontend/src/components/layout/Header.tsx` around lines 63 - 74, The Sentry integration in handleClick may throw from Sentry.getFeedback(), feedback.createForm(), or form.open(), preventing onFallback() from running; wrap the Sentry sequence in a try/catch and only return early if all Sentry calls succeed (i.e., after form.open()), and call onFallback() from the catch (or fall through) so the fallback modal always displays when any Sentry step fails. Reference the handleClick function and the Sentry.getFeedback(), feedback.createForm(), and form.open() calls when applying the change.
178-188:⚠️ Potential issue | 🟡 MinorAdd accessible name to avatar menu trigger.
Icon-only button, no
aria-label. Screen readers will announce "button" with zero context.♿ Proposed fix
<ActionIcon color="gray" variant="transparent" radius="xl" size="lg" + aria-label={t`Open account menu`} {...testId("header-settings-gear-button")} > <UserAvatar size={32} /> </ActionIcon>🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@echo/frontend/src/components/layout/Header.tsx` around lines 178 - 188, The ActionIcon used as the avatar menu trigger (component ActionIcon wrapping <UserAvatar /> with testId "header-settings-gear-button") is icon-only and missing an accessible name; add an appropriate accessible name by including an aria-label (or aria-labelledby pointing to a hidden label) on that ActionIcon (e.g., aria-label="Account settings" or similar) so screen readers get context when announcing the button; ensure the label text is concise and matches the menu purpose.echo/server/dembrane/api/user_settings.py (2)
67-75:⚠️ Potential issue | 🟠 MajorBlocking I/O still stalling the event loop.
directus.get_users()is sync. In anasync defhandler, this blocks the entire event loop. Same story for all therequests.*calls below. Wrap withrun_in_thread_pooland add explicittimeout=to network calls.As per coding guidelines: "Always wrap blocking I/O calls using
run_in_thread_poolfromdembrane.async_helpersin backend code."🔧 Proposed fix pattern
- users = directus.get_users( + users = await run_in_thread_pool( + directus.get_users, { "query": { "filter": {"id": {"_eq": auth.user_id}}, "fields": USER_PROFILE_FIELDS, "limit": 1, } }, )For
requests.*calls:- response = requests.post( - url, - json={"password": body.new_password}, - headers=headers, - verify=directus.verify, - ) + response = await run_in_thread_pool( + lambda: requests.post( + url, + json={"password": body.new_password}, + headers=headers, + verify=directus.verify, + timeout=30, + ) + )🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@echo/server/dembrane/api/user_settings.py` around lines 67 - 75, The code calls blocking functions (directus.get_users and requests.*) from an async handler; wrap each blocking call (e.g., directus.get_users, requests.get/post/put/etc.) with run_in_thread_pool from dembrane.async_helpers and call them via await run_in_thread_pool(lambda: ...), and ensure every network call includes an explicit timeout= value passed into the underlying requests call; update the functions in user_settings.py that perform these calls to use run_in_thread_pool and propagate/handle timeouts accordingly.
394-403:⚠️ Potential issue | 🟠 MajorAvatar uploads need server-side validation.
Browser
acceptfilters are bypassable. Enforce MIME whitelist (image/png, image/jpeg, image/webp, image/gif) and size cap before forwarding to Directus. Same applies to the whitelabel-logo endpoint.🛡️ Proposed validation
async def upload_avatar( file: UploadFile, auth: DependencyDirectusSession, ) -> dict: """Upload a user avatar image.""" + ALLOWED_TYPES = {"image/png", "image/jpeg", "image/webp", "image/gif"} + MAX_SIZE_BYTES = 2 * 1024 * 1024 # 2MB + folder_id = _get_or_create_folder_id("avatars") if not folder_id: raise HTTPException(status_code=500, detail="Failed to get or create avatars folder") file_content = await file.read() + + if file.content_type not in ALLOWED_TYPES: + raise HTTPException(status_code=400, detail="Invalid image type. Allowed: PNG, JPEG, WebP, GIF") + + if len(file_content) > MAX_SIZE_BYTES: + raise HTTPException(status_code=400, detail="File too large. Maximum size: 2MB") + url = f"{directus.url}/files"🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@echo/server/dembrane/api/user_settings.py` around lines 394 - 403, Add server-side validation in the upload flow before building files and calling requests.post: check the incoming file's content_type (from file.content_type) against a whitelist ["image/png","image/jpeg","image/webp","image/gif"] and enforce a maximum size (e.g., 5MB) using the read bytes length (file_content) and return a 400/appropriate error if invalid; apply the same validations to the whitelabel-logo upload path as well. Specifically update the logic around file_content, files and the requests.post call in user_settings.py so validation runs prior to constructing files = {"file": (...)} and before contacting directus (use directus.get_token()/directus.verify unchanged). Ensure error responses include a clear message for rejected MIME or size.echo/server/prompt_templates/system_report.fr.jinja (1)
18-21:⚠️ Potential issue | 🟠 MajorEscape the full
user_instructionsblock.Line 21 only strips one exact
<user_instructions>tag pair, so any other tag-like payload can still break prompt boundaries. Line 19 is also still English inside the French template; localize the preface while hardening this block.🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@echo/server/prompt_templates/system_report.fr.jinja` around lines 18 - 21, The template currently injects raw user_instructions inside a <user_instructions> tag pair and leaves an English preface; to harden it, escape or HTML-encode the entire user_instructions payload (use the Jinja escape filter, e.g., user_instructions | e('html') or an equivalent encoder) instead of only stripping a single tag, and replace the English preface with a French localized sentence; update the template expression that references user_instructions and the surrounding preface text so the full block is safely escaped and the introductory string is in French.echo/server/prompt_templates/system_report.es.jinja (1)
18-21:⚠️ Potential issue | 🟠 MajorEscape the full
user_instructionsblock.Line 21 only strips one exact
<user_instructions>tag pair, so any other tag-like payload can still break prompt boundaries. Line 19 is also still English inside the Spanish template; localize the preface while hardening this block.🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@echo/server/prompt_templates/system_report.es.jinja` around lines 18 - 21, The template currently unsafely injects user_instructions by stripping only one exact <user_instructions> tag pair and leaves an English preface; update the system_report.es.jinja handling of user_instructions so the entire block is escaped (do not attempt partial tag stripping), localize the preface text into Spanish, and stop using the fragile replace sequence — instead render {{ user_instructions | e }} (or the project's canonical escape/filter) inside a Spanish-language introductory sentence so the variable cannot break prompt boundaries and is safely escaped; refer to the user_instructions variable and the template block around the preface in system_report.es.jinja when making this change.echo/server/prompt_templates/system_report.nl.jinja (1)
18-21:⚠️ Potential issue | 🟠 MajorEscape the full
user_instructionsblock.Line 21 only strips one exact
<user_instructions>tag pair, so any other tag-like payload can still break prompt boundaries. Line 19 is also still English inside the Dutch template; localize the preface while hardening this block.🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@echo/server/prompt_templates/system_report.nl.jinja` around lines 18 - 21, The template currently un-safely injects user_instructions using string replace of literal tags; update the Jinja expression that renders user_instructions (the existing "{{ user_instructions | replace(...) }}" usage) to properly escape HTML/XML and template markers (e.g., use Jinja's escape filter like "|e" or a robust sanitizer) so any angle brackets or tag-like payload cannot break prompt boundaries, and localize the preface text from English into Dutch in this template so the surrounding sentence is Dutch rather than "The report creator has provided the following guidance."echo/server/prompt_templates/system_report.de.jinja (1)
18-21:⚠️ Potential issue | 🟠 MajorEscape the full
user_instructionsblock.Line 21 only strips one exact
<user_instructions>tag pair, so any other tag-like payload can still break prompt boundaries. Line 19 is also still English inside the German template; localize the preface while hardening this block.🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@echo/server/prompt_templates/system_report.de.jinja` around lines 18 - 21, The template currently injects raw user_instructions and does a fragile replace of a single <user_instructions> tag; change the interpolation to output an escaped version of the whole block (use the Jinja escape filter, e.g. replace the existing "{{ user_instructions | replace(...) }}" with "{{ user_instructions | e }}" or an equivalent safe-escape filter) and remove the brittle replace calls, and also localize the surrounding preface text (replace the English preface on line 19 with a German sentence) so the template no longer contains mixed English and a fragile tag-stripping expression; target the "{{ user_instructions | replace(...)" expression and the literal "<user_instructions>" tag text in the template when making this change.echo/server/prompt_templates/system_report.it.jinja (1)
18-21:⚠️ Potential issue | 🟠 MajorEscape the full
user_instructionsblock.Line 21 only strips one exact
<user_instructions>tag pair, so any other tag-like payload can still break prompt boundaries. Line 19 is also still English inside the Italian template; localize the preface while hardening this block.🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@echo/server/prompt_templates/system_report.it.jinja` around lines 18 - 21, The template currently uses "{{ user_instructions | replace("</user_instructions>", "") | replace("<user_instructions>", "") }}" which only strips one exact tag pair and leaves tag-like payloads and an English preface; change rendering of the user_instructions block in system_report.it.jinja to output an escaped/encoded version of the entire user_instructions value (preventing any angle-bracket/tag interpretation) instead of ad-hoc replace calls, and replace the English preface ("The report creator has provided the following guidance...") with an Italian localized string so the template is both hardened and fully localized; look for the user_instructions variable and the preface text in system_report.it.jinja to implement these fixes.echo/frontend/src/components/chat/ChatTemplatesMenu.tsx (1)
343-352:⚠️ Potential issue | 🟠 MajorReuse the same key when re-selecting the modal-only template.
This branch rebuilds the key with an
id.length > 10heuristic and even encodes built-ins withselectedModalTemplate.title. Re-clicking a selected modal template can therefore emit a different key than the one already selected, which desyncs highlighting and parent state. ReuseselectedTemplateKeyhere, or encode from the already-decoded source/id.🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@echo/frontend/src/components/chat/ChatTemplatesMenu.tsx` around lines 343 - 352, The click handler recomputes template keys using a fragile id.length > 10 heuristic and selectedModalTemplate.title, which can produce a different key than the one currently selected; change the onClick to reuse the already-selected key (selectedTemplateKey) or derive the key from the decoded source/id used by the parent instead of rebuilding it here so highlighting and parent state stay in sync; update the handleTemplateSelect invocation to pass selectedTemplateKey (or the canonical encoded key from the parent) when the modal template is re-selected and remove the ad-hoc id/title encoding logic in this branch.echo/frontend/src/components/report/UpdateReportModalButton.tsx (1)
197-216:⚠️ Potential issue | 🟠 MajorKeep the modal form mounted on failure.
error ? <Alert> : ...turns this into an alert-only modal, so users can’t change options or retry after a 409/500 without leaving the flow. Render theAlertabove the active form instead.As per coding guidelines "Surface UI mutations with inline feedback: pair toasts with contextual Mantine
Alertcomponents inside modals/forms for errors or warnings".🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@echo/frontend/src/components/report/UpdateReportModalButton.tsx` around lines 197 - 216, The modal currently returns only an <Alert> when error is truthy, unmounting the form; update UpdateReportModalButton so the form controlled by showSchedule (and its inputs/buttons) stays mounted regardless of error and instead render the <Alert> conditionally above the form when error exists; keep the existing is409Error logic for title/color and translatable text, and ensure the form’s retry/submit handlers (the same handlers used when no error) remain available so users can adjust options and re-submit without leaving the modal.echo/frontend/src/components/report/CreateReportForm.tsx (1)
104-125:⚠️ Potential issue | 🟠 MajorKeep the create form visible when the mutation fails.
This early return swaps the whole flow for an
Alert, so users can’t tweak settings or retry after an error. Keep the form mounted and render the error inline above it.As per coding guidelines "Surface UI mutations with inline feedback: pair toasts with contextual Mantine
Alertcomponents inside modals/forms for errors or warnings".🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@echo/frontend/src/components/report/CreateReportForm.tsx` around lines 104 - 125, The CreateReportForm currently early-returns an Alert when error is truthy, unmounting the form; instead keep the form mounted and render the Alert inline above the form controls: remove the early return in CreateReportForm, and in the component's render/return JSX insert the Alert (using is409Error to set title, color, and message) conditionally above the form markup so users can still edit fields and retry while seeing inline feedback.echo/server/dembrane/report_generation.py (1)
120-128:⚠️ Potential issue | 🟠 MajorDon’t silently truncate transcripts at 1500 chunks.
This hard cap drops the tail of long conversations even when the report still has token budget left. Page
conversation_chunkuntil exhaustion, or reuse the shared transcript helper here.🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@echo/server/dembrane/report_generation.py` around lines 120 - 128, The current call to client.get_items("conversation_chunk", ...) uses a hard limit of 1500 and can truncate long transcripts; change the logic in report_generation.py to page through conversation_chunk results until no more items (using the query filter on conversation_id) or call the existing shared transcript helper instead of a single get_items with limit; ensure you accumulate transcripts (fields "id","transcript","error") across pages and preserve sort order so the full conversation is returned rather than truncating at 1500.echo/frontend/src/routes/project/report/ProjectReportRoute.tsx (3)
451-469:⚠️ Potential issue | 🟠 MajorMake “Generate now” atomic.
This cancels the scheduled report before starting the replacement. If
createReport(...)fails or returns409, the scheduled job is already gone and the user is left with nothing in flight. Create first, or do the swap server-side.🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@echo/frontend/src/routes/project/report/ProjectReportRoute.tsx` around lines 451 - 469, The current handleGenerateNow calls cancelSchedule before createReport which can leave nothing scheduled if createReport fails; change the flow to call createReport(...) first and only call cancelSchedule({ projectId, reportId: report.id }) in createReport's onSuccess (and then call onReset()), and ensure createReport's error handler preserves the schedule on failures (including handling 409 to treat as "already exists" without deleting the schedule). Alternatively implement a server-side atomic endpoint (e.g., replace-schedule-or-create) and call that from handleGenerateNow so the swap is done atomically on the backend; locate this logic in handleGenerateNow and the usages of createReport and cancelSchedule to make the changes.
687-691:⚠️ Potential issue | 🟠 MajorDon’t fall back to
latestReportwhile a selected report is loading.When
selectedReportIdpoints at an older report,activeReportis brieflyundefinedand this fallback renders the latest report’s content/actions under the new selection. Only uselatestReportwhen nothing is explicitly selected.🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@echo/frontend/src/routes/project/report/ProjectReportRoute.tsx` around lines 687 - 691, The current fallback expression for data uses latestReport whenever activeReport is falsy, causing the UI to show the latest report while a user-selected report is still loading; update the logic in ProjectReportRoute where data is computed (the expression referencing activeReport and latestReport) to only use latestReport when there is no explicit selection (i.e., selectedReportId is null/undefined) and latestReport.status is not "scheduled" or "draft"; keep the existing status checks but gate the fallback on selectedReportId so a loading selected report does not get replaced by latestReport.
533-540:⚠️ Potential issue | 🟡 MinorKeep reschedule bounds aligned with create.
This picker allows
new Date()onward and has no max, while the create flow enforces +10 minutes and +30 days. That lets users pick times the initial flow rejects and that the scheduler may miss.🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@echo/frontend/src/routes/project/report/ProjectReportRoute.tsx` around lines 533 - 540, The DateTimePicker in ProjectReportRoute (props on the DateTimePicker using value newDate and onChange setNewDate) currently allows any date from new Date() onward; update its bounds to match the create flow by setting minDate to now + 10 minutes and adding a maxDate of now + 30 days so users cannot pick times the scheduler will reject; compute these bounds at render (e.g., derive minDate and maxDate before the JSX) and pass them into the DateTimePicker props.echo/server/dembrane/tasks.py (2)
1297-1305:⚠️ Potential issue | 🟠 MajorDon’t let terminal progress publishing downgrade a completed report.
Once the row is saved as
archived, a failure inpublish_report_progress(report_id, "completed", ...)falls into the genericexceptpath below and overwrites the report toerror. Wrap terminal progress publishing separately and never flip a persisted success back to failure.Also applies to: 1330-1342
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@echo/server/dembrane/tasks.py` around lines 1297 - 1305, The report row is already persisted with client.update_item(..., {"status": "archived"}) but a failure in publish_report_progress(report_id, "completed", ...) can fall into the generic except and flip the persisted status to "error"; change the flow so publish_report_progress is invoked in its own try/except after directus_client_context().update_item (do not re-raise on failure), log any publish errors via logger.exception or logger.error, and ensure the generic exception handler only updates status to "error" for failures that occur before the update_item call; apply the same isolation for the other terminal publish at the block around lines 1330-1342 (same pattern involving publish_report_progress and directus_client_context/update_item).
1466-1473:⚠️ Potential issue | 🟠 MajorEnqueue before leaving
scheduled.If
.send()throws after this status update, the row is stranded indraftandtask_check_scheduled_reportswill never pick it up again. Lettask_create_reportown the transition, or roll it back on enqueue failure.🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@echo/server/dembrane/tasks.py` around lines 1466 - 1473, The code updates the report status to "draft" before calling task_create_report.send so if .send() fails the row is stranded; either move the status transition into the task_create_report worker (so the send happens while the report remains "scheduled") or wrap the enqueue in a try/except and roll the status back to "scheduled" on failure: locate the directus_client_context()/client.update_item("project_report", str(report_id), {"status": "draft"}) call and either (A) remove that update here and let task_create_report perform the update, or (B) call task_create_report.send(...) first and only update status on success, or if you must update first then catch exceptions from task_create_report.send and call client.update_item(..., {"status":"scheduled"}) to revert before re-raising/logging; also reference task_check_scheduled_reports which relies on the "scheduled" status.echo/frontend/src/components/chat/TemplatesModal.tsx (2)
228-230:⚠️ Potential issue | 🟠 Major
showCommunity = falseships a permanent kill switch.This keeps the publish/star/copy/community rows unreachable even though the rest of the flow is wired up. Gate it with a runtime feature flag or capability check instead of a hardcoded
false.🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@echo/frontend/src/components/chat/TemplatesModal.tsx` around lines 228 - 230, Currently TemplatesModal.tsx defines a hardcoded kill switch via const showCommunity = false which permanently hides publish/star/copy rows; replace this with a runtime feature flag or capability check (e.g., call your existing feature flag helper like getFeatureFlag('community') or a capability accessor like useUserCapabilities()/props.canUseCommunity) and use that boolean for showCommunity, defaulting to false if the flag/capability is absent; update any references to showCommunity in the component to rely on this runtime value instead of the hardcoded constant.
334-358:⚠️ Potential issue | 🟠 MajorSurface create/edit failures inside the form.
Both save handlers swallow rejected mutations and leave the user on the form with no inline state. Keep the form mounted, capture the error locally, and render a Mantine
Alertso retry is possible without guessing what happened.As per coding guidelines "Surface UI mutations with inline feedback: pair toasts with contextual Mantine
Alertcomponents inside modals/forms for errors or warnings".🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@echo/frontend/src/components/chat/TemplatesModal.tsx` around lines 334 - 358, The save handlers (handleSaveCreate and handleSaveEdit) currently swallow rejected promises and give no inline feedback; update both to catch the thrown error into a local state (e.g., add a saveError string via useState) instead of an empty catch, keep the modal/form mounted (do not call setView on failure), and render a Mantine Alert in the form when saveError is set; specifically capture errors from onCreateUserTemplate and onUpdateUserTemplate (use the unique symbols handleSaveCreate, handleSaveEdit, onCreateUserTemplate, onUpdateUserTemplate, formTitle, formContent, editingId, setView) and set saveError to err.message || String(err) so users can retry with contextual inline feedback (optionally still emit a toast but ensure the Alert appears inside the modal).echo/server/dembrane/api/project.py (3)
867-918:⚠️ Potential issue | 🟠 Major
/viewsstill ignoresreport_id.Both aggregates filter only by
project_report_id.project_id = project_id, so every report in the same project returns the sametotalandrecentvalues. Add the concrete report filter to both queries.🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@echo/server/dembrane/api/project.py` around lines 867 - 918, The get_report_views handler currently aggregates metrics for the whole project because both queries only filter by project_report_id.project_id; update both Directus query filters in get_report_views to include the concrete report id (use report_id) as well—for example add project_report_id.id (or the proper PK field name for the report) equal to report_id alongside the existing project_id filter in both the "all_metrics" and "recent_metrics" get_items calls so totals are scoped to the specific report.
640-649:⚠️ Potential issue | 🔴 CriticalAdd a report-to-project guard, not just a project guard.
_verify_project_access()only proves access to the path project.update_report,check_report_needs_update, andstream_report_progressstill trustreport_iddirectly, so a user who can access project A can mutate or observe report B from another project by swapping the ID.Also applies to: 730-784, 921-1063
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@echo/server/dembrane/api/project.py` around lines 640 - 649, The current _verify_project_access only checks project-level access and does not ensure a given report_id actually belongs to that project, allowing a user with access to project A to operate on report B; update the guard to fetch the report via ReportService (e.g., ReportService.get_by_id_or_raise) using the report_id provided to endpoints like update_report, check_report_needs_update, and stream_report_progress, then verify the report's project id (e.g., report["project_id"] or report.project_id) matches the project_id param and raise HTTPException(403) on mismatch; either extend _verify_project_access to accept an optional report_id and perform this check or add a new helper _verify_report_belongs_to_project(report_id, project_id, auth) and call it from those handlers.
196-228:⚠️ Potential issue | 🟠 MajorPin slot uniqueness still isn't enforced.
This only updates the target row. If slot
1/2/3is already occupied, two projects can share the samepin_order, and/homewill arbitrarily hide the extras behindlimit: 3.🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@echo/server/dembrane/api/project.py` around lines 196 - 228, The handler toggle_project_pin currently only updates the target project and doesn't clear any other project holding the same pin slot, allowing duplicate pin_order values; before calling client.update_item for the target, find any other project(s) with pin_order == body.pin_order (use ProjectService or client.items('project').read_by_query via project_service or client) excluding the target project_id, and set their pin_order to None (via run_in_thread_pool(client.update_item, "project", other_id, {"pin_order": None})); then proceed to update the target project as before (and when body.pin_order is None just unpin the target). Ensure you reference ProjectService, project_service, run_in_thread_pool, and client.update_item when implementing this fix.echo/server/dembrane/api/template.py (3)
398-441:⚠️ Potential issue | 🟠 Major
toggle_star()still bypasses template visibility checks.This path only verifies that the template exists. Anyone who knows a private template ID can create/delete a star and mutate
star_count; gate it onis_publicor ownership before touching ratings.🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@echo/server/dembrane/api/template.py` around lines 398 - 441, The toggle_star handler currently only checks existence and lets anyone modify ratings; update toggle_star to check template visibility/ownership by inspecting template.get("is_public") and template.get("user_created") and only allow rating changes if template["is_public"] is truthy or auth.user_id equals template["user_created"]; if neither condition holds raise HTTPException(status_code=403). Perform this visibility/ownership check before calling directus.get_items, directus.create_item, directus.delete_item, or directus.update_item so neither the rating row nor prompt_template.star_count can be mutated by unauthorized users.
424-440:⚠️ Potential issue | 🟠 MajorCached counters will drift under concurrent stars/copies.
Both paths do read-modify-write against the cached count. Two simultaneous requests can persist the same next value and lose an update. Recompute from the source table or use an atomic increment path.
Also applies to: 496-498
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@echo/server/dembrane/api/template.py` around lines 424 - 440, The star_count read-modify-write in the template star/unstar logic can race: when handling existing_ratings and when creating a new rating you use template.get("star_count") and then directus.update_item("prompt_template", template_id, {"star_count": new_count}), which can cause lost updates under concurrency; change the implementation to not rely on a cached read-modify-write—either recompute the count from the source ratings table (e.g., COUNT(*) on prompt_template_rating where prompt_template_id = template_id) before writing, or use an atomic increment/decrement API on the prompt_template record (or a DB-level increment operation) when calling directus.update_item; make the same change for both the delete path (prompt_template_rating delete + decrement) and create path (prompt_template_rating create + increment) and ensure the operations are performed transactionally if supported.
108-697:⚠️ Potential issue | 🟠 MajorThis router is still doing sync Directus I/O on the event loop.
Every handler here calls
directus.get_*/create_*/update_*/delete_*directly insideasync def. One slow Directus round-trip will stall unrelated requests on the same worker.As per coding guidelines "Always wrap blocking I/O calls using
run_in_thread_poolfromdembrane.async_helpersin backend code. Wrap calls todirectus.*..."🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@echo/server/dembrane/api/template.py` around lines 108 - 697, The handlers perform synchronous Directus I/O inside async endpoints (e.g., list_prompt_templates, create_prompt_template, update_prompt_template, delete_prompt_template, list_community_templates, get_my_community_stars, publish_template, unpublish_template, toggle_star, copy_template, list_quick_access, save_quick_access, rate_prompt_template, delete_rating, list_my_ratings, toggle_ai_suggestions), which blocks the event loop; fix by wrapping every directus.* call in run_in_thread_pool from dembrane.async_helpers (e.g., replace directus.get_items(...), create_item(...), update_item(...), delete_item(...), get_item(...), update_user(...) with await run_in_thread_pool(lambda: directus.get_items(...)) or equivalent) and ensure any sequence of directus calls that must be ordered are awaited in order; update imports to include run_in_thread_pool and adjust error handling to await the wrapped calls.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Organization UI
Review profile: ASSERTIVE
Plan: Pro
Run ID: 8b831661-2349-4cc4-ad2a-541d779fd604
📒 Files selected for processing (31)
echo/.gitignoreecho/.vscode/settings.jsonecho/directus/sync/snapshot/fields/project_report/error_message.jsonecho/frontend/src/components/announcement/Announcements.tsxecho/frontend/src/components/chat/ChatTemplatesMenu.tsxecho/frontend/src/components/chat/TemplatesModal.tsxecho/frontend/src/components/chat/templateKey.tsecho/frontend/src/components/layout/Header.tsxecho/frontend/src/components/report/CreateReportForm.tsxecho/frontend/src/components/report/UpdateReportModalButton.tsxecho/frontend/src/config.tsecho/frontend/src/lib/api.tsecho/frontend/src/lib/typesDirectus.d.tsecho/frontend/src/lib/typesDirectusContent.tsecho/frontend/src/routes/project/report/ProjectReportRoute.tsxecho/frontend/src/routes/settings/UserSettingsRoute.tsxecho/server/dembrane/api/api.pyecho/server/dembrane/api/conversation.pyecho/server/dembrane/api/project.pyecho/server/dembrane/api/template.pyecho/server/dembrane/api/user_settings.pyecho/server/dembrane/chat_utils.pyecho/server/dembrane/report_generation.pyecho/server/dembrane/report_utils.pyecho/server/dembrane/tasks.pyecho/server/prompt_templates/system_report.de.jinjaecho/server/prompt_templates/system_report.en.jinjaecho/server/prompt_templates/system_report.es.jinjaecho/server/prompt_templates/system_report.fr.jinjaecho/server/prompt_templates/system_report.it.jinjaecho/server/prompt_templates/system_report.nl.jinja
| // Auto-mark all as read after 1 second when drawer opens | ||
| // biome-ignore lint/correctness/useExhaustiveDependencies: only trigger on isOpen changes, mutate ref is stable | ||
| useEffect(() => { | ||
| if (isOpen) { | ||
| autoReadTimerRef.current = setTimeout(() => { | ||
| markAllAsReadMutation.mutate(); | ||
| }, 1000); | ||
| } | ||
|
|
||
| return () => { | ||
| if (autoReadTimerRef.current) { | ||
| clearTimeout(autoReadTimerRef.current); | ||
| autoReadTimerRef.current = null; | ||
| } | ||
| }; | ||
| }, [isOpen]); |
There was a problem hiding this comment.
Auto-read is racing first paint.
Line 64 starts the timer on drawer open, not after unread content is actually visible. On a slow load or error path, Line 66 can mark everything read before the user sees it, and Lines 109-110 then drop those items from the list. Gate this effect on a successful load with visible unread items.
Also applies to: 109-110
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@echo/frontend/src/components/announcement/Announcements.tsx` around lines 61
- 76, The auto-read useEffect currently triggers on isOpen alone and can call
markAllAsReadMutation.mutate() before announcements are loaded or any unread
items are visible; update the effect in the Announcements component to also
check the data load status and presence of unread items (e.g., use the
announcements query's isSuccess/isFetched and an unreadCount or computed
visibleUnreadItems) before starting the autoReadTimerRef timeout, and likewise
ensure the cleanup logic and any logic that removes items (the code referenced
around the markAllAsReadMutation and the list-removal logic) only runs when
announcements are successfully loaded and there are visible unread items to
avoid racing first paint.
| ) : unreadAnnouncements.length === 0 && | ||
| whatsNewAnnouncements.length === 0 ? ( |
There was a problem hiding this comment.
Unread-only pagination can dead-end.
loadMoreRef only exists inside unreadAnnouncements.map. If the current page has zero visible unread items while hasNextPage is still true, nothing ever enters view, fetchNextPage() never runs, and the empty state at Lines 169-170 can render even though older unread announcements still exist.
⚡ Suggested fix
{unreadAnnouncements.map((announcement, index) => (
<AnnouncementItem
key={announcement.id}
announcement={announcement}
index={index}
- ref={
- index === unreadAnnouncements.length - 1
- ? loadMoreRef
- : undefined
- }
/>
))}
+ {hasNextPage && <Box ref={loadMoreRef} h={1} />}
{isFetchingNextPage && (
<Center py="xl">
<Loader size="md" />Also applies to: 179-186
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@echo/frontend/src/components/announcement/Announcements.tsx` around lines 169
- 170, The pagination sentinel (loadMoreRef) is only rendered inside
unreadAnnouncements.map so if a page has zero visible unread items fetchNextPage
never triggers; move or duplicate the sentinel so it renders whenever
hasNextPage is true regardless of unreadAnnouncements length (e.g., render a
standalone loadMoreRef element after the unreadAnnouncements list when
hasNextPage) and apply the same change to the whatsNewAnnouncements block
referenced (lines ~179-186) so fetchNextPage can run even when the current page
shows no items.
| is_anonymous: boolean; | ||
| author_display_name: string | null; |
There was a problem hiding this comment.
Drop the retired author_display_name field.
The data model now keys prompt-template privacy off is_anonymous. Keeping author_display_name in this Directus type makes the frontend schema looser than the API and can hide stale callers that still render a name after the backend rename.
🛠️ Proposed fix
language: string | null;
is_anonymous: boolean;
- author_display_name: string | null;
use_count: number;📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
| is_anonymous: boolean; | |
| author_display_name: string | null; | |
| language: string | null; | |
| is_anonymous: boolean; | |
| use_count: number; |
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@echo/frontend/src/lib/typesDirectus.d.ts` around lines 487 - 488, Remove the
retired field from the Directus type definition: delete the author_display_name
property from the interface/type that currently includes is_anonymous so the
frontend schema matches the API; search for occurrences of author_display_name
in echo/frontend/src/lib/typesDirectus.d.ts (and any places that import that
type) and remove the property and update any callers that still reference
author_display_name to use is_anonymous or the new display logic instead.
| const [showReschedule, setShowReschedule] = useState(false); | ||
| const [newDate, setNewDate] = useState<Date | null>( | ||
| report.scheduled_at ? new Date(report.scheduled_at) : null, | ||
| ); |
There was a problem hiding this comment.
Reset scheduled-view state when switching reports.
ScheduledReportView keeps showReschedule/newDate in local state, but this route reuses the same component instance for every scheduled report. Switching from one scheduled report to another can therefore show the previous report’s date and reschedule the new report to the wrong timestamp. Key this component by report.id, or reset that state when report.id changes.
Also applies to: 959-963
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@echo/frontend/src/routes/project/report/ProjectReportRoute.tsx` around lines
446 - 449, ScheduledReportView holds local state showReschedule/newDate that
persists across reused component instances causing wrong dates when switching
reports; update the route to reset that state when report.id changes or key the
ScheduledReportView by report.id so React mounts a fresh instance. Specifically,
either add a useEffect watching report.id to call setShowReschedule(false) and
setNewDate(report.scheduled_at ? new Date(report.scheduled_at) : null), or wrap
the <ScheduledReportView ... /> with key={report.id} so showReschedule and
newDate are reinitialized per report.
| ]; | ||
|
|
||
| export const UserSettingsRoute = () => { | ||
| useDocumentTitle(t`Settings | Dembrane`); |
There was a problem hiding this comment.
Brand name should be lowercase.
Per guidelines, "dembrane" is always lowercase in UI copy.
- useDocumentTitle(t`Settings | Dembrane`);
+ useDocumentTitle(t`Settings | dembrane`);📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
| useDocumentTitle(t`Settings | Dembrane`); | |
| useDocumentTitle(t`Settings | dembrane`); |
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@echo/frontend/src/routes/settings/UserSettingsRoute.tsx` at line 56, The
title string passed to useDocumentTitle should use the lowercase brand name;
update the call to useDocumentTitle in UserSettingsRoute (the
useDocumentTitle(...) invocation) to use "Settings | dembrane" instead of
"Settings | Dembrane" so the UI copy follows the brand guideline.
| reports = await run_in_thread_pool( | ||
| directus.get_items, | ||
| "project_report", | ||
| { | ||
| "query": { | ||
| "filter": { | ||
| "project_id": {"_eq": project_id}, | ||
| "status": {"_in": ["archived", "published", "scheduled", "draft"]}, | ||
| }, | ||
| "fields": ["id", "status", "date_created", "language", "user_instructions", "content", "scheduled_at"], | ||
| "sort": ["-date_created"], | ||
| } | ||
| }, | ||
| ) | ||
| result = [] | ||
| for r in (reports or []): | ||
| result.append({ | ||
| "id": r["id"], | ||
| "status": r.get("status"), | ||
| "date_created": r.get("date_created"), | ||
| "language": r.get("language"), | ||
| "user_instructions": r.get("user_instructions"), | ||
| "scheduled_at": r.get("scheduled_at"), | ||
| "title": _extract_report_title(r.get("content")), | ||
| }) | ||
| return result |
There was a problem hiding this comment.
list_project_reports() drops terminal statuses.
The filter only includes archived/published/scheduled/draft, but this router also writes cancelled and checks for error. Those reports vanish from history as soon as they transition, so the UI loses the final state and retry/cancel context.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@echo/server/dembrane/api/project.py` around lines 670 - 695, The project
reports query in list_project_reports() currently filters statuses to only
["archived","published","scheduled","draft"], dropping reports with "cancelled"
or "error"; update the filter used in the directus.get_items call (the
"project_report" query) to include "cancelled" and "error" (or remove the status
filter entirely to return all statuses) so the returned result construction in
list_project_reports() preserves final states and retry/cancel context.
| class PromptTemplateOut(BaseModel): | ||
| id: str | ||
| title: str | ||
| content: str | ||
| icon: Optional[str] = None | ||
| sort: Optional[int] = None | ||
| is_public: bool = False | ||
| description: Optional[str] = None | ||
| tags: Optional[List[str]] = None | ||
| language: Optional[str] = None | ||
| author_display_name: Optional[str] = None | ||
| use_count: int = 0 | ||
| star_count: int = 0 | ||
| copied_from: Optional[str] = None | ||
| date_created: Optional[str] = None | ||
| date_updated: Optional[str] = None | ||
|
|
There was a problem hiding this comment.
Expose is_anonymous in template responses.
The router uses the flag internally, but PromptTemplateOut doesn't expose it and list_prompt_templates() explicitly strips it. The client now expects that field, so the anonymity state can't round-trip through edit/publish flows.
Also applies to: 126-145
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@echo/server/dembrane/api/template.py` around lines 18 - 34, PromptTemplateOut
currently omits the is_anonymous flag so template anonymity can't round-trip;
add an is_anonymous: Optional[bool] = False field to the PromptTemplateOut model
and update any other output model around lines 126-145 similarly to expose this
flag, and modify list_prompt_templates() so it no longer strips or omits
is_anonymous when building responses (pass through the template's is_anonymous
value into the response).
| existing_ratings = directus.get_items( | ||
| "prompt_template_rating", | ||
| { | ||
| "query": { | ||
| "filter": { | ||
| "user_created": {"_eq": auth.user_id}, | ||
| "prompt_template_id": {"_eq": template_id}, | ||
| "chat_message_id": {"_null": True}, | ||
| "rating": {"_eq": 2}, | ||
| }, | ||
| "fields": ["id"], | ||
| "limit": 1, | ||
| } | ||
| }, | ||
| ) | ||
|
|
||
| template = directus.get_item("prompt_template", template_id) | ||
| if not template: | ||
| raise HTTPException(status_code=404, detail="Template not found") | ||
|
|
||
| current_count = template.get("star_count", 0) or 0 | ||
|
|
||
| if isinstance(existing_ratings, list) and len(existing_ratings) > 0: | ||
| # Remove star | ||
| directus.delete_item("prompt_template_rating", existing_ratings[0]["id"]) | ||
| new_count = max(0, current_count - 1) | ||
| directus.update_item("prompt_template", template_id, {"star_count": new_count})["data"] | ||
| return {"starred": False, "star_count": new_count} | ||
| else: | ||
| # Add star | ||
| star_result = directus.create_item( | ||
| "prompt_template_rating", | ||
| {"prompt_template_id": template_id, "rating": 2}, | ||
| )["data"] |
There was a problem hiding this comment.
toggle_star() ignores an existing overall thumbs-down row.
The lookup only matches rating = 2 with chat_message_id IS NULL. If the user already has a rating = 1 overall rating for this template, starring will either create a second overall-rating record or trip a uniqueness constraint. Load any null-chat_message_id rating first and update that row instead.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@echo/server/dembrane/api/template.py` around lines 404 - 437, The toggle_star
handler currently queries prompt_template_rating for only rating=2 and treats
absence as "add star", which ignores an existing overall rating with rating=1;
change the lookup (the directus.get_items call that populates existing_ratings)
to fetch any row with chat_message_id null for the user/template (remove the
rating filter) and then: if a row exists and its rating != 2 call
directus.update_item on that rating row to set rating=2 (instead of creating a
new row), if it exists and rating==2 delete it as before, and only call
directus.create_item when no overall rating row exists; use the existing
directus.update_item, directus.create_item, and directus.delete_item helpers
referenced in this function (toggle_star) to perform the update/create/delete.
| if progress_callback: | ||
| progress_callback( | ||
| "summarizing", | ||
| "Summarizing conversations...", | ||
| {"total": len(conversation_with_chunks)}, | ||
| ) |
There was a problem hiding this comment.
Make progress updates best-effort.
These callback invocations now sit on the main report path, so any exception from the progress sink will unwind the whole generation even though the summaries/transcripts/LLM call may have succeeded. Wrap them in a local try/except (or route through a safe notifier) so a websocket/Redis/UI failure doesn’t fail the report itself.
Also applies to: 220-225, 329-330, 414-415
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@echo/server/dembrane/report_utils.py` around lines 191 - 196, The
progress_callback calls (e.g., progress_callback("summarizing", "Summarizing
conversations...", {...}) and similar invocations later) must be made
best-effort: wrap each call to progress_callback in a local try/except that
catches Exception and either logs the error (using the module logger/process
logger) or silently ignores it so a failing progress sink cannot unwind the
report generation; alternatively add a small helper like
safe_progress_notify(callback, *args, **kwargs) that wraps callback(...) in
try/except and use that helper at every progress_callback site referenced in the
diff.
| params = json.loads(params_raw) | ||
| client.delete(params_key) | ||
|
|
||
| logger.info(f"Summaries done for report {report_id}, dispatching phase 2") | ||
| task_create_report_continue.send( | ||
| params["project_id"], | ||
| report_id, | ||
| params["language"], | ||
| params.get("user_instructions", ""), | ||
| ) |
There was a problem hiding this comment.
Don’t delete the phase-2 payload before enqueue succeeds.
If task_create_report_continue.send(...) fails here, the Redis params are already gone, so retries can’t recover and the report never advances past phase 1. Delete that key only after a successful enqueue, or let phase 2 consume and clear it.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@echo/server/dembrane/tasks.py` around lines 1101 - 1110, The code deletes the
phase-2 payload (client.delete(params_key)) before enqueuing the next task,
which means a failed task_create_report_continue.send(...) will leave no payload
for retries; change the flow so the Redis delete happens only after
task_create_report_continue.send(...) succeeds (or alternatively remove the
delete here and have the phase-2 handler consume and clear params_key itself).
Concretely, keep params = json.loads(params_raw) and call
task_create_report_continue.send(...) first (using report_id,
params["project_id"], params["language"], params.get("user_instructions","")),
check for success/exception, and only then call client.delete(params_key); if
you prefer the other option, remove client.delete(params_key) from this function
and ensure the phase-2 consumer clears params_key after processing.
There was a problem hiding this comment.
Actionable comments posted: 8
♻️ Duplicate comments (4)
echo/frontend/src/routes/project/report/ProjectReportRoute.tsx (4)
453-471:⚠️ Potential issue | 🟠 MajorMake “Generate now” atomic.
This cancels the scheduled report first and only then creates the replacement. If creation fails or returns a conflict, the user is left with no report in flight.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@echo/frontend/src/routes/project/report/ProjectReportRoute.tsx` around lines 453 - 471, The current handleGenerateNow cancels the scheduled report before creating the replacement, which can leave no report in flight if createReport fails; change the flow so you call createReport first and only call cancelSchedule after createReport succeeds (use createReport's onSuccess to invoke cancelSchedule), handle createReport error cases (e.g., conflict) to avoid cancelling the existing schedule, and keep onReset called after the full operation succeeds; update references in handleGenerateNow, cancelSchedule, createReport, and onReset accordingly.
689-693:⚠️ Potential issue | 🟠 MajorDon’t flash
latestReportwhile a different report is loading.When
selectedReportIdpoints at an older report,activeReportis brieflyundefinedand this expression immediately swaps the UI back tolatestReport. The sidebar selection changes, but the content/actions belong to a different report until the fetch completes.🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@echo/frontend/src/routes/project/report/ProjectReportRoute.tsx` around lines 689 - 693, The UI flashes latestReport because the data fallback uses latestReport even while a different report (selectedReportId) is loading; update the data selection logic in ProjectReportRoute.tsx (where activeReport and latestReport are used) to only fall back to latestReport when no other report is selected or when selectedReportId matches latestReport?.id and its status is not "scheduled" or "draft"; otherwise return undefined so the UI preserves the loading state for the selected report. Ensure the expression checks selectedReportId (and/or a loading flag) alongside latestReport?.id and its status before choosing latestReport.
535-541:⚠️ Potential issue | 🟡 MinorDon’t allow rescheduling to “right now.”
minDate={new Date()}still allows picking a timestamp only seconds away, which is brittle for a scheduled job and much looser than the rest of this flow.🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@echo/frontend/src/routes/project/report/ProjectReportRoute.tsx` around lines 535 - 541, The DateTimePicker in ProjectReportRoute currently uses minDate={new Date()} which still permits selecting a timestamp effectively "right now"; change this to enforce a small safety buffer (e.g., compute a minimum as new Date(Date.now() + BUFFER_MS) where BUFFER_MS is a defined constant like 2–5 minutes) and use that value for the minDate prop so newDate/setNewDate cannot pick an immediate timestamp; update any related validation in the schedule submission logic (same component/handler) to mirror this minimum check server-side or client-side to prevent bypass.
448-451:⚠️ Potential issue | 🟠 MajorReset scheduled-view state per report.
newDateandshowRescheduleare initialized fromreportonce, but this component instance is reused whenselectedScheduledReportchanges. Switching between scheduled reports can carry over the previous date and reschedule the wrong timestamp.Also applies to: 960-964
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@echo/frontend/src/routes/project/report/ProjectReportRoute.tsx` around lines 448 - 451, The state variables newDate and showReschedule are only initialized once from report, causing stale values when the component is reused for a different selectedScheduledReport; add a useEffect that watches report (or selectedScheduledReport) and updates setNewDate(report.scheduled_at ? new Date(report.scheduled_at) : null) and resets setShowReschedule(false) (or to the appropriate initial value) whenever the report changes; apply the same pattern where similar state is initialized (the other newDate/showReschedule instances around the second block) so each report resets its scheduled-view state on change.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@echo/frontend/src/components/report/ReportRenderer.tsx`:
- Around line 19-22: The Paper component in ReportRenderer.tsx currently uses
hardcoded Tailwind classes "bg-white" and "bg-gray-100" (see the Paper JSX with
className={fullscreen ? "bg-white" : "bg-gray-100"} and
testId("report-contribute-cta")), which should be replaced with theme-driven
values; change this to use a CSS variable or Mantine token instead (for example
switch className to reference a CSS var like "bg-[var(--app-background)]" or
apply a style/prop that uses theme.colors or var(--app-background) when
fullscreen toggles) so background colors come from theme vars rather than static
utilities, keeping the fullscreen boolean logic and testId(...) intact.
In `@echo/frontend/src/routes/participant/ParticipantReport.tsx`:
- Around line 29-51: In recordView, don't write the once-per-day localStorage
marker before the network call; only persist the `rm_${reportId}_updated` key
after createPublicReportMetric(projectId ?? "", { project_report_id: reportId,
type: "view" }) successfully resolves. Move the localStorage.setItem(...) into
the .then() callback (after the POST succeeds and before calling setTimeout(()
=> refetchViews(), 1000)), keep the existing try/catch around localStorage
reads/writes, and ensure the .catch() for createPublicReportMetric still
silences errors without setting the marker so failed requests can be retried the
next time recordView runs.
In `@echo/frontend/src/routes/project/ProjectsHome.tsx`:
- Around line 138-151: handleTogglePin races when two quick pin clicks compute
the same nextOrder from the current pinned snapshot; fix by serializing the
mutation or performing an optimistic local update before calling mutate. In
handleTogglePin (referencing handleTogglePin, getNextPinOrder, pinnedIds and
togglePinMutation), either early-return when togglePinMutation.isLoading to
block concurrent calls, or perform an immediate optimistic update to your local
pinned state (reserve the nextOrder in state) before calling
togglePinMutation.mutate/mutateAsync so subsequent calls compute based on that
optimistic state; if using mutateAsync prefer awaiting it (or queueing) to
ensure the next computation is based on the latest state. Ensure you
update/rollback optimistic state on mutation success/failure.
- Around line 193-199: The components PinnedProjectCard and ProjectListItem
should be typed to accept BffProjectSummary instead of Project; update their
prop types/signatures to expect BffProjectSummary and adjust any internal access
to use conversations_count (not conversations) or guard against missing
conversations, then remove the casts in ProjectsHome.tsx where you currently use
(project as Project) and (project as any). Specifically, change the props for
PinnedProjectCard and ProjectListItem to BffProjectSummary, update any
references to project?.conversations?.length to use project.conversations_count
or an explicit fallback, and remove the unnecessary casts around owner_name and
owner_email in the PinnedProjectCard/ProjectListItem usages (lines that pass
project to PinnedProjectCard, ProjectListItem, and when calling
handleSearchOwner/togglePinMutation).
In `@echo/frontend/src/routes/project/report/ProjectReportRoute.tsx`:
- Around line 994-1008: The code currently collapses the loading state by using
participantCount ?? 0 which treats "still loading" as zero participants; change
the handler in ProjectReportRoute's onChange so it explicitly checks for a
loading value (participantCount === undefined or null) and treats that as a
blocking state (abort the publish flow or show a loading/disabled UI) instead of
proceeding—use participantCount directly once confirmed loaded, then call
setPublishStatus(true) and open() when participantsToNotify > 0 or call
updateReport(...) when participantsToNotify === 0; keep references to
setPublishStatus, open, updateReport and participantCount to locate where to
change the logic.
In `@echo/server/dembrane/api/participant.py`:
- Around line 657-669: The query that fetches the latest published report
currently sorts by "-date_created" causing older-created-but-later-published
reports to be missed; in the call that builds reports (the
run_in_thread_pool(directus.get_items, "project_report", {...}) call in
participant.py) replace the sort key "-date_created" with the report publish
timestamp field (e.g., "-date_published" or "-published_at" depending on your
Directus schema) and ensure that field is included in "fields" so the API can
sort by it.
- Around line 624-626: Change the open-ended metric write path to validate and
harden inputs: replace the free-form type on ReportMetricCreateSchema with a
constrained enum of supported values (e.g., MetricType with allowed literals)
and update any handlers that insert into project_report_metric to use that enum
(refactor usages of ReportMetricCreateSchema). Additionally implement
server-side protections in the write handler that inserts into
project_report_metric: add per-project (and per-client IP or client-id) rate
limiting and idempotency (e.g., dedupe by a combination of project_report_id +
client_id + metric type + short time window or accept an optional
client-generated idempotency_key) and enforce limits before inserting; ensure
any rejected requests return proper 429/409 responses. Apply the same changes to
the other metric endpoint/handler referenced in the file (the second create path
around the other metric insert).
- Around line 703-730: The endpoint get_public_report_views is aggregating
metrics across all reports in a project because the Directus filter targets
project_report.project_id instead of a specific report; change the
route/signature to be report-scoped (e.g., add a report_id path or query
parameter) and update the directus.get_items filter for "project_report_metric"
to include the specific report id (use "project_report_id": {"_eq": report_id})
— you can also optionally keep project_id in the filter for extra safety — then
return PublicReportViewsResponse(recent=recent) as before.
---
Duplicate comments:
In `@echo/frontend/src/routes/project/report/ProjectReportRoute.tsx`:
- Around line 453-471: The current handleGenerateNow cancels the scheduled
report before creating the replacement, which can leave no report in flight if
createReport fails; change the flow so you call createReport first and only call
cancelSchedule after createReport succeeds (use createReport's onSuccess to
invoke cancelSchedule), handle createReport error cases (e.g., conflict) to
avoid cancelling the existing schedule, and keep onReset called after the full
operation succeeds; update references in handleGenerateNow, cancelSchedule,
createReport, and onReset accordingly.
- Around line 689-693: The UI flashes latestReport because the data fallback
uses latestReport even while a different report (selectedReportId) is loading;
update the data selection logic in ProjectReportRoute.tsx (where activeReport
and latestReport are used) to only fall back to latestReport when no other
report is selected or when selectedReportId matches latestReport?.id and its
status is not "scheduled" or "draft"; otherwise return undefined so the UI
preserves the loading state for the selected report. Ensure the expression
checks selectedReportId (and/or a loading flag) alongside latestReport?.id and
its status before choosing latestReport.
- Around line 535-541: The DateTimePicker in ProjectReportRoute currently uses
minDate={new Date()} which still permits selecting a timestamp effectively
"right now"; change this to enforce a small safety buffer (e.g., compute a
minimum as new Date(Date.now() + BUFFER_MS) where BUFFER_MS is a defined
constant like 2–5 minutes) and use that value for the minDate prop so
newDate/setNewDate cannot pick an immediate timestamp; update any related
validation in the schedule submission logic (same component/handler) to mirror
this minimum check server-side or client-side to prevent bypass.
- Around line 448-451: The state variables newDate and showReschedule are only
initialized once from report, causing stale values when the component is reused
for a different selectedScheduledReport; add a useEffect that watches report (or
selectedScheduledReport) and updates setNewDate(report.scheduled_at ? new
Date(report.scheduled_at) : null) and resets setShowReschedule(false) (or to the
appropriate initial value) whenever the report changes; apply the same pattern
where similar state is initialized (the other newDate/showReschedule instances
around the second block) so each report resets its scheduled-view state on
change.
🪄 Autofix (Beta)
Fix all unresolved CodeRabbit comments on this PR:
- Push a commit to this branch (recommended)
- Create a new PR with the fixes
ℹ️ Review info
⚙️ Run configuration
Configuration used: Organization UI
Review profile: ASSERTIVE
Plan: Pro
Run ID: e787519a-cf70-4710-8c69-677d964c5ed2
📒 Files selected for processing (22)
echo/directus/sync/collections/permissions.jsonecho/frontend/src/components/form/MarkdownWYSIWYG/styles.cssecho/frontend/src/components/report/ReportEditor.tsxecho/frontend/src/components/report/ReportRenderer.tsxecho/frontend/src/components/report/hooks/index.tsecho/frontend/src/lib/api.tsecho/frontend/src/locales/de-DE.poecho/frontend/src/locales/de-DE.tsecho/frontend/src/locales/en-US.poecho/frontend/src/locales/en-US.tsecho/frontend/src/locales/es-ES.poecho/frontend/src/locales/es-ES.tsecho/frontend/src/locales/fr-FR.poecho/frontend/src/locales/fr-FR.tsecho/frontend/src/locales/it-IT.poecho/frontend/src/locales/it-IT.tsecho/frontend/src/locales/nl-NL.poecho/frontend/src/locales/nl-NL.tsecho/frontend/src/routes/participant/ParticipantReport.tsxecho/frontend/src/routes/project/ProjectsHome.tsxecho/frontend/src/routes/project/report/ProjectReportRoute.tsxecho/server/dembrane/api/participant.py
| <Paper | ||
| p="xl" | ||
| className={fullscreen ? "bg-white" : "bg-gray-100"} | ||
| {...testId("report-contribute-cta")} |
There was a problem hiding this comment.
🛠️ Refactor suggestion | 🟠 Major
Use theme vars instead of hardcoded surface colors here.
bg-white / bg-gray-100 bake this CTA into one palette, so fullscreen and print views drift from the app theme. Drive these backgrounds from CSS variables or Mantine tokens instead.
As per coding guidelines, "Keep static utility classes (borders, spacing, layout) in Tailwind; move theme-dependent colors to CSS variables instead of hardcoded hex values" and "Use var(--app-background) and var(--app-text) instead of hardcoded colors like #F6F4F1 or #2D2D2C to ensure theme changes propagate."
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@echo/frontend/src/components/report/ReportRenderer.tsx` around lines 19 - 22,
The Paper component in ReportRenderer.tsx currently uses hardcoded Tailwind
classes "bg-white" and "bg-gray-100" (see the Paper JSX with
className={fullscreen ? "bg-white" : "bg-gray-100"} and
testId("report-contribute-cta")), which should be replaced with theme-driven
values; change this to use a CSS variable or Mantine token instead (for example
switch className to reference a CSS var like "bg-[var(--app-background)]" or
apply a style/prop that uses theme.colors or var(--app-background) when
fullscreen toggles) so background colors come from theme vars rather than static
utilities, keeping the fullscreen boolean logic and testId(...) intact.
| const recordView = useCallback( | ||
| (reportId: number) => { | ||
| const key = `rm_${reportId}_updated`; | ||
| try { | ||
| const lastUpdated = localStorage.getItem(key); | ||
| if (lastUpdated) { | ||
| const hoursDiff = | ||
| (Date.now() - new Date(lastUpdated).getTime()) / (1000 * 60 * 60); | ||
| if (hoursDiff < 24) return; | ||
| } | ||
| localStorage.setItem(key, new Date().toISOString()); | ||
| } catch { | ||
| // Ignore localStorage errors | ||
| } | ||
|
|
||
| createPublicReportMetric(projectId ?? "", { | ||
| project_report_id: reportId, | ||
| type: "view", | ||
| }) | ||
| .then(() => { | ||
| setTimeout(() => refetchViews(), 1000); | ||
| }) | ||
| .catch(() => {}); |
There was a problem hiding this comment.
Write the once-per-day marker only after a successful POST.
Right now the localStorage timestamp is persisted before createPublicReportMetric(...) resolves. If that request fails, this browser suppresses retries for 24 hours and the view is lost.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@echo/frontend/src/routes/participant/ParticipantReport.tsx` around lines 29 -
51, In recordView, don't write the once-per-day localStorage marker before the
network call; only persist the `rm_${reportId}_updated` key after
createPublicReportMetric(projectId ?? "", { project_report_id: reportId, type:
"view" }) successfully resolves. Move the localStorage.setItem(...) into the
.then() callback (after the POST succeeds and before calling setTimeout(() =>
refetchViews(), 1000)), keep the existing try/catch around localStorage
reads/writes, and ensure the .catch() for createPublicReportMetric still
silences errors without setting the marker so failed requests can be retried the
next time recordView runs.
| const handleTogglePin = useCallback( | ||
| (projectId: string) => { | ||
| if (pinnedIds.has(projectId)) { | ||
| togglePinMutation.mutate({ pin_order: null, projectId }); | ||
| } else { | ||
| const nextOrder = getNextPinOrder(); | ||
| if (nextOrder === null) { | ||
| toast.error(t`Unpin a project first (max ${MAX_PINNED})`); | ||
| return; | ||
| } | ||
| togglePinMutation.mutate({ pin_order: nextOrder, projectId }); | ||
| } | ||
| }, | ||
| [pinnedIds, getNextPinOrder, togglePinMutation], |
There was a problem hiding this comment.
Serialize pin mutations before computing the next slot.
nextOrder is derived from the current pinnedProjects snapshot, but this handler fires mutate and returns immediately. Two quick pin actions can compute the same slot before useProjectsHome refetches, so this route can submit conflicting pin_order values. Gate this path while the mutation is pending, or update local pin state optimistically first.
⚡ One straightforward fix
const handleTogglePin = useCallback(
- (projectId: string) => {
+ async (projectId: string) => {
+ if (togglePinMutation.isPending) return;
if (pinnedIds.has(projectId)) {
- togglePinMutation.mutate({ pin_order: null, projectId });
+ await togglePinMutation.mutateAsync({ pin_order: null, projectId });
} else {
const nextOrder = getNextPinOrder();
if (nextOrder === null) {
toast.error(t`Unpin a project first (max ${MAX_PINNED})`);
return;
}
- togglePinMutation.mutate({ pin_order: nextOrder, projectId });
+ await togglePinMutation.mutateAsync({ pin_order: nextOrder, projectId });
}
},
[pinnedIds, getNextPinOrder, togglePinMutation],
);📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
| const handleTogglePin = useCallback( | |
| (projectId: string) => { | |
| if (pinnedIds.has(projectId)) { | |
| togglePinMutation.mutate({ pin_order: null, projectId }); | |
| } else { | |
| const nextOrder = getNextPinOrder(); | |
| if (nextOrder === null) { | |
| toast.error(t`Unpin a project first (max ${MAX_PINNED})`); | |
| return; | |
| } | |
| togglePinMutation.mutate({ pin_order: nextOrder, projectId }); | |
| } | |
| }, | |
| [pinnedIds, getNextPinOrder, togglePinMutation], | |
| const handleTogglePin = useCallback( | |
| async (projectId: string) => { | |
| if (togglePinMutation.isPending) return; | |
| if (pinnedIds.has(projectId)) { | |
| await togglePinMutation.mutateAsync({ pin_order: null, projectId }); | |
| } else { | |
| const nextOrder = getNextPinOrder(); | |
| if (nextOrder === null) { | |
| toast.error(t`Unpin a project first (max ${MAX_PINNED})`); | |
| return; | |
| } | |
| await togglePinMutation.mutateAsync({ pin_order: nextOrder, projectId }); | |
| } | |
| }, | |
| [pinnedIds, getNextPinOrder, togglePinMutation], |
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@echo/frontend/src/routes/project/ProjectsHome.tsx` around lines 138 - 151,
handleTogglePin races when two quick pin clicks compute the same nextOrder from
the current pinned snapshot; fix by serializing the mutation or performing an
optimistic local update before calling mutate. In handleTogglePin (referencing
handleTogglePin, getNextPinOrder, pinnedIds and togglePinMutation), either
early-return when togglePinMutation.isLoading to block concurrent calls, or
perform an immediate optimistic update to your local pinned state (reserve the
nextOrder in state) before calling togglePinMutation.mutate/mutateAsync so
subsequent calls compute based on that optimistic state; if using mutateAsync
prefer awaiting it (or queueing) to ensure the next computation is based on the
latest state. Ensure you update/rollback optimistic state on mutation
success/failure.
| <PinnedProjectCard | ||
| key={project.id} | ||
| project={project as Project} | ||
| onUnpin={handleTogglePin} | ||
| isUnpinning={togglePinMutation.isPending} | ||
| onSearchOwner={isAdmin ? handleSearchOwner : undefined} | ||
| /> |
There was a problem hiding this comment.
🧩 Analysis chain
🏁 Script executed:
#!/bin/bash
rg -n -C3 'interface BffProjectSummary|type .*Props|interface .*Props|project:\s*(Project|BffProjectSummary)|export (const|function) (PinnedProjectCard|ProjectListItem)' \
echo/frontend/src/components/project/PinnedProjectCard.tsx \
echo/frontend/src/components/project/ProjectListItem.tsx \
echo/frontend/src/components/project/hooks/index.tsRepository: Dembrane/echo
Length of output: 3518
🏁 Script executed:
#!/bin/bash
# Get the full BffProjectSummary interface definition
sed -n '35,50p' echo/frontend/src/components/project/hooks/index.tsRepository: Dembrane/echo
Length of output: 420
🏁 Script executed:
#!/bin/bash
# Find the Project type definition
rg -n 'type Project\s*=|interface Project\s*{' echo/frontend/src --type ts --type tsx -A10Repository: Dembrane/echo
Length of output: 196
🏁 Script executed:
#!/bin/bash
# Check what fields are accessed in PinnedProjectCard
cat -n echo/frontend/src/components/project/PinnedProjectCard.tsx | head -80Repository: Dembrane/echo
Length of output: 2667
🏁 Script executed:
#!/bin/bash
# Check what fields are accessed in ProjectListItem
cat -n echo/frontend/src/components/project/ProjectListItem.tsx | head -80Repository: Dembrane/echo
Length of output: 2806
🏁 Script executed:
#!/bin/bash
# Find where Project type is imported from in these files
rg -n "import.*Project|from.*Project" echo/frontend/src/components/project/PinnedProjectCard.tsx echo/frontend/src/components/project/ProjectListItem.tsxRepository: Dembrane/echo
Length of output: 39
🏁 Script executed:
#!/bin/bash
# Search for Project type definition
rg -n "^(type|interface|export) (type|interface)? Project( |<|\||=|\{)" echo/frontend/src --type ts --type tsx | head -20Repository: Dembrane/echo
Length of output: 84
🏁 Script executed:
#!/bin/bash
# Get the rest of PinnedProjectCard to see all field accesses
sed -n '45,110p' echo/frontend/src/components/project/PinnedProjectCard.tsxRepository: Dembrane/echo
Length of output: 1968
🏁 Script executed:
#!/bin/bash
# Search for Project type definition with correct file types
rg -n "^(type|interface|export).*(type|interface).*Project\b" echo/frontend/src -t ts | head -30Repository: Dembrane/echo
Length of output: 39
🏁 Script executed:
#!/bin/bash
# Check imports at the top of PinnedProjectCard and ProjectListItem more carefully
head -30 echo/frontend/src/components/project/PinnedProjectCard.tsxRepository: Dembrane/echo
Length of output: 621
🏁 Script executed:
#!/bin/bash
# Find where Project comes from - check all imports
rg -n "Project" echo/frontend/src/components/project/PinnedProjectCard.tsx echo/frontend/src/components/project/ProjectListItem.tsx | grep -i importRepository: Dembrane/echo
Length of output: 39
🏁 Script executed:
#!/bin/bash
# Search across the codebase for Project type/interface definition
find echo/frontend/src -name "*.ts" -o -name "*.tsx" | xargs rg -l "^export (type|interface) Project"Repository: Dembrane/echo
Length of output: 41
🏁 Script executed:
#!/bin/bash
# Check if Project is defined in a types file or similar
rg -n "^type Project|^interface Project|^export type Project|^export interface Project" echo/frontend/src --type tsRepository: Dembrane/echo
Length of output: 2076
🏁 Script executed:
#!/bin/bash
# Look for Project in the entire codebase more carefully
rg "type Project\s*=" echo/frontend/src -A5 | head -40Repository: Dembrane/echo
Length of output: 39
🏁 Script executed:
#!/bin/bash
# Check Directus types - Project might come from there
rg -l "Directus\|directus" echo/frontend/src/components/project/ --type tsRepository: Dembrane/echo
Length of output: 39
🏁 Script executed:
#!/bin/bash
# Look at what's actually available in the Directus schema
find echo/frontend/src -name "*types*" -o -name "*schema*" | head -20Repository: Dembrane/echo
Length of output: 225
🏁 Script executed:
#!/bin/bash
# Get the full Project interface definition from typesDirectus.d.ts
sed -n '255,300p' echo/frontend/src/lib/typesDirectus.d.tsRepository: Dembrane/echo
Length of output: 1968
🏁 Script executed:
#!/bin/bash
# Check if there's a conversations field on Project
rg -n "conversations" echo/frontend/src/lib/typesDirectus.d.ts | head -20Repository: Dembrane/echo
Length of output: 524
Type the components against BffProjectSummary, not Project.
The as Project casts in lines 193-199 and 281-287 are hiding a real contract mismatch. PinnedProjectCard and ProjectListItem both access project?.conversations?.length (fallback for conversations_count), but BffProjectSummary has no conversations field—only the full Project type does. These casts let that type error slip through to runtime.
Additionally, the (project as any).owner_name and (project as any).owner_email casts in both components are unnecessary; these fields already exist on BffProjectSummary.
Refactor the component prop types to accept BffProjectSummary and remove the casts in ProjectsHome.tsx.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@echo/frontend/src/routes/project/ProjectsHome.tsx` around lines 193 - 199,
The components PinnedProjectCard and ProjectListItem should be typed to accept
BffProjectSummary instead of Project; update their prop types/signatures to
expect BffProjectSummary and adjust any internal access to use
conversations_count (not conversations) or guard against missing conversations,
then remove the casts in ProjectsHome.tsx where you currently use (project as
Project) and (project as any). Specifically, change the props for
PinnedProjectCard and ProjectListItem to BffProjectSummary, update any
references to project?.conversations?.length to use project.conversations_count
or an explicit fallback, and remove the unnecessary casts around owner_name and
owner_email in the PinnedProjectCard/ProjectListItem usages (lines that pass
project to PinnedProjectCard, ProjectListItem, and when calling
handleSearchOwner/togglePinMutation).
There was a problem hiding this comment.
Actionable comments posted: 4
♻️ Duplicate comments (6)
echo/server/dembrane/api/template.py (4)
108-149:⚠️ Potential issue | 🟠 MajorAll
directus.*calls block the event loop — wrap withrun_in_thread_pool.Every endpoint in this router is
async defbut calls synchronousdirectus.get_items,create_item,update_item,delete_item, andupdate_userdirectly. This blocks unrelated requests on the same worker during slow Directus round-trips.Example fix for list_prompt_templates
+from dembrane.async_helpers import run_in_thread_pool + `@TemplateRouter.get`("/prompt-templates") async def list_prompt_templates( auth: DependencyDirectusSession, ) -> List[PromptTemplateOut]: try: - items = directus.get_items( + items = await run_in_thread_pool( + directus.get_items, "prompt_template", { "query": { "filter": {"user_created": {"_eq": auth.user_id}}, "sort": ["sort"], "fields": ["*", "user_created.id", "user_created.first_name"], } }, )Apply the same pattern to all ~30
directus.*calls throughout this file. Based on learnings: "Always wrap blocking I/O calls usingrun_in_thread_poolfromdembrane.async_helpersin backend code."🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@echo/server/dembrane/api/template.py` around lines 108 - 149, The endpoint list_prompt_templates (TemplateRouter.get("/prompt-templates")) is calling the synchronous directus.get_items directly and must be wrapped in run_in_thread_pool to avoid blocking the event loop; replace directus.get_items(...) with an awaited call to run_in_thread_pool(directus.get_items, <same args>) from dembrane.async_helpers and apply the same pattern to every synchronous directus.* call in this file (e.g., create_item, update_item, delete_item, update_user) so all blocking I/O runs inside run_in_thread_pool with identical arguments and error handling preserved.
398-446:⚠️ Potential issue | 🟠 Major
toggle_star()has visibility bypass and ignores existing thumbs-down ratings.Two issues here:
Security: No check for
template.get("is_public") or source_user_id == auth.user_idbefore allowing the star. Anyone with a template ID can manipulate star counts on private templates.Data integrity: The lookup only matches
rating == 2. If the user has an existingrating == 1(thumbs-down) overall rating, this will create a duplicate row or violate uniqueness constraints.Proposed fix
template = directus.get_item("prompt_template", template_id) if not template: raise HTTPException(status_code=404, detail="Template not found") + + # Gate on visibility + user_created = template.get("user_created") + if not template.get("is_public") and user_created != auth.user_id: + raise HTTPException(status_code=404, detail="Template not found") - existing_ratings = directus.get_items( + # Fetch ANY overall rating (not just stars) + existing_ratings = await run_in_thread_pool( + directus.get_items, "prompt_template_rating", { "query": { "filter": { "user_created": {"_eq": auth.user_id}, "prompt_template_id": {"_eq": template_id}, "chat_message_id": {"_null": True}, - "rating": {"_eq": 2}, }, - "fields": ["id"], + "fields": ["id", "rating"], "limit": 1, } }, )Then handle the three cases: no row → create, row with
rating != 2→ update to 2, row withrating == 2→ delete.🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@echo/server/dembrane/api/template.py` around lines 398 - 446, In toggle_star: prevent visibility bypass by checking the template's access before mutating stars—retrieve prompt_template and verify template.get("is_public") is True or template.get("source_user_id") == auth.user_id, returning 404/403 if not allowed; fix data integrity by querying prompt_template_rating for any existing row by this user (no filter on rating), then handle three cases: no row → create rating=2, existing row with rating!=2 → update that row's rating to 2, existing row with rating==2 → delete that row and decrement star_count; update prompt_template.star_count accordingly and persist via directus.update_item.
424-440:⚠️ Potential issue | 🟠 MajorRead-modify-write race on
star_countwill lose updates under concurrency.Two concurrent stars both read
star_count=5, both writestar_count=6— one increment is lost. The Directus client doesn't support atomic increments (per relevant snippet fromdirectus.py).Fix: recompute from source of truth
- current_count = template.get("star_count", 0) or 0 - if isinstance(existing_ratings, list) and len(existing_ratings) > 0: # Remove star directus.delete_item("prompt_template_rating", existing_ratings[0]["id"]) - new_count = max(0, current_count - 1) - directus.update_item("prompt_template", template_id, {"star_count": new_count})["data"] - return {"starred": False, "star_count": new_count} + new_count = _recompute_star_count(template_id) + directus.update_item("prompt_template", template_id, {"star_count": new_count}) + return {"starred": False, "star_count": new_count} else: # Add star ... + new_count = _recompute_star_count(template_id) + directus.update_item("prompt_template", template_id, {"star_count": new_count}) + return {"starred": True, "star_count": new_count} + +def _recompute_star_count(template_id: str) -> int: + items = directus.get_items( + "prompt_template_rating", + {"query": {"filter": {"prompt_template_id": {"_eq": template_id}, "rating": {"_eq": 2}, "chat_message_id": {"_null": True}}, "aggregate": {"count": "*"}}}, + ) + return items[0].get("count", 0) if items else 0Same issue applies to
use_countat lines 496-498.🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@echo/server/dembrane/api/template.py` around lines 424 - 440, The current read-modify-write on template["star_count"] loses updates under concurrency; instead of incrementing/decrementing the cached value from template.get("star_count"), after creating or deleting a prompt_template_rating (see existing_ratings, directus.create_item and directus.delete_item) re-query the canonical count of ratings for that prompt (e.g. count records in "prompt_template_rating" filtered by prompt_template_id) and then call directus.update_item("prompt_template", template_id, {"star_count": computed_count}) with that recomputed value; apply the same pattern for use_count (recompute from its authoritative table before updating) so updates remain correct under concurrent requests.
18-34:⚠️ Potential issue | 🟡 MinorMissing
is_anonymousfield prevents round-trip in publish flows.The client needs to know the current anonymity state to pre-populate the publish form when editing a published template. Add
is_anonymous: bool = Falseto the schema and stop stripping it inlist_prompt_templates()(line 144).class PromptTemplateOut(BaseModel): id: str title: str content: str icon: Optional[str] = None sort: Optional[int] = None is_public: bool = False + is_anonymous: bool = False description: Optional[str] = None🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@echo/server/dembrane/api/template.py` around lines 18 - 34, The PromptTemplateOut Pydantic model is missing the is_anonymous field which breaks publish/edit round-trips; add is_anonymous: bool = False to the PromptTemplateOut class and update list_prompt_templates() to include (not strip) the is_anonymous value when building/serializing template objects so the client receives the anonymity state for pre-populating the publish form.echo/server/dembrane/api/user_settings.py (2)
394-403:⚠️ Potential issue | 🟠 MajorGate avatar uploads before forwarding them as admin.
await file.read()currently accepts arbitrary bytes and unbounded size, then forwards them with an admin token. A forged request can bypass the browseracceptfilter, upload non-image content, or blow up storage/memory. Enforce an image allowlist and a max size before the Directus upload.🛡️ Minimal guardrail
file_content = await file.read() + allowed_types = {"image/png", "image/jpeg", "image/webp", "image/gif"} + max_size_bytes = 2 * 1024 * 1024 + if file.content_type not in allowed_types: + raise HTTPException(status_code=400, detail="Invalid avatar file type") + if len(file_content) > max_size_bytes: + raise HTTPException(status_code=400, detail="Avatar file is too large") url = f"{directus.url}/files"🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@echo/server/dembrane/api/user_settings.py` around lines 394 - 403, The code currently reads the entire uploaded file via await file.read() and forwards it with an admin token (directus.get_token()) to requests.post without validating type or size; update the upload flow in user_settings.py to (1) check file.size or the length of file_content after a bounded read and reject if it exceeds a configured MAX_UPLOAD_BYTES, (2) validate the content-type and/or inspect magic bytes to ensure it matches an allowed image MIME allowlist (e.g., image/png,image/jpeg,image/webp) before creating files = {"file": (...)} and calling requests.post, and (3) return an appropriate error for invalid type/oversize instead of forwarding the request with the admin token. Ensure checks occur before using file_content, directus.get_token(), or requests.post so non-image or large payloads are never proxied.
67-75:⚠️ Potential issue | 🟠 MajorStill blocking the event loop on Directus/Requests hops.
These async paths still call synchronous
directus.*andrequests.*inline, so one slow Directus/network hop can stall unrelated FastAPI traffic. Push every blocking call throughawait run_in_thread_pool(...), make_get_or_create_folder_id()async (or wrap its body at the call site), and add an explicittimeout=to eachrequestscall.As per coding guidelines: "Always wrap blocking I/O calls using
run_in_thread_poolfromdembrane.async_helpersin backend code. Wrap calls todirectus.*,conversation_service.*,project_service.*, S3 helpers, and CPU-heavy utilities like token counting or summary generation if they are sync."Also applies to: 104-129, 145-150, 183-188, 218-223, 253-258, 273-290, 377-377, 390-403, 416-430
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@echo/server/dembrane/api/user_settings.py` around lines 67 - 75, The blocking Directus/requests calls must be wrapped in dembrane.async_helpers.run_in_thread_pool and `_get_or_create_folder_id` made async (or its body wrapped at each call) so they don't block the FastAPI event loop; replace direct synchronous calls like directus.get_users, directus.* methods, and any requests.get/post usage with await run_in_thread_pool(lambda: directus.get_users(...)) or await run_in_thread_pool(lambda: requests.get(..., timeout=...)) and add explicit timeout= to every requests.* call; also wrap other sync services referenced in the comment (conversation_service.*, project_service.*, S3 helper calls, token counting/summary utilities) with run_in_thread_pool where used so all blocking I/O/CPU-heavy work is run off the event loop.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@echo/server/dembrane/api/template.py`:
- Around line 598-609: The current inline conditional unpacking for the
"chat_message_id" filter is hard to read; refactor by constructing a local
filter_query dict before the request, e.g., start with filter_query =
{"user_created": {"_eq": auth.user_id}, "prompt_template_id": {"_eq":
body.prompt_template_id}} then add either filter_query["chat_message_id"] =
{"_eq": body.chat_message_id} if body.chat_message_id else
filter_query["chat_message_id"] = {"_null": True}, and finally pass that
filter_query into the existing query (replace the inline dict used where
"filter": {...} is set). This keeps behavior identical while making the logic
around chat_message_id clearer (references: auth.user_id,
body.prompt_template_id, body.chat_message_id, and the surrounding filter
usage).
- Around line 548-581: The current delete-all-then-create flow using
directus.get_items, directus.delete_item, directus.create_item and
directus.update_item in the template save routine can leave the user with
partial or no preferences if any create fails; change this to an atomic
operation by either performing the changes inside a transaction/rollback with
Directus (wrap deletes and creates in a transaction if supported) or implement a
diff-based update: fetch existing preferences, compute items to
add/update/delete, apply updates (create_item/update_item/delete_item) only for
the delta and only remove originals after successful creates/updates; ensure
returned objects are converted to PromptTemplatePreferenceOut only after all ops
succeed and log/raise consistent errors on failure.
In `@echo/server/dembrane/api/user_settings.py`:
- Around line 429-433: The current code only unlinks the avatar relation
(directus.update_user(auth.user_id, {"avatar": None})) but leaves the uploaded
file record/blob orphaned; modify the handler in user_settings.py to first fetch
the user's existing avatar id (read the user or the avatar field), if an avatar
id exists call the Directus client's asset/file delete method for that id (using
whatever client method your Directus wrapper exposes) and then call
directus.update_user(auth.user_id, {"avatar": None}); ensure you catch and log
errors around both the delete and the update (referencing directus.update_user
and the Directus client's delete method) and raise the same HTTPException on
failure.
- Around line 123-131: The current password verification calls Directus
/auth/login via requests.post (login_url = f"{directus.url}/auth/login") which
fails for users with 2FA because ChangePasswordSchema lacks an otp field; update
ChangePasswordSchema to include an optional otp string and, in the handler where
you build the login request, add otp to the JSON payload (e.g., json={"email":
email, "password": body.current_password, "otp": body.otp}) and pass it through
to requests.post (preserve directus.verify), or alternatively avoid
re-authenticating by validating the current user via their session token (use
the existing authenticated request context / session token instead of calling
/auth/login) and remove the login POST path for 2FA users.
---
Duplicate comments:
In `@echo/server/dembrane/api/template.py`:
- Around line 108-149: The endpoint list_prompt_templates
(TemplateRouter.get("/prompt-templates")) is calling the synchronous
directus.get_items directly and must be wrapped in run_in_thread_pool to avoid
blocking the event loop; replace directus.get_items(...) with an awaited call to
run_in_thread_pool(directus.get_items, <same args>) from dembrane.async_helpers
and apply the same pattern to every synchronous directus.* call in this file
(e.g., create_item, update_item, delete_item, update_user) so all blocking I/O
runs inside run_in_thread_pool with identical arguments and error handling
preserved.
- Around line 398-446: In toggle_star: prevent visibility bypass by checking the
template's access before mutating stars—retrieve prompt_template and verify
template.get("is_public") is True or template.get("source_user_id") ==
auth.user_id, returning 404/403 if not allowed; fix data integrity by querying
prompt_template_rating for any existing row by this user (no filter on rating),
then handle three cases: no row → create rating=2, existing row with rating!=2 →
update that row's rating to 2, existing row with rating==2 → delete that row and
decrement star_count; update prompt_template.star_count accordingly and persist
via directus.update_item.
- Around line 424-440: The current read-modify-write on template["star_count"]
loses updates under concurrency; instead of incrementing/decrementing the cached
value from template.get("star_count"), after creating or deleting a
prompt_template_rating (see existing_ratings, directus.create_item and
directus.delete_item) re-query the canonical count of ratings for that prompt
(e.g. count records in "prompt_template_rating" filtered by prompt_template_id)
and then call directus.update_item("prompt_template", template_id,
{"star_count": computed_count}) with that recomputed value; apply the same
pattern for use_count (recompute from its authoritative table before updating)
so updates remain correct under concurrent requests.
- Around line 18-34: The PromptTemplateOut Pydantic model is missing the
is_anonymous field which breaks publish/edit round-trips; add is_anonymous: bool
= False to the PromptTemplateOut class and update list_prompt_templates() to
include (not strip) the is_anonymous value when building/serializing template
objects so the client receives the anonymity state for pre-populating the
publish form.
In `@echo/server/dembrane/api/user_settings.py`:
- Around line 394-403: The code currently reads the entire uploaded file via
await file.read() and forwards it with an admin token (directus.get_token()) to
requests.post without validating type or size; update the upload flow in
user_settings.py to (1) check file.size or the length of file_content after a
bounded read and reject if it exceeds a configured MAX_UPLOAD_BYTES, (2)
validate the content-type and/or inspect magic bytes to ensure it matches an
allowed image MIME allowlist (e.g., image/png,image/jpeg,image/webp) before
creating files = {"file": (...)} and calling requests.post, and (3) return an
appropriate error for invalid type/oversize instead of forwarding the request
with the admin token. Ensure checks occur before using file_content,
directus.get_token(), or requests.post so non-image or large payloads are never
proxied.
- Around line 67-75: The blocking Directus/requests calls must be wrapped in
dembrane.async_helpers.run_in_thread_pool and `_get_or_create_folder_id` made
async (or its body wrapped at each call) so they don't block the FastAPI event
loop; replace direct synchronous calls like directus.get_users, directus.*
methods, and any requests.get/post usage with await run_in_thread_pool(lambda:
directus.get_users(...)) or await run_in_thread_pool(lambda: requests.get(...,
timeout=...)) and add explicit timeout= to every requests.* call; also wrap
other sync services referenced in the comment (conversation_service.*,
project_service.*, S3 helper calls, token counting/summary utilities) with
run_in_thread_pool where used so all blocking I/O/CPU-heavy work is run off the
event loop.
🪄 Autofix (Beta)
Fix all unresolved CodeRabbit comments on this PR:
- Push a commit to this branch (recommended)
- Create a new PR with the fixes
ℹ️ Review info
⚙️ Run configuration
Configuration used: Organization UI
Review profile: ASSERTIVE
Plan: Pro
Run ID: 83848291-463a-48aa-8466-886e877cbe5d
📒 Files selected for processing (2)
echo/server/dembrane/api/template.pyecho/server/dembrane/api/user_settings.py
| try: | ||
| # Delete existing preferences | ||
| existing = directus.get_items( | ||
| "prompt_template_preference", | ||
| { | ||
| "query": { | ||
| "filter": {"user_created": {"_eq": auth.user_id}}, | ||
| "fields": ["id"], | ||
| } | ||
| }, | ||
| ) | ||
| if isinstance(existing, list): | ||
| for pref in existing: | ||
| directus.delete_item("prompt_template_preference", pref["id"]) | ||
|
|
||
| # Create new preferences | ||
| results = [] | ||
| for pref in body: | ||
| result = directus.create_item( | ||
| "prompt_template_preference", | ||
| { | ||
| "template_type": pref.template_type, | ||
| "static_template_id": pref.static_template_id, | ||
| "prompt_template_id": pref.prompt_template_id, | ||
| "sort": pref.sort, | ||
| }, | ||
| )["data"] | ||
| directus.update_item("prompt_template_preference", result["id"], {"user_created": auth.user_id}) | ||
| results.append(PromptTemplatePreferenceOut(**result)) | ||
|
|
||
| return results | ||
| except Exception as e: | ||
| logger.error(f"Failed to save quick access preferences: {e}") | ||
| raise HTTPException(status_code=500, detail="Failed to save preferences") from None |
There was a problem hiding this comment.
Non-atomic replace pattern can leave user with partial/no preferences on failure.
If any create_item call fails after deletes succeed, the user loses their preferences. Ship it if the blast radius is acceptable (just preferences), but consider wrapping in a try/rollback or doing a diff-based update instead of delete-all-then-create.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@echo/server/dembrane/api/template.py` around lines 548 - 581, The current
delete-all-then-create flow using directus.get_items, directus.delete_item,
directus.create_item and directus.update_item in the template save routine can
leave the user with partial or no preferences if any create fails; change this
to an atomic operation by either performing the changes inside a
transaction/rollback with Directus (wrap deletes and creates in a transaction if
supported) or implement a diff-based update: fetch existing preferences, compute
items to add/update/delete, apply updates (create_item/update_item/delete_item)
only for the delta and only remove originals after successful creates/updates;
ensure returned objects are converted to PromptTemplatePreferenceOut only after
all ops succeed and log/raise consistent errors on failure.
| "filter": { | ||
| "user_created": {"_eq": auth.user_id}, | ||
| "prompt_template_id": {"_eq": body.prompt_template_id}, | ||
| # If chat_message_id is provided, rate that specific use | ||
| # Otherwise, rate the template overall (favorite) | ||
| **({"chat_message_id": {"_eq": body.chat_message_id}} if body.chat_message_id else {"chat_message_id": {"_null": True}}), | ||
| }, | ||
| "fields": ["id"], | ||
| "limit": 1, | ||
| } | ||
| }, | ||
| ) |
There was a problem hiding this comment.
🧹 Nitpick | 🔵 Trivial
That conditional dict unpacking is galaxy brain but it ships.
**({"chat_message_id": {"_eq": body.chat_message_id}} if body.chat_message_id else {"chat_message_id": {"_null": True}}),Functional but future maintainers might appreciate a more explicit filter_query dict built beforehand.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@echo/server/dembrane/api/template.py` around lines 598 - 609, The current
inline conditional unpacking for the "chat_message_id" filter is hard to read;
refactor by constructing a local filter_query dict before the request, e.g.,
start with filter_query = {"user_created": {"_eq": auth.user_id},
"prompt_template_id": {"_eq": body.prompt_template_id}} then add either
filter_query["chat_message_id"] = {"_eq": body.chat_message_id} if
body.chat_message_id else filter_query["chat_message_id"] = {"_null": True}, and
finally pass that filter_query into the existing query (replace the inline dict
used where "filter": {...} is set). This keeps behavior identical while making
the logic around chat_message_id clearer (references: auth.user_id,
body.prompt_template_id, body.chat_message_id, and the surrounding filter
usage).
| try: | ||
| login_url = f"{directus.url}/auth/login" | ||
| login_response = requests.post( | ||
| login_url, | ||
| json={"email": email, "password": body.current_password}, | ||
| verify=directus.verify, | ||
| ) | ||
| if login_response.status_code != 200: | ||
| raise HTTPException(status_code=400, detail="Current password is incorrect") |
There was a problem hiding this comment.
🧩 Analysis chain
🌐 Web query:
For Directus authentication, what does POST /auth/login return when a user has two-factor authentication enabled but no OTP is supplied, and can the OTP be sent in the same request body?
💡 Result:
When a user has two-factor authentication (2FA) enabled in Directus and no OTP is supplied in POST /auth/login, the endpoint returns a 401 Unauthorized error. This is because the login process validates the OTP if 2FA is enabled on the user (tfa_secret present) or enforced on the role. The OTP must be included in the same request body as email, password, and otp fields. Official docs confirm: "otp: The user's one-time-password (if MFA is enabled)" in the login body, and "A one-time password (OTP) is required when logging in via the Data Studio or API" after enabling 2FA. GitHub discussions/PRs detail that login succeeds without OTP only if 2FA isn't configured/enforced, otherwise it requires OTP and fails validation without it, typically as 401 since auth fails.
Citations:
- 1: https://directus.io/docs/guides/auth/2fa
- 2: https://directus.io/docs/api/authentication
- 3: API / Auth: Require OTP if Enforce TFA is enabled directus/directus#22190
🏁 Script executed:
# Search for OTP handling patterns in the backend
rg -i "otp|INVALID_OTP" echo/server --type py -A 3 -B 1 | head -50Repository: Dembrane/echo
Length of output: 1325
🏁 Script executed:
# Check the full context around the user_settings.py file
cat -n echo/server/dembrane/api/user_settings.py | sed -n '110,150p'Repository: Dembrane/echo
Length of output: 1852
🏁 Script executed:
# Look for other auth endpoints that might handle 2FA
fd -t f "auth" echo/server --type py | head -20Repository: Dembrane/echo
Length of output: 226
🏁 Script executed:
# Find the ChangePasswordSchema or similar in user_settings.py
rg "class.*Password.*Schema|def.*password" echo/server/dembrane/api/user_settings.py -A 5 -B 1 | head -40Repository: Dembrane/echo
Length of output: 459
🏁 Script executed:
# Look for the specific endpoint that handles password changes
rg "def.*password|@.*password" echo/server/dembrane/api/user_settings.py -B 2 -A 15 | head -80Repository: Dembrane/echo
Length of output: 696
Add OTP support to password verification or switch to a non-2FA verification path.
The code at lines 123–131 calls Directus /auth/login without OTP, but Directus returns 401 when a user has 2FA enabled. Since ChangePasswordSchema has no otp field, users with 2FA cannot change their password—they'll hit the 401 and get stuck with "Current password is incorrect". Either extend ChangePasswordSchema to accept OTP and pass it through, or use a different verification method that doesn't require a second factor (e.g., verify via the user's existing session token instead of re-authenticating).
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@echo/server/dembrane/api/user_settings.py` around lines 123 - 131, The
current password verification calls Directus /auth/login via requests.post
(login_url = f"{directus.url}/auth/login") which fails for users with 2FA
because ChangePasswordSchema lacks an otp field; update ChangePasswordSchema to
include an optional otp string and, in the handler where you build the login
request, add otp to the JSON payload (e.g., json={"email": email, "password":
body.current_password, "otp": body.otp}) and pass it through to requests.post
(preserve directus.verify), or alternatively avoid re-authenticating by
validating the current user via their session token (use the existing
authenticated request context / session token instead of calling /auth/login)
and remove the login POST path for 2FA users.
| try: | ||
| directus.update_user(auth.user_id, {"avatar": None}) | ||
| except Exception as e: | ||
| logger.error(f"Failed to remove avatar: {e}") | ||
| raise HTTPException(status_code=500, detail="Failed to remove avatar") from None |
There was a problem hiding this comment.
“Remove avatar” only unlinks the relation.
Line 430 sets avatar to None, but the uploaded file record/blob remains in Directus. That means the new self-service delete flow retains personal data and leaks orphaned files. Fetch the current avatar id and delete the asset when the user removes or replaces an avatar.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@echo/server/dembrane/api/user_settings.py` around lines 429 - 433, The
current code only unlinks the avatar relation
(directus.update_user(auth.user_id, {"avatar": None})) but leaves the uploaded
file record/blob orphaned; modify the handler in user_settings.py to first fetch
the user's existing avatar id (read the user or the avatar field), if an avatar
id exists call the Directus client's asset/file delete method for that id (using
whatever client method your Directus wrapper exposes) and then call
directus.update_user(auth.user_id, {"avatar": None}); ensure you catch and log
errors around both the delete and the update (referencing directus.update_user
and the Directus client's delete method) and raise the same HTTPException on
failure.
Pin/unpin up to 3 projects to top of list
Project cards display language
Admin: project creator name visible + filterable
Upload, crop, remove profile picture
Self-service display name + password change
Settings reorganized: Account & Security, Appearance, Project Defaults
Multiple reports per project
Guided reports with custom instructions
Scheduled reports with auto conversation inclusion
Background processing with visible status
Create custom templates by duplicating/editing built-in ones
Flat "All templates" view with search, drag-and-drop reorder
Quick access bar (top 3-5 pinned templates)
Settings view for contextual suggestions toggle
Backend: eliminated prompt_template_star table, replaced author_display_name with is_anonymous
Backend: user_created ownership fix for non-admin users
Backend: fields=["*"] fix for missing Directus fields
Upgraded to Gemini 2.5 across all features
Summary by CodeRabbit
New Features
Improvements