[WEB 4252] chore: updated the favicon request for work item link #7173
sriramveeraghanta merged 2 commits into preview
Conversation
Walkthrough

This update changes how work item link titles are fetched: the crawl_work_item_link_title call is moved out of the request path into a background task, and its error handling is refined so the favicon is still retrieved when the title fetch fails.

Changes
Sequence Diagram(s)

```mermaid
sequenceDiagram
    participant Client
    participant API/View
    participant BGTaskQueue
    participant WorkItemTask
    Client->>API/View: Create or update IssueLink
    API/View->>API/View: Save IssueLink
    API/View->>BGTaskQueue: Enqueue crawl_work_item_link_title(id, url)
    BGTaskQueue->>WorkItemTask: Execute crawl_work_item_link_title
    WorkItemTask->>WorkItemTask: Fetch and parse link title (with refined error handling)
```
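The enqueue/execute flow in the diagram above can be sketched as plain Python. This is only an illustration: an in-process queue stands in for the Celery broker, `enqueue_crawl` stands in for `crawl_work_item_link_title.delay(...)`, and the task body here is a placeholder rather than the PR's real fetch-and-parse logic.

```python
import queue

# Stand-in for the Celery broker (BGTaskQueue in the diagram).
task_queue = queue.Queue()

def crawl_work_item_link_title(link_id, url):
    # Placeholder for the real task body, which fetches the page
    # and parses out a title and favicon.
    return f"title-for:{url}"

def enqueue_crawl(link_id, url):
    # Stand-in for crawl_work_item_link_title.delay(link_id, url):
    # the API view returns immediately after this call.
    task_queue.put((link_id, url))

def worker_step():
    # Stand-in for the WorkItemTask worker picking up one job.
    link_id, url = task_queue.get()
    return crawl_work_item_link_title(link_id, url)

enqueue_crawl("42", "https://example.com")
print(worker_step())  # → title-for:https://example.com
```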
Actionable comments posted: 0
🧹 Nitpick comments (3)
apiserver/plane/bgtasks/work_item_link_task.py (1)
53-53: Consider if the 1-second timeout is too aggressive. Reducing the timeout from 2 seconds to 1 second might cause more failures for legitimate but slower websites. This could result in more link previews missing both titles and favicons.
Consider testing with a range of websites to validate that 1 second is sufficient, or implement a tiered timeout approach:
```diff
-response = requests.get(url, headers=headers, timeout=1)
+response = requests.get(url, headers=headers, timeout=2)
```

```diff
-response = requests.get(favicon_url, headers=headers, timeout=1)
+response = requests.get(favicon_url, headers=headers, timeout=2)
```

Also applies to: 146-146
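The tiered-timeout approach suggested above can be sketched as a small retry helper: try a short timeout first and escalate only when it fails. This is a hedged illustration, not code from the PR; `fetch` is a hypothetical injectable fetcher (in real code it would be `requests.get`, and the caught exception would be `requests.Timeout`).

```python
def get_with_tiered_timeouts(url, fetch, timeouts=(1, 2, 4)):
    """Try the fast timeout first; escalate on failure.

    Most sites respond within 1 second, so the common case stays fast,
    while slower-but-legitimate sites still get a chance at 2s and 4s.
    """
    last_error = None
    for timeout in timeouts:
        try:
            return fetch(url, timeout=timeout)
        except Exception as exc:  # real code would catch requests.Timeout
            last_error = exc
    raise last_error
```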
apiserver/plane/app/views/issue/link.py (1)
48-50: Good async conversion, but consider adding error handling. Converting to background tasks improves response times by moving HTTP operations out of the request-response cycle. However, consider adding error handling for task-enqueuing failures.
Consider wrapping the task calls in try-except blocks:
```diff
-crawl_work_item_link_title.delay(
-    serializer.data.get("id"), serializer.data.get("url")
-)
+try:
+    crawl_work_item_link_title.delay(
+        serializer.data.get("id"), serializer.data.get("url")
+    )
+except Exception as e:
+    logger.warning(f"Failed to enqueue link title crawling task: {e}")
```

Also applies to: 81-83
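If the same wrap-and-log pattern is needed at several call sites, it could be factored into a small helper rather than repeating the try/except. This is only a sketch of that idea, not code from the PR; `safe_enqueue` and its signature are hypothetical, and in the real views `task_delay` would be `crawl_work_item_link_title.delay`.

```python
import logging

logger = logging.getLogger(__name__)

def safe_enqueue(task_delay, *args):
    """Enqueue a background task without failing the API request.

    A broker outage should degrade the link preview, not the response,
    so enqueue errors are logged and swallowed.
    """
    try:
        task_delay(*args)
        return True
    except Exception as exc:
        logger.warning("Failed to enqueue link title crawling task: %s", exc)
        return False
```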
apiserver/plane/api/views/issue.py (1)
695-697: Async task integration implemented correctly. The background task calls are properly placed after serializer.save() in both create and update operations, ensuring the IssueLink object exists before the task processes it. Consider adding error handling for task-enqueuing failures (same pattern as suggested for the app views):
```diff
-crawl_work_item_link_title.delay(
-    serializer.data.get("id"), serializer.data.get("url")
-)
+try:
+    crawl_work_item_link_title.delay(
+        serializer.data.get("id"), serializer.data.get("url")
+    )
+except Exception as e:
+    logger.warning(f"Failed to enqueue link title crawling task: {e}")
```

Also applies to: 725-727
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (3)
- apiserver/plane/api/views/issue.py (3 hunks)
- apiserver/plane/app/views/issue/link.py (2 hunks)
- apiserver/plane/bgtasks/work_item_link_task.py (3 hunks)
🧰 Additional context used
🧬 Code Graph Analysis (1)
apiserver/plane/app/views/issue/link.py (2)
apiserver/plane/bgtasks/work_item_link_task.py (1)
- crawl_work_item_link_title (169-175)

web/core/local-db/utils/utils.ts (1)
- delay (155-155)
⏰ Context from checks skipped due to timeout of 90000ms (1)
- GitHub Check: Analyze (javascript)
🔇 Additional comments (3)
apiserver/plane/bgtasks/work_item_link_task.py (2)
49-61: Excellent error-handling improvement. The nested try-except block ensures that favicon retrieval continues even if the title fetch fails, which aligns perfectly with the PR objective of maintaining favicon display despite title-retrieval issues.
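The nested-try structure the comment describes can be illustrated in isolation: the inner block contains only the title fetch, so a failure there falls through and the favicon fetch still runs. This is a sketch of the shape, not the task's actual code; `fetch_title` and `fetch_favicon` are hypothetical stand-ins for the real HTTP-and-parse steps.

```python
def build_link_preview(fetch_title, fetch_favicon):
    """Return a preview dict; each field degrades to None independently."""
    title, favicon = None, None
    try:
        try:
            title = fetch_title()
        except Exception:
            # Title retrieval failed; keep going so the favicon
            # can still be fetched and displayed.
            title = None
        favicon = fetch_favicon()
    except Exception:
        favicon = None
    return {"title": title, "favicon": favicon}
```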
168-175: Function relocation looks good. Moving the crawl_work_item_link_title function to the end of the file with the @shared_task decorator is appropriate for better code organization.

apiserver/plane/api/views/issue.py (1)
61-61: Import addition looks correct. The import of crawl_work_item_link_title from the background task module is properly placed and necessary for the async task integration.
* chore: added the favicon to link
* chore: added none validation for soup
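The "none validation for soup" commit refers to the fact that BeautifulSoup's `find()` returns `None` when a tag is absent, so the result must be guarded before its attributes are accessed. A minimal sketch of that guard, with names chosen for illustration (the PR's actual parsing code is not shown here):

```python
def extract_title(soup):
    """Safely pull the <title> text from a parsed page.

    soup.find("title") returns None when the tag is missing, so guard
    before touching .text to avoid an AttributeError.
    """
    title_tag = soup.find("title") if soup is not None else None
    if title_tag is None:
        return None
    return title_tag.text.strip()
```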
Description
This pull request moves the crawl_work_item_link_title functionality from a direct function call to a background job. It also improves error handling by ensuring that the favicon is still retrieved even if fetching the title from the URL fails.

Type of Change
Summary by CodeRabbit
Bug Fixes
Chores