Conversation
- Enhance chat functionality with citation and conversation reference handling
  - Introduced new asynchronous functions `get_conversation_references` and `get_conversation_citations` in `chat_utils.py` to retrieve conversation details and citations based on the RAG prompt.
  - Updated `ProjectChatMessageModel` to include `conversation_references` and `citations` fields for structured data storage.
  - Refactored `post_chat` in `chat.py` to use the new citation and reference functions, improving chat context management.
  - Added a new Jinja template for text structuring to support citation mapping in responses.
  - Improved error handling and logging throughout the chat API for better traceability of citation generation.
- Refactor chat_utils.py and chat.py for improved citation handling and logging
  - Changed the logging level from info to warning in `get_conversation_references` for better error visibility.
  - Updated `get_conversation_citations` to return citations structured by conversation.
  - Simplified citation extraction logic and improved error handling in `post_chat`, removing unnecessary database commits for conversation references and citations.
  - Clarified variable naming and streamlined the return structure for maintainability.
- Orphan segment IDs fixed
- Orphan segment IDs fix for citation
- Enhance logging and error handling in chat_utils and the audio ETL pipeline
  - Added debug print statements in `get_conversation_references` and `get_conversation_details_for_rag_query` for improved traceability of conversation data.
  - Updated `run_etl.py` to load the conversation UUID from environment variables, making ETL runs configurable.
  - Implemented a check in `get_ratio_abs` to return an empty dictionary if no segment ratios are found.
- Refactor citation handling and conversation reference structure in chat_utils.py and chat.py
  - `get_conversation_references` now returns an empty list instead of a dictionary when no references are found, for data consistency.
  - `get_conversation_citations` now uses the more descriptive key "conversation" instead of "conversation_id".
  - `post_chat` now yields conversation references and citations with updated formatting for better compatibility with client-side applications.
- Enhance conversation reference and citation handling in chat_utils and the chat API
  - `get_conversation_references` and `get_conversation_citations` now accept `project_ids` as parameters, enabling filtering by project context.
  - Introduced `get_project_id_from_conversation_id` to retrieve the project ID associated with a conversation ID.
  - Refactored `post_chat` to use the updated reference and citation methods for project-specific data handling.
- Fix table name in the audio ETL pipeline mapping
  - Updated the table name in `AudioETLPipeline` from "conversation_segment_conversation_chunk_1" to "conversation_segment_conversation_chunk" for consistent and accurate data mapping.
- Update lightrag-dembrane to 1.2.7.4 and enhance logging in the `get_lightrag_prompt` function
  - Upgraded lightrag-dembrane from 1.2.7.1 to 1.2.7.4 in pyproject.toml and the lock files.
  - Added a debug logging statement in `get_lightrag_prompt` to log the response for traceability during execution.
- Update lightrag-dembrane to 1.2.7.6 in pyproject.toml and lock files
  - Upgraded lightrag-dembrane from 1.2.7.4 to 1.2.7.6 for improved functionality and compatibility across the project.
- Add LiteLLM configuration documentation and refactor database management
  - Introduced a documentation file for LiteLLM configuration, detailing required settings for the LLM, audio transcription, text structuring, embedding, and inference models.
  - Refactored database management by implementing a singleton `PostgresDBManager` class to handle PostgreSQLDB initialization and access.
  - Updated the main application to use the new `PostgresDBManager` for database connections.
  - Improved `RAGManager` to ensure proper initialization of the LightRAG instance.
  - Enhanced audio processing utilities and prompts to support the new configuration.
- Refactor database.py to remove unused imports
  - Removed the unused `JSONB` import from `sqlalchemy.dialects.postgresql`.
  - Updated comments to reflect the current database management approach.
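The singleton `PostgresDBManager` mentioned above could be sketched roughly as follows. The class and method names come from the PR summary; the body (thread-safe lazy initialization, no real connection pool) is an assumption for illustration:

```python
import threading


class PostgresDBManager:
    """Sketch of a singleton DB manager; the real class wraps PostgreSQLDB."""

    _instance = None
    _lock = threading.Lock()

    def __init__(self, dsn: str) -> None:
        # A real manager would open a connection pool here.
        self.dsn = dsn

    @classmethod
    def get_instance(cls, dsn: str = "postgresql://localhost/dembrane") -> "PostgresDBManager":
        # Double-checked locking so concurrent callers share one instance.
        if cls._instance is None:
            with cls._lock:
                if cls._instance is None:
                    cls._instance = cls(dsn)
        return cls._instance


a = PostgresDBManager.get_instance()
b = PostgresDBManager.get_instance()
print(a is b)  # → True
```

Later calls to `get_instance` ignore their `dsn` argument by design: the first caller wins, which is what keeps every consumer on the same connection configuration.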
Walkthrough

This PR introduces a comprehensive refactor and enhancement across the dembrane server. It adds detailed LiteLLM configuration documentation, externalizes and parameterizes prompt templates, and updates the chat and audio LightRAG pipelines for more robust citation and reference extraction. Several utility functions are added or refactored to improve environment variable management, database connectivity, and data normalization. The chat API is streamlined to yield conversation references and citations early in the streaming response. Configuration files are cleaned up, removing obsolete environment variable handling. New prompt templates are introduced, and dependency versions are updated for compatibility.
Sequence Diagram(s)

```mermaid
sequenceDiagram
    participant Client
    participant API as API (chat.py)
    participant ChatUtils
    participant LLM as LLM (LiteLLM)
    participant DB
    Client->>API: POST /chat (with rag_prompt)
    API->>ChatUtils: get_conversation_references(rag_prompt, project_ids)
    ChatUtils->>DB: Query conversation details
    ChatUtils-->>API: Return references
    API->>Client: Stream "h:" + references
    API->>LLM: Streamed completion (rag_prompt)
    LLM-->>API: Stream response chunks
    API->>ChatUtils: get_conversation_citations(rag_prompt, accumulated_response, project_ids)
    ChatUtils->>LLM: Completion (citation extraction prompt)
    LLM-->>ChatUtils: Citation JSON
    ChatUtils->>DB: Map segment ids to conversation/project ids
    ChatUtils-->>API: Return citations
    API->>Client: Stream "h:" + citations
    API->>Client: Stream LLM response
```
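The streaming order in the diagram can be sketched as an async generator. The channel prefixes (`h:` for headers, `0:` for content chunks) and payload shapes below are illustrative, matching the diagram rather than the shipped implementation:

```python
import asyncio
import json


async def post_chat_stream(rag_prompt: str):
    """Sketch of the streaming contract: references first, chunks, citations last."""
    # 1. References are computed up front and streamed on the "h:" channel.
    references = [{"conversation": "c1", "title": "Kickoff call"}]
    yield f"h:{json.dumps(references)}\n"
    # 2. LLM response chunks are streamed as they arrive (stubbed here).
    for chunk in ["Hello", " world"]:
        yield f"0:{json.dumps(chunk)}\n"
    # 3. Citations are extracted from the accumulated response and streamed last.
    citations = [{"conversation": "c1", "segment": "s42"}]
    yield f"h:{json.dumps(citations)}\n"


async def collect() -> list:
    return [part async for part in post_chat_stream("why did attendance drop?")]


if __name__ == "__main__":
    print(asyncio.run(collect()))
```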
Actionable comments posted: 5
🔭 Outside diff range comments (3)
echo/server/dembrane/audio_lightrag/utils/lightrag_utils.py (1)
100-106: ⚠️ Potential issue: division-by-zero risk after chunk filter

If every segment is filtered out (none present in `segment2chunk`), `chunk_ratios_abs` is `{}`, so `total_ratio` == 0 and we divide by zero. Add an early escape.

```diff
 total_ratio = sum(chunk_ratios_abs.values())
-chunk_ratios_abs = {k:v/total_ratio for k,v in chunk_ratios_abs.items()}
+if total_ratio == 0:
+    return {}
+chunk_ratios_abs = {k: v / total_ratio for k, v in chunk_ratios_abs.items()}
```

echo/server/dembrane/chat_utils.py (2)
94-100: 🛠️ Refactor suggestion: guard against empty Directus responses

`directus.get_items("project", project_query)[0]` will raise an `IndexError` when the query returns an empty list (e.g. an invalid `project_id` or connectivity hiccups). The surrounding `KeyError` except block won't catch this, so the function will blow up and break the chat flow.

```diff
-project = directus.get_items("project", project_query)[0]
+project_items = directus.get_items("project", project_query)
+if not project_items:
+    raise ValueError(f"Project {project_id} not found in Directus")
+project = project_items[0]
```
196-213: ⚠️ Potential issue: `citations_by_conversation_dict` may be undefined, leading to an `UnboundLocalError` on the failure path

If `json.loads` or any logic inside the `try` fails, execution jumps to the `except` block and the variable is never initialised, but it is still referenced in the `return` statement.

```diff
-try:
-    ...
-except Exception as e:
-    logger.warning(...)
-return [citations_by_conversation_dict]
+citations_by_conversation_dict: Dict[str, List[Dict[str, Any]]] = {"citations": []}
+try:
+    ...
+except Exception as e:
+    logger.warning(...)
+return [citations_by_conversation_dict]
```
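The fix above amounts to initialising the result before the `try`. A runnable sketch (the function name and payload shape are illustrative, not the actual `chat_utils.py` code):

```python
import json
import logging
from typing import Any, Dict, List

logger = logging.getLogger(__name__)


def extract_citations(raw_llm_output: str) -> List[Dict[str, Any]]:
    """Initialise the result before the try block, so the return statement
    can never hit an unbound name."""
    citations_by_conversation_dict: Dict[str, Any] = {"citations": []}
    try:
        citations_by_conversation_dict["citations"] = json.loads(raw_llm_output)
    except Exception as exc:  # JSON errors, unexpected shapes, ...
        logger.warning("citation extraction failed: %s", exc)
    return [citations_by_conversation_dict]


print(extract_citations("not json"))  # falls back to the empty default
```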
🧹 Nitpick comments (9)
echo/server/dembrane/audio_lightrag/main/run_etl.py (1)
80-86: Dynamic config using env vars, solid engineering! Converting from a hardcoded UUID to environment-driven configuration is 💯. This aligns with our infrastructure-as-code principles and makes testing more flexible.

One minor optimization opportunity:

```diff
-from dotenv import load_dotenv
-load_dotenv()
+# load_dotenv() is already called at the top of the file (line 21)
```

echo/docs/litellm_config.md (2)
5-12: Centralize variable names & enclose them in back-ticks for clarity

Humans scan docs; monospace makes env vars pop. Suggest wrapping every variable name in back-ticks and removing the duplicated leading "LIGHTRAG_LITELLM_MODEL" in the bullet list header.

```diff
-**LIGHTRAG_LITELLM_MODEL**: Used by lightrag ...
+`LIGHTRAG_LITELLM_MODEL`: Used by LightRAG ...
```
13-28: Missing "language" override note

The models are language-agnostic by default, but we now pass `language` into `render_prompt`. Add a short line telling operators that setting `LANGUAGE` (or whatever flag we end up exposing) changes the prompt locale, or they'll wonder why French prompts still read English.

echo/server/dembrane/audio_lightrag/utils/prompts.py (2)
4-14: Add lightweight docstrings + type hints for kwargs

Great move externalising prompts (cheers!). Tiny ask: drop a one-liner docstring on each helper so future maintainers know which Jinja params are expected. Also, `language` could be a `Literal["en", "fr", "de", ...]` for a cheap autocomplete win.

```diff
 class Prompts:
     @staticmethod
-    def audio_model_system_prompt(event_text: str, previous_conversation_text: str, language: str = "en") -> str:
+    def audio_model_system_prompt(
+        event_text: str,
+        previous_conversation_text: str,
+        language: str = "en",
+    ) -> str:
+        """Render the audio-model system prompt."""
```
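Taken together with the empty-dict nitpick that follows, the suggested shape could look like this sketch. `render_prompt`'s body here is a stand-in for the real Jinja rendering, and the template names are taken from the file list above:

```python
from typing import Any, Dict, Optional


def render_prompt(template_name: str, language: str = "en",
                  kwargs: Optional[Dict[str, Any]] = None) -> str:
    """Stand-in for the real Jinja-backed render_prompt."""
    # Accept None and coerce, so callers never allocate throw-away empty dicts.
    kwargs = kwargs or {}
    return f"{template_name}.{language}.jinja rendered with {sorted(kwargs)}"


class Prompts:
    @staticmethod
    def audio_model_system_prompt(
        event_text: str,
        previous_conversation_text: str,
        language: str = "en",
    ) -> str:
        """Render the audio-model system prompt."""
        return render_prompt(
            "audio_model_system_prompt",
            language,
            {
                "event_text": event_text,
                "previous_conversation_text": previous_conversation_text,
            },
        )

    @staticmethod
    def text_structuring_model_system_prompt(language: str = "en") -> str:
        """Render the text-structuring system prompt (no kwargs needed)."""
        return render_prompt("text_structuring_model_system_prompt", language)
```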
17-22: Avoid instantiating empty dicts

Micro-perf, but we can skip allocating `{}` by passing `None` to `render_prompt` and letting it coerce. Not blocking. LGTM otherwise.

echo/server/dembrane/chat_utils.py (2)
158-166: Prefer narrow exception handling & preserve the stack trace

Catching a bare `Exception` hides the root cause (network error, JSON decode, auth failure, ...). Additionally, turning every failure into an empty list can make upstream debugging painful.

Recommend:

- Catch only the expected errors (`HTTPError`, `ValidationError`, ...).
- Re-raise after logging, or return a structured error object so the caller can decide how to react.
198-208: Sequential awaits inside the for-loop hurt performance

`run_segment_id_to_conversation_id` is awaited for every citation. When dozens of citations are present this becomes N network/DB round-trips in series.

A quick win: accumulate the segment IDs and dispatch them via `asyncio.gather` (or have `lightrag_utils` expose a bulk endpoint).

echo/server/dembrane/api/chat.py (2)
441-445: Minor typo & early-stream contract

- `conversation_references_yeild` → `conversation_references_yield`
- Ensure the consumer can differentiate the two header messages (references vs citations). Using the same `"h:"` channel for both may be fine, but document the ordering or add a `"type"` field to avoid brittle client parsing.

```diff
-conversation_references_yeild = f"h:{json.dumps(conversation_references)}\n"
-yield conversation_references_yeild
+yield f'h:{{"type":"references","payload":{json.dumps(conversation_references)}}}\n'
```
476-478: Citations header mirrors the references header: disambiguate

Same rationale as above: prefix with an explicit "type" so downstream clients don't rely on position.

```diff
-citations_yeild = f"h:{json.dumps(citations_list)}\n"
-yield citations_yeild
+yield f'h:{{"type":"citations","payload":{json.dumps(citations_list)}}}\n'
```
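The typed-header suggestion from both comments can be sketched end-to-end, including the client-side parse. The `type`/`payload` envelope is the reviewer's proposal, not the shipped format:

```python
import json


def header_line(msg_type: str, payload: object) -> str:
    """Build an "h:" header line carrying an explicit type field."""
    return f'h:{json.dumps({"type": msg_type, "payload": payload})}\n'


def parse_header(line: str) -> dict:
    """Client-side counterpart: strip the channel prefix and decode."""
    assert line.startswith("h:")
    return json.loads(line[2:])


refs = header_line("references", [{"conversation": "c1"}])
cites = header_line("citations", [{"conversation": "c1", "segment": "s9"}])
print(parse_header(refs)["type"], parse_header(cites)["type"])  # references citations
```

With the type field present, a client can dispatch on `parsed["type"]` instead of assuming the first `h:` line is always references.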
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
⛔ Files ignored due to path filters (2)
- `echo/server/requirements-dev.lock` is excluded by `!**/*.lock`
- `echo/server/requirements.lock` is excluded by `!**/*.lock`
📒 Files selected for processing (18)
- `echo/docs/litellm_config.md` (1 hunks)
- `echo/server/dembrane/api/chat.py` (7 hunks)
- `echo/server/dembrane/api/stateless.py` (2 hunks)
- `echo/server/dembrane/audio_lightrag/main/run_etl.py` (1 hunks)
- `echo/server/dembrane/audio_lightrag/pipelines/audio_etl_pipeline.py` (1 hunks)
- `echo/server/dembrane/audio_lightrag/pipelines/contextual_chunk_etl_pipeline.py` (1 hunks)
- `echo/server/dembrane/audio_lightrag/utils/lightrag_utils.py` (8 hunks)
- `echo/server/dembrane/audio_lightrag/utils/litellm_utils.py` (2 hunks)
- `echo/server/dembrane/audio_lightrag/utils/prompts.py` (1 hunks)
- `echo/server/dembrane/chat_utils.py` (2 hunks)
- `echo/server/dembrane/config.py` (0 hunks)
- `echo/server/dembrane/database.py` (3 hunks)
- `echo/server/dembrane/main.py` (2 hunks)
- `echo/server/dembrane/rag_manager.py` (1 hunks)
- `echo/server/prompt_templates/audio_model_system_prompt.en.jinja` (1 hunks)
- `echo/server/prompt_templates/text_structuring_model_message.en.jinja` (1 hunks)
- `echo/server/prompt_templates/text_structuring_model_system_prompt.en.jinja` (1 hunks)
- `echo/server/pyproject.toml` (1 hunks)
💤 Files with no reviewable changes (1)
- echo/server/dembrane/config.py
🧰 Additional context used
🧬 Code Graph Analysis (4)
echo/server/dembrane/audio_lightrag/utils/litellm_utils.py (1)
echo/server/dembrane/audio_lightrag/utils/prompts.py (2)
- `Prompts` (4-22)
- `text_structuring_model_system_prompt` (17-22)
echo/server/dembrane/api/stateless.py (2)
echo/server/dembrane/rag_manager.py (2)
- `RAGManager` (13-51)
- `get_rag` (54-55)

echo/server/dembrane/postgresdb_manager.py (1)

- `PostgresDBManager` (9-61)
echo/server/dembrane/audio_lightrag/utils/prompts.py (1)
echo/server/dembrane/prompts.py (1)
- `render_prompt` (55-88)
echo/server/dembrane/audio_lightrag/utils/lightrag_utils.py (1)
echo/server/dembrane/postgresdb_manager.py (1)
- `PostgresDBManager` (9-61)
⏰ Context from checks skipped due to timeout of 90000ms (1)
- GitHub Check: ci-check-server
🔇 Additional comments (21)
echo/server/prompt_templates/text_structuring_model_system_prompt.en.jinja (1)
1-3: Rock-solid prompt template structure! This clean, minimal prompt template gives the AI clear instructions for text structuring, with the crucial requirement to always provide an English CONTEXTUAL_TRANSCRIPT. The asterisk emphasis on the translation requirement is a nice touch.
echo/server/prompt_templates/text_structuring_model_message.en.jinja (1)
1-8: Solid mapping instructions for reference handling! The template creates a clear contract for mapping segment IDs to reference text. The segmentation logic follows the exact format needed for citation extraction, and the template variables for `accumulated_response` and `rag_prompt` are perfectly positioned to receive runtime data.
echo/server/pyproject.toml (1)
50-50: Dependency version bump is on point! Upgrading lightrag-dembrane to 1.2.7.6 aligns with the new citation and reference extraction features. Ship it!
echo/server/dembrane/rag_manager.py (1)
6-8: Clean DB configuration import pattern! Centralizing the PostgreSQL config by importing `DATABASE_URL` and `_load_postgres_env_vars` creates a single source of truth for database connection handling.
echo/server/dembrane/database.py (3)
3-3: Clean import refactoring! Nice optimization of the typing imports: removed the unused `Dict` while keeping the essential typings, which aligns with our citation support initiative.
31-31: Reduced PostgreSQL dialect imports, LGTM! Removed the unused `JSONB` import, keeping only the essential `UUID` import. This is exactly what we want: lean and mean imports.
283-283: Renamed field to better reflect functionality

The commented field rename from `prompt_conversations` to `conversation_references` aligns with our new citation and reference support architecture. Keeping it commented is the right call since we're handling these through application logic now.

echo/server/dembrane/audio_lightrag/pipelines/audio_etl_pipeline.py (1)
82-82: Standardized collection naming, nice catch! Removing the trailing `_1` suffix standardizes our naming across the entire ETL pipeline. This kind of consistency drives down maintenance costs and cognitive load. LGTM!

echo/server/dembrane/api/stateless.py (2)
11-12: Clean import refactoring. Solid module path updates to match our new architectural organization. These changes align with our modular design principles.
225-225: Enhanced logging, exactly what we needed! Adding debug logging for RAG responses gives us crucial visibility for debugging and monitoring. This kind of instrumentation is critical for distributed systems at scale. The triple stars make it easy to grep in logs too, nice touch!
echo/server/dembrane/audio_lightrag/utils/litellm_utils.py (2)
33-36: Solid i18n upgrade! 🚀 Adding a language param with an "en" default makes the function ready for multilingual support. A clean, backward-compatible change.
76-76: Perfect implementation of the language param! LGTM. The language parameter is properly passed to the prompt template renderer. This creates a consistent internationalization pattern throughout the pipeline.
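A sketch of the backward-compatible language-parameter pattern praised here. The template strings are illustrative; the real code renders per-locale Jinja files, and the fallback-to-English behaviour is an assumption:

```python
def text_structuring_model_system_prompt(language: str = "en") -> str:
    """A language parameter with an "en" default keeps every existing
    call site working unchanged while letting new callers opt in."""
    templates = {
        # Illustrative stand-ins for the .en.jinja / .fr.jinja template files.
        "en": "Structure the transcript. Always return CONTEXTUAL_TRANSCRIPT in English.",
        "fr": "Structurez la transcription. Retournez toujours CONTEXTUAL_TRANSCRIPT en anglais.",
    }
    # Fall back to English when a locale has no template yet.
    return templates.get(language, templates["en"])


print(text_structuring_model_system_prompt())      # existing callers: English
print(text_structuring_model_system_prompt("fr"))  # new callers opt in
```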
echo/server/dembrane/audio_lightrag/pipelines/contextual_chunk_etl_pipeline.py (1)
54-54: Clean DRY refactor! 👍 Excellent simplification by passing parameters directly to `audio_model_system_prompt()` instead of string formatting. This reduces code duplication and leverages the template rendering system properly.

echo/server/dembrane/main.py (4)
20-27: Configuration cleanup looks great! Switched to the DATABASE_URL pattern, a much cleaner approach following 12-factor app principles.
30-31: Solid refactor using PostgresDBManager! Great abstraction moving from a direct implementation to a proper manager class.
34-38: Clean import for the new utility function! Adding the `_load_postgres_env_vars` import aligns with the DB connection refactoring.
48-53: DB initialization refactoring is 🔥. Excellent modernization using environment variable parsing and the manager pattern for the DB connection. This is far more maintainable and follows best practices.
echo/server/prompt_templates/audio_model_system_prompt.en.jinja (1)
1-41: Stellar prompt template implementation! This template is a perfect example of externalizing prompt content using Jinja. The structure, with clear task separation, detailed instructions, and proper context placeholders, will yield much better audio transcription and analysis results.
Key strengths:
- Clear role definition for the AI
- Explicit task breakdown with detailed requirements
- Well-structured output format specification
- Context injection points for event data and conversation history
This is exactly how templates should be implemented - clean, explicit, and comprehensive.
echo/docs/litellm_config.md (1)
45-53: Default values ≠ enforced limits

Stating "Max audio size 15 MB (default: 15)" implies drift between doc and code can brick prod. Either pull the default from `os.getenv(..., 15)` directly here via a small doc-gen step, or add "subject to change, see env".

echo/server/dembrane/audio_lightrag/utils/lightrag_utils.py (1)
361-368: SQL template rename: verify downstream

The table name changed to `conversation_segment_conversation_chunk`. Make sure all migrations + ORM models got updated; otherwise we'll see runtime 42P01 errors.

echo/server/dembrane/api/chat.py (1)
448-456: Potential attribute mismatch in streamed chunks

`chunk.choices[0].delta.content` follows OpenAI's streaming schema. LiteLLM forwarders sometimes expose `chunk.choices[0].message.content` or even a raw string, depending on the backend. A defensive guard avoids silent `AttributeError` crashes mid-stream.

```python
content_piece = getattr(chunk.choices[0].delta, "content", None)
if content_piece:
    ...
```
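Extending that guard into a small helper that tolerates all three chunk shapes the comment lists. The helper itself is illustrative, with stub chunks built from `SimpleNamespace`:

```python
from types import SimpleNamespace
from typing import Optional


def extract_content(chunk) -> Optional[str]:
    """Tolerate OpenAI-style delta chunks, message-style chunks, and raw
    strings without raising AttributeError mid-stream."""
    if isinstance(chunk, str):
        return chunk
    choice = chunk.choices[0]
    delta = getattr(choice, "delta", None)
    if delta is not None and getattr(delta, "content", None) is not None:
        return delta.content
    message = getattr(choice, "message", None)
    return getattr(message, "content", None)


openai_style = SimpleNamespace(choices=[SimpleNamespace(delta=SimpleNamespace(content="hi"))])
message_style = SimpleNamespace(choices=[SimpleNamespace(message=SimpleNamespace(content="yo"))])
print(extract_content(openai_style), extract_content(message_style), extract_content("raw"))
```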
```python
from dembrane.audio_lightrag.utils.litellm_utils import embedding_func, llm_model_func
from dembrane.audio_lightrag.utils.lightrag_utils import _load_postgres_env_vars

_load_postgres_env_vars(str(DATABASE_URL))
```
💡 Verification agent
🧩 Analysis chain
Environment variable setup at the right time!
Setting up PostgreSQL env vars before logger initialization ensures DB config is ready early. This is the right sequence for initialization.
🏁 Script executed:
```bash
#!/bin/bash
# Verify error handling in _load_postgres_env_vars function
fd --type file --exec grep -l "_load_postgres_env_vars" {} \; | xargs grep -A 10 -B 10 "_load_postgres_env_vars" | grep -i "try\|except\|error"
```

Length of output: 548
Add explicit error handling in _load_postgres_env_vars
There's no try/except around the call, so a missing or malformed DATABASE_URL will bubble up with unclear tracebacks. Wrap parsing and env-var assignment in a try/except and raise a clear RuntimeError.

• Location: echo/server/dembrane/rag_manager.py
• Action: Update _load_postgres_env_vars to catch parsing/key errors, e.g.:

```diff
 def _load_postgres_env_vars(db_url: str):
-    # existing parsing logic…
+    try:
+        # parse URL, extract host, port, user, pass, name…
+    except Exception as e:
+        raise RuntimeError(f"Invalid DATABASE_URL (`{db_url}`): {e}")
```

LGTM once error handling is in place.
Committable suggestion skipped: line range outside the PR's diff.
Refactor lightrag_utils.py for improved functionality and clarity
- Changed the return type of _load_postgres_env_vars from bool to None, reflecting its purpose of setting environment variables without returning a value.
- Enhanced get_conversation_details_for_rag_query by bulk fetching conversation metadata, reducing redundant API calls and improving performance.
- Added a check in get_ratio_abs to return an empty dictionary if no relevant chunks are found, enhancing error handling.
### Feature Flags
- `ENABLE_AUDIO_LIGHTRAG_INPUT`: Enable/disable audio input processing (default: false)
- `AUTO_SELECT_ENABLED`: Enable/disable auto-select feature (default: false)
```diff
-audio_model_prompt = Prompts.audio_model_system_prompt()
-audio_model_prompt = audio_model_prompt.format(event_text = event_text,
-                                               previous_conversation_text = previous_contextual_transcript)
+audio_model_prompt = Prompts.audio_model_system_prompt(event_text, previous_contextual_transcript)
```
Co-authored-by: Usama <reach.usamazafar@gmail.com>
Description
Citation support and Reference Support enabled for backend
Robust error handling in prompt calculations
Code cleanup and style changes
Key Methods
Changes Made
• Added citation extraction and validation logic
• Implemented citation schema models for structured data handling
• Added error handling for citation processing with detailed logging
• Added conversation reference retrieval functionality
• Implemented error handling for reference processing
• Added fallback mechanisms for reference retrieval failures
• Added comprehensive try-catch blocks for prompt operations
• Implemented structured logging for error tracking
• Added graceful fallbacks for language-specific prompt templates
• Enhanced error reporting in prompt calculations
• Standardized error handling patterns
• Improved code organization in prompt-related modules
• Enhanced logging consistency
• Added type hints and documentation
Key Testing Areas
• Test citation extraction with various ask conversations
• Verify citation format and structure
• Test error handling for invalid citations
• Test reference retrieval with various ask conversations
• Verify reference data structure
• Test error cases and fallback behavior
Summary by CodeRabbit
New Features
Bug Fixes
Refactor
Chores