Problem Statement
If a local model (e.g., Ollama) or an external API provider hangs or becomes unresponsive, the session is currently blocked indefinitely. There is no timeout mechanism in the Agent.generate_response() call, which can lead to a "frozen" interface.
Proposed Solution
- Add a `timeout: int = Field(default=30)` field to the `AgentConfig` model in `config.py`.
- Pass this timeout parameter to the `litellm.completion()` call within `agent.py`.
- Wrap the completion call in a `try/except` block to catch `litellm.Timeout` errors.
- Return a graceful error message (or trigger a retry) instead of allowing the process to hang.
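The steps above could look roughly like the following sketch. The `AgentConfig` field names, the default model string, and the `generate_response()` wiring are assumptions for illustration, not the project's actual code; only `litellm.completion(timeout=...)` and the `litellm.Timeout` exception are real litellm API surface.

```python
from pydantic import BaseModel, Field


class AgentConfig(BaseModel):
    # Model string is a placeholder; the real config presumably has more fields.
    model: str = "ollama/llama3"
    timeout: int = Field(default=30)  # seconds before the LLM call is aborted


def generate_response(config: AgentConfig, messages: list[dict]) -> str:
    # Imported lazily here so the sketch stays importable without litellm installed.
    import litellm

    try:
        response = litellm.completion(
            model=config.model,
            messages=messages,
            timeout=config.timeout,  # forwarded to the underlying provider client
        )
        return response.choices[0].message.content
    except litellm.Timeout:
        # Fail fast with a user-visible message instead of blocking the session.
        # A retry loop with backoff could be layered here instead.
        return f"[error] Model did not respond within {config.timeout}s."
```

With this in place, a hung Ollama instance or unresponsive API surfaces as an error message after `timeout` seconds rather than freezing the interface; callers can also override the default per agent (e.g. `AgentConfig(timeout=120)` for slow local models).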
Alternatives Considered
None. Robustness against infrastructure failure is a core requirement.
Priority
High 🔴
Additional Context
Essential for reliability in local environments where local model serving might be unstable or slow.