Feature/llm connection check #586
Conversation
- Add check_llm_connection() method to Tiku base class, AI and SiliconFlow classes
- Add check_llm_connection config option to enable/disable connection verification
- Verify LLM connection at startup when using AI/SiliconFlow provider
- Allow empty input to continue when connection check fails

The connection check sends a simple test message to verify API configuration is working correctly.
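A minimal sketch of how such a check could look on the Tiku and AI classes named above; the model attribute, the test message, and the class shapes are assumptions for illustration (proxy handling is omitted here), not the PR's exact code:

```python
# Sketch only: model name, test prompt, and constructor are illustrative.
from openai import OpenAI


class Tiku:
    def check_llm_connection(self) -> bool:
        # Non-LLM question banks need no check, so the base class reports OK.
        return True


class AI(Tiku):
    def __init__(self, endpoint: str, key: str, model: str):
        self.endpoint = endpoint
        self.key = key
        self.model = model

    def check_llm_connection(self) -> bool:
        """Send one tiny request to confirm the endpoint and key are usable."""
        try:
            client = OpenAI(base_url=self.endpoint, api_key=self.key)
            client.chat.completions.create(
                model=self.model,
                messages=[{"role": "user", "content": "hi"}],
                max_tokens=1,
            )
            return True
        except Exception:
            # Bad key, wrong endpoint, or network trouble all end up here.
            return False
```

The SiliconFlow class would presumably override the method the same way against its own endpoint.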
📝 Walkthrough

Adds LLM connectivity checks: a new `check_llm_connection()` method on the Tiku base class and its AI and SiliconFlow subclasses, a `check_llm_connection` config option, and a startup verification step in main.py.

Changes
Sequence Diagram(s)

```mermaid
sequenceDiagram
    participant Init as init_chaoxing()
    participant Config as tiku_config
    participant Tiku as Tiku Instance
    participant Net as External LLM/API
    participant User as User Prompt
    Init->>Config: read provider & check_llm_connection
    alt provider is AI or SiliconFlow AND check enabled
        Init->>Init: log "validating LLM config"
        Init->>Tiku: call check_llm_connection()
        rect rgba(200, 230, 255, 0.5)
            Tiku->>Net: perform test API request (with optional proxy)
            Net-->>Tiku: return response
            Tiku-->>Init: connectivity result (True/False)
        end
        alt result == False
            Init->>Init: log error
            Init->>User: prompt "continue or cancel?"
            alt user cancels
                Init->>Init: raise RuntimeError (abort)
            else user continues
                Init->>Init: proceed startup
            end
        else result == True
            Init->>Init: proceed startup
        end
    else
        Init->>Init: skip validation, proceed
    end
```
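Mapped to code, the flow above would look roughly like the following; the function name, logger, and prompt wording are illustrative rather than the PR's exact implementation:

```python
# Illustrative sketch of the startup check described in the diagram.
import logging

logger = logging.getLogger("main")


def verify_llm_config(tiku, provider: str, check_enabled: bool) -> None:
    if provider not in ("AI", "SiliconFlow") or not check_enabled:
        return  # non-LLM providers skip the validation entirely
    logger.info("Validating LLM configuration...")
    if tiku.check_llm_connection():
        return
    logger.error("LLM connection check failed, please verify endpoint/key/proxy settings")
    choice = input("Continue anyway? (press Enter or y to continue, anything else aborts): ")
    if choice.strip().lower() not in ("", "y", "yes"):
        raise RuntimeError("Aborted: LLM connection check failed")
```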
Estimated code review effort: 🎯 3 (Moderate) | ⏱️ ~20 minutes
🚥 Pre-merge checks | ✅ 2 | ❌ 1

❌ Failed checks (1 warning)
✅ Passed checks (2 passed)
Actionable comments posted: 1
🧹 Nitpick comments (2)
config_template.ini (1)
84-86: Consider adding a dedicated SiliconFlow interval setting.

The comment indicates a workaround for a shared `min_interval_seconds` between the AI and SiliconFlow providers. Consider adding `siliconflow_min_interval_seconds` for clarity and to avoid configuration conflicts.
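If the dedicated key were added, the reading side could prefer it and fall back to the shared value; a sketch with configparser, where the section name and default value are assumptions about this project's config layout:

```python
# Hypothetical reading side for the suggested key.
from configparser import ConfigParser

config = ConfigParser()
config.read("config.ini", encoding="utf-8")

tiku_cfg = config["tiku"]  # section name assumed
# Prefer the SiliconFlow-specific spacing, fall back to the shared value.
siliconflow_interval = tiku_cfg.getfloat(
    "siliconflow_min_interval_seconds",
    fallback=tiku_cfg.getfloat("min_interval_seconds", fallback=3.0),
)
```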
🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In `@config_template.ini` around lines 84 - 86, Add a dedicated configuration key for SiliconFlow interval to avoid collision with the generic min_interval_seconds: introduce siliconflow_min_interval_seconds and document it next to siliconflow_endpoint so users set SiliconFlow-specific request spacing; update the comment that currently references min_interval_seconds to instruct using siliconflow_min_interval_seconds instead and remove the workaround note that directs users to edit line 52.
api/answer.py (1)

849-854: Consider using a context manager for httpx.Client to prevent resource leaks.

The `httpx.Client` is created but never explicitly closed, which could lead to connection pool exhaustion over time.

♻️ Proposed fix
```diff
     try:
         if self.http_proxy:
-            httpx_client = httpx.Client(proxy=self.http_proxy)
-            client = OpenAI(http_client=httpx_client, base_url=self.endpoint, api_key=self.key)
+            with httpx.Client(proxy=self.http_proxy) as httpx_client:
+                client = OpenAI(http_client=httpx_client, base_url=self.endpoint, api_key=self.key)
+                return self._perform_connection_check(client)
         else:
             client = OpenAI(base_url=self.endpoint, api_key=self.key)
+            return self._perform_connection_check(client)
```

Alternatively, ensure the client is closed in a `finally` block or use context management.
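For the finally-block alternative mentioned here, the shape would be roughly as follows; `_perform_connection_check` is the helper from the proposed fix above and the attribute names mirror the snippet, so this is a sketch of the idea rather than the actual patch:

```python
# Method-body shape with explicit cleanup instead of a with-block.
import httpx
from openai import OpenAI


def check_llm_connection(self) -> bool:
    httpx_client = None
    try:
        if self.http_proxy:
            httpx_client = httpx.Client(proxy=self.http_proxy)
            client = OpenAI(http_client=httpx_client, base_url=self.endpoint, api_key=self.key)
        else:
            client = OpenAI(base_url=self.endpoint, api_key=self.key)
        return self._perform_connection_check(client)
    finally:
        # Runs on both success and exception paths, so the pool is released.
        if httpx_client is not None:
            httpx_client.close()
```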
🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In `@api/answer.py` around lines 849 - 854, The httpx.Client created when self.http_proxy is truthy (httpx_client) is not closed and can leak connections; update the code that constructs the OpenAI client to use a context manager for httpx.Client (with httpx.Client(...) as httpx_client: ...) when creating OpenAI(http_client=httpx_client, ...) or ensure httpx_client.close() is called in a finally block after using client; keep the same variable names (httpx_client, client) and the OpenAI(...) call so the change is localized and avoids leaking the connection pool.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@README.md`:
- Line 26: Update the README heading "### 源码运行(Python 13+)" to the correct
Python version notation "### 源码运行(Python 3.13+)" so the version uses the
standard 3.x format; locate and replace the exact heading text in the README.
---
Nitpick comments:
In `@api/answer.py`:
- Around line 849-854: The httpx.Client created when self.http_proxy is truthy
(httpx_client) is not closed and can leak connections; update the code that
constructs the OpenAI client to use a context manager for httpx.Client (with
httpx.Client(...) as httpx_client: ...) when creating
OpenAI(http_client=httpx_client, ...) or ensure httpx_client.close() is called
in a finally block after using client; keep the same variable names
(httpx_client, client) and the OpenAI(...) call so the change is localized and
avoids leaking the connection pool.
In `@config_template.ini`:
- Around line 84-86: Add a dedicated configuration key for SiliconFlow interval
to avoid collision with the generic min_interval_seconds: introduce
siliconflow_min_interval_seconds and document it next to siliconflow_endpoint so
users set SiliconFlow-specific request spacing; update the comment that
currently references min_interval_seconds to instruct using
siliconflow_min_interval_seconds instead and remove the workaround note that
directs users to edit line 52.
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: 07c2bd32-a71d-4876-94b3-b125471cdd9a
📒 Files selected for processing (4)
- README.md
- api/answer.py
- config_template.ini
- main.py
- Add blank lines around lists and code blocks to fix Codacy linting
- Use proper bash code blocks for command examples
- Update Python version requirement to 3.13+
b45fc10 to daaf50e
🧹 Nitpick comments (1)
README.md (1)
62-67: Consider clarifying "低版本 Python" for better user guidance.

The phrase "低版本 Python" (lower version Python) could be more explicit about what it's lower than. Users might not immediately understand this refers to Python versions below 3.13.
📝 Suggested clarification
```diff
 > Tips:
-> 如果已安装低版本 Python 推荐使用 `uv` 运行:
+> 如果本地未安装 Python 3.13 或安装了更低版本,推荐使用 `uv` 运行:
```

Alternative wording: "If you don't have Python 3.13+ installed locally, it is recommended to use `uv` to run:"

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@README.md` around lines 62 - 67, Replace the ambiguous phrase "低版本 Python" with a clear statement like "如果本地未安装 Python 3.13 及以上" and update the sentence so the example command `uv run --python 3.13 main.py` is introduced as a fallback (e.g., "如果本地未安装 Python 3.13 及以上,建议使用 `uv run --python 3.13 main.py` 运行:"); ensure the text around the existing example command and the phrase `uv run --python 3.13 main.py` is updated accordingly.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Nitpick comments:
In `@README.md`:
- Around line 62-67: Replace the ambiguous phrase "低版本 Python" with a clear
statement like "如果本地未安装 Python 3.13 及以上" and update the sentence so the example
command `uv run --python 3.13 main.py` is introduced as a fallback (e.g.,
"如果本地未安装 Python 3.13 及以上,建议使用 `uv run --python 3.13 main.py` 运行:"); ensure the
text around the existing example command and the phrase `uv run --python 3.13
main.py` is updated accordingly.

Overview

Adds a startup connection check for users of the AI and SiliconFlow LLM question banks, so configuration problems surface early instead of only being discovered mid-run when questions can no longer be answered.

Main changes

1. New `check_llm_connection()` method (api/answer.py)
   - The base Tiku class returns `True` by default (non-LLM question banks need no check)
2. New config option (config_template.ini)
   - `check_llm_connection` enables or disables the connection verification
3. Startup check logic (main.py)
   - Runs the check when the AI or SiliconFlow question bank is selected
   - On failure, prompts the user; pressing Enter or entering `y`/`yes` continues
   - Exits only when `n` or other input is entered
4. Documentation updates (README.md)
   - Adds `uv run` instructions to simplify Python version management

Use cases

Compatibility

- Set `check_llm_connection=false` to disable the check

Testing suggestions

- Set `check_llm_connection=false` and verify the check is skipped

Summary by CodeRabbit
Documentation
New Features