
Feature/llm connection check#586

Merged
Samueli924 merged 2 commits into Samueli924:main from tooplick:feature/llm-connection-check
Mar 6, 2026

Conversation

@tooplick
Contributor

@tooplick tooplick commented Mar 6, 2026

Overview

Adds a startup connection check for users of the AI and SiliconFlow LLM question banks, so configuration problems surface early instead of only being discovered mid-run when answering fails.

Main changes

1. New check_llm_connection() method (api/answer.py)

  • Tiku base class: adds a default implementation returning True (non-LLM question banks need no check)
  • AI class: sends the test message "1+1 等于几?" to verify the OpenAI-compatible API connection
  • SiliconFlow class: sends a test message to verify the SiliconFlow API connection
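As a rough illustration of the checks described above, here is a stdlib-only sketch. The real implementation uses the OpenAI SDK, and the attribute names (`endpoint`, `key`, `model`) are assumptions for illustration, not the project's actual field names.

```python
# Illustrative sketch of the base-class default and one provider override.
# The real PR uses the OpenAI SDK; this version uses only the stdlib, and
# the endpoint/key/model attribute names are assumptions.
import json
import urllib.request


class Tiku:
    def check_llm_connection(self) -> bool:
        # Non-LLM question banks need no check, so the base class reports success.
        return True


class AI(Tiku):
    def __init__(self, endpoint: str, key: str, model: str):
        self.endpoint = endpoint
        self.key = key
        self.model = model

    def check_llm_connection(self) -> bool:
        # Send a trivial test message; any transport or auth failure means
        # the configuration is unusable, so report False instead of raising.
        payload = json.dumps({
            "model": self.model,
            "messages": [{"role": "user", "content": "1+1 等于几?"}],
        }).encode()
        req = urllib.request.Request(
            f"{self.endpoint}/chat/completions",
            data=payload,
            headers={
                "Authorization": f"Bearer {self.key}",
                "Content-Type": "application/json",
            },
        )
        try:
            with urllib.request.urlopen(req, timeout=10) as resp:
                return resp.status == 200
        except Exception:
            return False
```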

2. New config option (config_template.ini)

```ini
; Whether to enable the LLM connection check: true checks LLM connectivity at startup, false skips the check
; Only takes effect with the AI or SiliconFlow question banks; consumes a small number of tokens
check_llm_connection=true
```
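Reading such a flag with the stdlib `configparser` could look like the snippet below; the `[tiku]` section name is an assumption for illustration.

```python
# Hypothetical snippet showing how a boolean flag like this can be read;
# the [tiku] section name is an assumption, not verified project layout.
import configparser

cfg = configparser.ConfigParser()
cfg.read_string("""
[tiku]
check_llm_connection=true
""")

# getboolean accepts true/false, yes/no, on/off, 1/0 (case-insensitive);
# fallback applies when the key is absent.
enabled = cfg.getboolean("tiku", "check_llm_connection", fallback=True)
print(enabled)
```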

3. Startup check logic (main.py)

  • The check runs only when the AI or SiliconFlow question bank is in use
  • On failure the user is prompted and can press Enter to continue (typing y/yes also works)
  • Entering n or anything else exits
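A minimal sketch of this prompt flow, assuming a `tiku` object that exposes `check_llm_connection()`; the function name `verify_llm_connection` and the prompt wording are illustrative, not the project's actual code.

```python
# Hypothetical sketch of the startup prompt flow. Only check_llm_connection()
# comes from the PR description; the rest is illustrative.
def verify_llm_connection(tiku) -> None:
    # Only LLM-backed question banks are checked.
    if type(tiku).__name__ not in ("AI", "SiliconFlow"):
        return
    if tiku.check_llm_connection():
        return
    reply = input("LLM connection check failed. Continue anyway? [Y/n] ").strip().lower()
    if reply in ("", "y", "yes"):
        return  # empty input or y/yes continues startup
    raise RuntimeError("aborted: LLM connection check failed")
```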

4. Documentation improvements (README.md)

  • Reformatted the usage section with standard code blocks
  • Added uv run instructions for easier Python version management

Use cases

  1. Newly configured users: the configuration is validated automatically at startup when an LLM question bank is set up for the first time
  2. API changes: problems are caught promptly when an API key expires or the service endpoint changes
  3. Network issues: detects whether the LLM service is reachable over the network

Compatibility

  • Non-LLM question banks (TikuYanxi, TikuAdapter, etc.) are unaffected
  • The check can be disabled by setting check_llm_connection=false
  • On failure the user can choose to continue, so existing workflows are unchanged

Testing suggestions

  1. Configure the AI or SiliconFlow question bank and verify the connection check succeeds
  2. Use an invalid API key and verify the failure prompt appears
  3. Set check_llm_connection=false and verify the check is skipped

Summary by CodeRabbit

  • Documentation

    • Expanded and restructured setup guide with clearer steps, Python 3.13+ instructions, code examples for clone/install/run, and tips for config-file and runner usage.
  • New Features

    • Startup LLM connectivity check (enabled by default) that validates API access and prompts the user if validation fails.
    • New configuration options for API credentials, model selection, proxy, SiliconFlow integration, retry policies, request intervals, and like-api features.

- Add check_llm_connection() method to Tiku base class, AI and SiliconFlow classes
- Add check_llm_connection config option to enable/disable connection verification
- Verify LLM connection at startup when using AI/SiliconFlow provider
- Allow empty input to continue when connection check fails

The connection check sends a simple test message to verify API configuration is working correctly.
@coderabbitai

coderabbitai Bot commented Mar 6, 2026

📝 Walkthrough

Walkthrough

Adds LLM connectivity checks: a new check_llm_connection() method on Tiku with provider-specific implementations, new config options to enable these checks and SiliconFlow settings, runtime validation during initialization, and README updates for Python 3.13+ run guidance.

Changes

| Cohort / File(s) | Summary |
|---|---|
| **Documentation**<br>`README.md` | Rewrote setup/run instructions into structured steps and fenced commands; added Python 3.13+ examples, uv runner tips, config-file usage, and preserved packaging/run-from-release guidance. |
| **Core LLM Connectivity**<br>`api/answer.py` | Added `check_llm_connection(self) -> bool` to base `Tiku` (default `True`) and implemented provider checks in `TikuYanxi`, `AI`, and `SiliconFlow` that perform test API requests, log results, and return connectivity status. |
| **Configuration**<br>`config_template.ini` | Added `[tiku]` options: `check_llm_connection`, `cover_rate`, `delay`, `tokens`, `url`, `endpoint`, `key`, `model`, `min_interval_seconds`, `http_proxy`, SiliconFlow keys/model/endpoint, and LIKE-related flags (`likeapi_*`) and retry settings. |
| **Initialization Logic**<br>`main.py` | In `init_chaoxing`, added runtime LLM validation: when the provider is AI or SiliconFlow and `check_llm_connection` is enabled, call `tiku.check_llm_connection()`, log the outcome, prompt the user on failure, and optionally abort. |

Sequence Diagram(s)

```mermaid
sequenceDiagram
    participant Init as init_chaoxing()
    participant Config as tiku_config
    participant Tiku as Tiku Instance
    participant Net as External LLM/API
    participant User as User Prompt

    Init->>Config: read provider & check_llm_connection
    alt provider is AI or SiliconFlow AND check enabled
        Init->>Init: log "validating LLM config"
        Init->>Tiku: call check_llm_connection()
        rect rgba(200, 230, 255, 0.5)
            Tiku->>Net: perform test API request (with optional proxy)
            Net-->>Tiku: return response
            Tiku-->>Init: connectivity result (True/False)
        end
        alt result == False
            Init->>Init: log error
            Init->>User: prompt "continue or cancel?"
            alt user cancels
                Init->>Init: raise RuntimeError (abort)
            else user continues
                Init->>Init: proceed startup
            end
        else result == True
            Init->>Init: proceed startup
        end
    else
        Init->>Init: skip validation, proceed
    end
```

Estimated code review effort

🎯 3 (Moderate) | ⏱️ ~20 minutes

Poem

🐰 I hopped through code with eager cheer,
I checked the clouds so far and near—
Connections tested, logs aglow,
Configs stretched so systems grow.
A little hop, a rabbit's nod, all set to run and prod!

🚥 Pre-merge checks | ✅ 2 | ❌ 1

❌ Failed checks (1 warning)

| Check name | Status | Explanation | Resolution |
|---|---|---|---|
| Docstring Coverage | ⚠️ Warning | Docstring coverage is 62.50%, which is below the required threshold of 80.00%. | Write docstrings for the functions missing them to satisfy the coverage threshold. |

✅ Passed checks (2 passed)

| Check name | Status | Explanation |
|---|---|---|
| Description Check | ✅ Passed | Check skipped; CodeRabbit's high-level summary is enabled. |
| Title Check | ✅ Passed | The title 'Feature/llm connection check' directly and clearly describes the main feature added in the changeset: a startup connection check for LLM-based question banks. |

✏️ Tip: You can configure your own custom pre-merge checks in the settings.



@coderabbitai coderabbitai Bot left a comment


Actionable comments posted: 1

🧹 Nitpick comments (2)
config_template.ini (1)

84-86: Consider adding a dedicated SiliconFlow interval setting.

The comment indicates a workaround for shared min_interval_seconds between AI and SiliconFlow providers. Consider adding siliconflow_min_interval_seconds for clarity and to avoid configuration conflicts.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@config_template.ini` around lines 84 - 86, Add a dedicated configuration key
for SiliconFlow interval to avoid collision with the generic
min_interval_seconds: introduce siliconflow_min_interval_seconds and document it
next to siliconflow_endpoint so users set SiliconFlow-specific request spacing;
update the comment that currently references min_interval_seconds to instruct
using siliconflow_min_interval_seconds instead and remove the workaround note
that directs users to edit line 52.
api/answer.py (1)

849-854: Consider using context manager for httpx.Client to prevent resource leaks.

The httpx.Client is created but never explicitly closed, which could lead to connection pool exhaustion over time.

♻️ Proposed fix

```diff
     try:
         if self.http_proxy:
-            httpx_client = httpx.Client(proxy=self.http_proxy)
-            client = OpenAI(http_client=httpx_client, base_url=self.endpoint, api_key=self.key)
+            with httpx.Client(proxy=self.http_proxy) as httpx_client:
+                client = OpenAI(http_client=httpx_client, base_url=self.endpoint, api_key=self.key)
+                return self._perform_connection_check(client)
         else:
             client = OpenAI(base_url=self.endpoint, api_key=self.key)
+            return self._perform_connection_check(client)
```

Alternatively, ensure the client is closed in a finally block or use context management.
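The finally-block alternative can be illustrated generically; `FakeClient` stands in for `httpx.Client` here so the sketch runs without httpx installed, and `run_check` is a hypothetical name.

```python
# Generic illustration of the finally-block cleanup pattern: any client
# object exposing close() is released even if the request raises.
class FakeClient:
    def __init__(self):
        self.closed = False

    def close(self):
        self.closed = True


def run_check(client) -> bool:
    try:
        return True  # stand-in for the test API request
    finally:
        client.close()  # runs whether the request succeeds or raises


c = FakeClient()
ok = run_check(c)
print(ok, c.closed)
```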

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@api/answer.py` around lines 849 - 854, The httpx.Client created when
self.http_proxy is truthy (httpx_client) is not closed and can leak connections;
update the code that constructs the OpenAI client to use a context manager for
httpx.Client (with httpx.Client(...) as httpx_client: ...) when creating
OpenAI(http_client=httpx_client, ...) or ensure httpx_client.close() is called
in a finally block after using client; keep the same variable names
(httpx_client, client) and the OpenAI(...) call so the change is localized and
avoids leaking the connection pool.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@README.md`:
- Line 26: Update the README heading "### 源码运行(Python 13+)" to the correct
Python version notation "### 源码运行(Python 3.13+)" so the version uses the
standard 3.x format; locate and replace the exact heading text in the README.


ℹ️ Review info
⚙️ Run configuration

Configuration used: defaults

Review profile: CHILL

Plan: Pro

Run ID: 07c2bd32-a71d-4876-94b3-b125471cdd9a

📥 Commits

Reviewing files that changed from the base of the PR and between 0e787c4 and b45fc10.

📒 Files selected for processing (4)
  • README.md
  • api/answer.py
  • config_template.ini
  • main.py

Comment thread README.md Outdated
- Add blank lines around lists and code blocks to fix Codacy linting
- Use proper bash code blocks for command examples
- Update Python version requirement to 3.13+
@tooplick tooplick force-pushed the feature/llm-connection-check branch from b45fc10 to daaf50e on March 6, 2026 07:29

@coderabbitai coderabbitai Bot left a comment


🧹 Nitpick comments (1)
README.md (1)

62-67: Consider clarifying "低版本 Python" for better user guidance.

The phrase "低版本 Python" (lower version Python) could be more explicit about what it's lower than. Users might not immediately understand this refers to Python versions below 3.13.

📝 Suggested clarification

```diff
 > Tips:  
-> 如果已安装低版本 Python 推荐使用 `uv` 运行:
+> 如果本地未安装 Python 3.13 或安装了更低版本,推荐使用 `uv` 运行:
```

Alternative wording: "If you don't have Python 3.13+ installed locally, it is recommended to use uv to run:"

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@README.md` around lines 62 - 67, Replace the ambiguous phrase "低版本 Python"
with a clear statement like "如果本地未安装 Python 3.13 及以上" and update the sentence so
the example command `uv run --python 3.13 main.py` is introduced as a fallback
(e.g., "如果本地未安装 Python 3.13 及以上,建议使用 `uv run --python 3.13 main.py` 运行:");
ensure the text around the existing example command and the phrase `uv run
--python 3.13 main.py` is updated accordingly.

ℹ️ Review info
⚙️ Run configuration

Configuration used: defaults

Review profile: CHILL

Plan: Pro

Run ID: 1d04b5fc-05b4-4a9a-af4b-c73c1c831b8e

📥 Commits

Reviewing files that changed from the base of the PR and between b45fc10 and daaf50e.

📒 Files selected for processing (1)
  • README.md

@tooplick
Contributor Author

tooplick commented Mar 6, 2026

(Screenshot: 屏幕截图 2026-03-06 153213) The README.md check can be ignored.

@Samueli924 Samueli924 merged commit 6e46887 into Samueli924:main Mar 6, 2026
1 of 2 checks passed