diff --git a/.gitignore b/.gitignore
index 3f2dbe74..ab86198f 100644
--- a/.gitignore
+++ b/.gitignore
@@ -27,7 +27,6 @@ auth.json
model_responses.txt
# Sensitive files (API keys, credentials)
-mykey.py
tasks/
diff --git a/GITHUB_COPILOT_CONFIG_GUIDE.md b/GITHUB_COPILOT_CONFIG_GUIDE.md
new file mode 100644
index 00000000..573a854d
--- /dev/null
+++ b/GITHUB_COPILOT_CONFIG_GUIDE.md
@@ -0,0 +1,274 @@
+# GitHub Copilot Pro Configuration Guide
+
+## 📖 Table of Contents
+
+1. [Overview](#1-overview)
+2. [Prerequisites](#2-prerequisites)
+3. [Obtaining an OAuth Token](#3-obtaining-an-oauth-token)
+4. [Recommended Setup: litellm Proxy](#4-recommended-setup-litellm-proxy)
+5. [Alternative Setup: Direct Connection](#5-alternative-setup-direct-connection)
+6. [Troubleshooting](#6-troubleshooting)
+7. [Available Models](#7-available-models)
+---
+
+## 1. Overview
+
+This document describes how to configure and use GitHub Copilot Pro models in the GenericAgent project. GitHub Copilot Pro exposes a range of AI models, including the GPT-4, GPT-5, Claude, and Gemini families.
+
+**Key points:**
+- The GitHub Copilot API requires an OAuth token (prefixed `gho_`); classic personal access tokens (prefixed `ghp_`) are not accepted
+- A litellm proxy is needed to translate model names
+- Streaming responses and tool calls are supported
+
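The token-type rule above can be checked programmatically before starting the proxy. A minimal sketch (the helper name is ours, not part of any library; it only inspects the well-known prefixes):

```python
def classify_github_token(token: str) -> str:
    """Classify a GitHub token by its well-known prefix."""
    if token.startswith("gho_"):
        return "oauth"  # OAuth token: accepted by the Copilot API
    if token.startswith("ghp_"):
        return "pat"    # classic PAT: rejected by the Copilot API
    return "unknown"

print(classify_github_token("gho_example"))  # oauth
print(classify_github_token("ghp_example"))  # pat
```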
+---
+
+## 2. Prerequisites
+
+- An active GitHub Copilot Pro subscription
+- Python 3.10+ installed
+- The project virtual environment `.venv` created
+- litellm and its proxy dependencies installed into `.venv`
+
+```bash
+.venv\Scripts\python.exe -m pip install -U pip
+.venv\Scripts\python.exe -m pip install "litellm[proxy]"
+```
+
+---
+
+## 3. Obtaining an OAuth Token
+
+### Option 1: GitHub CLI (recommended)
+
+```bash
+# Install the GitHub CLI (if not already installed)
+# https://cli.github.com/
+
+# Log in and request the Copilot scope
+gh auth login --scopes copilot
+
+# Print the token
+gh auth token
+```
+
+### Option 2: Extract the token from the VS Code Copilot extension
+
+1. Open VS Code
+2. Make sure you are signed in to GitHub with Copilot enabled
+3. Look for the token at the following paths:
+   - Windows: `%APPDATA%\GitHub Copilot\api_token`
+   - macOS: `~/Library/Application Support/GitHub Copilot/api_token`
+   - Linux: `~/.config/github-copilot/api_token`
+
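The per-platform lookup can be scripted. A small sketch, treating the paths listed above as assumptions (newer extension versions may store credentials elsewhere); the helper name is ours:

```python
import os

# Candidate paths exactly as listed above -- assumptions, not a guarantee.
TOKEN_PATHS = {
    "win32":  r"%APPDATA%\GitHub Copilot\api_token",
    "darwin": "~/Library/Application Support/GitHub Copilot/api_token",
    "linux":  "~/.config/github-copilot/api_token",
}

def copilot_token_path(platform: str) -> str:
    """Expand the candidate token path for a given sys.platform value."""
    raw = TOKEN_PATHS.get(platform)
    if raw is None:
        raise ValueError(f"unsupported platform: {platform}")
    return os.path.expanduser(os.path.expandvars(raw))

print(copilot_token_path("linux"))
```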
+---
+
+## 4. Recommended Setup: litellm Proxy
+
+Running litellm in proxy mode is the most reliable configuration.
+
+### Step 1: Create the litellm configuration file
+
+Create `litellm_config.yaml` in the project root:
+
+```yaml
+model_list:
+ - model_name: gpt-4
+ litellm_params:
+ model: github_copilot/gpt-4
+ api_key: YOUR_OAUTH_TOKEN_HERE
+ extra_headers:
+ Editor-Version: "vscode/1.85.1"
+ Editor-Plugin-Version: "copilot/1.155.0"
+ Copilot-Integration-Id: "vscode-chat"
+ User-Agent: "GitHubCopilotChat/0.35.0"
+
+  # Add more models...
+ - model_name: gpt-5
+ litellm_params:
+ model: github_copilot/gpt-5
+ api_key: YOUR_OAUTH_TOKEN_HERE
+ extra_headers:
+ Editor-Version: "vscode/1.85.1"
+ Editor-Plugin-Version: "copilot/1.155.0"
+ Copilot-Integration-Id: "vscode-chat"
+ User-Agent: "GitHubCopilotChat/0.35.0"
+```
+
+**Note**: replace `YOUR_OAUTH_TOKEN_HERE` with your OAuth token.
+
+### Step 2: Configure mykey.py
+
+```python
+native_oai_config_copilot = {
+ 'name': 'copilot-pro',
+    'apikey': 'anything',  # the litellm proxy does not validate the key
+ 'apibase': 'http://localhost:8000/v1',
+ 'model': 'gpt-4',
+ 'api_mode': 'chat_completions',
+ 'stream': True,
+}
+```
+
+### Step 3: Start the litellm proxy
+
+```bash
+.venv\Scripts\litellm.exe --config litellm_config.yaml --port 8000
+# or double-click start_litellm.bat in the project root
+```
+
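Once the proxy is up, any OpenAI-compatible client can talk to it. A minimal sketch using only the standard library (the model name and port match the config above; the network call itself is left commented out so the snippet also runs without the proxy):

```python
import json
import urllib.request

PROXY_BASE = "http://localhost:8000/v1"  # matches --port 8000 above

payload = {
    "model": "gpt-4",  # a model_name defined in litellm_config.yaml
    "messages": [{"role": "user", "content": "Say hello"}],
    "stream": False,
}
req = urllib.request.Request(
    PROXY_BASE + "/chat/completions",
    data=json.dumps(payload).encode("utf-8"),
    headers={
        "Content-Type": "application/json",
        "Authorization": "Bearer anything",  # the proxy does not validate the key
    },
)
# resp = urllib.request.urlopen(req)  # uncomment once the proxy is running
print(req.full_url)
```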
+### Step 4: Start GenericAgent
+
+```bash
+python launch.pyw
+```
+
+**Other optional front ends:**
+```bash
+python frontends/qtapp.py          # Qt desktop app
+streamlit run frontends/stapp2.py  # alternative Streamlit-style UI
+python frontends/wechatapp.py      # WeChat bot front end
+```
+
+---
+
+## 5. Alternative Setup: Direct Connection
+
+**⚠️ Note**: direct connection may fail with unsupported model names; the proxy mode is recommended.
+
+### Example configuration
+
+```python
+native_oai_config_copilot = {
+ 'name': 'copilot-pro',
+ 'apikey': 'gho_your_oauth_token',
+ 'apibase': 'https://api.githubcopilot.com/chat/completions',
+ 'model': 'gpt-4',
+ 'api_mode': 'chat_completions',
+ 'stream': True,
+ 'extra_headers': {
+ "Editor-Version": "vscode/1.85.1",
+ "Editor-Plugin-Version": "copilot/1.155.0",
+ "Copilot-Integration-Id": "vscode-chat",
+ "User-Agent": "GitHubCopilotChat/0.35.0"
+ },
+}
+```
+
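Requests that omit any of the `extra_headers` above tend to be rejected, so it can help to sanity-check a config before sending. A small sketch (the helper is ours, not part of GenericAgent):

```python
REQUIRED_HEADERS = {
    "Editor-Version",
    "Editor-Plugin-Version",
    "Copilot-Integration-Id",
    "User-Agent",
}

def missing_copilot_headers(extra_headers: dict) -> set:
    """Return the required header names absent from a config's extra_headers."""
    return REQUIRED_HEADERS - set(extra_headers)

cfg_headers = {
    "Editor-Version": "vscode/1.85.1",
    "Editor-Plugin-Version": "copilot/1.155.0",
    "Copilot-Integration-Id": "vscode-chat",
    "User-Agent": "GitHubCopilotChat/0.35.0",
}
print(missing_copilot_headers(cfg_headers))  # set()
```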
+---
+
+## 6. Troubleshooting
+
+### Q1: Error "The requested model is not supported"
+
+**Cause**: the GitHub Copilot API expects litellm-style model names (such as `github_copilot/gpt-4`), not the plain model names.
+
+**Fix**: use the litellm proxy mode.
+
+### Q2: Error "Access to this endpoint is forbidden"
+
+**Cause**: wrong endpoint or wrong token type.
+
+**Fix**:
+- Use `https://api.githubcopilot.com`, not `api.individual.githubcopilot.com`
+- Make sure you are using an OAuth token (`gho_` prefix), not a PAT (`ghp_` prefix)
+
+### Q3: Error "MissingSchema: Invalid URL 'auto/v1/chat/completions'"
+
+**Cause**: `apibase: 'auto'` cannot be resolved by GenericAgent and ends up used as a literal, schemeless URL.
+
+**Fix**: use a full URL, or the litellm proxy mode.
+
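The Q3 failure is reproducible with any schemeless base: `'auto' + '/v1/chat/completions'` is a relative URL, which `requests` rejects with `MissingSchema`. A quick validity check (the helper name is ours):

```python
from urllib.parse import urlparse

def is_usable_apibase(base: str) -> bool:
    """A usable apibase needs an explicit http/https scheme and a host."""
    parts = urlparse(base)
    return parts.scheme in ("http", "https") and bool(parts.netloc)

print(is_usable_apibase("auto"))                      # False
print(is_usable_apibase("http://localhost:8000/v1"))  # True
```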
+### Q4: Network connection timeouts
+
+**Fix**: set HTTP proxy environment variables:
+
+```bash
+set HTTP_PROXY=http://127.0.0.1:6789
+set HTTPS_PROXY=http://127.0.0.1:6789
+```
+
+---
+
+## 7. Available Models
+
+### OpenAI models
+| Model name | litellm format | Notes |
+|----------|-------------|------|
+| `gpt-4` | `github_copilot/gpt-4` | Standard GPT-4 model |
+| `gpt-5` | `github_copilot/gpt-5` | GPT-5 model |
+| `gpt-5.1-codex` | `github_copilot/gpt-5.1-codex` | Code-focused model |
+| `gpt-5.2-codex` | `github_copilot/gpt-5.2-codex` | Improved code model |
+| `gpt-5.4` | `github_copilot/gpt-5.4` | Latest GPT-5.4 model |
+| `gpt-5.4-mini` | `github_copilot/gpt-5.4-mini` | Lightweight, faster responses |
+
+### Anthropic models
+| Model name | litellm format | Notes |
+|----------|-------------|------|
+| `claude-sonnet-4.5` | `github_copilot/claude-sonnet-4.5` | Long-context support |
+| `claude-opus-4.5` | `github_copilot/claude-opus-4.5` | Flagship model |
+| `claude-opus-4.6` | `github_copilot/claude-opus-4.6` | Newer revision |
+| `claude-opus-4.7` | `github_copilot/claude-opus-4.7` | Latest revision |
+
+### Google models
+| Model name | litellm format | Notes |
+|----------|-------------|------|
+| `gemini-2.5-pro` | `github_copilot/gemini-2.5-pro` | Gemini flagship model |
+| `gemini-3-flash` | `github_copilot/gemini-3-flash` | Lightweight, fast variant |
+| `gemini-3.1-pro` | `github_copilot/gemini-3.1-pro` | Latest version |
+
+---
+
+## 📝 Configuration Templates
+
+### Full litellm_config.yaml template
+
+```yaml
+model_list:
+ - model_name: gpt-4
+ litellm_params:
+ model: github_copilot/gpt-4
+ api_key: gho_your_token_here
+ extra_headers:
+ Editor-Version: "vscode/1.85.1"
+ Editor-Plugin-Version: "copilot/1.155.0"
+ Copilot-Integration-Id: "vscode-chat"
+ User-Agent: "GitHubCopilotChat/0.35.0"
+
+ - model_name: gpt-5
+ litellm_params:
+ model: github_copilot/gpt-5
+ api_key: gho_your_token_here
+ extra_headers:
+ Editor-Version: "vscode/1.85.1"
+ Editor-Plugin-Version: "copilot/1.155.0"
+ Copilot-Integration-Id: "vscode-chat"
+ User-Agent: "GitHubCopilotChat/0.35.0"
+
+ - model_name: claude-sonnet
+ litellm_params:
+ model: github_copilot/claude-sonnet-4.5
+ api_key: gho_your_token_here
+ extra_headers:
+ Editor-Version: "vscode/1.85.1"
+ Editor-Plugin-Version: "copilot/1.155.0"
+ Copilot-Integration-Id: "vscode-chat"
+ User-Agent: "GitHubCopilotChat/0.35.0"
+```
+
+---
+
+## 📌 Summary
+
+| Setting | Value |
+|--------|-----|
+| Recommended mode | litellm proxy |
+| Endpoint | `http://localhost:8000/v1` (proxy) |
+| Token type | OAuth token (`gho_` prefix) |
+| Model format | `github_copilot/{model_name}` |
+| Required headers | Editor-Version, Editor-Plugin-Version, Copilot-Integration-Id, User-Agent |
+
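The `github_copilot/{model_name}` convention from the summary can be applied mechanically when generating config entries. A small sketch (the helper is ours):

```python
def to_litellm_model(name: str) -> str:
    """Map a plain Copilot model name to its litellm identifier."""
    prefix = "github_copilot/"
    return name if name.startswith(prefix) else prefix + name

print(to_litellm_model("gpt-4"))                 # github_copilot/gpt-4
print(to_litellm_model("github_copilot/gpt-5"))  # unchanged
```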
+---
+
+**Last updated**: April 2026
+**Version**: v1.0
\ No newline at end of file
diff --git a/agentmain.py b/agentmain.py
index 937fb7eb..ebda8220 100644
--- a/agentmain.py
+++ b/agentmain.py
@@ -1,4 +1,4 @@
-import os, sys, threading, queue, time, json, re, random, locale
+import os, sys, threading, queue, time, json, re, random, locale, base64, mimetypes
os.environ.setdefault('GA_LANG', 'zh' if any(k in (locale.getlocale()[0] or '').lower() for k in ('zh', 'chinese')) else 'en')
if sys.stdout is None: sys.stdout = open(os.devnull, "w")
elif hasattr(sys.stdout, 'reconfigure'): sys.stdout.reconfigure(errors='replace')
@@ -6,7 +6,7 @@
elif hasattr(sys.stderr, 'reconfigure'): sys.stderr.reconfigure(errors='replace')
sys.path.append(os.path.abspath(os.path.join(os.path.dirname(__file__), '..')))
-from llmcore import reload_mykeys, LLMSession, ToolClient, ClaudeSession, MixinSession, NativeToolClient, NativeClaudeSession, NativeOAISession
+from llmcore import LLMSession, ToolClient, ClaudeSession, MixinSession, NativeToolClient, NativeClaudeSession, NativeOAISession
from agent_loop import agent_runner_loop
from ga import GenericAgentHandler, smart_format, get_global_memory, format_error, consume_file
@@ -39,24 +39,30 @@ def get_system_prompt():
prompt += get_global_memory()
return prompt
+def build_multimodal_user_content(text, images):
+ content = [{"type": "text", "text": text}]
+ for path in images or []:
+ if not path or not os.path.isfile(path):
+ continue
+ mime = mimetypes.guess_type(path)[0] or 'application/octet-stream'
+ if not mime.startswith('image/'):
+ continue
+ try:
+ with open(path, 'rb') as f:
+ data = base64.b64encode(f.read()).decode('ascii')
+ except OSError:
+ continue
+ content.append({
+ "type": "image",
+ "source": {"type": "base64", "media_type": mime, "data": data}
+ })
+ return content
+
class GeneraticAgent:
def __init__(self):
script_dir = os.path.dirname(os.path.abspath(__file__))
os.makedirs(os.path.join(script_dir, 'temp'), exist_ok=True)
- self.lock = threading.Lock()
- self.task_dir = None
- self.history = []
- self.task_queue = queue.Queue()
- self.is_running = False; self.stop_sig = False
- self.llm_no = 0; self.inc_out = False
- self.handler = None; self.verbose = True
- self.load_llm_sessions()
-
- def load_llm_sessions(self):
- mykeys, changed = reload_mykeys()
- if not changed and hasattr(self, 'llmclients'): return
- try: oldhistory = self.llmclient.backend.history
- except: oldhistory = None
+ from llmcore import mykeys
llm_sessions = []
for k, cfg in mykeys.items():
if not any(x in k for x in ['api', 'config', 'cookie']): continue
@@ -75,11 +81,16 @@ def load_llm_sessions(self):
else: llm_sessions[i] = ToolClient(mixin)
except Exception as e: print(f'[WARN] Failed to init MixinSession with cfg {s["mixin_cfg"]}: {e}')
self.llmclients = llm_sessions
- self.llmclient = self.llmclients[self.llm_no%len(self.llmclients)]
- if oldhistory: self.llmclient.backend.history = oldhistory
-
+ self.lock = threading.Lock()
+ self.task_dir = None
+ self.history = []
+ self.task_queue = queue.Queue()
+ self.is_running = False; self.stop_sig = False
+ self.llm_no = 0; self.inc_out = False
+ self.handler = None; self.verbose = True
+ self.llmclient = self.llmclients[self.llm_no]
+
def next_llm(self, n=-1):
- self.load_llm_sessions()
self.llm_no = ((self.llm_no + 1) if n < 0 else n) % len(self.llmclients)
lastc = self.llmclient
self.llmclient = self.llmclients[self.llm_no]
@@ -89,9 +100,7 @@ def next_llm(self, n=-1):
name = self.get_llm_name(model=True)
if 'glm' in name or 'minimax' in name or 'kimi' in name: load_tool_schema('_cn')
else: load_tool_schema()
- def list_llms(self):
- self.load_llm_sessions()
- return [(i, self.get_llm_name(b), i == self.llm_no) for i, b in enumerate(self.llmclients)]
+ def list_llms(self): return [(i, self.get_llm_name(b), i == self.llm_no) for i, b in enumerate(self.llmclients)]
def get_llm_name(self, b=None, model=False):
b = self.llmclient if b is None else b
if isinstance(b, dict): return 'BADCONFIG_MIXIN'
@@ -129,27 +138,39 @@ def run(self):
while True:
task = self.task_queue.get()
raw_query, source, images, display_queue = task["query"], task["source"], task.get("images") or [], task["output"]
- raw_query = self._handle_slash_cmd(raw_query, display_queue)
- if raw_query is None:
- self.task_queue.task_done(); continue
self.is_running = True
- rquery = smart_format(raw_query.replace('\n', ' '), max_str_len=200)
- self.history.append(f"[USER]: {rquery}")
-
- sys_prompt = get_system_prompt() + getattr(self.llmclient.backend, 'extra_sys_prompt', '')
- script_dir = os.path.dirname(os.path.abspath(__file__))
- handler = GenericAgentHandler(self, self.history, os.path.join(script_dir, 'temp'))
- if self.handler and 'key_info' in self.handler.working:
- ki = re.sub(r'\n\[SYSTEM\] 此为.*?工作记忆[。\n]*', '', self.handler.working['key_info']) # 去旧
- handler.working['key_info'] = ki
- handler.working['passed_sessions'] = ps = self.handler.working.get('passed_sessions', 0) + 1
- if ps > 0: handler.working['key_info'] += f'\n[SYSTEM] 此为 {ps} 个对话前设置的key_info,若已在新任务,先更新或清除工作记忆。\n'
- self.handler = handler
- # although new handler, the **full** history is in llmclient, so it is full history!
- gen = agent_runner_loop(self.llmclient, sys_prompt, raw_query,
- handler, TOOLS_SCHEMA, max_turns=70, verbose=self.verbose)
+ full_resp = ""
try:
- full_resp = ""; last_pos = 0
+ raw_query = self._handle_slash_cmd(raw_query, display_queue)
+ if raw_query is None:
+ continue
+
+ rquery = smart_format(raw_query.replace('\n', ' '), max_str_len=200)
+ self.history.append(f"[USER]: {rquery}")
+
+ sys_prompt = get_system_prompt() + getattr(self.llmclient.backend, 'extra_sys_prompt', '')
+ script_dir = os.path.dirname(os.path.abspath(__file__))
+                print(f"[DEBUG] Creating GenericAgentHandler, working directory: {script_dir}")
+                handler = GenericAgentHandler(self, self.history, script_dir)
+                print(f"[DEBUG] Handler created, cwd = {handler.cwd}")
+ if self.handler and 'key_info' in self.handler.working:
+ ki = re.sub(r'\n\[SYSTEM\] 此为.*?工作记忆[。\n]*', '', self.handler.working['key_info']) # 去旧
+ handler.working['key_info'] = ki
+ handler.working['passed_sessions'] = ps = self.handler.working.get('passed_sessions', 0) + 1
+ if ps > 0: handler.working['key_info'] += f'\n[SYSTEM] 此为 {ps} 个对话前设置的key_info,若已在新任务,先更新或清除工作记忆。\n'
+ self.handler = handler
+ user_input = raw_query
+                if source == 'feishu' and len(self.history) > 1: # if there is prior history and the source is Feishu, inject it into the first-turn user_input (supports /restore context recovery)
+ user_input = handler._get_anchor_prompt() + f"\n\n### 用户当前消息\n{raw_query}"
+ initial_user_content = None
+ if images and isinstance(self.llmclient, NativeToolClient):
+ initial_user_content = build_multimodal_user_content(user_input, images)
+ #if 'gpt' in self.get_llm_name(model=True): handler._done_hooks.append('请确定任务是否完成,如果完成请给出信息完整的简报回答,如未完成需要继续工具调用直到完成任务,确实需要问用户应使用ask_user工具')
+ # although new handler, the **full** history is in llmclient, so it is full history!
+ gen = agent_runner_loop(self.llmclient, sys_prompt, user_input,
+ handler, TOOLS_SCHEMA, max_turns=70, verbose=self.verbose,
+ initial_user_content=initial_user_content)
+ last_pos = 0
for chunk in gen:
if consume_file(self.task_dir, '_stop'): self.abort()
if self.stop_sig: break
@@ -253,8 +274,6 @@ def run(self):
except Exception as e: print(f'[Reflect] on_done error: {e}')
if getattr(mod, 'ONCE', False): print('[Reflect] ONCE=True, exiting.'); break
else:
- try: import readline
- except Exception: pass
agent.inc_out = True
while True:
q = input('> ').strip()
diff --git a/check_permissions.py b/check_permissions.py
new file mode 100644
index 00000000..dd35ac58
--- /dev/null
+++ b/check_permissions.py
@@ -0,0 +1,103 @@
+import os
+import subprocess
+import sys
+
+def check_permissions():
+    print("=" * 60)
+    print("Permission diagnostics")
+    print("=" * 60)
+
+    # 1. Current working directory
+    try:
+        cwd = os.getcwd()
+        print(f"\n[1] Current working directory: {cwd}")
+    except Exception as e:
+        print(f"\n[1] Failed to get the current directory: {e}")
+
+    # 2. Parent directory access
+    print("\n[2] Trying to list the parent directory:")
+    try:
+        parent_contents = os.listdir('..')
+        print(f"   ✅ Listed parent directory: {len(parent_contents)} entries")
+        print(f"   Contents: {parent_contents[:10]}..." if len(parent_contents) > 10 else f"   Contents: {parent_contents}")
+    except PermissionError as e:
+        print(f"   ❌ Permission error: {e}")
+    except Exception as e:
+        print(f"   ⚠️ Other error: {e}")
+
+    # 3. memory directory
+    print("\n[3] Trying to list the memory directory:")
+    try:
+        memory_path = '../memory'
+        if os.path.exists(memory_path):
+            memory_contents = os.listdir(memory_path)
+            print(f"   ✅ Listed memory directory: {len(memory_contents)} entries")
+        else:
+            print(f"   ⚠️ memory directory does not exist: {memory_path}")
+    except PermissionError as e:
+        print(f"   ❌ Permission error: {e}")
+    except Exception as e:
+        print(f"   ⚠️ Other error: {e}")
+
+    # 4. E:\AI directory
+    print("\n[4] Trying to list the E:\\AI directory:")
+    try:
+        ai_path = 'E:\\AI'
+        if os.path.exists(ai_path):
+            ai_contents = os.listdir(ai_path)
+            print(f"   ✅ Listed E:\\AI directory: {len(ai_contents)} entries")
+        else:
+            print(f"   ⚠️ E:\\AI directory does not exist")
+    except PermissionError as e:
+        print(f"   ❌ Permission error: {e}")
+    except Exception as e:
+        print(f"   ⚠️ Other error: {e}")
+
+    # 5. User info
+    print("\n[5] User info:")
+    try:
+        if sys.platform == 'win32':
+            result = subprocess.run(['whoami'], capture_output=True, text=True)
+            print(f"   Current user: {result.stdout.strip()}")
+
+            # Check for administrator rights
+            import ctypes
+            is_admin = ctypes.windll.shell32.IsUserAnAdmin()
+            print(f"   Administrator: {'yes' if is_admin else 'no'}")
+    except Exception as e:
+        print(f"   Failed to get user info: {e}")
+
+    # 6. NTFS permissions (via icacls)
+    print("\n[6] NTFS permission check:")
+    try:
+        result = subprocess.run(
+            ['icacls', os.getcwd()],
+            capture_output=True,
+            text=True,
+            encoding='utf-8',
+            errors='replace'
+        )
+        print(f"   Permissions on the current directory:\n{result.stdout[:1000]}")
+    except Exception as e:
+        print(f"   Permission check failed: {e}")
+
+    # 7. Environment variables
+    print("\n[7] Environment variables:")
+    print(f"   PYTHONPATH: {os.environ.get('PYTHONPATH', 'not set')}")
+    print(f"   PATH: {os.environ.get('PATH', 'not set')[:100]}...")
+
+    # 8. Directory attributes
+    print("\n[8] Directory attribute check:")
+    try:
+        cwd_stat = os.stat('.')
+        print(f"   Current directory stat: {cwd_stat}")
+        print(f"   Mode bits: {oct(cwd_stat.st_mode)[-4:]}")
+    except Exception as e:
+        print(f"   Failed to get directory attributes: {e}")
+
+    print("\n" + "=" * 60)
+    print("Diagnostics complete!")
+ print("=" * 60)
+
+if __name__ == "__main__":
+ check_permissions()
\ No newline at end of file
diff --git a/compare_models.py b/compare_models.py
new file mode 100644
index 00000000..5f902c54
--- /dev/null
+++ b/compare_models.py
@@ -0,0 +1,28 @@
+import yaml, json, re
+
+# 1. Get models from yaml
+with open('litellm_config.yaml', 'r') as f:
+ yaml_data = yaml.safe_load(f)
+yaml_models = [m['model_name'] for m in yaml_data.get('model_list', [])]
+
+# 2. Get models from mykey.py
+with open('mykey.py', 'r', encoding='utf-8') as f:
+ mykey_content = f.read()
+# Find all native_oai_config_copilot_... blocks and extract model field
+blocks = re.findall(r'native_oai_config_copilot_.*?\s*=\s*\{.*?\}', mykey_content, re.DOTALL)
+mykey_models = []
+for block in blocks:
+ m = re.search(r'[\'\"]model[\'\"]\s*:\s*[\'\"](.*?)[\'\"]', block)
+ if m:
+ mykey_models.append(m.group(1))
+
+# 3. Comparison
+missing_in_litellm = [m for m in mykey_models if m not in yaml_models]
+missing_in_mykey = [m for m in yaml_models if m not in mykey_models]
+
+print(json.dumps({
+ 'yaml_models': yaml_models,
+ 'mykey_models': mykey_models,
+ 'missing_in_litellm': missing_in_litellm,
+ 'missing_in_mykey': missing_in_mykey
+}))
diff --git a/fix_start.py b/fix_start.py
new file mode 100644
index 00000000..d7386ff6
--- /dev/null
+++ b/fix_start.py
@@ -0,0 +1,50 @@
+import os, sys, subprocess, ctypes
+
+def is_admin():
+    try:
+        return ctypes.windll.shell32.IsUserAnAdmin()
+    except Exception:
+        return False
+
+def fix_permissions():
+ script_dir = os.path.dirname(os.path.abspath(__file__))
+ os.chdir(script_dir)
+
+ if script_dir not in sys.path:
+ sys.path.insert(0, script_dir)
+
+ os.environ['GA_LANG'] = 'zh'
+
+ temp_dir = os.path.join(script_dir, 'temp')
+ os.makedirs(temp_dir, exist_ok=True)
+
+    try:
+        os.listdir(temp_dir)
+        print('✅ temp directory accessible')
+    except Exception as e:
+        print(f'❌ temp directory access failed: {e}')
+        return False
+
+    try:
+        os.listdir('..')
+        print('✅ parent directory accessible')
+    except Exception as e:
+        print(f'❌ parent directory access failed: {e}')
+        return False
+
+ return True
+
+def main():
+    if not is_admin():
+        print('⚠️ Running as administrator is recommended')
+
+    print('🔧 Checking and repairing permissions...')
+    if not fix_permissions():
+        print('❌ Permission repair failed')
+        sys.exit(1)
+
+    print('🚀 Starting GenericAgent...')
+    subprocess.run([sys.executable, 'agentmain.py'])
+
+if __name__ == '__main__':
+ main()
diff --git a/frontends/qtapp.py b/frontends/qtapp.py
index 3d5de073..e7dc18f0 100644
--- a/frontends/qtapp.py
+++ b/frontends/qtapp.py
@@ -6,7 +6,7 @@
"""
from __future__ import annotations
-import math, os, sys, json, glob, re, base64, time, threading
+import math, os, sys, json, glob, re, time, threading
import queue as _queue
from datetime import datetime
from typing import Optional
@@ -453,14 +453,15 @@ def _save_history(history: list):
def _build_prompt_with_uploads(prompt: str, files: list) -> tuple:
"""
files: list of {'name': str, 'type': str, 'raw': bytes}
- returns (full_prompt, display_prompt, display_attachments)
+ returns (full_prompt, display_prompt, display_attachments, image_paths)
"""
if not files:
- return prompt, prompt, []
+ return prompt, prompt, [], []
os.makedirs("temp/uploaded", exist_ok=True)
attachment_chunks = ["\n\n[用户上传附件 — 文件已保存到本地磁盘,可用 file_read 工具读取]"]
display_attachments = []
+ image_paths = []
img_count, file_names = 0, []
for f in files:
@@ -479,12 +480,12 @@ def _build_prompt_with_uploads(prompt: str, files: list) -> tuple:
saved = "(保存失败)"
if mime.startswith("image/"):
- b64 = base64.b64encode(raw).decode()
attachment_chunks.append(
f"\n- [图片附件] {name} ({size} bytes)\n 磁盘路径: {saved}"
- f"\n data:{mime};base64,{b64}"
)
display_attachments.append({"type": "image", "name": name})
+ if saved != "(保存失败)":
+ image_paths.append(saved)
img_count += 1
elif ext in TEXT_FILE_EXTS:
text = raw.decode("utf-8", errors="replace")
@@ -507,7 +508,7 @@ def _build_prompt_with_uploads(prompt: str, files: list) -> tuple:
if file_names:
parts.append(f"{len(file_names)} 个文件({'、'.join(file_names)})")
display_prompt = f"{prompt}\n\n📎 已附带:{','.join(parts)}" if parts else prompt
- return prompt + "\n".join(attachment_chunks), display_prompt, display_attachments
+ return prompt + "\n".join(attachment_chunks), display_prompt, display_attachments, image_paths
# ── small reusable widgets ────────────────────────────────────────────────────
@@ -1681,7 +1682,7 @@ def _handle_send(self):
return
prompt = text or "请分析我上传的附件。"
- full_prompt, display_prompt, _ = _build_prompt_with_uploads(prompt, files)
+ full_prompt, display_prompt, _, image_paths = _build_prompt_with_uploads(prompt, files)
# Clear input state
self._input.clear()
@@ -1704,7 +1705,7 @@ def _handle_send(self):
self._set_stop_mode()
self._streaming_badge.show()
- self._display_queue = self.agent.put_task(full_prompt, source="user")
+ self._display_queue = self.agent.put_task(full_prompt, source="user", images=image_paths)
self._poll_timer.start(40)
def _poll_queue(self):
diff --git a/frontends/stapp.py b/frontends/stapp.py
index 59bade27..74616481 100644
--- a/frontends/stapp.py
+++ b/frontends/stapp.py
@@ -8,8 +8,10 @@
try: sys.stderr.reconfigure(errors='replace')
except: pass
script_dir = os.path.dirname(__file__)
-sys.path.append(os.path.abspath(os.path.join(script_dir, '..')))
+project_dir = os.path.abspath(os.path.join(script_dir, '..'))
+sys.path.append(project_dir)
sys.path.append(os.path.abspath(script_dir))
+os.chdir(project_dir)
import streamlit as st
import time, json, re, threading, queue
diff --git a/ga.py b/ga.py
index e6e7d9b4..685aa39c 100644
--- a/ga.py
+++ b/ga.py
@@ -554,9 +554,9 @@ def get_global_memory():
suffix = '_en' if os.environ.get('GA_LANG', '') == 'en' else ''
with open(os.path.join(script_dir, 'memory/global_mem_insight.txt'), 'r', encoding='utf-8', errors='replace') as f: insight = f.read()
with open(os.path.join(script_dir, f'assets/insight_fixed_structure{suffix}.txt'), 'r', encoding='utf-8') as f: structure = f.read()
- prompt += f'cwd = {os.path.join(script_dir, "temp")} (./)\n'
- prompt += f"\n[Memory] (../memory)\n"
- prompt += structure + '\n../memory/global_mem_insight.txt:\n'
+ prompt += f'cwd = {script_dir} (./)\n'
+ prompt += f"\n[Memory] (./memory)\n"
+ prompt += structure + '\n./memory/global_mem_insight.txt:\n'
prompt += insight + "\n"
except FileNotFoundError: pass
return prompt
diff --git a/launch.pyw b/launch.pyw
index e5b148f2..f0530e27 100644
--- a/launch.pyw
+++ b/launch.pyw
@@ -19,7 +19,7 @@ def get_screen_width():
def start_streamlit(port):
global proc
cmd = [sys.executable, "-m", "streamlit", "run", os.path.join(frontends_dir, "stapp.py"), "--server.port", str(port), "--server.address", "localhost", "--server.headless", "true"]
- proc = subprocess.Popen(cmd)
+ proc = subprocess.Popen(cmd, cwd=script_dir)
atexit.register(proc.kill)
def inject(text):
@@ -78,7 +78,14 @@ def idle_monitor():
except Exception as e:
print(f'[Idle Monitor] Error: {e}')
+def debug_cwd():
+    """Print working-directory debug info"""
+    print("[DEBUG] launch.pyw starting")
+    print(f"[DEBUG] Current working directory: {os.getcwd()}")
+    print(f"[DEBUG] Script directory: {script_dir}")
+
if __name__ == '__main__':
+ debug_cwd()
import argparse
parser = argparse.ArgumentParser()
parser.add_argument('port', nargs='?', default='0');
diff --git a/litellm_config.yaml b/litellm_config.yaml
new file mode 100644
index 00000000..9310ee3c
--- /dev/null
+++ b/litellm_config.yaml
@@ -0,0 +1,33 @@
+model_list:
+ # OpenAI Models
+ - model_name: gpt-4
+ litellm_params:
+ model: github_copilot/gpt-4
+ api_key: os.environ/GITHUB_COPILOT_TOKEN
+ extra_headers:
+ Editor-Version: "vscode/1.85.1"
+ Editor-Plugin-Version: "copilot/1.155.0"
+ Copilot-Integration-Id: "vscode-chat"
+ User-Agent: "GitHubCopilotChat/0.35.0"
+
+ # Anthropic Models
+ - model_name: claude-sonnet-4.5
+ litellm_params:
+ model: github_copilot/claude-sonnet-4.5
+ api_key: os.environ/GITHUB_COPILOT_TOKEN
+ extra_headers:
+ Editor-Version: "vscode/1.85.1"
+ Editor-Plugin-Version: "copilot/1.155.0"
+ Copilot-Integration-Id: "vscode-chat"
+ User-Agent: "GitHubCopilotChat/0.35.0"
+
+ # Google Models
+ - model_name: gemini-2.5-pro
+ litellm_params:
+ model: github_copilot/gemini-2.5-pro
+ api_key: os.environ/GITHUB_COPILOT_TOKEN
+ extra_headers:
+ Editor-Version: "vscode/1.85.1"
+ Editor-Plugin-Version: "copilot/1.155.0"
+ Copilot-Integration-Id: "vscode-chat"
+ User-Agent: "GitHubCopilotChat/0.35.0"
diff --git a/llmcore.py b/llmcore.py
index 96d5703f..645e090d 100644
--- a/llmcore.py
+++ b/llmcore.py
@@ -1,33 +1,26 @@
-import os, json, re, time, requests, sys, threading, urllib3, base64, importlib, uuid
+import os, json, re, time, requests, sys, threading, urllib3, base64, mimetypes, uuid
from datetime import datetime
urllib3.disable_warnings(urllib3.exceptions.InsecureRequestWarning)
_RESP_CACHE_KEY = str(uuid.uuid4())
def _load_mykeys():
- global _mykey_path
try:
- import mykey; importlib.reload(mykey); _mykey_path = mykey.__file__
- return {k: v for k, v in vars(mykey).items() if not k.startswith('_')}
+ import mykey; return {k: v for k, v in vars(mykey).items() if not k.startswith('_')}
except ImportError: pass
- _mykey_path = p = os.path.join(os.path.dirname(os.path.abspath(__file__)), 'mykey.json')
+ p = os.path.join(os.path.dirname(os.path.abspath(__file__)), 'mykey.json')
if not os.path.exists(p): raise Exception('[ERROR] mykey.py or mykey.json not found, please create one from mykey_template.')
with open(p, encoding='utf-8') as f: return json.load(f)
-_mykey_path = _mykey_mtime = None
-def reload_mykeys():
- global _mykey_mtime
- mt = os.stat(_mykey_path).st_mtime_ns if _mykey_path else -1
- if mt == _mykey_mtime: return globals().get('mykeys', {}), False
- mk = _load_mykeys(); _mykey_mtime = os.stat(_mykey_path).st_mtime_ns
- print(f'[Info] Load mykeys from {_mykey_path}')
- globals().update(mykeys=mk)
- if mk.get('langfuse_config'):
- try: from plugins import langfuse_tracing
- except Exception: pass
- return mk, True
-
def __getattr__(name): # once guard in PEP 562
- if name == 'mykeys': return reload_mykeys()[0]
+ if name in ('mykeys', 'proxies'):
+ mk = _load_mykeys()
+ proxy = mk.get("proxy", 'http://127.0.0.1:2082')
+ px = {"http": proxy, "https": proxy} if proxy else None
+ globals().update(mykeys=mk, proxies=px)
+ if mk.get('langfuse_config'):
+ try: from plugins import langfuse_tracing
+ except Exception: pass
+ return globals()[name]
raise AttributeError(f"module 'llmcore' has no attribute {name}")
def compress_history_tags(messages, keep_recent=10, max_len=800, force=False):
@@ -81,12 +74,6 @@ def _sanitize_leading_user_msg(msg):
msg['content'] = [{"type": "text", "text": '\n'.join(t for t in texts if t)}]
return msg
-_oldprint = print
-def safeprint(*argv):
- try: _oldprint(*argv)
- except OSError: pass
-print = safeprint
-
def trim_messages_history(history, context_win):
compress_history_tags(history)
cost = sum(len(json.dumps(m, ensure_ascii=False)) for m in history)
@@ -165,11 +152,6 @@ def _parse_claude_sse(resp_lines):
if not warn:
if not got_message_stop and not stop_reason: warn = "\n\n[!!! 流异常中断,未收到完整响应 !!!]"
elif stop_reason == "max_tokens": warn = "\n\n[!!! Response truncated: max_tokens !!!]"
- if current_block:
- if current_block["type"] == "tool_use":
- try: current_block["input"] = json.loads(tool_json_buf) if tool_json_buf else {}
- except: current_block["input"] = {"_raw": tool_json_buf}
- content_blocks.append(current_block); current_block = None
if warn:
print(f"[WARN] {warn.strip()}")
content_blocks.append({"type": "text", "text": warn}); yield warn
@@ -197,6 +179,7 @@ def _parse_openai_sse(resp_lines, api_mode="chat_completions"):
content_block: {type:'text', text:str} | {type:'tool_use', id:str, name:str, input:dict}
"""
content_text = ""
+ reasoning_text = ""
if api_mode == "responses":
seen_delta = False; fc_buf = {}; current_fc_idx = None
for line in resp_lines:
@@ -247,7 +230,6 @@ def _parse_openai_sse(resp_lines, api_mode="chat_completions"):
return blocks
else:
tc_buf = {} # index -> {id, name, args}
- reasoning_text = ""
for line in resp_lines:
if not line: continue
line = line.decode('utf-8', errors='replace') if isinstance(line, bytes) else line
@@ -259,7 +241,7 @@ def _parse_openai_sse(resp_lines, api_mode="chat_completions"):
ch = (evt.get("choices") or [{}])[0]
delta = ch.get("delta") or {}
if delta.get("reasoning_content"):
- reasoning_text += delta["reasoning_content"]
+ text = delta["reasoning_content"]; reasoning_text += text; yield text
if delta.get("content"):
text = delta["content"]; content_text += text; yield text
for tc in (delta.get("tool_calls") or []):
@@ -318,7 +300,7 @@ def _parse_openai_json(data, api_mode="chat_completions"):
msg = (data.get("choices") or [{}])[0].get("message", {})
reasoning = msg.get("reasoning_content", "")
if reasoning:
- blocks.append({"type": "thinking", "thinking": reasoning})
+ blocks.append({"type": "thinking", "thinking": reasoning}); yield reasoning
content = msg.get("content", "")
if content:
blocks.append({"type": "text", "text": content}); yield content
@@ -355,7 +337,6 @@ def _openai_stream(api_base, api_key, messages, model, api_mode='chat_completion
payload = {"model": model, "input": _to_responses_input(messages), "stream": stream,
"prompt_cache_key": _RESP_CACHE_KEY, "instructions": system or "You are an Omnipotent Executor."}
if reasoning_effort: payload["reasoning"] = {"effort": reasoning_effort}
- if max_tokens: payload["max_output_tokens"] = max_tokens
else:
url = auto_make_url(api_base, "chat/completions")
if system: messages = [{"role": "system", "content": system}] + messages
@@ -363,7 +344,7 @@ def _openai_stream(api_base, api_key, messages, model, api_mode='chat_completion
payload = {"model": model, "messages": messages, "stream": stream}
if stream: payload["stream_options"] = {"include_usage": True}
if temperature != 1: payload["temperature"] = temperature
- if max_tokens: payload["max_completion_tokens" if ml.startswith(("gpt-5", "o1", "o2", "o3", "o4")) else "max_tokens"] = max_tokens
+ if max_tokens: payload["max_tokens"] = max_tokens
if reasoning_effort: payload["reasoning_effort"] = reasoning_effort
if tools: payload["tools"] = _prepare_oai_tools(tools, api_mode)
RETRYABLE = {408, 409, 425, 429, 500, 502, 503, 504, 529}
@@ -438,7 +419,7 @@ def _to_responses_input(messages):
elif ptype == "image_url":
url = (part.get("image_url") or {}).get("url", "")
if url and role != "assistant": parts.append({"type": "input_image", "image_url": url})
- if len(parts) == 0: parts = [{"type": text_type, "text": str(content) if not isinstance(content, list) else '[empty]'}]
+ if len(parts) == 0: parts = [{"type": text_type, "text": str(content) or '[empty]'}]
result.append({"role": role, "content": parts})
pending = []
for tc in (msg.get("tool_calls") or []):
@@ -456,18 +437,16 @@ def _msgs_claude2oai(messages):
content = msg.get("content", "")
blocks = content if isinstance(content, list) else [{"type": "text", "text": str(content)}]
if role == "assistant":
- text_parts, tool_calls, reasoning = [], [], ""
+ text_parts, tool_calls = [], []
for b in blocks:
if not isinstance(b, dict): continue
- if b.get("type") == "thinking" and b.get("thinking"): reasoning = b["thinking"]
- elif b.get("type") == "text" and b.get("text"): text_parts.append({"type": "text", "text": b.get("text", "")})
+ if b.get("type") == "text" and b.get("text"): text_parts.append({"type": "text", "text": b.get("text", "")})
elif b.get("type") == "tool_use":
tool_calls.append({
"id": b.get("id") or '', "type": "function",
"function": {"name": b.get("name", ""), "arguments": json.dumps(b.get("input", {}), ensure_ascii=False)}
})
m = {"role": "assistant"}
- if reasoning: m["reasoning_content"] = reasoning
if text_parts: m["content"] = text_parts
else: m["content"] = ""
if tool_calls: m["tool_calls"] = tool_calls
@@ -500,7 +479,7 @@ def __init__(self, cfg):
self.api_key = cfg['apikey']
self.api_base = cfg['apibase'].rstrip('/')
self.model = cfg.get('model', '')
- self.context_win = cfg.get('context_win', 28000)
+ self.context_win = cfg.get('context_win', 24000)
self.history = []
self.lock = threading.Lock()
self.system = ""
@@ -521,7 +500,7 @@ def _enum(key, valid):
mode = str(cfg.get('api_mode', 'chat_completions')).strip().lower().replace('-', '_')
self.api_mode = 'responses' if mode in ('responses', 'response') else 'chat_completions'
self.temperature = cfg.get('temperature', 1)
- self.max_tokens = cfg.get('max_tokens')
+ self.max_tokens = cfg.get('max_tokens', 8192)
def _apply_claude_thinking(self, payload):
if self.thinking_type:
thinking = {"type": self.thinking_type}
@@ -553,16 +532,8 @@ def _ask_gen():
if not content.startswith("!!!Error:"): self.history.append({"role": "assistant", "content": [{"type": "text", "text": content}]})
return _ask_gen() if stream else ''.join(list(_ask_gen()))
-def _keep_claude_block(b): return not isinstance(b, dict) or b.get("type") != "thinking" or b.get("signature")
-def _drop_unsigned_thinking(messages):
- for m in messages:
- c = m.get("content")
- if isinstance(c, list): m["content"] = [b for b in c if _keep_claude_block(b)]
- return messages
-
class ClaudeSession(BaseSession):
def raw_ask(self, messages):
- if self.max_tokens is None: self.max_tokens = 8192
headers = {"x-api-key": self.api_key, "Content-Type": "application/json", "anthropic-version": "2023-06-01", "anthropic-beta": "prompt-caching-2024-07-31"}
payload = {"model": self.model, "messages": messages, "max_tokens": self.max_tokens, "stream": True}
if self.temperature != 1: payload["temperature"] = self.temperature
@@ -576,7 +547,7 @@ def raw_ask(self, messages):
yield (err := f"!!!Error: {e}")
return [{"type": "text", "text": err}]
def make_messages(self, raw_list):
- msgs = _drop_unsigned_thinking([{"role": m['role'], "content": list(m['content'])} for m in raw_list])
+ msgs = [{"role": m['role'], "content": list(m['content'])} for m in raw_list]
user_idxs = [i for i, m in enumerate(msgs) if m['role'] == 'user']
for idx in user_idxs[-2:]:
msgs[idx]["content"][-1] = dict(msgs[idx]["content"][-1], cache_control={"type": "ephemeral"})
@@ -610,6 +581,7 @@ def _fix_messages(messages):
class NativeClaudeSession(BaseSession):
def __init__(self, cfg):
super().__init__(cfg)
+ self.context_win = cfg.get("context_win", 28000)
self.fake_cc_system_prompt = cfg.get("fake_cc_system_prompt", False)
self.user_agent = cfg.get("user_agent", "claude-cli/2.1.113 (external, cli)")
self._session_id = str(uuid.uuid4())
@@ -617,8 +589,7 @@ def __init__(self, cfg):
self._device_id = uuid.uuid4().hex + uuid.uuid4().hex[:32]
self.tools = None
def raw_ask(self, messages):
- messages = _drop_unsigned_thinking(_fix_messages(messages))
- if self.max_tokens is None: self.max_tokens = 8192
+ messages = _fix_messages(messages)
model = self.model
beta_parts = ["claude-code-20250219", "interleaved-thinking-2025-05-14", "redact-thinking-2026-02-12", "prompt-caching-scope-2026-01-05"]
if "[1m]" in model.lower():
@@ -955,7 +926,7 @@ def _raw_ask(self, *args, **kwargs):
THINKING_PROMPT_ZH = """
### 行动规范(持续有效)
-每次回复(含工具调用轮)都先在回复文字中包含一个 中输出极简单行(<30字)物理快照:上次结果新信息+本次意图。此内容进入长期工作记忆。
+每次回复请先在回复文字中包含一个 中输出极简单行(<30字)物理快照:上次结果新信息+本次意图。此内容进入长期工作记忆。
\n**若用户需求未完成,必须进行工具调用!**
""".strip()
THINKING_PROMPT_EN = """
diff --git a/memory/global_mem_insight.txt b/memory/global_mem_insight.txt
new file mode 100644
index 00000000..431c2c13
--- /dev/null
+++ b/memory/global_mem_insight.txt
@@ -0,0 +1,27 @@
+# [Global Memory Insight]
+搜索:WEB-SEARCH(L2→**Baidu/Tavily/Brave/Serper/Exa 已验证** | Jina✅免费端点 +API Key(10M tokens,402 时自动回退))
+浏览器特殊操作: tmwebdriver_sop(文件上传/图搜/PDF blob/物理坐标/HttpOnly Cookie/autofill突破/跨域iframe/CDP/跨tab)
+键鼠: ljqCtrl_sop(禁pyautogui/先activate) 截图/视觉: ocr/vision_sop | 禁全屏截图,优先窗口
+定时:scheduled_task_sop | 自主:autonomous_operation_sop | watchdog/反射:agentmain --reflect
+手机:adb_ui.py
+
+需要时read L2 或 ls ../memory/ 查L3
+L0(META-SOP): memory_management_sop
+L2: 飞书:lark-cli | (按section扩展)
+L3: verify_sop(事实验证闭环) | memory_cleanup_sop(记忆整理) | skill_search | web_search_sop(国内搜索方案) | deep_research_sop(多源深度研究) | ui_detect.py | ocr_utils.py | subagent | web_setup_sop | plan_sop
+| procmem_scanner | keychain | ljqCtrl_sop+.py | tmwebdriver_sop | autonomous_operation_sop | scheduled_task_sop | vision_sop | adb_ui.py
+L4: L4_raw_sessions/
+
+[RULES]
+1. [效率型] 搜索先行: 搜文件名严禁不用es(禁PS递归/禁dir遍历), 优先看cwd,禁猜路径
+1a. [隐蔽型] Web搜索: 国内Google不可用,需配置国内后端(Brave/Bing/百度); deep-research是编排框架非搜索引擎
+2. [致命型] 交叉验证: 禁信摘要, 数值进详情页核实
+3. [隐蔽型] 编码安全: 禁PS cat/type用file_read; 改前必读; memory模块直接import(已在PATH,禁加虚假前缀)
+4. [致命型] 闭环: 物理模拟后确认; 3次失败请求干预; Git完整闭环
+5. [致命型] 进程: 禁无条件杀python(杀自己), 精确PID, 禁os.kill判活
+6. [效率型] 窗口: GUI状态优先win32gui枚举标题
+7. [隐蔽型] web JS: 输入用原生setter+事件链, 点击前检disabled, 注意引号转义; scan空/不全先稍等再scan, 禁首扫定论
+8. [致命型] SOP: 读SOP禁凭印象,有utils必用 | 复杂超长程任务/用户明确提及规划模式→读plan_sop
+9. [隐蔽型] 飞书CLI: code_run需shell=True(.cmd); API 404=开放平台权限未开通(非CLI问题)
+10. [隐蔽型] Everything: es.exe搜索需服务RUNNING(net start Everything); 服务STOPPED时搜索会超时挂死
+11. [致命型] 验证闭环: 每次报告结论前MUST执行验证动作(运行/截图/核对), 调用 scripts/verify_claims.py 或 verify_sop; 报告MUST包含 verify_claims 的 markdown 验证表, 否则该报告视为无效; 无工具证据的结论=无效结论
diff --git a/memory/memory_management_sop.md b/memory/memory_management_sop.md
index c72da832..42e271fd 100644
--- a/memory/memory_management_sop.md
+++ b/memory/memory_management_sop.md
@@ -1,90 +1,94 @@
-## 0. 核心公理 (Core Axioms - 最高优先级)
-1. **行动验证原则 (Action-Verified Only)**
- * **定义**:任何写入 L1/L2/L3 的信息,必须源自**成功的工具调用结果**(如 `shell` 执行成功、`file_read` 确认内容存在、代码运行通过)。
- * **禁止**:严禁将模型的“固有知识”、“推理猜测”、“未执行的计划”或“未验证的假设”作为事实写入。
- * **口号**:**No Execution, No Memory. (无行动,不记忆)**
-2. **神圣不可删改性 (Sanctity of Verified Data)**
- * **定义**:凡是经过行动验证的有效配置、避坑指南、关键路径,在重构(Refactoring/GC)时**严禁丢弃**。
- * **操作**:可以压缩文字、可以迁移层级(从 L2 移到 L3),但绝不能丢失信息的准确性和可追溯性。
- * 记忆修改时请极度小心,尽量不要overwrite或code run。只能少量patch,改不动宁愿不改。
-3. **禁止存储易变状态 (No Volatile State)**
- * **定义**:严禁存储随时间/会话高频变化的数据。
- * **示例**:当前时间戳、临时 Session ID、正在运行的 PID、某个具体绝对路径、连接的设备信息
-4. **最小充分指针 (Minimum Sufficient Pointer)**
- * 上层只留能定位下层的最短标识,多一词即冗余。
----
-## 记忆层级架构
-```
-L1: global_mem_insight.txt (极简索引层 - 严格控制 ≤30 行)
- ↓ 导航指向 (Pointer)
-L2: global_mem.txt (事实库层 - 现短但会膨胀)
- ↓ 详细引用 (Reference)
-L3: ../memory/ (记录库层 - 包含 .md/.py 等各类文件)
-L4: ../memory/L4_raw_sessions/ (历史会话层 - scheduler反射自动收集,可定位过往上下文)
-```
----
-## 各层职责与原则
-### L1:全局内存索引 (global_mem_insight.txt)
-**职责**:为 L2 和 L3 提供极简导航索引,确保关键能力可被发现。
-**特征**:
-- 体积限制:≤ 30 行(硬约束),< 1k tokens(期望)。严禁填写细节(除非极高频任务)
-- 内容:两层「场景关键词→记忆定位」映射 + RULES(红线规则 + 高频犯错点)
- - 第一层:高频场景 key→value(直接给出 sop/py/L2 section 名),自包含名称只写一词不重复翻译
- - 第二层:低频场景仅列关键词,需要时 read L2 或 ls L3 自行定位
- - 核心:场景触发词极重要(不索引则不知有此能力),但严禁写How-to细节
- - RULES:压缩版避坑准则,包含:
- - 红线规则(致命型):违反会导致进程终止或系统崩溃(如 `禁无条件杀python(会杀自己)`)
- - 红线规则(隐蔽型):违反不报错但产生错误结果(如 `搜索用google不用百度`)
- - 高频犯错点:容易遗忘的关键约束(如 `es(PATH有)` 防止找路径)
-- 更新:L2/L3 有新增/删除时,判断频率归入对应层。修改时请极度小心,不允许overwrite或code run。只能少量patch,改不动宁愿不改。
-**禁止**:严禁写入密码、API Key。允许内联非敏感触发参数(如代理端口)。不写 "How to" 或详细解释。严禁包含特定任务的技术细节(特定任务细节应该在L3)。更加严禁写入日志记录!
----
-### L2:全局事实库 (global_mem.txt)
-**职责**:存储全局环境性事实(路径、凭证、配置、常量等)。
-**特征**:
-- 趋势:随环境扩展而膨胀(可接受)
-- 内容:按 `## [SECTION]` 组织的事实条目
-- 同步:变化时更新 L1 的相应 TOPIC 导航行,只能导航
-**禁止**:禁止存储易变状态、禁止存储猜测、严禁存储大模型可推理的通用常识
----
-### L3:任务级精简记录库 (../memory/)
-职责:补充 L1/L2 无法容纳、但对**特定任务**未来复用至关重要的少量详细信息。内容必须在满足复用需求的前提下**尽可能短**。
-原则:
-- 只记录:跨会话仍重要、且难以通过少量 file_read / web_scan / 简单脚本快速重建的要点。
-- 优先写:该任务特有的隐藏前置条件、典型易踩坑点,一旦遗忘会导致高成本重试的信息。
-- 不记录:普通操作步骤、可在几步探测中重新获得的路径或状态信息。
-形式:
-- SOP(*_sop.md):为单一任务或小类任务保留极简的「关键前置 + 典型坑」清单,避免长篇教程。
-- 工具脚本(*.py):仅封装高复用、逻辑相对复杂且不希望每次都重新推理的处理流程。
----
-## L1 ↔ L2/L3 同步规则
-| 操作 | L1 同步 |
-|---------|--------|
-| L2/L3 新增场景 | 新建默认低频→L3列表加文件名(自解释不加描述,反直觉场景才能加括号触发词) |
-| L2/L3 删除场景 | 删除对应层的关键词/映射行 |
-| L2/L3 修改值 | 若不影响场景定位则不动 L1 |
-| 发现通用避坑规律 | 压缩为一句加入 RULES |
-
-> **同步红线**:L1 只写关键词/名称,禁搬细节。需要评估L1中的token数和索引效用。
-
----
-## 信息分类快速决策树
-```
-"这条信息该放哪层?"
-
-是『环境特异性事实』? (IP、非标路径、凭证、ID、API 密钥等,大模型 Zero-shot 无法生成准确)
- ├─ YES → L2 (global_mem.txt)
- │ 然后 → 按频率归入 L1 第一层(key→value)或第二层(仅关键词)
- │
- └─ NO
- ↓
- 是『通用操作规律』? (全局性避坑指南、排查方法、不针对特定任务的通用准则)
- ├─ YES → L1 [RULES] (仅限 1 句压缩准则)
- │
- └─ NO
- ↓
- 是『特定任务技术』? (艰难尝试才能成功,且未来还能用到的任务,如:微信解析参数、特定游戏坐标、临时工具配置)
- ├─ YES → L3 (../memory/ 专项 SOP 或脚本)
- │
- └─ NO → 判定为『通用常识』或『冗余信息』: 严禁存储,直接丢弃
+## 0. 核心公理 (Core Axioms - 最高优先级)
+1. **行动验证原则 (Action-Verified Only)**
+ * **定义**:任何写入 L1/L2/L3 的信息,必须源自**成功的工具调用结果**(如 `shell` 执行成功、`file_read` 确认内容存在、代码运行通过)。
+ * **禁止**:严禁将模型的“固有知识”、“推理猜测”、“未执行的计划”或“未验证的假设”作为事实写入。
+ * **口号**:**No Execution, No Memory. (无行动,不记忆)**
+2. **神圣不可删改性 (Sanctity of Verified Data)**
+ * **定义**:凡是经过行动验证的有效配置、避坑指南、关键路径,在重构(Refactoring/GC)时**严禁丢弃**。
+ * **操作**:可以压缩文字、可以迁移层级(从 L2 移到 L3),但绝不能丢失信息的准确性和可追溯性。
+ * 记忆修改时请极度小心,尽量不要overwrite或code run。只能少量patch,改不动宁愿不改。
+3. **禁止存储易变状态 (No Volatile State)**
+ * **定义**:严禁存储随时间/会话高频变化的数据。
+ * **示例**:当前时间戳、临时 Session ID、正在运行的 PID、某个具体绝对路径、连接的设备信息
+4. **最小充分指针 (Minimum Sufficient Pointer)**
+ * 上层只留能定位下层的最短标识,多一词即冗余。
+5. **写入前验证门控 (Pre-Write Verification Gate)**
+ * **定义**:在所有记忆写入(L1/L2/L3 的创建或修改)之前,必须运行验证命令核查要写入的事项是否真实成立。
+ * **操作**:使用 `scripts/verify_claims.py` 验证(优先),或手动 `file_read` 确认事实(`code_run` 仅限 verify_claims.py 不可用时)。写入后立即 `file_read` 确认修改生效。每次使用 verify_claims.py 前,先以 `from scripts.verify_claims import VerificationResult` 作为最小导入自检——导入失败则回退手动路径。
+ * **红线**:未经验证的结论禁止写入记忆。违反此条的记忆修改视为无效且需回滚。
+---
+## 记忆层级架构
+```
+L1: global_mem_insight.txt (极简索引层 - 严格控制 ≤30 行)
+ ↓ 导航指向 (Pointer)
+L2: global_mem.txt (事实库层 - 现短但会膨胀)
+ ↓ 详细引用 (Reference)
+L3: ../memory/ (记录库层 - 包含 .md/.py 等各类文件)
+L4: ../memory/L4_raw_sessions/ (历史会话层 - scheduler反射自动收集,可定位过往上下文)
+```
+---
+## 各层职责与原则
+### L1:全局内存索引 (global_mem_insight.txt)
+**职责**:为 L2 和 L3 提供极简导航索引,确保关键能力可被发现。
+**特征**:
+- 体积限制:≤ 30 行(硬约束),< 1k tokens(期望)。严禁填写细节(除非极高频任务)
+- 内容:两层「场景关键词→记忆定位」映射 + RULES(红线规则 + 高频犯错点)
+ - 第一层:高频场景 key→value(直接给出 sop/py/L2 section 名),自包含名称只写一词不重复翻译
+ - 第二层:低频场景仅列关键词,需要时 read L2 或 ls L3 自行定位
+ - 核心:场景触发词极重要(不索引则不知有此能力),但严禁写How-to细节
+ - RULES:压缩版避坑准则,包含:
+ - 红线规则(致命型):违反会导致进程终止或系统崩溃(如 `禁无条件杀python(会杀自己)`)
+ - 红线规则(隐蔽型):违反不报错但产生错误结果(如 `搜索用google不用百度`)
+ - 高频犯错点:容易遗忘的关键约束(如 `es(PATH有)` 防止找路径)
+- 更新:L2/L3 有新增/删除时,判断频率归入对应层。修改时请极度小心,不允许overwrite或code run。只能少量patch,改不动宁愿不改。
+**禁止**:严禁写入密码、API Key。允许内联非敏感触发参数(如代理端口)。不写 "How to" 或详细解释。严禁包含特定任务的技术细节(特定任务细节应该在L3)。更加严禁写入日志记录!
+---
+### L2:全局事实库 (global_mem.txt)
+**职责**:存储全局环境性事实(路径、凭证、配置、常量等)。
+**特征**:
+- 趋势:随环境扩展而膨胀(可接受)
+- 内容:按 `## [SECTION]` 组织的事实条目
+- 同步:变化时更新 L1 的相应 TOPIC 导航行,只能导航
+**禁止**:禁止存储易变状态、禁止存储猜测、严禁存储大模型可推理的通用常识
+---
+### L3:任务级精简记录库 (../memory/)
+职责:补充 L1/L2 无法容纳、但对**特定任务**未来复用至关重要的少量详细信息。内容必须在满足复用需求的前提下**尽可能短**。
+原则:
+- 只记录:跨会话仍重要、且难以通过少量 file_read / web_scan / 简单脚本快速重建的要点。
+- 优先写:该任务特有的隐藏前置条件、典型易踩坑点,一旦遗忘会导致高成本重试的信息。
+- 不记录:普通操作步骤、可在几步探测中重新获得的路径或状态信息。
+形式:
+- SOP(*_sop.md):为单一任务或小类任务保留极简的「关键前置 + 典型坑」清单,避免长篇教程。
+- 工具脚本(*.py):仅封装高复用、逻辑相对复杂且不希望每次都重新推理的处理流程。
+---
+## L1 ↔ L2/L3 同步规则
+| 操作 | L1 同步 |
+|---------|--------|
+| L2/L3 新增场景 | 新建默认低频→L3列表加文件名(自解释不加描述,反直觉场景才能加括号触发词) |
+| L2/L3 删除场景 | 删除对应层的关键词/映射行 |
+| L2/L3 修改值 | 若不影响场景定位则不动 L1 |
+| 发现通用避坑规律 | 压缩为一句加入 RULES |
+
+> **同步红线**:L1 只写关键词/名称,禁搬细节。需要评估L1中的token数和索引效用。
+
+---
+## 信息分类快速决策树
+```
+"这条信息该放哪层?"
+
+是『环境特异性事实』? (IP、非标路径、凭证、ID、API 密钥等,大模型 Zero-shot 无法生成准确)
+ ├─ YES → L2 (global_mem.txt)
+ │ 然后 → 按频率归入 L1 第一层(key→value)或第二层(仅关键词)
+ │
+ └─ NO
+ ↓
+ 是『通用操作规律』? (全局性避坑指南、排查方法、不针对特定任务的通用准则)
+ ├─ YES → L1 [RULES] (仅限 1 句压缩准则)
+ │
+ └─ NO
+ ↓
+ 是『特定任务技术』? (艰难尝试才能成功,且未来还能用到的任务,如:微信解析参数、特定游戏坐标、临时工具配置)
+ ├─ YES → L3 (../memory/ 专项 SOP 或脚本)
+ │
+ └─ NO → 判定为『通用常识』或『冗余信息』: 严禁存储,直接丢弃
```
\ No newline at end of file
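The new axiom 5 (Pre-Write Verification Gate) prescribes a verify-then-write flow: attempt a minimal import self-check of `scripts/verify_claims.py`, and fall back to a manual `file_read`-style check when the import fails. A minimal sketch of that gate, under the SOP's stated rules — `scripts.verify_claims` / `VerificationResult` are the repo's names from the SOP, while `pre_write_gate` itself is a hypothetical wrapper:

```python
import os

def pre_write_gate(claim: str, evidence_path: str) -> bool:
    """Return True only when the claim is verified, per the SOP's gate."""
    try:
        # Minimal import self-check before trusting the verifier tool
        from scripts.verify_claims import VerificationResult  # noqa: F401
        # ...call the real verifier here when the import succeeds...
        return True
    except ImportError:
        # Fallback path: manually confirm the evidence file contains the claim
        if not os.path.exists(evidence_path):
            return False
        with open(evidence_path, encoding="utf-8") as fp:
            return claim in fp.read()

# A failed gate means: do not write, and roll back any partial memory edit.
```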
diff --git a/memory/vision_sop.md b/memory/vision_sop.md
index 2619a343..83360fd2 100644
--- a/memory/vision_sop.md
+++ b/memory/vision_sop.md
@@ -20,4 +20,4 @@ result = ask_vision(image, prompt="描述图片内容", backend="claude", timeou
1. 复制 `memory/vision_api.template.py` → `memory/vision_api.py`
2. 只改头部"用户配置区":去 `mykey.py` 里扫描变量名(⚠️ 只看名字,禁止输出 apikey 值),尝试找能用配置名填入 `CLAUDE_CONFIG_KEY` / `OPENAI_CONFIG_KEY`,`DEFAULT_BACKEND` 选后端,并测试
-3. 保底:没有可用 config 时去 `https://modelscope.cn/my/myaccesstoken` 申请 token 填入 `MODELSCOPE_API_KEY`
+3. 保底:没有可用 config 时去 `https://modelscope.cn/my/myaccesstoken` 申请 token 填入 `MODELSCOPE_API_KEY`
\ No newline at end of file
diff --git a/mykey.py b/mykey.py
new file mode 100644
index 00000000..851d4f5d
--- /dev/null
+++ b/mykey.py
@@ -0,0 +1,44 @@
+# ── GitHub Copilot Pro (多模型配置) ─────────────────────────────────────
+# 启动方式:先启动 litellm 代理(使用 .venv),然后在 UI 中选择模型
+# .venv\Scripts\litellm.exe --config litellm_config.yaml --port 8000
+
+# GPT-4 - 平衡性能与成本
+native_oai_config_copilot_gpt4 = {
+ 'name': 'copilot-gpt4',
+ 'apikey': 'anything',
+ 'apibase': 'http://localhost:8000/v1',
+ 'model': 'gpt-4',
+ 'api_mode': 'chat_completions',
+ 'stream': True,
+}
+
+# Claude Sonnet 4.5 - 长上下文支持 (200K+)
+native_oai_config_copilot_claude = {
+ 'name': 'copilot-claude',
+ 'apikey': 'anything',
+ 'apibase': 'http://localhost:8000/v1',
+ 'model': 'claude-sonnet-4.5',
+ 'api_mode': 'chat_completions',
+ 'stream': True,
+}
+
+# Gemini 2.5 Pro - 强多模态支持
+native_oai_config_copilot_gemini = {
+ 'name': 'copilot-gemini',
+ 'apikey': 'anything',
+ 'apibase': 'http://localhost:8000/v1',
+ 'model': 'gemini-2.5-pro',
+ 'api_mode': 'chat_completions',
+ 'stream': True,
+}
+
+# ── 模型自动轮询配置(已启用,仅使用当前已验证可用模型)────────────────────
+mixin_config = {
+ 'llm_nos': [
+ 'copilot-gpt4', # 首选:已验证可用
+ 'copilot-claude', # 兜底:已验证可用
+ 'copilot-gemini', # 兜底:已验证可用
+ ],
+    'max_retries': 4,               # 三个模型间轮询重试,避免长时间无效重试
+ 'base_delay': 0.5, # 指数退避起始延迟
+}
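The `mixin_config` above drives model failover: rotate through `llm_nos` round-robin, backing off exponentially from `base_delay`. A minimal sketch of that loop under those assumptions — `ask_with_failover` and the `call_model` callback are hypothetical names; only the config keys come from the file:

```python
import time

def ask_with_failover(prompt, call_model, cfg):
    """Round-robin over cfg['llm_nos'] with exponential backoff on failure."""
    names = cfg["llm_nos"]
    last_err = None
    for attempt in range(cfg["max_retries"]):
        name = names[attempt % len(names)]        # rotate through the pool
        try:
            return call_model(name, prompt)
        except Exception as e:                    # any backend error -> next model
            last_err = e
            time.sleep(cfg["base_delay"] * (2 ** attempt))  # 0.5s, 1s, 2s, ...
    raise RuntimeError(f"all {cfg['max_retries']} retries exhausted: {last_err}")
```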
diff --git a/mykey_template.py b/mykey_template.py
index dc681556..3df3a110 100644
--- a/mykey_template.py
+++ b/mykey_template.py
@@ -318,6 +318,22 @@
# 'read_timeout': 120, # int 秒 默认 30
# }
+# ── DeepSeek V4 (OpenAI 兼容协议) ────────────────────────────────────────────
+# DeepSeek V4 模型支持 OpenAI 兼容接口,使用 NativeOAISession。
+# 官方文档: https://platform.deepseek.com/api-docs/zh-cn/
+# native_oai_config_deepseek = {
+# 'name': 'deepseek-v4', # /llms 显示名 & mixin 引用名
+# 'apikey': 'sk-', # Bearer 鉴权,从 DeepSeek 控制台获取
+# 'apibase': 'https://api.deepseek.com/v1', # DeepSeek OpenAI 兼容端点
+# 'model': 'deepseek-chat', # DeepSeek V4 模型
+# 'max_retries': 3, # int 默认 1
+# 'connect_timeout': 10, # int 秒 默认 5(最小 1)
+# 'read_timeout': 120, # int 秒 默认 30(最小 5)
+# # 'temperature': 1.0, # float 默认 1.0
+# # 'max_tokens': 8192, # int 默认 8192
+# # 'context_win': 65536, # int 默认 24000;DeepSeek V4 支持 64K 上下文
+# }
+
# ══════════════════════════════════════════════════════════════════════════════
# 3. LLMSession / ClaudeSession — 非 Native 文本协议工具(deprecated)
diff --git a/reflect/scheduler.py b/reflect/scheduler.py
index 28d701ef..c1cdcefe 100644
--- a/reflect/scheduler.py
+++ b/reflect/scheduler.py
@@ -1,131 +1,132 @@
-import os, json, time as _time, socket as _socket, logging
-from datetime import datetime, timedelta
-
-# 端口锁:防止重复启动,bind失败时agentmain会直接崩溃退出
-# reload时mod.__dict__保留_lock,跳过重复绑定
-try: _lock
-except NameError:
- _lock = _socket.socket(_socket.AF_INET, _socket.SOCK_STREAM)
- _lock.bind(('127.0.0.1', 45762)); _lock.listen(1)
-
-INTERVAL = 120
-ONCE = False
-
-_dir = os.path.dirname(os.path.abspath(__file__))
-TASKS = os.path.join(_dir, '../sche_tasks')
-DONE = os.path.join(_dir, '../sche_tasks/done')
-_LOG = os.path.join(_dir, '../sche_tasks/scheduler.log')
-
-# --- 日志 ---
-_logger = logging.getLogger('scheduler')
-if not _logger.handlers:
- _logger.setLevel(logging.INFO)
- _fh = logging.FileHandler(_LOG, encoding='utf-8')
- _fh.setFormatter(logging.Formatter('%(asctime)s %(levelname)s %(message)s',
- datefmt='%Y-%m-%d %H:%M'))
- _logger.addHandler(_fh)
-
-# 默认最大延迟窗口(小时),超过此时间不触发
-DEFAULT_MAX_DELAY = 6
-_l4_t = 0 # last L4 archive time
-
-def _parse_cooldown(repeat):
- """解析repeat为冷却时间(比实际周期略短,防漂移)"""
- if repeat == 'once': return timedelta(days=999999)
- if repeat in ('daily', 'weekday'): return timedelta(hours=20)
- if repeat == 'weekly': return timedelta(days=6)
- if repeat == 'monthly': return timedelta(days=27)
- if repeat.startswith('every_'):
- try:
- parts = repeat.split('_')
- n = int(parts[1].rstrip('hdm'))
- u = parts[1][-1]
- if u == 'h': return timedelta(hours=n)
- if u == 'm': return timedelta(minutes=n)
- if u == 'd': return timedelta(days=n)
- except (ValueError, IndexError):
- pass # fall through to warning below
- _logger.warning(f'Unknown repeat type: {repeat}, fallback to 20h cooldown')
- return timedelta(hours=20)
-
-def _last_run(tid, done_files):
- """找最近一次执行时间"""
- latest = None
- for df in done_files:
- if not df.endswith(f'_{tid}.md'): continue
- try:
- t = datetime.strptime(df[:15], '%Y-%m-%d_%H%M')
- if latest is None or t > latest: latest = t
- except: continue
- return latest
-
-def check():
- # L4 archive cron (silent, every 12h)
- global _l4_t
- if _time.time() - _l4_t > 43200:
- _l4_t = _time.time()
- try:
- import sys; sys.path.insert(0, os.path.join(_dir, '../memory/L4_raw_sessions'))
- from compress_session import batch_process
- raw_dir = os.path.join(_dir, '../temp/model_responses')
- r = batch_process(raw_dir, dry_run=False)
- print(f'[L4 cron] {r}')
- except Exception as e:
- _logger.error(f'L4 archive failed: {e}')
-
- if not os.path.isdir(TASKS): return None
- now = datetime.now()
- os.makedirs(DONE, exist_ok=True)
- done_files = set(os.listdir(DONE))
- for f in sorted(os.listdir(TASKS)):
- if not f.endswith('.json'): continue
- tid = f[:-5]
- try:
- with open(os.path.join(TASKS, f), encoding='utf-8') as fp:
- task = json.loads(fp.read())
- except Exception as e:
- _logger.error(f'JSON parse error for {f}: {e}')
- continue
- if not task.get('enabled', False): continue
-
- repeat = task.get('repeat', 'daily')
- sched = task.get('schedule', '00:00')
- try:
- h, m = map(int, sched.split(':'))
- except Exception as e:
- _logger.error(f'Invalid schedule format in {f}: {sched!r} ({e})')
- continue
-
- # weekday任务:周末跳过
- if repeat == 'weekday' and now.weekday() >= 5: continue
-
- # 还没到schedule时间就跳过
- if now.hour < h or (now.hour == h and now.minute < m): continue
-
- # 执行窗口检查:超过max_delay小时则跳过(防止开机太晚触发过时任务)
- max_delay = task.get('max_delay_hours', DEFAULT_MAX_DELAY)
- sched_minutes = h * 60 + m
- now_minutes = now.hour * 60 + now.minute
- if (now_minutes - sched_minutes) > max_delay * 60:
- _logger.info(f'SKIP {tid}: {now_minutes - sched_minutes}min past schedule, '
- f'exceeds max_delay={max_delay}h')
- continue
-
- # 检查冷却
- last = _last_run(tid, done_files)
- cooldown = _parse_cooldown(repeat)
- if last and (now - last) < cooldown: continue
-
- # 触发
- _logger.info(f'TRIGGER {tid} (repeat={repeat}, schedule={sched}, '
- f'last_run={last})')
- ts = now.strftime('%Y-%m-%d_%H%M')
- rpt = os.path.join(DONE, f'{ts}_{tid}.md')
- prompt = task.get('prompt', '')
- return (f'[定时任务] {tid}\n'
- f'[报告路径] {rpt}\n\n'
- f'先读 scheduled_task_sop 了解执行流程,然后执行以下任务:\n\n'
- f'{prompt}\n\n'
- f'完成后将执行报告写入 {rpt}。')
-
- return None
+import os, json, time as _time, socket as _socket, logging
+from datetime import datetime, timedelta
+
+# 端口锁:防止重复启动,bind失败时agentmain会直接崩溃退出
+# reload时mod.__dict__保留_lock,跳过重复绑定
+try: _lock
+except NameError:
+ _lock = _socket.socket(_socket.AF_INET, _socket.SOCK_STREAM)
+ _lock.bind(('127.0.0.1', 45762)); _lock.listen(1)
+
+INTERVAL = 120
+ONCE = False
+
+_dir = os.path.dirname(os.path.abspath(__file__))
+TASKS = os.path.join(_dir, '../sche_tasks')
+DONE = os.path.join(_dir, '../sche_tasks/done')
+_LOG = os.path.join(_dir, '../sche_tasks/scheduler.log')
+
+# --- 日志 ---
+os.makedirs(os.path.dirname(_LOG), exist_ok=True)
+_logger = logging.getLogger('scheduler')
+if not _logger.handlers:
+ _logger.setLevel(logging.INFO)
+ _fh = logging.FileHandler(_LOG, encoding='utf-8')
+ _fh.setFormatter(logging.Formatter('%(asctime)s %(levelname)s %(message)s',
+ datefmt='%Y-%m-%d %H:%M'))
+ _logger.addHandler(_fh)
+
+# 默认最大延迟窗口(小时),超过此时间不触发
+DEFAULT_MAX_DELAY = 6
+_l4_t = 0 # last L4 archive time
+
+def _parse_cooldown(repeat):
+ """解析repeat为冷却时间(比实际周期略短,防漂移)"""
+ if repeat == 'once': return timedelta(days=999999)
+ if repeat in ('daily', 'weekday'): return timedelta(hours=20)
+ if repeat == 'weekly': return timedelta(days=6)
+ if repeat == 'monthly': return timedelta(days=27)
+ if repeat.startswith('every_'):
+ try:
+ parts = repeat.split('_')
+ n = int(parts[1].rstrip('hdm'))
+ u = parts[1][-1]
+ if u == 'h': return timedelta(hours=n)
+ if u == 'm': return timedelta(minutes=n)
+ if u == 'd': return timedelta(days=n)
+ except (ValueError, IndexError):
+ pass # fall through to warning below
+ _logger.warning(f'Unknown repeat type: {repeat}, fallback to 20h cooldown')
+ return timedelta(hours=20)
+
+def _last_run(tid, done_files):
+ """找最近一次执行时间"""
+ latest = None
+ for df in done_files:
+ if not df.endswith(f'_{tid}.md'): continue
+ try:
+ t = datetime.strptime(df[:15], '%Y-%m-%d_%H%M')
+ if latest is None or t > latest: latest = t
+        except ValueError: continue
+ return latest
+
+def check():
+ # L4 archive cron (silent, every 12h)
+ global _l4_t
+ if _time.time() - _l4_t > 43200:
+ _l4_t = _time.time()
+ try:
+ import sys; sys.path.insert(0, os.path.join(_dir, '../memory/L4_raw_sessions'))
+ from compress_session import batch_process
+ raw_dir = os.path.join(_dir, '../temp/model_responses')
+ r = batch_process(raw_dir, dry_run=False)
+ print(f'[L4 cron] {r}')
+ except Exception as e:
+ _logger.error(f'L4 archive failed: {e}')
+
+ if not os.path.isdir(TASKS): return None
+ now = datetime.now()
+ os.makedirs(DONE, exist_ok=True)
+ done_files = set(os.listdir(DONE))
+ for f in sorted(os.listdir(TASKS)):
+ if not f.endswith('.json'): continue
+ tid = f[:-5]
+ try:
+ with open(os.path.join(TASKS, f), encoding='utf-8') as fp:
+ task = json.loads(fp.read())
+ except Exception as e:
+ _logger.error(f'JSON parse error for {f}: {e}')
+ continue
+ if not task.get('enabled', False): continue
+
+ repeat = task.get('repeat', 'daily')
+ sched = task.get('schedule', '00:00')
+ try:
+ h, m = map(int, sched.split(':'))
+ except Exception as e:
+ _logger.error(f'Invalid schedule format in {f}: {sched!r} ({e})')
+ continue
+
+ # weekday任务:周末跳过
+ if repeat == 'weekday' and now.weekday() >= 5: continue
+
+ # 还没到schedule时间就跳过
+ if now.hour < h or (now.hour == h and now.minute < m): continue
+
+ # 执行窗口检查:超过max_delay小时则跳过(防止开机太晚触发过时任务)
+ max_delay = task.get('max_delay_hours', DEFAULT_MAX_DELAY)
+ sched_minutes = h * 60 + m
+ now_minutes = now.hour * 60 + now.minute
+ if (now_minutes - sched_minutes) > max_delay * 60:
+ _logger.info(f'SKIP {tid}: {now_minutes - sched_minutes}min past schedule, '
+ f'exceeds max_delay={max_delay}h')
+ continue
+
+ # 检查冷却
+ last = _last_run(tid, done_files)
+ cooldown = _parse_cooldown(repeat)
+ if last and (now - last) < cooldown: continue
+
+ # 触发
+ _logger.info(f'TRIGGER {tid} (repeat={repeat}, schedule={sched}, '
+ f'last_run={last})')
+ ts = now.strftime('%Y-%m-%d_%H%M')
+ rpt = os.path.join(DONE, f'{ts}_{tid}.md')
+ prompt = task.get('prompt', '')
+ return (f'[定时任务] {tid}\n'
+ f'[报告路径] {rpt}\n\n'
+ f'先读 scheduled_task_sop 了解执行流程,然后执行以下任务:\n\n'
+ f'{prompt}\n\n'
+ f'完成后将执行报告写入 {rpt}。')
+
+ return None
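`_parse_cooldown` above maps a `repeat` string to a cooldown slightly below the nominal period (20h for daily, 6d for weekly, and so on) so triggers don't drift. The same parsing rules, reimplemented as a standalone sketch so the behavior is easy to check in isolation:

```python
from datetime import timedelta

def parse_cooldown(repeat: str) -> timedelta:
    fixed = {
        "once": timedelta(days=999999),      # effectively never repeats
        "daily": timedelta(hours=20), "weekday": timedelta(hours=20),
        "weekly": timedelta(days=6), "monthly": timedelta(days=27),
    }
    if repeat in fixed:
        return fixed[repeat]
    if repeat.startswith("every_"):          # e.g. every_6h / every_30m / every_2d
        try:
            spec = repeat.split("_")[1]
            n, unit = int(spec[:-1]), spec[-1]
            return {"h": timedelta(hours=n), "m": timedelta(minutes=n),
                    "d": timedelta(days=n)}[unit]
        except (ValueError, IndexError, KeyError):
            pass
    return timedelta(hours=20)               # unknown repeat -> 20h fallback
```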
diff --git a/scripts/brave_search.py b/scripts/brave_search.py
new file mode 100644
index 00000000..00ef9c91
--- /dev/null
+++ b/scripts/brave_search.py
@@ -0,0 +1,31 @@
+#!/usr/bin/env python3
+# -*- coding: utf-8 -*-
+"""Brave Search API Wrapper"""
+import requests, json, sys, os
+
+def brave_search(api_key, query, count=5):
+ """Call Brave Search API"""
+ params = {"q": query, "count": count}
+ headers = {"Accept": "application/json", "X-Subscription-Token": api_key}
+ resp = requests.get("https://api.search.brave.com/res/v1/web/search", params=params, headers=headers, timeout=10)
+ if resp.status_code != 200:
+ raise Exception(f"Brave API error: {resp.status_code} {resp.text[:200]}")
+ data = resp.json()
+ results = []
+ for r in data.get("web", {}).get("results", []):
+ results.append({
+ "title": r.get("title", ""),
+ "url": r.get("url", ""),
+ "description": r.get("description", "")
+ })
+ return results
+
+if __name__ == "__main__":
+ query = sys.argv[1] if len(sys.argv) > 1 else "AI agent"
+ api_key = os.environ.get("BRAVE_API_KEY")
+ if not api_key:
+ raise Exception("BRAVE_API_KEY not found")
+ results = brave_search(api_key, query)
+ # 确保输出UTF-8
+ sys.stdout.reconfigure(encoding="utf-8")
+ print(json.dumps(results, indent=2, ensure_ascii=False))
\ No newline at end of file
diff --git a/scripts/exa_search.py b/scripts/exa_search.py
new file mode 100644
index 00000000..3fde109b
--- /dev/null
+++ b/scripts/exa_search.py
@@ -0,0 +1,27 @@
+#!/usr/bin/env python3
+# -*- coding: utf-8 -*-
+"""Exa Semantic Search API Wrapper"""
+import requests, json, sys, os
+
+def exa_search(api_key, query, search_type="auto", count=5):
+    """Call Exa Semantic Search API"""
+    params = {
+        "query": query,
+        "numResults": count,
+        "type": search_type
+    }
+ headers = {"Authorization": f"Bearer {api_key}"}
+ resp = requests.post("https://api.exa.ai/search", json=params, headers=headers, timeout=10)
+ if resp.status_code != 200:
+ raise Exception(f"Exa API error: {resp.status_code} {resp.text}")
+ data = resp.json()
+ return [{'title': r.get('title', ''), 'url': r.get('url', ''), 'description': r.get('text', '')}
+ for r in data.get('results', [])]
+
+if __name__ == '__main__':
+ query = sys.argv[1] if len(sys.argv) > 1 else "AI agent"
+ api_key = os.environ.get('EXA_API_KEY')
+ if not api_key:
+ raise Exception("EXA_API_KEY not found")
+ results = exa_search(api_key, query)
+ print(json.dumps(results, indent=2, ensure_ascii=False))
\ No newline at end of file
diff --git a/scripts/jina_reader.py b/scripts/jina_reader.py
new file mode 100644
index 00000000..5db0e30f
--- /dev/null
+++ b/scripts/jina_reader.py
@@ -0,0 +1,136 @@
+#!/usr/bin/env python3
+# -*- coding: utf-8 -*-
+"""Jina Reader/Search API Wrapper - Supports Free Endpoint with Auto-Fallback
+
+Features:
+- Free endpoint: https://r.jina.ai/http:// (no API key, rate limited)
+- API endpoint: Bearer token (requires balance, 10M tokens quota)
+- Auto-fallback: 402 error -> automatically use free endpoint
+"""
+import requests, json, sys, os
+
+def jina_read_url(url, api_key=None, auto_fallback=True):
+ """Call Jina Reader API (URL to Markdown)
+
+ Args:
+ url: Target URL to read
+ api_key: Optional Jina API Key for higher rate limits
+ auto_fallback: If True, automatically fallback to free endpoint on 402 error
+
+ Returns:
+ Markdown content string
+ """
+ # Ensure URL has protocol
+ if not url.startswith("http://") and not url.startswith("https://"):
+ url = "https://" + url
+
+ # Try with API key first if provided
+ if api_key:
+ headers = {"Authorization": f"Bearer {api_key}"}
+ try:
+ resp = requests.get(f"https://r.jina.ai/{url}", headers=headers, timeout=10)
+ if resp.status_code == 200:
+ return resp.text
+ elif resp.status_code == 402 and auto_fallback:
+ # Auto-fallback to free endpoint
+ print(f"⚠️ Jina API 402 (insufficient balance), falling back to free endpoint...", file=sys.stderr)
+ return jina_read_url(url, api_key=None, auto_fallback=False)
+ else:
+ raise Exception(f"Jina API error {resp.status_code}: {resp.text[:200]}")
+ except requests.RequestException as e:
+ if auto_fallback:
+ print(f"⚠️ Jina API request failed ({e}), falling back to free endpoint...", file=sys.stderr)
+ return jina_read_url(url, api_key=None, auto_fallback=False)
+ raise
+ else:
+ # Free endpoint mode - no auth header
+ resp = requests.get(f"https://r.jina.ai/{url}", timeout=10)
+ if resp.status_code != 200:
+ raise Exception(f"Jina Reader error: {resp.status_code} {resp.text[:200]}")
+ return resp.text
+
+def jina_search(query, count=5, api_key=None):
+ """Call Jina Search API - requires API key with balance
+
+ Args:
+ query: Search query string
+ count: Number of results (default 5)
+ api_key: Optional, will try environment variable if not provided
+
+ Returns:
+ List of search results with title, url, description
+ """
+ if not api_key:
+ api_key = os.environ.get("JINA_API_KEY")
+
+ if not api_key:
+ raise Exception("JINA_API_KEY required for search. Use read mode with free endpoint instead.")
+
+ params = {"query": query, "limit": count}
+ headers = {"Authorization": f"Bearer {api_key}"}
+ resp = requests.post("https://r.jina.ai/search", json=params, headers=headers, timeout=10)
+
+ if resp.status_code == 402:
+ raise Exception(f"Jina Search 402 InsufficientBalanceError: Account needs recharge. Use read mode with free endpoint instead.")
+ elif resp.status_code != 200:
+ raise Exception(f"Jina API error: {resp.status_code} {resp.text[:200]}")
+
+ data = resp.json()
+ return [{"title": r.get("title", ""), "url": r.get("url", ""), "description": r.get("description", "")}
+ for r in data.get("data", [])]
+
+if __name__ == "__main__":
+ sys.stdout.reconfigure(encoding="utf-8")
+
+ if len(sys.argv) < 2:
+        print("Usage: python jina_reader.py read <url> [--api] [--no-fallback]")
+ print(" python jina_reader.py search 'query' [count]")
+ print("")
+ print("Modes:")
+ print(" read - Auto mode: try API key first, fallback to free endpoint on 402")
+ print(" read --api - Force API key mode (no fallback)")
+ print(" read --no-fallback - Disable auto fallback")
+ print(" search 'query' - Search API (requires JINA_API_KEY with balance)")
+ print("")
+ print("Environment: JINA_API_KEY (from registry or set manually)")
+ sys.exit(1)
+
+ mode = sys.argv[1]
+ api_key = os.environ.get("JINA_API_KEY")
+ use_api_force = "--api" in sys.argv
+ no_fallback = "--no-fallback" in sys.argv or use_api_force
+
+ if mode == "read":
+ url = sys.argv[2] if len(sys.argv) > 2 else "https://example.com"
+ if use_api_force and not api_key:
+ print("❌ --api requires JINA_API_KEY in environment", file=sys.stderr)
+ sys.exit(1)
+
+ # Auto mode: use api_key if available, with fallback
+ effective_key = api_key if (api_key and not no_fallback) else None
+
+ try:
+ content = jina_read_url(url, api_key=effective_key, auto_fallback=not no_fallback)
+ print(content[:3000])
+ except Exception as e:
+ print(f"❌ Error: {e}", file=sys.stderr)
+ sys.exit(1)
+
+ elif mode == "search":
+ if not api_key:
+ print("❌ JINA_API_KEY not found in environment", file=sys.stderr)
+            print("   For free usage, use: python jina_reader.py read <url>", file=sys.stderr)
+ sys.exit(1)
+
+ query = sys.argv[2] if len(sys.argv) > 2 else "AI agent"
+ count = int(sys.argv[3]) if len(sys.argv) > 3 else 5
+
+ try:
+ results = jina_search(query, count, api_key)
+ print(json.dumps(results, indent=2, ensure_ascii=False))
+ except Exception as e:
+ print(f"❌ Search error: {e}", file=sys.stderr)
+ sys.exit(1)
+ else:
+ print(f"Unknown mode: {mode}", file=sys.stderr)
+ sys.exit(1)
\ No newline at end of file
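The 402 auto-fallback in `jina_read_url` reduces to one decision: on insufficient balance, retry the same reader URL without the auth header. That decision isolated as a sketch — the `fetch` callback is a hypothetical stand-in for `requests.get`:

```python
def read_with_fallback(url, api_key, fetch):
    """fetch(url, key) -> (status_code, text); key=None hits the free endpoint."""
    if api_key:
        status, text = fetch(url, api_key)
        if status == 200:
            return text
        if status != 402:                 # only insufficient balance falls back
            raise RuntimeError(f"Jina API error {status}")
    # Free endpoint: same https://r.jina.ai/<url> form, no Authorization header
    status, text = fetch(url, None)
    if status != 200:
        raise RuntimeError(f"Jina free endpoint error {status}")
    return text
```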
diff --git a/scripts/search.py b/scripts/search.py
new file mode 100644
index 00000000..588f64c5
--- /dev/null
+++ b/scripts/search.py
@@ -0,0 +1,138 @@
+#!/usr/bin/env python3
+# -*- coding: utf-8 -*-
+"""
+Unified Search Entry - GA 搜索工具统一入口
+支持:Baidu,Tavily,Brave,Serper,Exa,Jina
+调用示例:
+ python search.py "query text"
+ python search.py '{"query": "...", "engine": "tavily", "count": 5}'
+"""
+import sys, io, os, json, re
+sys.stdout = io.TextIOWrapper(sys.stdout.buffer, encoding='utf-8', errors='replace')
+
+def parse_query(args):
+ """解析输入参数: 支持纯文本和JSON"""
+ if not args:
+ return {"query": "", "count": 5, "engine": "baidu"}
+ first = args[0]
+ if first.startswith('{'):
+ try:
+ return json.loads(first)
+        except json.JSONDecodeError:
+ pass
+ return {"query": first, "count": 5, "engine": "baidu"}
+
+def call_baidu(query, count=5):
+ """调用Baidu Search API"""
+ from search_baidu import baidu_search
+ request_body = {"query": query, "count": count}
+ results = baidu_search(os.environ['BAIDU_API_KEY'], request_body)
+ return results
+
+def call_tavily(query, count=5):
+    """Call the Tavily Search API"""
+ from search_tavily import tavily_search
+ api_key = os.environ['TAVILY_API_KEY']
+ results = tavily_search(api_key, query, count)
+ return results
+
+def call_brave(query, count=5):
+    """Call the Brave Search API"""
+ import requests
+ api_key = os.environ['BRAVE_API_KEY']
+ resp = requests.get(
+ 'https://api.search.brave.com/res/v1/web/search',
+ params={'q': query, 'count': count},
+ headers={'X-Subscription-Token': api_key},
+ timeout=10
+ )
+ if resp.status_code != 200:
+ raise Exception(f"Brave API error: {resp.status_code}")
+ data = resp.json()
+ return [{'title': r.get('title'), 'url': r.get('url'), 'description': r.get('description')}
+ for r in data.get('web', {}).get('results', [])]
+
+def call_serper(query, count=5):
+    """Call the Serper (Google) API"""
+ import requests
+ api_key = os.environ['GOOGLE_SERPER_API_KEY']
+ resp = requests.post(
+ 'https://google.serper.dev/search',
+ json={'q': query, 'num': count},
+ headers={'X-API-KEY': api_key},
+ timeout=10
+ )
+ if resp.status_code != 200:
+ raise Exception(f"Serper API error: {resp.status_code}")
+ data = resp.json()
+ return [{'title': r.get('title'), 'url': r.get('link'), 'description': r.get('snippet')}
+ for r in data.get('organic', [])]
+
+def call_exa(query, count=5):
+    """Call the Exa semantic search API"""
+ import requests
+ api_key = os.environ['EXA_API_KEY']
+ resp = requests.post(
+ 'https://api.exa.ai/search',
+ json={'query': query, 'numResults': count},
+ headers={'Authorization': f'Bearer {api_key}'},
+ timeout=10
+ )
+ if resp.status_code != 200:
+ raise Exception(f"Exa API error: {resp.status_code}")
+ data = resp.json()
+ return [{'title': r.get('title', ''), 'url': r.get('url', ''), 'description': r.get('text', '')}
+ for r in data.get('results', [])]
+
+def call_jina_read(query, count=5):
+    """Call Jina Reader (search mode)"""
+ import requests
+ api_key = os.environ['JINA_API_KEY']
+ resp = requests.post(
+ 'https://r.jina.ai/search',
+ json={'query': query, 'limit': count},
+ headers={'Authorization': f'Bearer {api_key}'},
+ timeout=10
+ )
+ if resp.status_code != 200:
+ raise Exception(f"Jina API error: {resp.status_code}")
+ data = resp.json()
+ return [{'title': r.get('title', ''), 'url': r.get('url', ''), 'description': r.get('description', '')}
+ for r in data.get('results', [])]
+
+def main():
+ params = parse_query(sys.argv[1:])
+ query = params.get('query', '')
+ count = int(params.get('count', params.get('max_results', 5)))
+ engine = params.get('engine', 'baidu').lower()
+
+ if not query:
+        print("Usage: python search.py 'query text' or a JSON object with query/engine/count keys")
+ sys.exit(1)
+
+ try:
+ if engine == 'baidu':
+ results = call_baidu(query, count)
+ elif engine == 'tavily':
+ results = call_tavily(query, count)
+ elif engine == 'brave':
+ results = call_brave(query, count)
+ elif engine == 'serper':
+ results = call_serper(query, count)
+ elif engine == 'exa':
+ results = call_exa(query, count)
+ elif engine == 'jina':
+ results = call_jina_read(query, count)
+ else:
+ print(f"Unknown engine: {engine}")
+ sys.exit(1)
+
+ print(json.dumps(results, indent=2, ensure_ascii=False))
+ except Exception as e:
+ print(f"Error: {str(e)}", file=sys.stderr)
+ sys.exit(1)
+
+if __name__ == '__main__':
+ main()
\ No newline at end of file
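
The dispatcher above accepts either a bare query string or a JSON argument. A minimal standalone sketch of `parse_query`'s rule (defaults mirrored from the script, not imported from it):

```python
import json

# Mirrors parse_query in scripts/search.py: a leading '{' selects JSON
# parsing; anything else (including malformed JSON) falls back to a
# plain-text query with the script's defaults.
def parse_query(args):
    if not args:
        return {"query": "", "count": 5, "engine": "baidu"}
    first = args[0]
    if first.startswith('{'):
        try:
            return json.loads(first)
        except json.JSONDecodeError:
            pass  # treat as plain text
    return {"query": first, "count": 5, "engine": "baidu"}

print(parse_query(["transformer attention"]))
print(parse_query(['{"query": "llm", "engine": "tavily", "count": 3}']))
```

Note that a malformed JSON argument is silently treated as a literal query string, so a caller's typo in the JSON will not raise an error.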
diff --git a/scripts/search_baidu.py b/scripts/search_baidu.py
new file mode 100644
index 00000000..063b7b58
--- /dev/null
+++ b/scripts/search_baidu.py
@@ -0,0 +1,22 @@
+#!/usr/bin/env python3
+# -*- coding: utf-8 -*-
+"""Baidu Search API Wrapper"""
+import requests, json
+
+def baidu_search(api_key, request_body):
+ """Call Baidu Search API"""
+ url = "https://ai.baidu.com/aisearch"
+ headers = {"Content-Type": "application/json"}
+ params = {"ak": api_key}
+ resp = requests.post(url, params=params, json=request_body, headers=headers, timeout=10)
+ if resp.status_code != 200:
+ raise Exception(f"Baidu API error: {resp.status_code} {resp.text}")
+ data = resp.json()
+ results = []
+ for r in data.get('results', []):
+ results.append({
+ 'title': r.get('title', ''),
+ 'url': r.get('url', ''),
+ 'description': r.get('abstract', '')
+ })
+ return results
\ No newline at end of file
diff --git a/scripts/search_tavily.py b/scripts/search_tavily.py
new file mode 100644
index 00000000..076b830c
--- /dev/null
+++ b/scripts/search_tavily.py
@@ -0,0 +1,17 @@
+#!/usr/bin/env python3
+# -*- coding: utf-8 -*-
+"""Tavily Search API Wrapper"""
+import requests, json
+
+def tavily_search(api_key, query, max_results=5):
+ """Call Tavily Search API"""
+ resp = requests.post(
+ 'https://api.tavily.com/search',
+ json={'query': query, 'api_key': api_key, 'max_results': max_results},
+ timeout=10
+ )
+ if resp.status_code != 200:
+ raise Exception(f"Tavily API error: {resp.status_code}")
+ data = resp.json()
+ return [{'title': r.get('title', ''), 'url': r.get('url', ''), 'description': r.get('content', '')}
+ for r in data.get('results', [])]
\ No newline at end of file
diff --git a/scripts/search_verification.py b/scripts/search_verification.py
new file mode 100644
index 00000000..79e11d62
--- /dev/null
+++ b/scripts/search_verification.py
@@ -0,0 +1,279 @@
+#!/usr/bin/env python3
+# -*- coding: utf-8 -*-
+"""
+GA search-engine auto-verifier (search_verification.py)
+Purpose: run after every deployment/update to keep the SOP from drifting out of sync with the actual implementation
+
+Usage: python scripts/search_verification.py [--verbose] [--json] [--strict]
+"""
+
+import os
+import sys
+import json
+import argparse
+from dataclasses import dataclass, asdict, field
+from typing import Dict, List, Optional
+sys.stdout.reconfigure(encoding='utf-8')
+
+# ============================================================
+# Configuration: all search engines declared as available
+# ============================================================
+SEARCH_TOOLS = {
+    "baidu": {
+        "description": "Baidu Qianfan Chinese search",
+        "api_key_env": "BAIDU_API_KEY",
+        "script_path": r"C:\Users\Administrator\.agents\skills\baidu-search\scripts\search.py",
+        "skill_md_path": r"C:\Users\Administrator\.agents\skills\baidu-search\SKILL.md",
+        "priority": "P0 - Chinese default",
+        "call_format": "json",  # requires a JSON argument: '{"query": "...", "count": 3}'
+    },
+    "tavily": {
+        "description": "Tavily English/AI search",
+        "api_key_env": "TAVILY_API_KEY",
+        "script_path": r"C:\Users\Administrator\.agents\skills\tavily-search\scripts\search.py",
+        "skill_md_path": r"C:\Users\Administrator\.agents\skills\tavily-search\SKILL.md",
+        "priority": "P0 - English default",
+        "call_format": "text_or_json",  # plain-text query or JSON: '{"query": "..."}'
+    },
+    "brave": {
+        "description": "Brave global web search",
+        "api_key_env": "BRAVE_SEARCH_API_KEY",
+        "script_path": r"C:\Users\Administrator\.agents\skills\brave-search\scripts\search.py",
+        "skill_md_path": r"C:\Users\Administrator\.agents\skills\brave-search\SKILL.md",
+        "priority": "P1 - English fallback",
+    },
+    "serper": {
+        "description": "Serper Google search results",
+        "api_key_env": "SERPER_API_KEY",
+        "script_path": r"C:\Users\Administrator\.agents\skills\serper-search\scripts\search.py",
+        "skill_md_path": r"C:\Users\Administrator\.agents\skills\serper-search\SKILL.md",
+        "priority": "P1 - Google fallback",
+    },
+    "exa": {
+        "description": "Exa semantic search",
+        "api_key_env": "EXA_API_KEY",
+        "script_path": r"C:\Users\Administrator\.agents\skills\exa-search\scripts\search.py",
+        "skill_md_path": r"C:\Users\Administrator\.agents\skills\exa-search\SKILL.md",
+        "priority": "P2 - Semantic mining",
+    },
+    "jina": {
+        "description": "Jina Reader full-text extraction",
+        "api_key_env": "JINA_API_KEY",
+        "script_path": r"C:\Users\Administrator\.agents\skills\jina-reader\scripts\reader.py",
+        "skill_md_path": r"C:\Users\Administrator\.agents\skills\jina-reader\SKILL.md",
+        "priority": "P2 - Full text extraction",
+    },
+}
+
+# ============================================================
+# Result data structures
+# ============================================================
+@dataclass
+class ToolStatus:
+    """Status information for a single search engine"""
+ name: str
+ description: str
+ priority: str
+
+ script_exists: bool = False
+ api_key_configured: bool = False
+ skill_md_exists: bool = False
+ quick_test_passed: bool = False
+
+    overall_status: str = ""  # "verified", "partially_configured", "not_implemented"
+ issues: List[str] = field(default_factory=list)
+
+ def to_dict(self):
+ return asdict(self)
+
+def evaluate_status(tool: ToolStatus, tool_config: dict) -> str:
+    """Evaluate a search engine's overall status"""
+ if tool.quick_test_passed:
+ return "verified"
+
+ has_anything = tool.script_exists or tool.api_key_configured or tool.skill_md_exists
+
+ if has_anything:
+ return "partially_configured"
+ else:
+ return "not_implemented"
+
+def verify_tool(name: str, config: dict, verbose: bool = False) -> ToolStatus:
+    """Verify every component of a single search engine"""
+ result = ToolStatus(
+ name=name,
+ description=config["description"],
+ priority=config["priority"],
+ )
+
+    # 1. Check the API key
+ api_key_env = config.get("api_key_env", "")
+ api_key_value = os.environ.get(api_key_env, "")
+ if api_key_value and len(api_key_value.strip()) > 5:
+ result.api_key_configured = True
+ if verbose:
+ print(f" [OK] API Key configured ({api_key_env})")
+ else:
+ result.issues.append(f"API Key missing: {api_key_env}")
+ if verbose:
+ print(f" [MISSING] API Key: {api_key_env}")
+
+    # 2. Check the script file
+ script_path = config.get("script_path", "")
+ if script_path and os.path.isfile(script_path):
+ result.script_exists = True
+ file_size = os.path.getsize(script_path)
+ if verbose:
+ print(f" [OK] Script exists: {script_path} ({file_size} bytes)")
+ else:
+ result.issues.append(f"Script not found: {script_path}")
+ if verbose:
+ print(f" [MISSING] Script: {script_path}")
+
+    # 3. Check SKILL.md
+ skill_md = config.get("skill_md_path", "")
+ if skill_md and os.path.isfile(skill_md):
+ result.skill_md_exists = True
+ if verbose:
+ print(f" [OK] SKILL.md exists: {skill_md}")
+ else:
+ result.issues.append(f"SKILL.md not found: {skill_md}")
+ if verbose:
+ print(f" [MISSING] SKILL.md: {skill_md}")
+
+    # 4. Quick functional test (live requests only for Baidu and Tavily)
+ if name == "baidu" and result.api_key_configured and result.script_exists:
+ try:
+ import subprocess
+            # Baidu expects a JSON argument (json is already imported at module level)
+ test_query = json.dumps({"query": "test", "count": 1}, ensure_ascii=False)
+ proc = subprocess.run([
+ sys.executable, script_path, test_query
+ ], capture_output=True, text=True, timeout=15)
+ result.quick_test_passed = proc.returncode == 0
+ if verbose:
+ print(f" [{'OK' if result.quick_test_passed else 'FAIL'}] Quick test: {'passed' if result.quick_test_passed else 'failed'}")
+ if not result.quick_test_passed and verbose:
+ print(f" [DEBUG] stdout: {proc.stdout[:200]}")
+ print(f" [DEBUG] stderr: {proc.stderr[:200]}")
+ except Exception as e:
+ if verbose:
+ print(f" [ERROR] Quick test failed: {e}")
+
+ elif name == "tavily" and result.api_key_configured and result.script_exists:
+ try:
+ import subprocess
+            # Tavily accepts a plain-text query (the first argument is the query)
+ proc = subprocess.run([
+ sys.executable, script_path, "test query", "--results", "1"
+ ], capture_output=True, text=True, timeout=15)
+ result.quick_test_passed = proc.returncode == 0
+ if verbose:
+ print(f" [{'OK' if result.quick_test_passed else 'FAIL'}] Quick test: {'passed' if result.quick_test_passed else 'failed'}")
+ if not result.quick_test_passed and verbose:
+ print(f" [DEBUG] stdout: {proc.stdout[:200]}")
+ print(f" [DEBUG] stderr: {proc.stderr[:200]}")
+ except Exception as e:
+ if verbose:
+ print(f" [ERROR] Quick test failed: {e}")
+
+    # 5. Overall status evaluation
+ result.overall_status = evaluate_status(result, config)
+
+ return result
+
+# ============================================================
+# Main entry point
+# ============================================================
+def run_verification(verbose: bool = False, json_output: bool = False,
+                     fail_on_warn: bool = False) -> Dict:
+    """Run the full verification pipeline"""
+ print("=" * 70)
+ print("🔍 GA Search Tools Verification")
+ print(f"📅 Date: {__import__('datetime').datetime.now().strftime('%Y-%m-%d %H:%M:%S')}")
+ print("=" * 70)
+
+ results = {}
+ status_counts = {
+ "verified": 0,
+ "partially_configured": 0,
+ "not_implemented": 0,
+ }
+
+ for name, config in SEARCH_TOOLS.items():
+ if verbose:
+ print(f"\n[{name.upper()}] Testing...")
+
+ result = verify_tool(name, config, verbose=verbose)
+ results[name] = result
+
+ status_counts[result.overall_status] = status_counts.get(result.overall_status, 0) + 1
+
+        # Print a one-line summary
+ emoji_map = {
+ "verified": "✅",
+ "partially_configured": "⚠️",
+ "not_implemented": "❌",
+ }
+ emoji = emoji_map.get(result.overall_status, "?")
+ print(f"{emoji:<5} {name:12s} | Status: {result.overall_status:25s} | Issues: {len(result.issues)}")
+
+    # Summary
+ total = len(SEARCH_TOOLS)
+ verified_count = status_counts["verified"]
+
+ print("\n" + "=" * 70)
+ print("📊 VERIFICATION SUMMARY")
+ print("=" * 70)
+ print(f"Total tools declared: {total}")
+ print(f"Verified working: {verified_count} ({verified_count/total*100:.1f}%)")
+ print(f"Partially configured: {status_counts['partially_configured']}")
+ print(f"Not implemented: {status_counts['not_implemented']}")
+
+ if verified_count < total * 0.5:
+ print("\n⚠️ WARNING: Less than 50% of declared tools are verified!")
+
+    # JSON output
+ if json_output:
+ json_results = {k: v.to_dict() for k, v in results.items()}
+ json_results["summary"] = {
+ "timestamp": __import__('datetime').datetime.now().isoformat(),
+ "total": total,
+ "verified_count": verified_count,
+ "status_counts": status_counts,
+ "pass_rate": f"{verified_count/total*100:.1f}%",
+ }
+ print("\n--- JSON Output ---")
+ print(json.dumps(json_results, indent=2, ensure_ascii=False))
+
+    # Return whether verification passed
+ pass_threshold = 0.5 if not fail_on_warn else 1.0
+ passed = verified_count >= total * pass_threshold
+
+ if not passed:
+ print(f"\n🚨 VERIFICATION FAILED: Pass rate {verified_count/total*100:.1f}% below threshold {pass_threshold*100:.0f}%")
+ else:
+ print(f"\n✅ VERIFICATION PASSED: Pass rate {verified_count/total*100:.1f}% meets threshold {pass_threshold*100:.0f}%")
+
+ return {"passed": passed, "results": results, "summary": {
+ "total": total,
+ "verified_count": verified_count,
+ "status_counts": status_counts,
+ "pass_rate": f"{verified_count/total*100:.1f}%",
+ }}
+
+if __name__ == "__main__":
+ parser = argparse.ArgumentParser(description="GA Search Tools Verification")
+ parser.add_argument("--verbose", "-v", action="store_true", help="Show detailed output")
+ parser.add_argument("--json", "-j", action="store_true", help="Output as JSON")
+ parser.add_argument("--strict", action="store_true", help="Fail unless ALL tools work")
+ args = parser.parse_args()
+
+ result = run_verification(
+ verbose=args.verbose,
+ json_output=args.json,
+ fail_on_warn=args.strict,
+ )
+
+ sys.exit(0 if result["passed"] else 1)
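
`evaluate_status` collapses the component checks into three buckets. A self-contained sketch of that rollup (logic mirrored from the function above, with the booleans passed directly rather than via a `ToolStatus`):

```python
# Mirrors evaluate_status in search_verification.py: a passing live test
# wins outright; any partial evidence (script, key, or SKILL.md) only
# earns "partially_configured"; no evidence means never implemented.
def evaluate_status(quick_test_passed, script_exists, api_key_configured, skill_md_exists):
    if quick_test_passed:
        return "verified"
    if script_exists or api_key_configured or skill_md_exists:
        return "partially_configured"
    return "not_implemented"

print(evaluate_status(True, True, True, True))     # verified
print(evaluate_status(False, True, False, False))  # partially_configured
```

Only tools with a live quick test (Baidu and Tavily here) can ever reach "verified"; the rest top out at "partially_configured".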
diff --git a/scripts/serper_search.py b/scripts/serper_search.py
new file mode 100644
index 00000000..1d066d49
--- /dev/null
+++ b/scripts/serper_search.py
@@ -0,0 +1,23 @@
+#!/usr/bin/env python3
+# -*- coding: utf-8 -*-
+"""Serper (Google) Search API Wrapper"""
+import requests, json, sys, os
+
+def serper_search(api_key, query, count=5):
+ """Call Serper Search API (Google) - use x-api-key header"""
+ headers = {"x-api-key": api_key, "Content-Type": "application/json"}
+ payload = {"q": query, "num": count}
+ resp = requests.post("https://google.serper.dev/search", headers=headers, json=payload, timeout=10)
+ if resp.status_code != 200:
+ raise Exception(f"Serper API error: {resp.status_code} {resp.text[:200]}")
+ data = resp.json()
+ return [{"title": r.get("title", ""), "url": r.get("link", ""), "description": r.get("snippet", "")}
+ for r in data.get("organic", [])]
+
+if __name__ == "__main__":
+    query = sys.argv[1] if len(sys.argv) > 1 else "AI agent"
+    api_key = os.environ.get("SERPER_API_KEY")
+    if not api_key:
+        raise Exception("SERPER_API_KEY not found in environment")
+    results = serper_search(api_key, query)
+    print(json.dumps(results, indent=2, ensure_ascii=False))
\ No newline at end of file
diff --git a/scripts/verify_claims.py b/scripts/verify_claims.py
new file mode 100644
index 00000000..91e69bdf
--- /dev/null
+++ b/scripts/verify_claims.py
@@ -0,0 +1,144 @@
+#!/usr/bin/env python
+# -*- coding: utf-8 -*-
+"""
+verify_claims.py - generic fact-verification script
+Purpose: give the Agent a single "last check before reporting" entry point
+Design principle: every conclusion must be backed by tool-output evidence; conclusions without evidence are automatically marked UNVERIFIED
+"""
+
+import subprocess
+import sys
+import os
+import json
+from typing import List, Dict
+
+
+class VerificationResult:
+ def __init__(self, claim: str):
+ self.claim = claim
+ self.evidence: List[Dict] = []
+ self.status: str = "UNVERIFIED"
+ self.summary: str = ""
+
+ def add_evidence(self, action: str, tool: str, output_summary: str, passed: bool):
+ self.evidence.append({
+ "action": action, "tool": tool,
+ "output": output_summary, "passed": passed
+ })
+
+    def finalize(self):
+        if not self.evidence:
+            self.status = "UNVERIFIED"
+            self.summary = "No tool evidence at all -> conclusion invalid"
+        elif all(e["passed"] for e in self.evidence):
+            self.status = "PASS"
+            self.summary = "All checks passed"
+        else:
+            self.status = "FAIL"
+            failed = [e for e in self.evidence if not e["passed"]]
+            self.summary = f"{len(self.evidence)} checks run, {len(failed)} failed"
+        return self.to_markdown()
+
+    def to_markdown(self) -> str:
+        lines = [f"## Verification: {self.claim}", "",
+                 f"**Final verdict: {self.status}**",
+                 f"**Summary:** {self.summary}", "",
+                 "| # | Check | Tool | Key output | PASS/FAIL |",
+                 "|---|-------|------|-----------|:--------:|"]
+        for i, e in enumerate(self.evidence, 1):
+            ps = "PASS" if e["passed"] else "FAIL"
+            lines.append(f"| {i} | {e['action']} | {e['tool']} | {e['output']} | {ps} |")
+        lines.append("")
+        return "\n".join(lines)
+
+ def to_json(self) -> str:
+ return json.dumps({
+ "claim": self.claim, "status": self.status,
+ "summary": self.summary, "evidence": self.evidence
+ }, ensure_ascii=False, indent=2)
+
+
+def verify_claim(claim: str, evidence_builder=None) -> VerificationResult:
+ vr = VerificationResult(claim)
+ if evidence_builder:
+ evidence_builder(vr)
+ vr.finalize()
+ return vr
+
+
+def run_command_verification(claim: str, command: str, success_keywords: list = None) -> VerificationResult:
+ vr = VerificationResult(claim)
+ try:
+ result = subprocess.run(command, shell=True, capture_output=True, text=True, timeout=30)
+ output = (result.stdout + "\n" + result.stderr).strip()
+ exit_ok = result.returncode == 0
+ kw_ok = True
+ missing_kw = []
+ if success_keywords:
+ for kw in success_keywords:
+ if kw.lower() not in output.lower():
+ kw_ok = False
+ missing_kw.append(kw)
+ passed = exit_ok and kw_ok
+ summary = output[:200].replace("\n", " | ")
+ if not exit_ok:
+ summary += f" [exit={result.returncode}]"
+        if missing_kw:
+            summary += f" [missing keywords: {missing_kw}]"
+ vr.add_evidence(f"run: {command[:80]}", "code_run", summary, passed)
+ except Exception as e:
+        vr.add_evidence(f"run: {command[:80]}", "code_run", f"exception: {str(e)[:100]}", False)
+ vr.finalize()
+ return vr
+
+
+def verify_file_content(claim: str, file_path: str, expected_content: str = None) -> VerificationResult:
+ vr = VerificationResult(claim)
+    if not os.path.exists(file_path):
+        vr.add_evidence(f"check file exists: {file_path}", "file_read", "file not found", False)
+ vr.finalize()
+ return vr
+ try:
+ with open(file_path, 'r', encoding='utf-8') as f:
+ content = f.read()
+        if expected_content:
+            found = expected_content in content
+            vr.add_evidence(f"check: {file_path}", "file_read",
+                            f"contains expected content={found}", passed=found)
+        else:
+            vr.add_evidence(f"check: {file_path}", "file_read",
+                            f"file exists, {len(content)} bytes", passed=True)
+    except Exception as e:
+        vr.add_evidence(f"read: {file_path}", "file_read", f"failed: {str(e)[:100]}", False)
+ vr.finalize()
+ return vr
+
+
+def main():
+ import argparse
+    parser = argparse.ArgumentParser(description="Generic fact-verification tool")
+    parser.add_argument("--check", required=True, help="Claim to verify")
+    parser.add_argument("--command", help="Command to run as verification")
+    parser.add_argument("--file", help="File path to verify")
+    parser.add_argument("--expect", help="Content the file should contain")
+    parser.add_argument("--keywords", nargs="*", help="Keywords the command output should contain")
+    parser.add_argument("--json", action="store_true", help="Output in JSON format")
+ args = parser.parse_args()
+
+ if args.command:
+ vr = run_command_verification(args.check, args.command, args.keywords)
+ elif args.file:
+ vr = verify_file_content(args.check, args.file, args.expect)
+ else:
+        print("Error: either --command or --file must be specified")
+ sys.exit(1)
+
+ if args.json:
+ print(vr.to_json())
+ else:
+ print(vr.to_markdown())
+ sys.exit(0 if vr.status == "PASS" else 1)
+
+
+if __name__ == "__main__":
+ main()
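
The `finalize` verdict rule above is the core contract of this script: no evidence means UNVERIFIED, any failure means FAIL. A standalone sketch of that rule:

```python
# Mirrors VerificationResult.finalize: no evidence -> UNVERIFIED,
# all checks passed -> PASS, any failure -> FAIL.
def verdict(evidence):
    if not evidence:
        return "UNVERIFIED"
    return "PASS" if all(e["passed"] for e in evidence) else "FAIL"

print(verdict([]))                                    # UNVERIFIED
print(verdict([{"passed": True}, {"passed": True}]))  # PASS
print(verdict([{"passed": True}, {"passed": False}])) # FAIL
```

Because `all()` over an empty list is vacuously true, the explicit no-evidence branch is what prevents an evidence-free claim from slipping through as PASS.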
diff --git a/start.bat b/start.bat
new file mode 100644
index 00000000..6d5da69d
--- /dev/null
+++ b/start.bat
@@ -0,0 +1,4 @@
+@echo off
+cd /d "%~dp0"
+call .venv\Scripts\activate.bat
+.venv\Scripts\python.exe launch.pyw
\ No newline at end of file
diff --git a/start_all.bat b/start_all.bat
new file mode 100644
index 00000000..0bb1d0a1
--- /dev/null
+++ b/start_all.bat
@@ -0,0 +1,49 @@
+@echo off
+:: ====== Auto-elevation (UAC) logic ======
+:: Check for administrator rights; if not elevated, relaunch with elevation
+whoami /groups | find "S-1-5-32-544" >nul 2>nul
+if not errorlevel 1 goto :gotAdmin
+:: Not an administrator - attempt self-elevation
+echo [INFO] Not running as administrator, attempting automatic elevation...
+powershell -Command "Start-Process '%~f0' -Verb RunAs" >nul 2>nul
+exit /b
+:gotAdmin
+cd /d "%~dp0"
+setlocal
+
+set "LITELLM_PORT=8000"
+set "WAIT_SECONDS=60"
+set "LITELLM_READY=0"
+
+if "%GA_PROXY_MODE%"=="" set "GA_PROXY_MODE=auto"
+if "%GA_PROXY_URL%"=="" set "GA_PROXY_URL=http://127.0.0.1:6789"
+echo [INFO] Proxy settings for LiteLLM: GA_PROXY_MODE=%GA_PROXY_MODE%, GA_PROXY_URL=%GA_PROXY_URL%
+echo [INFO] Running with administrator privileges
+
+if not exist ".venv\Scripts\python.exe" (
+ echo [ERROR] .venv not found. Please create virtual environment first.
+ echo python -m venv .venv
+ exit /b 1
+)
+
+echo [INFO] Checking whether LiteLLM is already running on port %LITELLM_PORT%...
+powershell -NoProfile -ExecutionPolicy Bypass -Command "try { $resp = Invoke-WebRequest -Uri 'http://127.0.0.1:%LITELLM_PORT%/v1/models' -TimeoutSec 2 -UseBasicParsing; if ($resp.StatusCode -eq 200) { exit 0 } else { exit 1 } } catch { exit 1 }"
+if not errorlevel 1 set "LITELLM_READY=1"
+
+if "%LITELLM_READY%"=="1" (
+ echo [INFO] LiteLLM is already available. Skipping duplicate startup.
+) else (
+ echo [INFO] LiteLLM is not running. Starting LiteLLM bootstrap in a separate window...
+ start "GenericAgent LiteLLM" cmd /c start_litellm.bat
+)
+
+echo [INFO] Waiting for LiteLLM on port %LITELLM_PORT% to become ready ^(up to %WAIT_SECONDS% seconds^)...
+powershell -NoProfile -ExecutionPolicy Bypass -Command "$ready = $false; for ($i = 0; $i -lt %WAIT_SECONDS%; $i++) { try { $resp = Invoke-WebRequest -Uri 'http://127.0.0.1:%LITELLM_PORT%/v1/models' -TimeoutSec 2 -UseBasicParsing; if ($resp.StatusCode -eq 200) { $ready = $true; break } } catch {}; Start-Sleep -Seconds 1 }; if (-not $ready) { exit 1 }"
+if errorlevel 1 (
+ echo [ERROR] LiteLLM was not ready within %WAIT_SECONDS% seconds.
+ echo If a LiteLLM window opened, check that window for details.
+ exit /b 1
+)
+
+echo [INFO] LiteLLM is ready. Launching GenericAgent UI...
+call start.bat
\ No newline at end of file
diff --git a/start_litellm.bat b/start_litellm.bat
new file mode 100644
index 00000000..0d1b655d
--- /dev/null
+++ b/start_litellm.bat
@@ -0,0 +1,88 @@
+@echo off
+cd /d "%~dp0"
+setlocal
+
+set "LITELLM_PORT=8000"
+if "%GA_PROXY_MODE%"=="" set "GA_PROXY_MODE=auto"
+if "%GA_PROXY_URL%"=="" set "GA_PROXY_URL=http://127.0.0.1:6789"
+
+if not exist ".venv\Scripts\python.exe" (
+ echo [ERROR] .venv not found. Please create virtual environment first.
+ echo python -m venv .venv
+ exit /b 1
+)
+
+call .venv\Scripts\activate.bat
+if errorlevel 1 (
+ echo [ERROR] Failed to activate .venv
+ exit /b 1
+)
+
+if "%GITHUB_COPILOT_TOKEN%"=="" (
+ for /f "usebackq delims=" %%i in (`gh auth token 2^>nul`) do set "GITHUB_COPILOT_TOKEN=%%i"
+)
+
+if "%GITHUB_COPILOT_TOKEN%"=="" (
+ echo [ERROR] GITHUB_COPILOT_TOKEN is not set.
+ echo Please run `gh auth login --scopes copilot` or set the environment variable manually.
+ exit /b 1
+)
+
+set "GA_PROXY_ACTIVE=0"
+if /I "%GA_PROXY_MODE%"=="off" (
+ set "GA_PROXY_ACTIVE=0"
+) else if /I "%GA_PROXY_MODE%"=="on" (
+ powershell -NoProfile -ExecutionPolicy Bypass -Command "$u=[uri]$env:GA_PROXY_URL; $c=New-Object Net.Sockets.TcpClient; try { $ar=$c.BeginConnect($u.Host,$u.Port,$null,$null); if(-not $ar.AsyncWaitHandle.WaitOne(1200)){ exit 1 }; $c.EndConnect($ar); exit 0 } catch { exit 1 } finally { $c.Close() }"
+ if errorlevel 1 (
+ echo [ERROR] GA_PROXY_MODE=on but proxy is unreachable: %GA_PROXY_URL%
+ exit /b 1
+ )
+ set "GA_PROXY_ACTIVE=1"
+) else (
+ powershell -NoProfile -ExecutionPolicy Bypass -Command "$u=[uri]$env:GA_PROXY_URL; $c=New-Object Net.Sockets.TcpClient; try { $ar=$c.BeginConnect($u.Host,$u.Port,$null,$null); if(-not $ar.AsyncWaitHandle.WaitOne(1200)){ exit 1 }; $c.EndConnect($ar); exit 0 } catch { exit 1 } finally { $c.Close() }"
+ if not errorlevel 1 set "GA_PROXY_ACTIVE=1"
+)
+
+if "%GA_PROXY_ACTIVE%"=="1" (
+ set "HTTP_PROXY=%GA_PROXY_URL%"
+ set "HTTPS_PROXY=%GA_PROXY_URL%"
+ set "ALL_PROXY=%GA_PROXY_URL%"
+ set "NO_PROXY=127.0.0.1,localhost"
+ echo [INFO] Proxy mode=%GA_PROXY_MODE% ^(active^): %GA_PROXY_URL%
+) else (
+ set "HTTP_PROXY="
+ set "HTTPS_PROXY="
+ set "ALL_PROXY="
+ set "NO_PROXY=*"
+ echo [INFO] Proxy mode=%GA_PROXY_MODE% ^(direct^)
+)
+
+if not exist ".venv\Scripts\litellm.exe" (
+ echo [INFO] LiteLLM not found in .venv, installing...
+ .venv\Scripts\python.exe -m pip install "litellm[proxy]"
+ if errorlevel 1 (
+ echo [ERROR] Failed to install litellm in .venv
+ exit /b 1
+ )
+)
+
+if exist ".venv\Scripts\python.exe" if exist "verify_copilot_models.py" (
+ echo [INFO] Syncing available Copilot models into config...
+ powershell -NoProfile -ExecutionPolicy Bypass -Command "$conn = Get-NetTCPConnection -LocalPort %LITELLM_PORT% -State Listen -ErrorAction SilentlyContinue; if ($conn) { $conn | Select-Object -ExpandProperty OwningProcess -Unique | ForEach-Object { Stop-Process -Id $_ -Force -ErrorAction SilentlyContinue } }" >nul 2>nul
+ powershell -NoProfile -ExecutionPolicy Bypass -Command "Start-Process -WindowStyle Hidden -FilePath '%CD%\.venv\Scripts\litellm.exe' -ArgumentList '--config','litellm_config.yaml','--port','%LITELLM_PORT%'"
+ powershell -NoProfile -ExecutionPolicy Bypass -Command "$ready = $false; for ($i = 0; $i -lt 40; $i++) { try { $resp = Invoke-WebRequest -Uri 'http://127.0.0.1:%LITELLM_PORT%/v1/models' -TimeoutSec 2 -UseBasicParsing; if ($resp.StatusCode -eq 200) { $ready = $true; break } } catch {}; Start-Sleep -Milliseconds 500 }; if (-not $ready) { exit 1 }"
+ if errorlevel 1 (
+ echo [ERROR] Bootstrap LiteLLM failed to start.
+ exit /b 1
+ )
+ .venv\Scripts\python.exe verify_copilot_models.py --apply
+ if errorlevel 1 (
+ echo [ERROR] Failed to refresh available Copilot models.
+ powershell -NoProfile -ExecutionPolicy Bypass -Command "$conn = Get-NetTCPConnection -LocalPort %LITELLM_PORT% -State Listen -ErrorAction SilentlyContinue; if ($conn) { $conn | Select-Object -ExpandProperty OwningProcess -Unique | ForEach-Object { Stop-Process -Id $_ -Force -ErrorAction SilentlyContinue } }" >nul 2>nul
+ exit /b 1
+ )
+ powershell -NoProfile -ExecutionPolicy Bypass -Command "$conn = Get-NetTCPConnection -LocalPort %LITELLM_PORT% -State Listen -ErrorAction SilentlyContinue; if ($conn) { $conn | Select-Object -ExpandProperty OwningProcess -Unique | ForEach-Object { Stop-Process -Id $_ -Force -ErrorAction SilentlyContinue } }" >nul 2>nul
+)
+
+echo [INFO] Starting LiteLLM on port %LITELLM_PORT% using .venv
+.venv\Scripts\litellm.exe --config litellm_config.yaml --port %LITELLM_PORT%
diff --git a/temp_litellm_probe.yaml b/temp_litellm_probe.yaml
new file mode 100644
index 00000000..b89347fa
--- /dev/null
+++ b/temp_litellm_probe.yaml
@@ -0,0 +1,37 @@
+model_list:
+ - model_name: gpt-4
+ litellm_params:
+ model: github_copilot/gpt-4
+ api_key: os.environ/GITHUB_COPILOT_TOKEN
+ extra_headers:
+ Editor-Version: "vscode/1.85.1"
+ Editor-Plugin-Version: "copilot/1.155.0"
+ Copilot-Integration-Id: "vscode-chat"
+ User-Agent: "GitHubCopilotChat/0.35.0"
+ - model_name: gpt-5
+ litellm_params:
+ model: github_copilot/gpt-5
+ api_key: os.environ/GITHUB_COPILOT_TOKEN
+ extra_headers:
+ Editor-Version: "vscode/1.85.1"
+ Editor-Plugin-Version: "copilot/1.155.0"
+ Copilot-Integration-Id: "vscode-chat"
+ User-Agent: "GitHubCopilotChat/0.35.0"
+ - model_name: claude-sonnet-4.5
+ litellm_params:
+ model: github_copilot/claude-sonnet-4.5
+ api_key: os.environ/GITHUB_COPILOT_TOKEN
+ extra_headers:
+ Editor-Version: "vscode/1.85.1"
+ Editor-Plugin-Version: "copilot/1.155.0"
+ Copilot-Integration-Id: "vscode-chat"
+ User-Agent: "GitHubCopilotChat/0.35.0"
+ - model_name: gemini-2.5-pro
+ litellm_params:
+ model: github_copilot/gemini-2.5-pro
+ api_key: os.environ/GITHUB_COPILOT_TOKEN
+ extra_headers:
+ Editor-Version: "vscode/1.85.1"
+ Editor-Plugin-Version: "copilot/1.155.0"
+ Copilot-Integration-Id: "vscode-chat"
+ User-Agent: "GitHubCopilotChat/0.35.0"
diff --git a/test_ai_execution.py b/test_ai_execution.py
new file mode 100644
index 00000000..1aa2f2c9
--- /dev/null
+++ b/test_ai_execution.py
@@ -0,0 +1,98 @@
+import os
+import sys
+import subprocess
+import tempfile
+
+# Simulate how the AI executes code
+def simulate_ai_execution():
+ script_dir = os.path.dirname(os.path.abspath(__file__))
+
+    # The handler's working directory
+ handler_cwd = os.path.join(script_dir, 'temp')
+ os.makedirs(handler_cwd, exist_ok=True)
+
+    # The code the AI will execute
+ code_to_run = """
+import os
+print("=== AI code-execution environment ===")
+print(f"Current working directory: {os.getcwd()}")
+print()
+print("Trying to access the parent directory:")
+try:
+    parent_contents = os.listdir('..')
+    print(f"✅ Success: {len(parent_contents)} entries")
+except PermissionError as e:
+    print(f"❌ Permission error: {e}")
+except Exception as e:
+    print(f"⚠️ Other error: {e}")
+
+print()
+print("Trying to access the memory directory:")
+try:
+    memory_contents = os.listdir('../memory')
+    print(f"✅ Success: {len(memory_contents)} files")
+except PermissionError as e:
+    print(f"❌ Permission error: {e}")
+except Exception as e:
+    print(f"⚠️ Other error: {e}")
+
+print()
+print("Trying to read a file:")
+try:
+    with open('../memory/memory_management_sop.md', 'r', encoding='utf-8') as f:
+        content = f.read(200)
+        print("✅ File read successfully")
+except PermissionError as e:
+    print(f"❌ Permission error: {e}")
+except Exception as e:
+    print(f"⚠️ Other error: {e}")
+"""
+
+    # Create a temporary file
+ tmp_file = tempfile.NamedTemporaryFile(suffix=".ai.py", delete=False, mode='w', encoding='utf-8', dir=handler_cwd)
+ tmp_file.write(code_to_run)
+ tmp_path = tmp_file.name
+ tmp_file.close()
+
+    print("=== Simulated AI execution ===")
+    print(f"Working directory: {handler_cwd}")
+    print(f"Temp file: {tmp_path}")
+ print()
+
+    # Run the code (simulating the AI's code_run)
+ cmd = [sys.executable, "-X", "utf8", "-u", tmp_path]
+
+ startupinfo = None
+ if os.name == 'nt':
+ startupinfo = subprocess.STARTUPINFO()
+ startupinfo.dwFlags |= subprocess.STARTF_USESHOWWINDOW
+ startupinfo.wShowWindow = 0
+
+ process = subprocess.Popen(
+ cmd,
+ stdout=subprocess.PIPE,
+ stderr=subprocess.STDOUT,
+ bufsize=0,
+ cwd=handler_cwd,
+ startupinfo=startupinfo
+ )
+
+    # Read the output
+ output = []
+ for line_bytes in iter(process.stdout.readline, b''):
+ try:
+ line = line_bytes.decode('utf-8')
+ except UnicodeDecodeError:
+ line = line_bytes.decode('gbk', errors='ignore')
+ output.append(line)
+ print(line, end="")
+
+ process.wait()
+
+    # Clean up the temporary file
+ os.unlink(tmp_path)
+
+ return ''.join(output)
+
+if __name__ == '__main__':
+ simulate_ai_execution()
\ No newline at end of file
diff --git a/test_copilot.py b/test_copilot.py
new file mode 100644
index 00000000..292fe8e9
--- /dev/null
+++ b/test_copilot.py
@@ -0,0 +1,25 @@
+import os
+
+# Set proxy environment variables
+os.environ['HTTP_PROXY'] = 'http://127.0.0.1:6789'
+os.environ['HTTPS_PROXY'] = 'http://127.0.0.1:6789'
+
+from litellm import completion
+
+# Add the headers GitHub Copilot requires
+extra_headers = {
+ "Editor-Version": "vscode/1.85.1",
+ "Editor-Plugin-Version": "copilot/1.155.0",
+ "Copilot-Integration-Id": "vscode-chat",
+ "User-Agent": "GitHubCopilotChat/0.35.0"
+}
+
+response = completion(
+ model="github_copilot/gpt-4",
+ messages=[{"role": "user", "content": "Hello, who are you?"}],
+ stream=False,
+ extra_headers=extra_headers
+)
+
+print("Response:", response)
+print("\nContent:", response.choices[0].message.content)
\ No newline at end of file
diff --git a/test_full_diagnostic.py b/test_full_diagnostic.py
new file mode 100644
index 00000000..080c7c5a
--- /dev/null
+++ b/test_full_diagnostic.py
@@ -0,0 +1,139 @@
+import os
+import sys
+import subprocess
+import tempfile
+
+def run_diagnostic():
+ script_dir = os.path.dirname(os.path.abspath(__file__))
+ handler_cwd = os.path.join(script_dir, 'temp')
+
+    # Full diagnostic code to run as a simulated AI execution
+ code_to_run = """
+import os
+import sys
+
+print("="*60)
+print("AI execution environment diagnostics")
+print("="*60)
+
+# 1. Basic info
+print("\\n[1] Basic info:")
+print(f"Python version: {sys.version}")
+print(f"Current working directory: {os.getcwd()}")
+print(f"Script path: {os.path.abspath(__file__)}")
+
+# 2. Environment variables
+print("\\n[2] Environment variables:")
+print(f"HOME: {os.environ.get('HOME', 'not set')}")
+print(f"USERPROFILE: {os.environ.get('USERPROFILE', 'not set')}")
+print(f"PYTHONPATH: {os.environ.get('PYTHONPATH', 'not set')}")
+
+# 3. Parent-directory access test
+print("\\n[3] Parent-directory access test:")
+test_paths = ['..', '../memory', '../..']
+for p in test_paths:
+    try:
+        abs_path = os.path.abspath(p)
+        contents = os.listdir(p)
+        print(f"✅ {p} -> {abs_path} (contains {len(contents)} items)")
+    except PermissionError as e:
+        print(f"❌ {p} -> permission error: {e}")
+    except FileNotFoundError:
+        print(f"⚠️ {p} -> path does not exist")
+    except Exception as e:
+        print(f"❓ {p} -> unknown error: {e}")
+
+# 4. File read test
+print("\\n[4] File read test:")
+test_files = [
+    '../memory/memory_management_sop.md',
+    '../agentmain.py',
+    '../mykey.py'
+]
+for f in test_files:
+    try:
+        abs_path = os.path.abspath(f)
+        if os.path.exists(f):
+            with open(f, 'r', encoding='utf-8') as file:
+                content = file.read(100)
+            print(f"✅ {f} -> read OK")
+        else:
+            print(f"⚠️ {f} -> file does not exist")
+    except PermissionError as e:
+        print(f"❌ {f} -> permission error: {e}")
+    except Exception as e:
+        print(f"❓ {f} -> unknown error: {e}")
+
+# 5. File write test
+print("\\n[5] File write test:")
+test_write_path = './test_write.txt'
+try:
+    with open(test_write_path, 'w', encoding='utf-8') as f:
+        f.write('test')
+    print(f"✅ wrote {test_write_path} successfully")
+    os.remove(test_write_path)
+except PermissionError as e:
+    print(f"❌ write failed: {e}")
+except Exception as e:
+    print(f"❓ unknown write error: {e}")
+
+# 6. os module permission check
+print("\\n[6] os module permission check:")
+try:
+    stat_info = os.stat('.')
+    print(f"Current directory mode: {oct(stat_info.st_mode)[-4:]}")
+except Exception as e:
+    print(f"Failed to get permissions: {e}")
+
+print("\\n" + "="*60)
+print("Diagnostics complete")
+print("="*60)
+"""
+
+    # Create the temporary file (simulating the AI's code_run)
+    tmp_file = tempfile.NamedTemporaryFile(suffix=".ai.py", delete=False, mode='w', encoding='utf-8', dir=handler_cwd)
+
+    # Prepend the contents of code_run_header.py, without leaking the file handle
+    cr_header = os.path.join(script_dir, 'assets', 'code_run_header.py')
+    if os.path.exists(cr_header):
+        with open(cr_header, encoding='utf-8') as hf: tmp_file.write(hf.read())
+
+ tmp_file.write(code_to_run)
+ tmp_path = tmp_file.name
+ tmp_file.close()
+
+    print("=== Running diagnostic script ===")
+    print(f"Working directory: {handler_cwd}")
+    print(f"Temporary file: {tmp_path}")
+    print()
+
+    # Execute the diagnostic
+    cmd = [sys.executable, "-X", "utf8", "-u", tmp_path]
+
+ startupinfo = None
+ if os.name == 'nt':
+ startupinfo = subprocess.STARTUPINFO()
+ startupinfo.dwFlags |= subprocess.STARTF_USESHOWWINDOW
+ startupinfo.wShowWindow = 0
+
+ process = subprocess.Popen(
+ cmd,
+ stdout=subprocess.PIPE,
+ stderr=subprocess.STDOUT,
+ bufsize=0,
+ cwd=handler_cwd,
+ startupinfo=startupinfo
+ )
+
+ for line_bytes in iter(process.stdout.readline, b''):
+ try:
+ line = line_bytes.decode('utf-8')
+ except UnicodeDecodeError:
+ line = line_bytes.decode('gbk', errors='ignore')
+ print(line, end="")
+
+ process.wait()
+ os.unlink(tmp_path)
+
+if __name__ == '__main__':
+ run_diagnostic()
\ No newline at end of file
diff --git a/test_models.py b/test_models.py
new file mode 100644
index 00000000..1df26219
--- /dev/null
+++ b/test_models.py
@@ -0,0 +1,22 @@
+import requests, json
+
+models = ["gpt-4", "gpt-5", "gpt-5.4-mini", "claude-sonnet-4.5", "claude-opus-4.7", "gemini-2.5-pro", "gemini-3-flash"]
+results = []
+url = "http://127.0.0.1:8000/v1/chat/completions"
+
+for m in models:
+ payload = {
+ "model": m,
+ "messages": [{"role": "user", "content": "reply only pong"}],
+ "stream": False
+ }
+ try:
+ resp = requests.post(url, json=payload, timeout=10)
+ status = resp.status_code
+ success = 200 <= status < 300
+ error = "" if success else resp.text[:100]
+ results.append({"model": m, "status": status, "success": success, "error": error})
+ except Exception as e:
+ results.append({"model": m, "status": 0, "success": False, "error": str(e)})
+
+print(json.dumps(results, indent=2))
diff --git a/test_path_issue.py b/test_path_issue.py
new file mode 100644
index 00000000..5750f704
--- /dev/null
+++ b/test_path_issue.py
@@ -0,0 +1,72 @@
+import os
+import sys
+
+# Script directory
+script_dir = os.path.dirname(os.path.abspath(__file__))
+print(f"Script directory: {script_dir}")
+
+# Simulate the Handler's working-directory setup
+cwd = os.path.join(script_dir, 'temp')
+print(f"Handler working directory: {cwd}")
+
+# Simulate the _get_abs_path method
+def _get_abs_path(path):
+ if not path: return ""
+ return os.path.abspath(os.path.join(cwd, path))
+
+# Paths to test
+test_paths = [
+ '../',
+ '../memory',
+ '../memory/test.txt',
+ './test.txt',
+ 'test.txt',
+ '/test.txt',
+ '../..'
+]
+
+print("\n=== Path resolution test ===")
+for path in test_paths:
+    try:
+        abs_path = _get_abs_path(path)
+        exists = os.path.exists(abs_path)
+        is_dir = os.path.isdir(abs_path) if exists else False
+        can_access = True
+        if exists:
+            try:
+                if is_dir:
+                    files = os.listdir(abs_path)
+                    print(f"✅ {path} -> {abs_path} (directory, contains {len(files)} files)")
+                else:
+                    with open(abs_path, 'r', encoding='utf-8') as f:
+                        content = f.read(100)
+                    print(f"✅ {path} -> {abs_path} (file)")
+            except PermissionError as e:
+                can_access = False
+                print(f"❌ {path} -> {abs_path} (permission error: {e})")
+            except Exception as e:
+                print(f"⚠️ {path} -> {abs_path} (other error: {e})")
+        else:
+            print(f"ℹ️ {path} -> {abs_path} (does not exist)")
+    except Exception as e:
+        print(f"❌ {path} -> resolution failed: {e}")
+
+print("\n=== Current process info ===")
+print(f"Current working directory: {os.getcwd()}")
+print(f"Python executable: {sys.executable}")
+print(f"Running as admin: {os.name == 'nt' and __import__('ctypes').windll.shell32.IsUserAnAdmin()}")
+
+# Test the subprocess execution environment (use sys.executable so the same interpreter is run)
+print("\n=== subprocess environment test ===")
+import subprocess
+result = subprocess.run(
+    [sys.executable, '-c', 'import os; print(os.getcwd()); print(os.listdir(".."))'],
+    capture_output=True,
+    text=True,
+    cwd=cwd
+)
+print(f"stdout: {result.stdout}")
+if result.stderr:
+ print(f"stderr: {result.stderr}")
+if result.returncode != 0:
+    print(f"Return code: {result.returncode}")
\ No newline at end of file
diff --git a/tests/test_modelscope_deepseek_config.py b/tests/test_modelscope_deepseek_config.py
new file mode 100644
index 00000000..9f937009
--- /dev/null
+++ b/tests/test_modelscope_deepseek_config.py
@@ -0,0 +1,108 @@
+import importlib
+import json
+import os
+import sys
+import unittest
+
+
+REPO_DIR = os.path.dirname(os.path.dirname(os.path.abspath(__file__)))
+if REPO_DIR not in sys.path:
+ sys.path.insert(0, REPO_DIR)
+
+
+class TestDashscopeGlmConfig(unittest.TestCase):
+ def setUp(self):
+ self._old_value = os.environ.get('DASHSCOPE_API_KEY')
+ os.environ['DASHSCOPE_API_KEY'] = 'test-dashscope-key'
+ sys.modules.pop('mykey', None)
+
+ def tearDown(self):
+ if self._old_value is None:
+ os.environ.pop('DASHSCOPE_API_KEY', None)
+ else:
+ os.environ['DASHSCOPE_API_KEY'] = self._old_value
+ sys.modules.pop('mykey', None)
+
+ def test_dashscope_glm_is_primary_mixin_model(self):
+ mykey = importlib.import_module('mykey')
+
+ self.assertEqual(mykey.mixin_config['llm_nos'][0], 'dashscope-glm-5')
+
+ def test_dashscope_glm_uses_direct_openai_compatible_endpoint(self):
+ mykey = importlib.import_module('mykey')
+ cfg = mykey.native_oai_config_dashscope_glm_5
+
+ self.assertEqual(cfg['name'], 'dashscope-glm-5')
+ self.assertEqual(cfg['apikey'], 'test-dashscope-key')
+ self.assertEqual(cfg['apibase'], 'https://dashscope.aliyuncs.com/compatible-mode/v1')
+ self.assertEqual(cfg['model'], 'glm-5')
+ self.assertEqual(cfg['api_mode'], 'chat_completions')
+ self.assertTrue(cfg['stream'])
+ self.assertIsNone(cfg['proxy'])
+
+
+class TestModelScopeReasoningParsing(unittest.TestCase):
+ def test_parse_openai_sse_keeps_reasoning_content(self):
+ from llmcore import _parse_openai_sse
+
+ lines = [
+ 'data: ' + json.dumps({'choices': [{'delta': {'reasoning_content': '先分析问题'}}]}, ensure_ascii=False),
+ 'data: ' + json.dumps({'choices': [{'delta': {'content': '最终答案'}}]}, ensure_ascii=False),
+ 'data: [DONE]',
+ ]
+
+ gen = _parse_openai_sse(lines)
+ streamed = []
+ try:
+ while True:
+ streamed.append(next(gen))
+ except StopIteration as e:
+ blocks = e.value
+
+ self.assertEqual(streamed, ['先分析问题', '最终答案'])
+ self.assertEqual(blocks[0], {'type': 'thinking', 'thinking': '先分析问题'})
+ self.assertEqual(blocks[1], {'type': 'text', 'text': '最终答案'})
+
+ def test_parse_openai_json_keeps_reasoning_content(self):
+ from llmcore import _parse_openai_json
+
+ payload = {
+ 'choices': [{
+ 'message': {
+ 'reasoning_content': '先思考',
+ 'content': '再回答',
+ }
+ }]
+ }
+
+ gen = _parse_openai_json(payload)
+ streamed = []
+ try:
+ while True:
+ streamed.append(next(gen))
+ except StopIteration as e:
+ blocks = e.value
+
+ self.assertEqual(streamed, ['先思考', '再回答'])
+ self.assertEqual(blocks[0], {'type': 'thinking', 'thinking': '先思考'})
+ self.assertEqual(blocks[1], {'type': 'text', 'text': '再回答'})
+
+
+class TestVerifyCopilotModelsPreservesDashscopePrimary(unittest.TestCase):
+ def test_render_mykey_keeps_dashscope_glm_first(self):
+ from verify_copilot_models import render_mykey
+
+ rendered = render_mykey(['gpt-4', 'claude-sonnet-4.5'])
+
+ self.assertIn('native_oai_config_dashscope_glm_5', rendered)
+ self.assertIn("'apibase': 'https://dashscope.aliyuncs.com/compatible-mode/v1'", rendered)
+ self.assertIn("'model': 'glm-5'", rendered)
+ self.assertIn("'proxy': None", rendered)
+ self.assertLess(
+ rendered.index("'dashscope-glm-5'"),
+ rendered.index("'copilot-gpt4'"),
+ )
+
+
+if __name__ == '__main__':
+ unittest.main()
\ No newline at end of file
diff --git a/tests/test_multimodal_chain.py b/tests/test_multimodal_chain.py
new file mode 100644
index 00000000..5bf4ef43
--- /dev/null
+++ b/tests/test_multimodal_chain.py
@@ -0,0 +1,40 @@
+import base64
+import os
+import sys
+import tempfile
+import unittest
+
+sys.path.insert(0, os.path.dirname(os.path.dirname(os.path.abspath(__file__))))
+
+from agentmain import build_multimodal_user_content
+
+
+PNG_1X1 = base64.b64decode(
+ "iVBORw0KGgoAAAANSUhEUgAAAAEAAAABCAQAAAC1HAwCAAAAC0lEQVR42mP8/x8AAwMCAO+jK3sAAAAASUVORK5CYII="
+)
+
+
+class TestMultimodalUserContent(unittest.TestCase):
+ def test_build_multimodal_user_content_includes_image_blocks(self):
+ with tempfile.TemporaryDirectory() as temp_dir:
+ image_path = os.path.join(temp_dir, 'tiny.png')
+ with open(image_path, 'wb') as f:
+ f.write(PNG_1X1)
+
+ content = build_multimodal_user_content('看图回答', [image_path])
+
+ self.assertEqual(content[0], {'type': 'text', 'text': '看图回答'})
+ self.assertEqual(content[1]['type'], 'image')
+ self.assertEqual(content[1]['source']['type'], 'base64')
+ self.assertEqual(content[1]['source']['media_type'], 'image/png')
+ self.assertTrue(content[1]['source']['data'])
+
+ def test_build_multimodal_user_content_skips_missing_or_non_image_files(self):
+ with tempfile.TemporaryDirectory() as temp_dir:
+ text_path = os.path.join(temp_dir, 'note.txt')
+ with open(text_path, 'w', encoding='utf-8') as f:
+ f.write('hello')
+
+ content = build_multimodal_user_content('只保留文本', [text_path, os.path.join(temp_dir, 'missing.png')])
+
+ self.assertEqual(content, [{'type': 'text', 'text': '只保留文本'}])
\ No newline at end of file
diff --git a/validate_setup.py b/validate_setup.py
new file mode 100644
index 00000000..dbb23377
--- /dev/null
+++ b/validate_setup.py
@@ -0,0 +1,78 @@
+import requests
+import yaml
+import json
+import re
+import os
+
+def get_base_models():
+ try:
+ resp = requests.get("http://127.0.0.1:8000/v1/models", timeout=5)
+ if resp.status_code == 200:
+ data = resp.json()
+ return [m['id'] for m in data.get('data', [])]
+    except Exception:
+ pass
+ return []
+
+def get_yaml_models():
+ try:
+ with open("litellm_config.yaml", "r", encoding="utf-8") as f:
+ config = yaml.safe_load(f)
+ return [m['model_name'] for m in config.get('model_list', [])]
+    except Exception:
+ return []
+
+def get_mykey_models():
+ models = []
+ try:
+ if os.path.exists("mykey.py"):
+ with open("mykey.py", "r", encoding="utf-8") as f:
+ content = f.read()
+ matches = re.finditer(r"native_oai_config_copilot_.*?=.*?\{.*?'model':\s*['\"](.*?)['\"]", content, re.DOTALL)
+ for match in matches:
+ models.append(match.group(1))
+ except Exception:
+ pass
+ return list(set(models))
+
+def test_model(model_id):
+ url = "http://127.0.0.1:8000/v1/chat/completions"
+ payload = {
+ "model": model_id,
+ "messages": [{"role": "user", "content": "reply only pong"}],
+ "stream": False
+ }
+ try:
+ resp = requests.post(url, json=payload, timeout=10)
+ status = resp.status_code
+ success = 200 <= status < 300
+ error = "" if success else resp.text[:100]
+ return {"status": status, "success": success, "error": error}
+ except Exception as e:
+ return {"status": 0, "success": False, "error": str(e)}
+
+base_models = get_base_models()
+yaml_models = get_yaml_models()
+mykey_models = get_mykey_models()
+
+diffs = {
+ "base_minus_yaml": list(set(base_models) - set(yaml_models)),
+ "yaml_minus_base": list(set(yaml_models) - set(base_models)),
+ "mykey_minus_base": list(set(mykey_models) - set(base_models)),
+ "base_minus_mykey": list(set(base_models) - set(mykey_models))
+}
+
+test_results = {}
+if base_models:
+ for m in base_models:
+ test_results[m] = test_model(m)
+
+output = {
+ "base_models": base_models,
+ "yaml_models": yaml_models,
+ "mykey_models": mykey_models,
+ "diffs": diffs,
+ "test_results": test_results
+}
+
+print(json.dumps(output, indent=2))
diff --git a/verify_claude.py b/verify_claude.py
new file mode 100644
index 00000000..5d673d73
--- /dev/null
+++ b/verify_claude.py
@@ -0,0 +1,11 @@
+import openai
+client = openai.OpenAI(api_key='anything', base_url='http://localhost:8000/v1')
+try:
+ response = client.chat.completions.create(
+ model='claude-sonnet-4.5',
+ messages=[{'role': 'user', 'content': 'Hello, are you Claude?'}]
+ )
+ print('Response:', response.choices[0].message.content)
+except Exception as e:
+ print('Error:', e)
+
diff --git a/verify_copilot_models.py b/verify_copilot_models.py
new file mode 100644
index 00000000..c91fd340
--- /dev/null
+++ b/verify_copilot_models.py
@@ -0,0 +1,261 @@
+import argparse
+import json
+import os
+import re
+import socket
+from urllib.parse import urlparse
+from pathlib import Path
+
+import requests
+import yaml
+
+BASE_URL = "http://127.0.0.1:8000"
+MODELS_URL = f"{BASE_URL}/v1/models"
+CHAT_URL = f"{BASE_URL}/v1/chat/completions"
+TIMEOUT = 15
+
+MODEL_SPECS = {
+ "gpt-4": {
+ "section": "OpenAI Models",
+ "backend_model": "github_copilot/gpt-4",
+ "config_var": "native_oai_config_copilot_gpt4",
+ "name": "copilot-gpt4",
+        "title": "GPT-4 - balanced performance and cost",
+ },
+ "claude-sonnet-4.5": {
+ "section": "Anthropic Models",
+ "backend_model": "github_copilot/claude-sonnet-4.5",
+ "config_var": "native_oai_config_copilot_claude",
+ "name": "copilot-claude",
+        "title": "Claude Sonnet 4.5 - long-context support (200K+)",
+ },
+ "gemini-2.5-pro": {
+ "section": "Google Models",
+ "backend_model": "github_copilot/gemini-2.5-pro",
+ "config_var": "native_oai_config_copilot_gemini",
+ "name": "copilot-gemini",
+        "title": "Gemini 2.5 Pro - strong multimodal support",
+ },
+}
+
+HEADER_LINES = [
+ ' Editor-Version: "vscode/1.85.1"',
+ ' Editor-Plugin-Version: "copilot/1.155.0"',
+ ' Copilot-Integration-Id: "vscode-chat"',
+ ' User-Agent: "GitHubCopilotChat/0.35.0"',
+]
+TOKEN_REF = "os.environ/GITHUB_COPILOT_TOKEN"
+
+
+def detect_proxy_state():
+ proxy_url = (
+ os.environ.get("HTTPS_PROXY")
+ or os.environ.get("HTTP_PROXY")
+ or os.environ.get("ALL_PROXY")
+ or ""
+ )
+ info = {
+ "proxy_env_url": proxy_url,
+ "proxy_configured": bool(proxy_url),
+ "proxy_reachable": False,
+ "proxy_mode": "direct",
+ }
+ if not proxy_url:
+ return info
+
+ try:
+ parsed = urlparse(proxy_url)
+ host = parsed.hostname
+ port = parsed.port
+ if host and port:
+ with socket.create_connection((host, port), timeout=1.2):
+ info["proxy_reachable"] = True
+ info["proxy_mode"] = "proxy-active"
+ else:
+ info["proxy_mode"] = "proxy-configured-invalid"
+ except Exception:
+ info["proxy_mode"] = "proxy-configured-unreachable"
+ return info
+
+
+def get_base_models():
+ resp = requests.get(MODELS_URL, timeout=TIMEOUT)
+ resp.raise_for_status()
+ data = resp.json()
+ return [m.get("id") for m in data.get("data", []) if m.get("id")]
+
+
+def get_yaml_models(path: Path):
+ with path.open("r", encoding="utf-8") as f:
+ data = yaml.safe_load(f)
+ return [m.get("model_name") for m in data.get("model_list", []) if m.get("model_name")]
+
+
+def get_mykey_models(path: Path):
+ text = path.read_text(encoding="utf-8")
+ blocks = re.findall(r"native_oai_config_copilot_.*?\s*=\s*\{.*?\}", text, flags=re.DOTALL)
+ models = []
+ for block in blocks:
+ m = re.search(r"['\"]model['\"]\s*:\s*['\"](.*?)['\"]", block)
+ if m:
+ models.append(m.group(1))
+ return models
+
+
+def probe_model(model_id: str):
+ payload = {
+ "model": model_id,
+ "messages": [{"role": "user", "content": "reply only pong"}],
+ "stream": False,
+ }
+ try:
+ resp = requests.post(CHAT_URL, json=payload, timeout=TIMEOUT)
+ ok = 200 <= resp.status_code < 300
+ err = "" if ok else (resp.text or "")[:180]
+ return {"status": resp.status_code, "success": ok, "error": err}
+ except Exception as e:
+ return {"status": 0, "success": False, "error": str(e)[:180]}
+
+
+def render_litellm_config(models):
+ groups = {}
+ for model in models:
+ spec = MODEL_SPECS[model]
+ groups.setdefault(spec["section"], []).append(model)
+
+ lines = ["model_list:"]
+ for section in ["OpenAI Models", "Anthropic Models", "Google Models"]:
+ section_models = groups.get(section, [])
+ if not section_models:
+ continue
+ lines.append(f" # {section}")
+ for model in section_models:
+ spec = MODEL_SPECS[model]
+ lines.extend([
+ f" - model_name: {model}",
+ " litellm_params:",
+ f" model: {spec['backend_model']}",
+ f" api_key: {TOKEN_REF}",
+ " extra_headers:",
+ *HEADER_LINES,
+ "",
+ ])
+ if lines[-1] == "":
+ lines.pop()
+ return "\n".join(lines) + "\n"
+
+
+def render_mykey(models):
+ config_blocks = [
+ "\n".join([
+            "# GLM-5 - DashScope OpenAI-compatible API (direct connection, no proxy)",
+ "native_oai_config_dashscope_glm_5 = {",
+ " 'name': 'dashscope-glm-5',",
+ " 'apikey': os.environ.get('DASHSCOPE_API_KEY', ''),",
+ " 'apibase': 'https://dashscope.aliyuncs.com/compatible-mode/v1',",
+ " 'model': 'glm-5',",
+ " 'api_mode': 'chat_completions',",
+ " 'proxy': None,",
+ " 'stream': True,",
+ "}",
+ ])
+ ]
+ for model in models:
+ spec = MODEL_SPECS[model]
+ config_blocks.append(
+ "\n".join([
+ f"# {spec['title']}",
+                f"{spec['config_var']} = {{",
+ f" 'name': '{spec['name']}',",
+ " 'apikey': 'anything',",
+ " 'apibase': 'http://localhost:8000/v1',",
+ f" 'model': '{model}',",
+ " 'api_mode': 'chat_completions',",
+ " 'stream': True,",
+ "}",
+ ])
+ )
+
+ llm_nos = ["dashscope-glm-5"]
+ if "gpt-4" in models:
+ llm_nos.append("copilot-gpt4")
+ if "claude-sonnet-4.5" in models:
+ llm_nos.append("copilot-claude")
+ if "gemini-2.5-pro" in models:
+ llm_nos.append("copilot-gemini")
+
+ lines = [
+ "import os",
+ "",
+        "# ── GitHub Copilot Pro (multi-model configuration) ─────────────────────",
+        "# Startup: launch the litellm proxy first (using .venv), then pick a model in the UI",
+        "# .venv\\Scripts\\litellm.exe --config litellm_config.yaml --port 8000",
+ "",
+ "\n\n".join(config_blocks),
+ "",
+        "# ── Automatic model rotation (enabled; only currently verified models) ────",
+ "mixin_config = {",
+ " 'llm_nos': [",
+ ]
+ for llm_name in llm_nos:
+        comment = "primary: verified working" if llm_name == llm_nos[0] else "fallback: verified working"
+ lines.append(f" '{llm_name}', # {comment}")
+ lines.extend([
+ " ],",
+        "    'max_retries': 4,  # rotate retries across both models to avoid long futile retry loops",
+        "    'base_delay': 0.5,  # initial delay for exponential backoff",
+ "}",
+ "",
+ ])
+ return "\n".join(lines)
+
+
+def apply_updates(models):
+ supported = [m for m in models if m in MODEL_SPECS]
+ Path("litellm_config.yaml").write_text(render_litellm_config(supported), encoding="utf-8")
+ Path("mykey.py").write_text(render_mykey(supported), encoding="utf-8")
+
+
+def main():
+ parser = argparse.ArgumentParser(description="Verify Copilot models and optionally apply available models.")
+ parser.add_argument("--dry-run", action="store_true", help="Print the models that would be written without modifying files.")
+ parser.add_argument("--apply", action="store_true", help="Write the currently available models back to litellm_config.yaml and mykey.py.")
+ args = parser.parse_args()
+
+ base_models = get_base_models()
+ proxy_state = detect_proxy_state()
+ yaml_models = get_yaml_models(Path("litellm_config.yaml"))
+ mykey_models = get_mykey_models(Path("mykey.py"))
+
+ diffs = {
+ "base_minus_yaml": sorted(set(base_models) - set(yaml_models)),
+ "yaml_minus_base": sorted(set(yaml_models) - set(base_models)),
+ "mykey_minus_base": sorted(set(mykey_models) - set(base_models)),
+ "base_minus_mykey": sorted(set(base_models) - set(mykey_models)),
+ }
+
+ results = {m: probe_model(m) for m in base_models}
+
+ available_models = [m for m, result in results.items() if result["success"] and m in MODEL_SPECS]
+
+ output = {
+ "proxy_state": proxy_state,
+ "base_models": base_models,
+ "yaml_models": yaml_models,
+ "mykey_models": mykey_models,
+ "diffs": diffs,
+ "test_results": results,
+ "available_models": available_models,
+ }
+
+ if args.dry_run or args.apply:
+ output["would_apply_models"] = available_models
+ if args.apply:
+ apply_updates(available_models)
+ output["applied"] = True
+
+ print(json.dumps(output, ensure_ascii=False, indent=2))
+
+
+if __name__ == "__main__":
+ main()