# oc-probe

🚀 **[opencode](https://opencode.ai/) model probe** — test every LLM model you have configured, in one command, and see whether it responds and how fast.

A zero-dependency CLI tool that probes all LLM models configured in your [opencode](https://opencode.ai/) `opencode.json`, checking availability and measuring response time.

---

## ✨ Features

- **Zero dependencies** — Python 3.6+ only, no `pip install` required
- **Auto-discovery** — reads the opencode config and extracts every provider and model
- **Env var resolution** — supports `{env:VAR_NAME}`-style API keys
- **Thinking mode** — optionally test a model's thinking/reasoning capability
- **Speed ranking** — results are automatically sorted by response time
- **Colorful output** — clear, colorized terminal display

## 📦 Installation

```bash
# Just download the script — no installation needed
curl -O https://raw.githubusercontent.com/gabrielslls/oc-probe/main/oc-probe.py
chmod +x oc-probe.py
```

Or clone the repo:

```bash
git clone https://github.com/gabrielslls/oc-probe.git
cd oc-probe
```

## 🚀 Usage

```bash
# Default mode — test all models without the thinking parameter
python3 oc-probe.py

# Enable thinking mode
python3 oc-probe.py think

# Disable thinking mode
python3 oc-probe.py nothink

# Show help
python3 oc-probe.py help
```

## ⚙️ Configuration

oc-probe reads your opencode configuration file from:

```
~/.config/opencode/opencode.json
```

You can override this path via the `OC_PROBE_CONFIG` environment variable:

```bash
OC_PROBE_CONFIG=/path/to/my/opencode.json python3 oc-probe.py
```

### Config Format

The tool reads the `provider` section of `opencode.json`. Each provider needs:

- `options.apiKey` — API key (supports `{env:VAR_NAME}` syntax)
- `options.baseURL` — API base URL (an OpenAI-compatible `/chat/completions` endpoint)
- `models` — map of model identifiers to model info

See [opencode.example.json](opencode.example.json) for a complete example.

```jsonc
{
  "provider": {
    "my-provider": {
      "options": {
        "apiKey": "{env:MY_API_KEY}",
        "baseURL": "https://api.example.com/v1"
      },
      "models": {
        "gpt-4o": { "name": "GPT-4o" },
        "claude-3-opus": { "name": "Claude 3 Opus" }
      }
    }
  }
}
```
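To illustrate how these pieces fit together, here is a minimal standard-library sketch of the same idea: resolve `{env:VAR_NAME}` placeholders, walk the `provider` section, and time one non-streaming `/chat/completions` request per model. This is not oc-probe's actual source; function names and the prompt text are illustrative only.

```python
import json
import os
import re
import time
import urllib.request

ENV_PATTERN = re.compile(r"\{env:([A-Za-z_][A-Za-z0-9_]*)\}")

def resolve_env(value):
    """Replace {env:VAR_NAME} placeholders with values from the environment."""
    return ENV_PATTERN.sub(lambda m: os.environ.get(m.group(1), ""), value)

def load_providers(config_path):
    """Yield (provider, base_url, api_key, model_id) tuples from opencode.json."""
    with open(config_path, "r", encoding="utf-8") as f:
        config = json.load(f)
    for name, provider in config.get("provider", {}).items():
        options = provider.get("options", {})
        base_url = options.get("baseURL", "").rstrip("/")
        api_key = resolve_env(options.get("apiKey", ""))
        for model_id in provider.get("models", {}):
            yield name, base_url, api_key, model_id

def probe(base_url, api_key, model_id, timeout=180):
    """Send one non-streaming chat completion and return the elapsed seconds."""
    payload = json.dumps({
        "model": model_id,
        "messages": [{"role": "user", "content": "Briefly introduce yourself."}],
        "stream": False,
    }).encode("utf-8")
    request = urllib.request.Request(
        base_url + "/chat/completions",
        data=payload,
        headers={
            "Content-Type": "application/json",
            "Authorization": "Bearer " + api_key,
        },
    )
    start = time.time()
    with urllib.request.urlopen(request, timeout=timeout) as response:
        json.load(response)  # read and parse the body so the full response arrives
    return time.time() - start

if __name__ == "__main__":
    path = os.environ.get(
        "OC_PROBE_CONFIG",
        os.path.expanduser("~/.config/opencode/opencode.json"),
    )
    for provider, base_url, key, model in load_providers(path):
        elapsed = probe(base_url, key, model)
        print("[{}] {}: {:.2f}s".format(provider, model, elapsed))
```

Using a non-streaming request keeps the timing simple: the measured duration covers the full round trip, which matches how the tool reports per-model times in the example output below.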
## 📊 Output Example

```
======================================================================
🚀 多平台模型测速工具
======================================================================

📋 加载配置...
  ✅ [provider-a] 3 个模型
  ✅ [provider-b] 2 个模型
✅ 总计: 5 个模型

⚙️ 测试配置:
  • 超时: 180秒
  • 模型间隔: 10秒
  • 题目: 简单自我介绍

======================================================================
🌐 测试 [provider-a]
======================================================================

[1/3] GPT-4o
----------------------------------------------------------------------
  🧠 Thinking模式: 使用服务端默认
  ⏳ 发送请求...
  ✅ 成功 | ⏱️ 1.23s | 📝 15→85 (100) | 无Thinking
  📝 输出: 我是一个AI助手...

...

======================================================================
📊 测试总结
======================================================================
⏱️ 总耗时: 45.2秒
📈 结果: 5/5 成功

🏆 速度排行:
   1. [provider-a] GPT-4o: 1.23s
   2. [provider-b] Claude 3 Opus: 2.45s
   ...

======================================================================
🎉 所有模型测试成功!
======================================================================
```

## ❓ FAQ

**Q: Which API formats are supported?**
A: Any API compatible with the OpenAI `/chat/completions` interface, including OpenAI, Anthropic (via a compatibility layer), and various Chinese LLM provider APIs.

**Q: Why wait 10 seconds between models?**
A: To avoid triggering API rate limits.

**Q: Is streaming supported?**
A: Not currently. Non-streaming requests are used so the total response time can be measured accurately.

## 📄 License

[MIT](LICENSE)