# PolyMind
PolyMind is a feature-rich AI assistant that supports mainstream language models, providing environment awareness and powerful tool scheduling capabilities for the DevStation distribution.
## Table of Contents

- [Table of Contents](#table-of-contents)
- [Project Introduction](#project-introduction)
- [Why Choose PolyMind](#why-choose-polymind)
- [Main Features](#main-features)
- [Supported Model Providers](#supported-model-providers)
  - [Compatible with any model provider in OpenAI/Gemini/Anthropic API format](#compatible-with-any-model-provider-in-openaigeminianthropic-api-format)
- [Use Cases](#use-cases)
- [Quick Start](#quick-start)
  - [Download and Install](#download-and-install)
  - [Configure Models](#configure-models)
  - [Start Conversations](#start-conversations)
- [Development Guide](#development-guide)
  - [Install Dependencies](#install-dependencies)
  - [Start Development](#start-development)
  - [Build](#build)
- [Community & Contribution](#community--contribution)
- [Star History](#star-history)
- [Contributors](#contributors)
- [License](#license)

## Project Introduction

PolyMind is an AI assistant developed on top of [DeepChat](https://github.com/thinkinaixyz/deepchat), providing environment awareness and powerful tool-scheduling capabilities for the DevStation distribution. As a cross-platform AI assistant application, it not only supports basic chat but also provides advanced features such as search enhancement, tool calling, and multimodal interaction, making AI capabilities more convenient and efficient to use.

## Why Choose PolyMind

Compared to other AI tools, PolyMind has the following unique advantages:

- **Unified Multi-Model Management**: One application supports almost all mainstream LLMs, so there is no need to switch between multiple applications
- **Seamless Local Model Integration**: Built-in Ollama support; no command-line operations are required to manage and use local models
- **Advanced Tool Calling**: Built-in MCP support; tools such as code execution and network access work without additional configuration
- **openEuler Community Friendly**: Integrated with openEuler community open-source projects such as DevStore and openEuler Intelligence
- **System Prompt Management**: Powerful system prompt management that makes general-purpose AI more efficient and intelligent

## Main Features

- **Multiple Cloud LLM Provider Support**: DeepSeek, OpenAI, SiliconFlow, Grok, Gemini, Anthropic, and more
- **Local Model Deployment Support**:
  - Integrated Ollama with comprehensive management capabilities
  - No command-line operations are required to control and manage Ollama model downloads, deployments, and runs
- **Rich and Easy-to-Use Chat Capabilities**
  - Complete Markdown rendering, with code blocks rendered by the industry-leading [CodeMirror](https://codemirror.net/)
  - Multi-window + multi-tab architecture supporting parallel multi-session operation in every dimension; use large models the way you use a browser, with a non-blocking experience that brings excellent efficiency
  - Supports Artifacts rendering for diverse result presentation, significantly reducing token consumption after MCP integration
  - Messages support retry to generate multiple variations, and conversations can be forked freely, so there is always a suitable line of thought
  - Supports rendering images, Mermaid diagrams, and other multimodal content; supports GPT-4o, Gemini, and Grok text-to-image capabilities
  - Supports highlighting external information sources such as search results within the content
- **Robust Search Extension Capabilities**
  - Built-in integration with leading search APIs such as BoSearch and Brave Search via MCP mode, allowing the model to decide intelligently when to search
  - Supports mainstream search engines such as Google, Bing, Baidu, and Sogou Official Accounts search by simulating user web browsing, enabling the LLM to read search results the way a human would
  - Supports reading any search engine: simply configure a search assistant model to connect various search sources, whether internal networks, API-less engines, or vertical-domain search engines, as information sources for the model
- **Excellent MCP (Model Context Protocol) Support** (see the configuration sketch after this list)
  - Complete support for the three core MCP capabilities: Resources, Prompts, and Tools
  - Supports semantic workflows, enabling more complex and intelligent automation by understanding the meaning and context of tasks
  - Extremely user-friendly configuration interface
  - Aesthetically pleasing and clear tool call display
  - Detailed tool call debugging window with automatic formatting of tool parameters and return data
  - Built-in Node.js runtime environment; npx/node-based services require no extra configuration and work out of the box
  - Supports StreamableHTTP/SSE/Stdio transports
  - Supports inMemory services with built-in utilities such as code execution, web information retrieval, and file operations; ready for most common use cases out of the box without a second installation step
  - Converts visual model capabilities into functions usable by any model via the built-in MCP service
- **Multi-Platform Support**: Windows, macOS, Linux
- **Beautiful and User-Friendly Interface**: user-oriented design with meticulously themed light and dark modes
- **Rich DeepLink Support**: Initiate conversations via links for seamless integration with other applications; also supports one-click installation of MCP services for simplicity and speed
- **Security-First Design**: Chat data and configuration data have reserved encryption interfaces and code obfuscation capabilities
- **Privacy Protection**: Supports screen-projection hiding, network proxies, and other privacy protection methods to reduce the risk of information leakage
- **Business-Friendly**:
  - Embraces open source; based on the Apache License 2.0, so enterprises can use it without worry
  - Enterprise integration requires only minimal configuration-code changes to use the reserved encryption and obfuscation security capabilities
  - Clear code structure; both model providers and MCP services are highly decoupled and can be freely customized at minimal cost
  - Reasonable architecture that separates data interaction from UI behavior, making full use of Electron's capabilities rather than a simple web wrapper, for excellent performance
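As a rough illustration of the MCP service configuration referenced in the list above, a stdio MCP entry generally reduces to a command, its arguments, and optional environment variables. The sketch below follows the common command/args/env convention used by MCP clients; PolyMind's actual configuration screen and field names may differ, and the filesystem server and workspace path are placeholder examples.

```typescript
// Sketch of a stdio MCP service entry, assuming the common command/args/env
// convention; field names and values are illustrative, not PolyMind's exact schema.
const mcpServers = {
  filesystem: {
    transport: "stdio", // PolyMind also supports SSE and StreamableHTTP transports
    command: "npx",     // runs via the built-in Node.js runtime, so no extra setup is needed
    args: ["-y", "@modelcontextprotocol/server-filesystem", "/path/to/workspace"],
    env: {},            // extra environment variables, if the server needs any
  },
};

console.log(JSON.stringify(mcpServers, null, 2));
```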
## Supported Model Providers

|  |  |  |  |
| --- | --- | --- | --- |
| Ollama | Deepseek | PPIO | DashScope |
| Doubao | MiniMax | Fireworks | 302.AI |
| OpenAI | Gemini | GitHub Models | Moonshot |
| OpenRouter | Azure OpenAI | Qiniu | Grok |
| Zhipu | SiliconFlow | AIHubMix | Hunyuan |
| LM Studio | Groq |  |  |
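As noted in the table of contents, PolyMind is compatible with any model provider that speaks the OpenAI/Gemini/Anthropic API format. For orientation only, the sketch below shows the OpenAI-style chat-completions request that such providers share; the base URL, environment variable names, and model id are placeholders, not PolyMind defaults or internals.

```typescript
// Minimal sketch of an OpenAI-format chat-completions call (Node 18+ global fetch).
// BASE_URL, PROVIDER_API_KEY, and the model id are placeholders for any
// OpenAI-compatible provider; this is not PolyMind source code.
const BASE_URL = process.env.PROVIDER_BASE_URL ?? "https://api.example.com/v1";
const API_KEY = process.env.PROVIDER_API_KEY ?? "";

async function chat(prompt: string): Promise<string> {
  const res = await fetch(`${BASE_URL}/chat/completions`, {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      Authorization: `Bearer ${API_KEY}`,
    },
    body: JSON.stringify({
      model: "example-model", // placeholder model id
      messages: [{ role: "user", content: prompt }],
    }),
  });
  if (!res.ok) throw new Error(`Provider returned HTTP ${res.status}`);
  const data = await res.json();
  return data.choices[0].message.content;
}

chat("Hello").then(console.log).catch(console.error);
```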