# DeepTutor
**Repository Path**: Heconnor/DeepTutor
## Basic Information
- **Project Name**: DeepTutor
- **Description**: No description available
- **Primary Language**: Unknown
- **License**: AGPL-3.0
- **Default Branch**: main
- **Homepage**: None
- **GVP Project**: No
## Statistics
- **Stars**: 0
- **Forks**: 0
- **Created**: 2026-01-07
- **Last Updated**: 2026-01-07
## Categories & Tags
**Categories**: Uncategorized
**Tags**: None
## README

# DeepTutor: AI-Powered Personalized Learning Assistant
[**Quick Start**](#quick-start) · [**Core Modules**](#core-modules) · [**FAQ**](#faq)
[中文](assets/README/README_CN.md) · [日本語](assets/README/README_JA.md) · [Español](assets/README/README_ES.md) · [Français](assets/README/README_FR.md) · [العربية](assets/README/README_AR.md) · [Русский](assets/README/README_RU.md) · [हिन्दी](assets/README/README_HI.md) · [Português](assets/README/README_PT.md)
**Massive Document Knowledge Q&A** • **Interactive Learning Visualization**
**Knowledge Reinforcement** • **Deep Research & Idea Generation**
---
> **[2026.1.3]** Released DeepTutor [v0.3.0](https://github.com/HKUDS/DeepTutor/releases/tag/v0.3.0) - feedback is welcome!
> **[2026.1.1]** Join our [Discord Community](https://discord.gg/zpP9cssj) and [GitHub Discussions](https://github.com/HKUDS/DeepTutor/discussions) - shape the future of DeepTutor!
> **[2025.12.30]** Visit our [Official Website](https://hkuds.github.io/DeepTutor/) for more details!
> **[2025.12.29]** DeepTutor is now live!
---
## Key Features of DeepTutor
### Massive Document Knowledge Q&A
- **Smart Knowledge Base**: Upload textbooks, research papers, technical manuals, and domain-specific documents. Build a comprehensive AI-powered knowledge repository for instant access.
- **Multi-Agent Problem Solving**: Dual-loop reasoning architecture with RAG, web search, and code execution, delivering step-by-step solutions with precise citations.
### Interactive Learning Visualization
- **Knowledge Simplification & Explanations**: Transform complex concepts and algorithms into easy-to-understand visual aids, detailed step-by-step breakdowns, and engaging interactive demonstrations.
- **Personalized Q&A**: Context-aware conversations that adapt to your learning progress, with interactive pages and session-based knowledge tracking.
### Knowledge Reinforcement with Practice Exercise Generator
- **Intelligent Exercise Creation**: Generate targeted quizzes, practice problems, and customized assessments tailored to your current knowledge level and specific learning objectives.
- **Authentic Exam Simulation**: Upload reference exams to generate practice questions that match the original style, format, and difficulty, giving you realistic preparation for the actual test.
### Deep Research & Idea Generation
- **Comprehensive Research & Literature Review**: Conduct in-depth topic exploration with systematic analysis. Identify patterns, connect related concepts across disciplines, and synthesize existing research findings.
- **Novel Insight Discovery**: Generate structured learning materials and uncover knowledge gaps. Identify promising new research directions through intelligent cross-domain knowledge synthesis.
---
**Massive Document Knowledge Q&A**: Multi-agent problem solving with exact citations
**Interactive Learning Visualization**: Step-by-step visual explanations with personal Q&A
**Knowledge Reinforcement**
- *Custom Questions*: Auto-validated practice question generation
- *Mimic Questions*: Clone exam style for authentic practice
**Deep Research & Idea Generation**
- *Deep Research*: Knowledge extension from textbooks with RAG, web, and paper search
- *Automated IdeaGen*: Brainstorming and concept synthesis with a dual-filter workflow
- *Interactive IdeaGen*: RAG- and web-search-powered co-writer with podcast generation
**All-in-One Knowledge System**
- *Personal Knowledge Base*: Build and organize your own knowledge repository
- *Personal Notebook*: Your contextual memory for learning sessions

Use DeepTutor in dark mode!
---
## DeepTutor's Framework
### User Interface Layer
- **Intuitive Interaction**: Simple bidirectional query-response flow.
- **Structured Output**: Response generation that organizes complex information into actionable outputs.
### Intelligent Agent Modules
- **Problem Solving & Assessment**: Step-by-step problem solving and custom assessment generation.
- **Research & Learning**: Deep Research for topic exploration and Guided Learning with visualization.
- **Idea Generation**: Automated and interactive concept development with multi-source insights.
### Tool Integration Layer
- **Information Retrieval**: RAG hybrid retrieval, real-time web search, and academic paper databases.
- **Processing & Analysis**: Python code execution, query item lookup, and PDF parsing for document analysis.
### Knowledge & Memory Foundation
- **Knowledge Graph**: Entity-relation mapping for semantic connections and knowledge discovery.
- **Vector Store**: Embedding-based semantic search for intelligent content retrieval.
- **Memory System**: Session state management and citation tracking for contextual continuity.
## Todo
> Star the repo to follow our future updates!
- [ ] Refactor RAG Module (see [Discussions](https://github.com/HKUDS/DeepTutor/discussions))
- [ ] Deep-coding from idea generation
- [ ] Personalized interaction with Notebook
## Getting Started
### Step 1: Pre-Configuration
**① Clone Repository**
```bash
git clone https://github.com/HKUDS/DeepTutor.git
cd DeepTutor
```
**② Set Up Environment Variables**
```bash
cp .env.example .env
# Edit .env file with your API keys
```
**Environment Variables Reference**
| Variable | Required | Description |
|:---|:---:|:---|
| `LLM_MODEL` | **Yes** | Model name (e.g., `gpt-4o`) |
| `LLM_BINDING_API_KEY` | **Yes** | Your LLM API key |
| `LLM_BINDING_HOST` | **Yes** | API endpoint URL |
| `EMBEDDING_MODEL` | **Yes** | Embedding model name |
| `EMBEDDING_BINDING_API_KEY` | **Yes** | Embedding API key |
| `EMBEDDING_BINDING_HOST` | **Yes** | Embedding API endpoint |
| `BACKEND_PORT` | No | Backend port (default: `8001`) |
| `FRONTEND_PORT` | No | Frontend port (default: `3782`) |
| `TTS_*` | No | Text-to-Speech settings |
| `PERPLEXITY_API_KEY` | No | For web search |
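For reference, a minimal `.env` assembled from the table above might look like the following (every value is a placeholder; substitute your own model names, keys, and endpoints):

```bash
# Required: LLM settings
LLM_MODEL=gpt-4o
LLM_BINDING_API_KEY=sk-your-key-here
LLM_BINDING_HOST=https://api.openai.com/v1

# Required: embedding settings
EMBEDDING_MODEL=text-embedding-3-large
EMBEDDING_BINDING_API_KEY=sk-your-key-here
EMBEDDING_BINDING_HOST=https://api.openai.com/v1

# Optional: ports (defaults shown)
BACKEND_PORT=8001
FRONTEND_PORT=3782
```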
**③ Configure Ports & LLM** *(Optional)*
- **Ports**: Edit `config/main.yaml` → `server.backend_port` / `server.frontend_port`
- **LLM**: Edit `config/agents.yaml` → `temperature` / `max_tokens` per module
- See [Configuration Docs](config/README.md) for details
**④ Try Demo Knowledge Bases** *(Optional)*

**Available Demos**
- **Research Papers**: 5 papers from our lab ([AI-Researcher](https://github.com/HKUDS/AI-Researcher), [LightRAG](https://github.com/HKUDS/LightRAG), etc.)
- **Data Science Textbook**: 8 chapters, 296 pages ([Book Link](https://ma-lab-berkeley.github.io/deep-representation-learning-book/))
1. Download from [Google Drive](https://drive.google.com/drive/folders/1iWwfZXiTuQKQqUYb5fGDZjLCeTUP6DA6?usp=sharing)
2. Extract into `data/` directory
> Demo KBs use `text-embedding-3-large` with `dimensions = 3072`
**⑤ Create Your Own Knowledge Base** *(After Launch)*
1. Go to http://localhost:3782/knowledge
2. Click "New Knowledge Base" → Enter name → Upload PDF/TXT/MD files
3. Monitor progress in terminal
---
### Step 2: Choose Your Installation Method
**Docker Deployment** *(recommended: no Python/Node.js setup required)*
---
**Prerequisites**: [Docker](https://docs.docker.com/get-docker/) & [Docker Compose](https://docs.docker.com/compose/install/)
**Option A: Pre-built Image (Fastest)**
```bash
# Pull and run pre-built image (Linux/macOS)
docker run -d --name deeptutor \
-p 8001:8001 -p 3782:3782 \
-e LLM_MODEL=gpt-4o \
-e LLM_BINDING_API_KEY=your-api-key \
-e LLM_BINDING_HOST=https://api.openai.com/v1 \
-e EMBEDDING_MODEL=text-embedding-3-large \
-e EMBEDDING_BINDING_API_KEY=your-api-key \
-e EMBEDDING_BINDING_HOST=https://api.openai.com/v1 \
-v $(pwd)/data:/app/data \
-v $(pwd)/config:/app/config:ro \
ghcr.io/hkuds/deeptutor:latest
```

```powershell
# Windows PowerShell: use ${PWD} instead of $(pwd)
docker run -d --name deeptutor `
-p 8001:8001 -p 3782:3782 `
-e LLM_MODEL=gpt-4o `
-e LLM_BINDING_API_KEY=your-api-key `
-e LLM_BINDING_HOST=https://api.openai.com/v1 `
-e EMBEDDING_MODEL=text-embedding-3-large `
-e EMBEDDING_BINDING_API_KEY=your-api-key `
-e EMBEDDING_BINDING_HOST=https://api.openai.com/v1 `
-v ${PWD}/data:/app/data `
-v ${PWD}/config:/app/config:ro `
ghcr.io/hkuds/deeptutor:latest
```
Or use with `.env` file:
```bash
docker run -d --name deeptutor \
-p 8001:8001 -p 3782:3782 \
--env-file .env \
-v $(pwd)/data:/app/data \
-v $(pwd)/config:/app/config:ro \
ghcr.io/hkuds/deeptutor:latest
```
**Option B: Build from Source**
```bash
# Build and start (~5-10 min first run)
docker compose up --build -d
# View logs
docker compose logs -f
```
**Commands**:
```bash
docker compose up -d # Start
docker compose logs -f # Logs
docker compose down # Stop
docker compose up --build # Rebuild
docker pull ghcr.io/hkuds/deeptutor:latest # Update image
```
> **Dev Mode**: Add `-f docker-compose.dev.yml`
**Manual Installation** *(for development or non-Docker environments)*
---
**Prerequisites**: Python 3.10+, Node.js 18+
**Set Up Environment**:
```bash
# Using conda (Recommended)
conda create -n deeptutor python=3.10
conda activate deeptutor
# Or using venv
python -m venv venv
source venv/bin/activate
```
**Install Dependencies**:
```bash
bash scripts/install_all.sh
# Or manually:
pip install -r requirements.txt
npm install --prefix web
```
**Launch**:
```bash
# Start web interface
python scripts/start_web.py
# Or CLI only
python scripts/start.py
# Stop: Ctrl+C
```
### Access URLs
| Service | URL | Description |
|:---:|:---|:---|
| **Frontend** | http://localhost:3782 | Main web interface |
| **API Docs** | http://localhost:8001/docs | Interactive API documentation |
---
## Data Storage
All user content and system data are stored in the `data/` directory:
```
data/
├── knowledge_bases/        # Knowledge base storage
└── user/                   # User activity data
    ├── solve/              # Problem solving results and artifacts
    ├── question/           # Generated questions
    ├── research/           # Research reports and cache
    ├── co-writer/          # Interactive IdeaGen documents and audio files
    ├── notebook/           # Notebook records and metadata
    ├── guide/              # Guided learning sessions
    ├── logs/               # System logs
    └── run_code_workspace/ # Code execution workspace
```
Results are automatically saved during all activities. Directories are created automatically as needed.
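Because run folders are timestamped as `solve_YYYYMMDD_HHMMSS` (and similarly for other modules), the newest run sorts last lexicographically. A small sketch of finding the latest run; the directory names here are simulated for illustration:

```python
from pathlib import Path
import tempfile

# Simulated run folders; real runs live under data/user/solve/ and friends
root = Path(tempfile.mkdtemp())
for name in ["solve_20260105_093000", "solve_20260107_181530", "solve_20260106_120000"]:
    (root / name).mkdir()

# YYYYMMDD_HHMMSS timestamps sort lexicographically in chronological order,
# so the newest run is simply the maximum directory name.
latest = max(p.name for p in root.iterdir() if p.name.startswith("solve_"))
print(latest)  # solve_20260107_181530
```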
## Core Modules
### Smart Solver
> **Intelligent problem-solving system** based on **Analysis Loop + Solve Loop** dual-loop architecture, supporting multi-mode reasoning and dynamic knowledge retrieval.
**Core Features**
| Feature | Description |
|:---:|:---|
| Dual-Loop Architecture | **Analysis Loop**: InvestigateAgent → NoteAgent<br>**Solve Loop**: PlanAgent → ManagerAgent → SolveAgent → CheckAgent → Format |
| Multi-Agent Collaboration | Specialized agents: InvestigateAgent, NoteAgent, PlanAgent, ManagerAgent, SolveAgent, CheckAgent |
| Real-time Streaming | WebSocket transmission with live reasoning process display |
| Tool Integration | RAG (naive/hybrid), Web Search, Query Item, Code Execution |
| Persistent Memory | JSON-based memory files for context preservation |
| Citation Management | Structured citations with reference tracking |
**Usage**
1. Visit http://localhost:{frontend_port}/solver
2. Select a knowledge base
3. Enter your question, click "Solve"
4. Watch the real-time reasoning process and final answer
**Python API**
```python
import asyncio
from src.agents.solve import MainSolver

async def main():
    solver = MainSolver(kb_name="ai_textbook")
    result = await solver.solve(
        question="Calculate the linear convolution of x=[1,2,3] and h=[4,5]",
        mode="auto"
    )
    print(result['formatted_solution'])

asyncio.run(main())
```
**Output Location**
```
data/user/solve/solve_YYYYMMDD_HHMMSS/
├── investigate_memory.json    # Analysis Loop memory
├── solve_chain.json           # Solve Loop steps & tool records
├── citation_memory.json       # Citation management
├── final_answer.md            # Final solution (Markdown)
├── performance_report.json    # Performance monitoring
└── artifacts/                 # Code execution outputs
```
---
### Question Generator
> **Dual-mode question generation system** supporting **custom knowledge-based generation** and **reference exam paper mimicking** with automatic validation.
**Core Features**
| Feature | Description |
|:---:|:---|
| Custom Mode | **Background Knowledge** → **Question Planning** → **Generation** → **Single-Pass Validation**<br>Analyzes question relevance without rejection logic |
| Mimic Mode | **PDF Upload** → **MinerU Parsing** → **Question Extraction** → **Style Mimicking**<br>Generates questions based on reference exam structure |
| ReAct Engine | QuestionGenerationAgent with autonomous decision-making (think → act → observe) |
| Validation Analysis | Single-pass relevance analysis with `kb_coverage` and `extension_points` |
| Question Types | Multiple choice, fill-in-the-blank, calculation, written response, etc. |
| Batch Generation | Parallel processing with progress tracking |
| Complete Persistence | All intermediate files saved (background knowledge, plan, individual results) |
| Timestamped Output | Mimic mode creates batch folders: `mimic_YYYYMMDD_HHMMSS_{pdf_name}/` |
**Usage**
**Custom Mode:**
1. Visit http://localhost:{frontend_port}/question
2. Fill in requirements (topic, difficulty, question type, count)
3. Click "Generate Questions"
4. View generated questions with validation reports
**Mimic Mode:**
1. Visit http://localhost:{frontend_port}/question
2. Switch to "Mimic Exam" tab
3. Upload PDF or provide parsed exam directory
4. Wait for parsing → extraction → generation
5. View generated questions alongside original references
**Python API**

**Custom Mode - Full Pipeline:**
```python
import asyncio
from src.agents.question import AgentCoordinator

async def main():
    coordinator = AgentCoordinator(
        kb_name="ai_textbook",
        output_dir="data/user/question"
    )
    # Generate multiple questions from a text requirement
    result = await coordinator.generate_questions_custom(
        requirement_text="Generate 3 medium-difficulty questions about deep learning basics",
        difficulty="medium",
        question_type="choice",
        count=3
    )
    print(f"Generated {result['completed']}/{result['requested']} questions")
    for q in result['results']:
        print(f"- Relevance: {q['validation']['relevance']}")

asyncio.run(main())
```
**Mimic Mode - PDF Upload:**
```python
import asyncio
from src.agents.question.tools.exam_mimic import mimic_exam_questions

async def main():
    result = await mimic_exam_questions(
        pdf_path="exams/midterm.pdf",
        kb_name="calculus",
        output_dir="data/user/question/mimic_papers",
        max_questions=5
    )
    print(f"Generated {result['successful_generations']} questions")
    print(f"Output: {result['output_file']}")

asyncio.run(main())
```
**Output Location**

**Custom Mode:**
```
data/user/question/custom_YYYYMMDD_HHMMSS/
├── background_knowledge.json   # RAG retrieval results
├── question_plan.json          # Question planning
├── question_1_result.json      # Individual question results
├── question_2_result.json
└── ...
```
**Mimic Mode:**
```
data/user/question/mimic_papers/
└── mimic_YYYYMMDD_HHMMSS_{pdf_name}/
    ├── {pdf_name}.pdf                                       # Original PDF
    ├── auto/{pdf_name}.md                                   # MinerU parsed markdown
    ├── {pdf_name}_YYYYMMDD_HHMMSS_questions.json            # Extracted questions
    └── {pdf_name}_YYYYMMDD_HHMMSS_generated_questions.json  # Generated questions
```
---
### Guided Learning
> **Personalized learning system** based on notebook content, automatically generating progressive learning paths through interactive pages and smart Q&A.
**Core Features**
| Feature | Description |
|:---:|:---|
| Multi-Agent Architecture | **LocateAgent**: Identifies 3-5 progressive knowledge points<br>**InteractiveAgent**: Converts them into visual HTML pages<br>**ChatAgent**: Provides contextual Q&A<br>**SummaryAgent**: Generates learning summaries |
| Smart Knowledge Location | Automatic analysis of notebook content |
| Interactive Pages | HTML page generation with bug fixing |
| Smart Q&A | Context-aware answers with explanations |
| Progress Tracking | Real-time status with session persistence |
| Cross-Notebook Support | Select records from multiple notebooks |
**Usage Flow**
1. **Select Notebook(s)** → Choose one or multiple notebooks (cross-notebook selection supported)
2. **Generate Learning Plan** → LocateAgent identifies 3-5 core knowledge points
3. **Start Learning** → InteractiveAgent generates HTML visualizations
4. **Learning Interaction** → Ask questions, click "Next" to proceed
5. **Complete Learning** → SummaryAgent generates a learning summary
**Output Location**
```
data/user/guide/
└── session_{session_id}.json   # Complete session state, knowledge points, chat history
```
---
### Interactive IdeaGen (Co-Writer)
> **Intelligent Markdown editor** supporting AI-assisted writing, auto-annotation, and TTS narration.
**Core Features**
| Feature | Description |
|:---:|:---|
| Rich Text Editing | Full Markdown syntax support with live preview |
| EditAgent | **Rewrite**: Custom instructions with optional RAG/web context<br>**Shorten**: Compress while preserving key information<br>**Expand**: Add details and context |
| Auto-Annotation | Automatic key content identification and marking |
| NarratorAgent | Script generation, TTS audio, multiple voices (Cherry, Stella, Annie, Cally, Eva, Bella) |
| Context Enhancement | Optional RAG or web search for additional context |
| Multi-Format Export | Markdown, PDF, etc. |
**Usage**
1. Visit http://localhost:{frontend_port}/co_writer
2. Enter or paste text in the editor
3. Use AI features: Rewrite, Shorten, Expand, Auto Mark, Narrate
4. Export to Markdown or PDF
**Output Location**
```
data/user/co-writer/
├── audio/                  # TTS audio files
│   └── {operation_id}.mp3
├── tool_calls/             # Tool call history
│   └── {operation_id}_{tool_type}.json
└── history.json            # Edit history
```
---
### Deep Research

> **DR-in-KG** (Deep Research in Knowledge Graph): a systematic deep research system built on a **Dynamic Topic Queue** architecture, enabling multi-agent collaboration across three phases: **Planning → Researching → Reporting**.
**Core Features**
| Feature | Description |
|:---:|:---|
| Three-Phase Architecture | **Phase 1 (Planning)**: RephraseAgent (topic optimization) + DecomposeAgent (subtopic decomposition)<br>**Phase 2 (Researching)**: ManagerAgent (queue scheduling) + ResearchAgent (research decisions) + NoteAgent (info compression)<br>**Phase 3 (Reporting)**: Deduplication → three-level outline generation → report writing with citations |
| Dynamic Topic Queue | Core scheduling system with TopicBlock state management: `PENDING → RESEARCHING → COMPLETED/FAILED`. Supports dynamic topic discovery during research |
| Execution Modes | **Series Mode**: Sequential topic processing<br>**Parallel Mode**: Concurrent multi-topic processing with `AsyncCitationManagerWrapper` for thread-safe operations |
| Multi-Tool Integration | **RAG** (hybrid/naive), **Query Item** (entity lookup), **Paper Search**, **Web Search**, **Code Execution**, dynamically selected by ResearchAgent |
| Unified Citation System | Centralized CitationManager as single source of truth for citation ID generation, ref_number mapping, and deduplication |
| Preset Configurations | **quick**: Fast research (1-2 subtopics, 1-2 iterations)<br>**medium/standard**: Balanced depth (5 subtopics, 4 iterations)<br>**deep**: Thorough research (8 subtopics, 7 iterations)<br>**auto**: Agent autonomously decides depth |
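The TopicBlock lifecycle described above can be sketched as a small state machine. This is a simplified illustration only; the class and method names below are hypothetical stand-ins for the real implementation under `src/agents/research`:

```python
from dataclasses import dataclass, field
from enum import Enum
from typing import Optional

class TopicState(Enum):
    PENDING = "pending"
    RESEARCHING = "researching"
    COMPLETED = "completed"
    FAILED = "failed"

@dataclass
class TopicBlock:
    topic: str
    state: TopicState = TopicState.PENDING
    notes: list = field(default_factory=list)

class DynamicTopicQueue:
    """Holds topic blocks; topics can be added while research is running."""

    def __init__(self, max_length: int = 5):
        self.max_length = max_length
        self.blocks = []

    def add(self, topic: str) -> bool:
        if len(self.blocks) >= self.max_length:
            return False  # queue is full; the new topic is dropped
        self.blocks.append(TopicBlock(topic))
        return True

    def next_pending(self) -> Optional[TopicBlock]:
        # PENDING -> RESEARCHING on checkout
        for block in self.blocks:
            if block.state is TopicState.PENDING:
                block.state = TopicState.RESEARCHING
                return block
        return None

queue = DynamicTopicQueue(max_length=5)
queue.add("Attention Mechanisms")
queue.add("Positional Encoding")
block = queue.next_pending()        # Attention Mechanisms -> RESEARCHING
block.state = TopicState.COMPLETED  # research on this block finished
print(block.topic, block.state.value)
```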
**Citation System Architecture**
The citation system follows a centralized design with CitationManager as the single source of truth:
```
┌─────────────────────────────────────────────────────────────┐
│                       CitationManager                       │
│  ┌───────────────┐  ┌────────────────┐  ┌───────────────┐   │
│  │ ID Generation │  │ ref_number Map │  │ Deduplication │   │
│  │   PLAN-XX     │  │ citation_id →  │  │ (papers only) │   │
│  │   CIT-X-XX    │  │   ref_number   │  │               │   │
│  └───────┬───────┘  └───────┬────────┘  └───────┬───────┘   │
└──────────┼──────────────────┼───────────────────┼───────────┘
           │                  │                   │
   ┌───────┴────────┐ ┌───────┴────────┐  ┌───────┴────────┐
   │ DecomposeAgent │ │ ReportingAgent │  │   References   │
   │ ResearchAgent  │ │  (inline [N])  │  │    Section     │
   │   NoteAgent    │ └────────────────┘  └────────────────┘
   └────────────────┘
```
| Component | Description |
|:---:|:---|
| ID Format | **PLAN-XX** (planning stage RAG queries) + **CIT-X-XX** (research stage, X=block number) |
| ref_number Mapping | Sequential 1-based numbers built from sorted citation IDs, with paper deduplication |
| Inline Citations | Simple `[N]` format in LLM output, post-processed to clickable `[[N]](#ref-N)` links |
| Citation Table | Clear reference table provided to LLM: `Cite as [1] → (RAG) query preview...` |
| Post-processing | Automatic format conversion + validation to remove invalid citation references |
| Parallel Safety | Thread-safe async methods (`get_next_citation_id_async`, `add_citation_async`) for concurrent execution |
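The post-processing step described above can be approximated with a single regex pass. This is a sketch, not the actual implementation; `valid_refs` stands in for the ref_number map:

```python
import re

def postprocess_citations(text: str, valid_refs: set) -> str:
    """Convert [N] into clickable [[N]](#ref-N); drop citations whose N is unknown."""
    def repl(match):
        n = int(match.group(1))
        return f"[[{n}]](#ref-{n})" if n in valid_refs else ""
    # Match [N] not already followed by "(" (i.e. not already a link)
    return re.sub(r"\[(\d+)\](?!\()", repl, text)

report = "Attention is central [1], as is positional encoding [2]."
print(postprocess_citations(report, valid_refs={1, 2}))
# Attention is central [[1]](#ref-1), as is positional encoding [[2]](#ref-2).
```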
**Parallel Execution Architecture**
When `execution_mode: "parallel"` is enabled, multiple topic blocks are researched concurrently:
```
┌───────────────────────────────────────────────────────────────────┐
│                    Parallel Research Execution                    │
├───────────────────────────────────────────────────────────────────┤
│                                                                   │
│  DynamicTopicQueue                   AsyncCitationManagerWrapper  │
│  ┌───────────────────┐               ┌─────────────────────────┐  │
│  │ Topic 1 (PENDING) │──┐            │ Thread-safe wrapper for │  │
│  │ Topic 2 (PENDING) │──┼── asyncio  │ CitationManager:        │  │
│  │ Topic 3 (PENDING) │──┤  Semaphore │ • get_next_citation_    │  │
│  │ Topic 4 (PENDING) │──┤  (max=5)   │   id_async()            │  │
│  │ Topic 5 (PENDING) │──┘            │ • add_citation_async()  │  │
│  └─────────┬─────────┘               └────────────┬────────────┘  │
│            │                                      │               │
│            ▼                                      │               │
│  ┌────────────────────────────────────────────────┴────────────┐  │
│  │               Concurrent ResearchAgent Tasks                │  │
│  │  ┌─────────┐  ┌─────────┐  ┌─────────┐  ┌─────────┐         │  │
│  │  │ Task 1  │  │ Task 2  │  │ Task 3  │  │ Task 4  │  ...    │  │
│  │  │(Topic 1)│  │(Topic 2)│  │(Topic 3)│  │(Topic 4)│         │  │
│  │  └────┬────┘  └────┬────┘  └────┬────┘  └────┬────┘         │  │
│  │       └────────────┴─────┬──────┴────────────┘              │  │
│  │                          ▼                                  │  │
│  │              AsyncManagerAgentWrapper                       │  │
│  │            (Thread-safe queue updates)                      │  │
│  └─────────────────────────────────────────────────────────────┘  │
└───────────────────────────────────────────────────────────────────┘
```
| Component | Description |
|:---:|:---|
| `asyncio.Semaphore` | Limits concurrent tasks to `max_parallel_topics` (default: 5) |
| `AsyncCitationManagerWrapper` | Wraps CitationManager with `asyncio.Lock()` for thread-safe ID generation |
| `AsyncManagerAgentWrapper` | Ensures queue state updates are atomic across parallel tasks |
| Real-time Progress | Live display of all active research tasks with status indicators |
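The concurrency pattern in the table above (a semaphore bounding parallel topic research, plus a lock guarding shared citation state) can be sketched as follows. `AsyncCitationCounter` is a hypothetical stand-in for `AsyncCitationManagerWrapper`:

```python
import asyncio

class AsyncCitationCounter:
    """Stand-in for AsyncCitationManagerWrapper: lock-guarded ID generation."""

    def __init__(self):
        self._lock = asyncio.Lock()
        self._count = 0

    async def next_id(self, block: int) -> str:
        async with self._lock:  # serializes ID generation across tasks
            self._count += 1
            return f"CIT-{block}-{self._count:02d}"

async def research_topic(topic: str, sem: asyncio.Semaphore,
                         citations: AsyncCitationCounter) -> str:
    async with sem:  # at most max_parallel_topics tasks run at once
        cid = await citations.next_id(block=1)
        await asyncio.sleep(0)  # placeholder for RAG / web search / code execution
        return f"{topic} -> {cid}"

async def main():
    sem = asyncio.Semaphore(5)  # max_parallel_topics
    citations = AsyncCitationCounter()
    topics = [f"Topic {i}" for i in range(1, 6)]
    return list(await asyncio.gather(
        *(research_topic(t, sem, citations) for t in topics)))

results = asyncio.run(main())
print(results)
```

Because every ID is handed out under the lock, no two concurrent tasks can receive the same citation ID even though they run interleaved.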
**Agent Responsibilities**
| Agent | Phase | Responsibility |
|:---:|:---:|:---|
| RephraseAgent | Planning | Optimizes user input topic, supports multi-turn user interaction for refinement |
| DecomposeAgent | Planning | Decomposes topic into subtopics with RAG context, obtains citation IDs from CitationManager |
| ManagerAgent | Researching | Queue state management, task scheduling, dynamic topic addition |
| ResearchAgent | Researching | Knowledge sufficiency check, query planning, tool selection, requests citation IDs before each tool call |
| NoteAgent | Researching | Compresses raw tool outputs into summaries, creates ToolTraces with pre-assigned citation IDs |
| ReportingAgent | Reporting | Builds citation map, generates three-level outline, writes report sections with citation tables, post-processes citations |
**Report Generation Pipeline**
```
1. Build Citation Map   → CitationManager.build_ref_number_map()
2. Generate Outline     → Three-level headings (H1 → H2 → H3)
3. Write Sections       → LLM uses [N] citations with the provided citation table
4. Post-process         → Convert [N] to [[N]](#ref-N), validate references
5. Generate References  → Academic-style entries with collapsible source details
```
**Usage**
1. Visit http://localhost:{frontend_port}/research
2. Enter research topic
3. Select research mode (quick/medium/deep/auto)
4. Watch real-time progress with parallel/series execution
5. View structured report with clickable inline citations
6. Export as Markdown or PDF (with proper page splitting and Mermaid diagram support)
**CLI**
```bash
# Quick mode (fast research)
python src/agents/research/main.py --topic "Deep Learning Basics" --preset quick
# Medium mode (balanced)
python src/agents/research/main.py --topic "Transformer Architecture" --preset medium
# Deep mode (thorough research)
python src/agents/research/main.py --topic "Graph Neural Networks" --preset deep
# Auto mode (agent decides depth)
python src/agents/research/main.py --topic "Reinforcement Learning" --preset auto
```
**Python API**
```python
import asyncio
from src.agents.research import ResearchPipeline
from src.core.core import get_llm_config, load_config_with_main

async def main():
    # Load configuration (main.yaml merged with any module-specific overrides)
    config = load_config_with_main("research_config.yaml")
    llm_config = get_llm_config()

    # Create pipeline (agent parameters loaded from agents.yaml automatically)
    pipeline = ResearchPipeline(
        config=config,
        api_key=llm_config["api_key"],
        base_url=llm_config["base_url"],
        kb_name="ai_textbook"  # Optional: override knowledge base
    )

    # Run research
    result = await pipeline.run(topic="Attention Mechanisms in Deep Learning")
    print(f"Report saved to: {result['final_report_path']}")

asyncio.run(main())
```
**Output Location**
```
data/user/research/
├── reports/                            # Final research reports
│   ├── research_YYYYMMDD_HHMMSS.md     # Markdown report with clickable citations [[N]](#ref-N)
│   └── research_*_metadata.json        # Research metadata and statistics
└── cache/                              # Research process cache
    └── research_YYYYMMDD_HHMMSS/
        ├── queue.json                  # DynamicTopicQueue state (TopicBlocks + ToolTraces)
        ├── citations.json              # Citation registry with ID counters and ref_number mapping
        │                               #   - citations: {citation_id: citation_info}
        │                               #   - counters: {plan_counter, block_counters}
        ├── step1_planning.json         # Planning phase results (subtopics + PLAN-XX citations)
        ├── planning_progress.json      # Planning progress events
        ├── researching_progress.json   # Researching progress events
        ├── reporting_progress.json     # Reporting progress events
        ├── outline.json                # Three-level report outline structure
        └── token_cost_summary.json     # Token usage statistics
```
**Citation File Structure** (`citations.json`):
```json
{
"research_id": "research_20241209_120000",
"citations": {
"PLAN-01": {"citation_id": "PLAN-01", "tool_type": "rag_hybrid", "query": "...", "summary": "..."},
"CIT-1-01": {"citation_id": "CIT-1-01", "tool_type": "paper_search", "papers": [...], ...}
},
"counters": {
"plan_counter": 2,
"block_counters": {"1": 3, "2": 2}
}
}
```
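Given a citation registry like the one above, the ref_number mapping can be reproduced from the citation IDs alone by sorting them and assigning 1-based numbers. A sketch only: paper deduplication is omitted, and the real CitationManager may order IDs differently:

```python
def build_ref_number_map(citation_ids):
    """Sequential 1-based ref numbers assigned over the sorted citation IDs."""
    return {cid: n for n, cid in enumerate(sorted(citation_ids), start=1)}

ids = ["PLAN-01", "CIT-1-01", "CIT-1-02", "CIT-2-01"]
print(build_ref_number_map(ids))
# {'CIT-1-01': 1, 'CIT-1-02': 2, 'CIT-2-01': 3, 'PLAN-01': 4}
```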
**Configuration Options**
Key configuration in `config/main.yaml` (research section) and `config/agents.yaml`:
```yaml
# config/agents.yaml - Agent LLM parameters
research:
  temperature: 0.5
  max_tokens: 12000
```

```yaml
# config/main.yaml - Research settings
research:
  # Execution mode
  researching:
    execution_mode: "parallel"   # "series" or "parallel"
    max_parallel_topics: 5       # Max concurrent topics
    max_iterations: 5            # Max iterations per topic
    # Tool switches
    enable_rag_hybrid: true      # Hybrid RAG retrieval
    enable_rag_naive: true       # Basic RAG retrieval
    enable_paper_search: true    # Academic paper search
    enable_web_search: true      # Web search (also controlled by tools.web_search.enabled)
    enable_run_code: true        # Code execution
  # Queue limits
  queue:
    max_length: 5                # Maximum topics in queue
  # Reporting
  reporting:
    enable_inline_citations: true  # Enable clickable [N] citations in the report
  # Presets: quick, medium, deep, auto

# Global tool switches in the tools section
tools:
  web_search:
    enabled: true                # Global web search switch (higher priority)
```
---
### Automated IdeaGen
> **Research idea generation system** that extracts knowledge points from notebook records and generates research ideas through multi-stage filtering.
**Core Features**
| Feature | Description |
|:---:|:---|
| MaterialOrganizerAgent | Extracts knowledge points from notebook records |
| Multi-Stage Filtering | **Loose Filter** → **Explore Ideas** (5+ per point) → **Strict Filter** → **Generate Markdown** |
| Idea Exploration | Innovative thinking from multiple dimensions |
| Structured Output | Organized markdown with knowledge points and ideas |
| Progress Callbacks | Real-time updates for each stage |
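The loose-filter → explore → strict-filter pipeline amounts to progressively narrowing a candidate pool. A toy sketch with placeholder filter criteria; in the real workflow each stage is LLM-driven rather than rule-based:

```python
def loose_filter(points):
    # Placeholder criterion: keep anything with minimal substance
    return [p for p in points if len(p) > 3]

def explore_ideas(points, per_point=5):
    # The real workflow has an LLM draft 5+ ideas per knowledge point
    return {p: [f"{p}: idea {i}" for i in range(1, per_point + 1)] for p in points}

def strict_filter(ideas, keep=2):
    # The real strict filter scores novelty/feasibility; here, keep the first few
    return {p: lst[:keep] for p, lst in ideas.items()}

def to_markdown(ideas):
    lines = []
    for point, lst in ideas.items():
        lines.append(f"## {point}")
        lines.extend(f"- {idea}" for idea in lst)
    return "\n".join(lines)

points = ["attention", "RL", "graph neural networks"]
md = to_markdown(strict_filter(explore_ideas(loose_filter(points))))
print(md.splitlines()[0])  # ## attention
```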
**Usage**
1. Visit http://localhost:{frontend_port}/ideagen
2. Select a notebook with records
3. Optionally provide user thoughts/preferences
4. Click "Generate Ideas"
5. View generated research ideas organized by knowledge points
**Python API**
```python
import asyncio
from src.agents.ideagen import IdeaGenerationWorkflow, MaterialOrganizerAgent
from src.core.core import get_llm_config

async def main():
    llm_config = get_llm_config()

    # Step 1: Extract knowledge points from materials
    organizer = MaterialOrganizerAgent(
        api_key=llm_config["api_key"],
        base_url=llm_config["base_url"]
    )
    knowledge_points = await organizer.extract_knowledge_points(
        "Your learning materials or notebook content here"
    )

    # Step 2: Generate research ideas
    workflow = IdeaGenerationWorkflow(
        api_key=llm_config["api_key"],
        base_url=llm_config["base_url"]
    )
    result = await workflow.process(knowledge_points)
    print(result)  # Markdown-formatted research ideas

asyncio.run(main())
```
---
### Dashboard + Knowledge Base Management
> **Unified system entry** providing activity tracking, knowledge base management, and system status monitoring.
**Key Features**
| Feature | Description |
|:---:|:---|
| Activity Statistics | Recent solving/generation/research records |
| Knowledge Base Overview | KB list, statistics, incremental updates |
| Notebook Statistics | Notebook counts, record distribution |
| Quick Actions | One-click access to all modules |
**Usage**
- **Web Interface**: Visit http://localhost:{frontend_port} to view system overview
- **Create KB**: Click "New Knowledge Base", upload PDF/Markdown documents
- **View Activity**: Check recent learning activities on Dashboard
---
### Notebook
> **Unified learning record management**, connecting outputs from all modules to create a personalized learning knowledge base.
**Core Features**
| Feature | Description |
|:---:|:---|
| Multi-Notebook Management | Create, edit, delete notebooks |
| Unified Record Storage | Integrate solving/generation/research/Interactive IdeaGen records |
| Categorization Tags | Auto-categorize by type, knowledge base |
| Custom Appearance | Color, icon personalization |
**Usage**
1. Visit http://localhost:{frontend_port}/notebook
2. Create new notebook (set name, description, color, icon)
3. After completing tasks in other modules, click "Add to Notebook"
4. View and manage all records on the notebook page
---
## FAQ
**Backend fails to start?**
**Checklist**
- Confirm Python version >= 3.10
- Confirm all dependencies installed: `pip install -r requirements.txt`
- Check if port 8001 is in use (configurable in `config/main.yaml`)
- Check `.env` file configuration
**Solutions**
- **Change port**: Edit `config/main.yaml` server.backend_port
- **Check logs**: Review terminal error messages
**Port occupied after Ctrl+C?**
**Problem**
After pressing Ctrl+C during a running task (e.g., deep research), restarting shows "port already in use" error.
**Cause**
Ctrl+C sometimes only terminates the frontend process while the backend continues running in the background.
**Solution**
```bash
# macOS/Linux: find and kill the process
lsof -i :8001
kill -9 <PID>

# Windows: find and kill the process
netstat -ano | findstr :8001
taskkill /PID <PID> /F
```
Then restart the service with `python scripts/start_web.py`.
**`npm: command not found` error?**
**Problem**
Running `scripts/start_web.py` shows `npm: command not found` or exit status 127.
**Checklist**
- Check if npm is installed: `npm --version`
- Check if Node.js is installed: `node --version`
- Confirm conda environment is activated (if using conda)
**Solutions**
```bash
# Option A: Using Conda (Recommended)
conda install -c conda-forge nodejs
# Option B: Using Official Installer
# Download from https://nodejs.org/
# Option C: Using nvm
nvm install 18
nvm use 18
```
**Verify Installation**
```bash
node --version # Should show v18.x.x or higher
npm --version # Should show version number
```
**Frontend cannot connect to backend?**
**Checklist**
- Confirm backend is running (visit http://localhost:8001/docs)
- Check browser console for error messages
**Solution**
Create `.env.local` in `web` directory:
```bash
NEXT_PUBLIC_API_BASE=http://localhost:8001
```
**WebSocket connection fails?**
**Checklist**
- Confirm backend is running
- Check firewall settings
- Confirm WebSocket URL is correct
**Solution**
- **Check backend logs**
- **Confirm URL format**: `ws://localhost:8001/api/v1/...`
**Where are module outputs stored?**
| Module | Output Path |
|:---:|:---|
| Solve | `data/user/solve/solve_YYYYMMDD_HHMMSS/` |
| Question | `data/user/question/question_YYYYMMDD_HHMMSS/` |
| Research | `data/user/research/reports/` |
| Interactive IdeaGen | `data/user/co-writer/` |
| Notebook | `data/user/notebook/` |
| Guide | `data/user/guide/session_{session_id}.json` |
| Logs | `data/user/logs/` |
**How to add a new knowledge base?**
**Web Interface**
1. Visit http://localhost:{frontend_port}/knowledge
2. Click "New Knowledge Base"
3. Enter knowledge base name
4. Upload PDF/TXT/MD documents
5. System will process documents in background
**CLI**
```bash
python -m src.knowledge.start_kb init --docs <path/to/documents>
```
**How to incrementally add documents to an existing KB?**
**CLI (Recommended)**
```bash
python -m src.knowledge.add_documents --docs <path/to/new/documents>
```
**Benefits**
- Processes only new documents, saving time and API costs
- Automatically merges with existing knowledge graph
- Preserves all existing data
**Numbered items extraction failed with a `uvloop.Loop` error?**
**Problem**
When initializing a knowledge base, you may encounter this error:
```
ValueError: Can't patch loop of type <class 'uvloop.Loop'>
```
This occurs because Uvicorn uses the `uvloop` event loop by default, which is incompatible with `nest_asyncio`.
**Solution**
Use one of the following methods to extract numbered items:
```bash
# Option 1: Using the shell script (recommended)
./scripts/extract_numbered_items.sh
# Option 2: Direct Python command
python src/knowledge/extract_numbered_items.py --kb <kb_name> --base-dir ./data/knowledge_bases
```
This will extract numbered items (Definitions, Theorems, Equations, etc.) from your knowledge base without reinitializing it.
## Star History
## Contribution
We hope DeepTutor becomes a gift to the community.
## Related Projects
| [LightRAG](https://github.com/HKUDS/LightRAG) | [RAG-Anything](https://github.com/HKUDS/RAG-Anything) | [DeepCode](https://github.com/HKUDS/DeepCode) | [AI-Researcher](https://github.com/HKUDS/AI-Researcher) |
|:---:|:---:|:---:|:---:|
| Simple and Fast RAG | Multimodal RAG | AI Code Assistant | Research Automation |
**[Data Intelligence Lab @ HKU](https://github.com/HKUDS)**
[Star us](https://github.com/HKUDS/DeepTutor/stargazers) · [Report a bug](https://github.com/HKUDS/DeepTutor/issues) · [Discussions](https://github.com/HKUDS/DeepTutor/discussions)
---
This project is licensed under the ***[AGPL-3.0 License](LICENSE)***.
*Thanks for visiting **DeepTutor**!*