# XRAG

**Repository Path**: heibaicha/XRAG

## Basic Information

- **Project Name**: XRAG
- **Description**: No description available
- **Primary Language**: Unknown
- **License**: Apache-2.0
- **Default Branch**: main
- **Homepage**: None
- **GVP Project**: No

## Statistics

- **Stars**: 0
- **Forks**: 0
- **Created**: 2025-03-14
- **Last Updated**: 2025-03-14

## Categories & Tags

- **Categories**: Uncategorized
- **Tags**: None

## README
[PyPI version](https://badge.fury.io/py/examinationrag)
[PyPI project](https://pypi.org/project/examinationrag/)
[License](https://github.com/DocAILab/XRAG/blob/main/LICENSE)
[Downloads](https://pepy.tech/project/examinationrag)
[Stars](https://github.com/DocAILab/XRAG/stargazers)
[Issues](https://github.com/DocAILab/XRAG/issues)
[arXiv](https://arxiv.org/abs/2412.15529)
## 📋 Table of Contents
- [:mega: Updates](#mega-updates)
- [:book: Introduction](#-introduction)
- [:sparkles: Features](#-features)
- [:globe_with_meridians: WebUI Demo](#-webui-demo)
- [:hammer_and_wrench: Installation](#️-installation)
- [:rocket: Quick Start](#-quick-start)
- [:gear: Configuration](#️-configuration)
- [:warning: Troubleshooting](#-troubleshooting)
- [:clipboard: Changelog](#-changelog)
- [:speech_balloon: Feedback and Support](#-feedback-and-support)
- [:round_pushpin: Acknowledgement](#round_pushpin-acknowledgement)
- [:books: Citation](#-citation)
## :mega: Updates
- **2025-01-09: Add API support. Now you can use XRAG as a backend service.**
- **2025-01-06: Add Ollama LLM support.**
- **2025-01-05: Add generate command. Now you can generate your own QA pairs from a folder containing your documents.**
- **2024-12-23: XRAG Documentation is released** 🎉.
- **2024-12-20: XRAG is released** 🎉.
---
## 📖 Introduction
XRAG is a benchmarking framework designed to evaluate the foundational components of advanced Retrieval-Augmented Generation (RAG) systems. By dissecting and analyzing each core module, XRAG provides insights into how different configurations and components impact the overall performance of RAG systems.
---
## ✨ Features
- **🔍 Comprehensive Evaluation Framework**:
- Multiple evaluation dimensions: LLM-based evaluation, Deep evaluation, and traditional metrics
- Support for evaluating retrieval quality, response faithfulness, and answer correctness
- Built-in evaluation models including LlamaIndex, DeepEval, and custom metrics
- **⚙️ Flexible Architecture**:
- Modular design with pluggable components for retrievers, embeddings, and LLMs
- Support for various retrieval methods: Vector, BM25, Hybrid, and Tree-based
- Easy integration with custom retrieval and evaluation strategies
- **🤖 Multiple LLM Support**:
- Seamless integration with OpenAI models
- Support for local models (Qwen, LLaMA, etc.)
- Configurable model parameters and API settings
- **📊 Rich Evaluation Metrics**:
- Traditional metrics: F1, EM, MRR, Hit@K, MAP, NDCG
- LLM-based metrics: Faithfulness, Relevancy, Correctness
- Deep evaluation metrics: Contextual Precision/Recall, Hallucination, Bias
- **🎯 Advanced Retrieval Methods**:
- BM25-based retrieval
- Vector-based semantic search
- Tree-structured retrieval
- Keyword-based retrieval
- Document summary retrieval
- Custom retrieval strategies
- **💻 User-Friendly Interface**:
- Command-line interface with rich options
- Web UI for interactive evaluation
- Detailed evaluation reports and visualizations
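The traditional retrieval metrics listed above have standard definitions that are independent of any particular framework. As a generic sketch (not XRAG's own code), Hit@K, reciprocal rank (the per-query component of MRR), and binary-relevance NDCG@K can be computed as:

```python
from math import log2

def hit_at_k(ranked_ids, relevant_ids, k):
    """1.0 if any relevant document appears in the top-k results."""
    return 1.0 if any(d in relevant_ids for d in ranked_ids[:k]) else 0.0

def reciprocal_rank(ranked_ids, relevant_ids):
    """1/rank of the first relevant document; 0.0 if none is retrieved.
    MRR is the mean of this value over all queries."""
    for rank, d in enumerate(ranked_ids, start=1):
        if d in relevant_ids:
            return 1.0 / rank
    return 0.0

def ndcg_at_k(ranked_ids, relevant_ids, k):
    """Binary-relevance NDCG@k: DCG of the ranking over the ideal DCG."""
    dcg = sum(1.0 / log2(i + 2)
              for i, d in enumerate(ranked_ids[:k]) if d in relevant_ids)
    ideal = sum(1.0 / log2(i + 2) for i in range(min(len(relevant_ids), k)))
    return dcg / ideal if ideal else 0.0

ranked = ["d3", "d1", "d7"]   # retrieval order for one query
relevant = {"d1"}             # gold relevant documents
print(hit_at_k(ranked, relevant, 1))      # 0.0
print(reciprocal_rank(ranked, relevant))  # 0.5
```

XRAG reports these (along with F1, EM, and MAP) through its evaluation pipeline; the snippet above only illustrates the underlying math.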
---
## 🌐 WebUI Demo
XRAG provides an intuitive web interface for interactive evaluation and visualization. Launch it with:
```bash
xrag-cli webui
```
The WebUI guides you through the following workflow:
### 1. Dataset Upload and Configuration
Upload and configure your datasets:
- Support for benchmark datasets (HotpotQA, DropQA, NaturalQA)
- Custom dataset integration
- Automatic format conversion and preprocessing
### 2. Index Building and Configuration
Configure system parameters and build indices:
- API key configuration
- Parameter settings
- Vector database index construction
- Chunk size optimization
### 3. RAG Strategy Configuration
Define your RAG pipeline components:
- Pre-retrieval methods
- Retriever selection
- Post-processor configuration
- Custom prompt template creation
### 4. Interactive Testing
Test your RAG system interactively:
- Real-time query testing
- Retrieval result inspection
- Response generation review
- Performance analysis
### 5. Comprehensive Evaluation
Run a full evaluation across the configured metrics and review the resulting reports and visualizations.
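Among the metrics reported at this stage are the traditional answer-level scores EM and F1 (listed under Features). As a generic illustration of their standard SQuAD-style token-level definitions (not XRAG's exact implementation):

```python
import string
from collections import Counter

def normalize(text):
    """Lowercase, strip punctuation, and collapse whitespace."""
    text = text.lower().translate(str.maketrans("", "", string.punctuation))
    return " ".join(text.split())

def exact_match(prediction, gold):
    """1.0 if the normalized prediction equals the normalized gold answer."""
    return float(normalize(prediction) == normalize(gold))

def token_f1(prediction, gold):
    """Harmonic mean of token-level precision and recall."""
    pred_tokens = normalize(prediction).split()
    gold_tokens = normalize(gold).split()
    common = sum((Counter(pred_tokens) & Counter(gold_tokens)).values())
    if common == 0:
        return 0.0
    precision = common / len(pred_tokens)
    recall = common / len(gold_tokens)
    return 2 * precision * recall / (precision + recall)

print(exact_match("Paris.", "paris"))             # 1.0
print(round(token_f1("the cat sat", "cat sat"), 3))  # 0.8
```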
## 🛠️ Installation
Before installing XRAG, ensure that you have Python 3.11 or later installed.
### Create a Virtual Environment via conda (Recommended)
```bash
# Create a new conda environment
conda create -n xrag python=3.11
# Activate the environment
conda activate xrag
```
### **Install via pip**
You can install XRAG directly using `pip`:
```bash
# Install XRAG
pip install examinationrag
# Install 'jury' without dependencies to avoid conflicts
pip install jury --no-deps
```
---
## 🚀 Quick Start
Here's how you can get started with XRAG:
### 1. Prepare Configuration
Modify the `config.toml` file to set up your desired configurations.
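As an illustration, a `config.toml` might group provider and retrieval settings as sketched below. The key names and values here are hypothetical placeholders; consult the XRAG documentation for the actual schema:

```toml
# Hypothetical sketch of a config.toml layout; the real keys may differ.
[api]
api_key = "sk-..."                       # your LLM provider key
base_url = "https://api.openai.com/v1"   # or a local/Ollama endpoint

[llm]
model = "gpt-4o-mini"                    # or a local model such as Qwen or LLaMA

[retrieval]
method = "bm25"                          # e.g. vector, bm25, hybrid, tree
chunk_size = 512                         # tune during index building
```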
### 2. Using `xrag-cli`
After installing XRAG, the `xrag-cli` command becomes available in your environment. This command provides a convenient way to interact with XRAG without needing to call Python scripts directly.
### **Command Structure**
```bash
xrag-cli [command] [options]
```
### **Commands and Options**
- **run**: Runs the benchmarking process.
```bash
xrag-cli run [--override key=value ...]
```
- **webui**: Launches the web-based user interface.
```bash
xrag-cli webui
```
- **version**: Displays the current version of XRAG.
```bash
xrag-cli version
```
- **help**: Displays help information.
```bash
xrag-cli help
```
- **generate**: Generates QA pairs from a folder of documents.
```bash
xrag-cli generate -i
```