# DeepSeek-OCR-2

**Repository Path**: chen334/DeepSeek-OCR-2

## Basic Information

- **Project Name**: DeepSeek-OCR-2
- **Description**: No description available
- **Primary Language**: Unknown
- **License**: Apache-2.0
- **Default Branch**: main
- **Homepage**: None
- **GVP Project**: No

## Statistics

- **Stars**: 0
- **Forks**: 0
- **Created**: 2026-01-29
- **Last Updated**: 2026-01-29

## Categories & Tags

**Categories**: Uncategorized
**Tags**: None

## README
📥 Model Download | 📄 Paper Link | 📄 Arxiv Paper Link
# DeepSeek-OCR 2: Visual Causal Flow
Explore more human-like visual encoding.
## Contents

- [Install](#install)
- [vLLM Inference](#vllm-inference)
- [Transformers Inference](#transformers-inference)

## Install

> Our environment is CUDA 11.8 + torch 2.6.0.

1. Clone this repository and navigate to the DeepSeek-OCR-2 folder:

```bash
git clone https://github.com/deepseek-ai/DeepSeek-OCR-2.git
```

2. Create and activate a conda environment:

```Shell
conda create -n deepseek-ocr2 python=3.12.9 -y
conda activate deepseek-ocr2
```

3. Install packages. First download the vllm-0.8.5 [whl](https://github.com/vllm-project/vllm/releases/tag/v0.8.5), then:

```Shell
pip install torch==2.6.0 torchvision==0.21.0 torchaudio==2.6.0 --index-url https://download.pytorch.org/whl/cu118
pip install vllm-0.8.5+cu118-cp38-abi3-manylinux1_x86_64.whl
pip install -r requirements.txt
pip install flash-attn==2.7.3 --no-build-isolation
```

**Note:** If you want the vLLM and Transformers code to run in the same environment, you can safely ignore a pip dependency warning such as `vllm 0.8.5+cu118 requires transformers>=4.51.1`.

## vLLM Inference

> **Note:** Change `INPUT_PATH`/`OUTPUT_PATH` and other settings in `DeepSeek-OCR2-master/DeepSeek-OCR2-vllm/config.py`.

```Shell
cd DeepSeek-OCR2-master/DeepSeek-OCR2-vllm
```

1. Image: streaming output

```Shell
python run_dpsk_ocr2_image.py
```

2. PDF: concurrent processing (on-par speed with DeepSeek-OCR)

```Shell
python run_dpsk_ocr2_pdf.py
```

3. Batch evaluation for benchmarks (e.g., OmniDocBench v1.5)

```Shell
python run_dpsk_ocr2_eval_batch.py
```

## Transformers Inference

```python
from transformers import AutoModel, AutoTokenizer
import torch
import os

os.environ["CUDA_VISIBLE_DEVICES"] = '0'

model_name = 'deepseek-ai/DeepSeek-OCR-2'
tokenizer = AutoTokenizer.from_pretrained(model_name, trust_remote_code=True)
model = AutoModel.from_pretrained(
    model_name,
    _attn_implementation='flash_attention_2',
    trust_remote_code=True,
    use_safetensors=True,
)
model = model.eval().cuda().to(torch.bfloat16)
# prompt = "
```
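The Transformers example above is cut off at the prompt definition, so the intended prompt string is not recoverable from this page. As a hedged illustration only, the sketch below completes the call using the `infer` method and keyword arguments exposed by the DeepSeek-OCR (v1) remote code; the prompt text, the file paths, and the assumption that DeepSeek-OCR-2 exposes the same `infer` signature are all unverified for this model, so check the model card for the exact interface.

```python
# Hedged continuation (assumptions, not the confirmed DeepSeek-OCR-2 API):
# - the "<image>\n..." prompt format is borrowed from DeepSeek-OCR v1
# - `model.infer(...)` and its keyword arguments mirror the v1 remote code
prompt = "<image>\nFree OCR."       # assumed prompt format
image_file = 'your_image.jpg'       # hypothetical input image path
output_path = 'your/output/dir'     # hypothetical output directory

res = model.infer(
    tokenizer,
    prompt=prompt,
    image_file=image_file,
    output_path=output_path,
)
print(res)
```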