# OmniParser

**Repository Path**: hycong/OmniParser

## Basic Information

- **Project Name**: OmniParser
- **Description**: This fork modifies omnitool to control an Android phone
- **Primary Language**: Unknown
- **License**: CC-BY-4.0
- **Default Branch**: master
- **Homepage**: None
- **GVP Project**: No

## Statistics

- **Stars**: 1
- **Forks**: 0
- **Created**: 2025-02-22
- **Last Updated**: 2025-09-02

## Categories & Tags

**Categories**: Uncategorized
**Tags**: None

## README

Changes on this fork (mainly letting omnitool control an Android phone instead of a computer):

- The default PaddleOCR language is changed to Chinese (`ch`).
- In the Gradio interface `omnitool/gradio/app.py`, the default model is changed to R1 and the API provider to `"openai-like"`, using an API key from the Alibaba Cloud Model Studio (Bailian) platform.
- In `omnitool/gradio/agent/vlm_agent.py`, `_get_system_prompt` is rewritten in Chinese, and its action set is changed from computer actions to phone actions.
- Matching those phone actions, `omnitool/gradio/tools/android.py` and `omnitool/gradio/tools/screen_capture_android.py` are added, implementing the actions on the Android phone via `adb`; the relevant call sites are changed to use `AndroidTool` and `get_android_screenshot`.

**Before using this version of omnitool, make sure the `adb` command is installed and on your PATH.**

The original README.md follows:

---

# OmniParser: Screen Parsing tool for Pure Vision Based GUI Agent
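The adb-based tools described above boil down to shelling out to `adb`. The following is a minimal, hypothetical sketch of that approach; the function names (`tap`, `swipe`, `screenshot`, `adb_args`) are illustrative and are not the fork's actual `AndroidTool` API.

```python
# Illustrative sketch of adb-driven phone actions, in the spirit of the
# fork's omnitool/gradio/tools/android.py (names are hypothetical).
import subprocess


def adb_args(action: str, *params: int) -> list:
    """Build the argument list for an `adb shell input <action>` command."""
    return ["adb", "shell", "input", action, *[str(p) for p in params]]


def tap(x: int, y: int) -> None:
    """Tap the screen at pixel coordinates (x, y)."""
    subprocess.run(adb_args("tap", x, y), check=True)


def swipe(x1: int, y1: int, x2: int, y2: int, duration_ms: int = 300) -> None:
    """Swipe from (x1, y1) to (x2, y2) over duration_ms milliseconds."""
    subprocess.run(adb_args("swipe", x1, y1, x2, y2, duration_ms), check=True)


def screenshot(path: str = "screen.png") -> None:
    """Capture the phone screen; `adb exec-out screencap -p` streams a PNG to stdout."""
    png = subprocess.run(
        ["adb", "exec-out", "screencap", "-p"],
        check=True, capture_output=True,
    ).stdout
    with open(path, "wb") as f:
        f.write(png)
```

All of these require a device to be connected and authorized (`adb devices` should list it), which is why `adb` must be on the PATH before launching omnitool.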


[![arXiv](https://img.shields.io/badge/Paper-green)](https://arxiv.org/abs/2408.00203)
[![License](https://img.shields.io/badge/License-MIT-yellow.svg)](https://opensource.org/licenses/MIT)

📢 [[Project Page](https://microsoft.github.io/OmniParser/)] [[V2 Blog Post](https://www.microsoft.com/en-us/research/articles/omniparser-v2-turning-any-llm-into-a-computer-use-agent/)] [[Models V2](https://huggingface.co/microsoft/OmniParser-v2.0)] [[Models V1.5](https://huggingface.co/microsoft/OmniParser)] [[HuggingFace Space Demo](https://huggingface.co/spaces/microsoft/OmniParser-v2)]

**OmniParser** is a comprehensive method for parsing user interface screenshots into structured and easy-to-understand elements, which significantly enhances the ability of GPT-4V to generate actions that can be accurately grounded in the corresponding regions of the interface.

## News

- [2025/2] We release OmniParser V2 [checkpoints](https://huggingface.co/microsoft/OmniParser-v2.0). [Watch Video](https://1drv.ms/v/c/650b027c18d5a573/EWXbVESKWo9Buu6OYCwg06wBeoM97C6EOTG6RjvWLEN1Qg?e=alnHGC)
- [2025/2] We introduce OmniTool: control a Windows 11 VM with OmniParser plus your vision model of choice. OmniTool supports the following large language models out of the box: OpenAI (4o/o1/o3-mini), DeepSeek (R1), Qwen (2.5VL), and Anthropic Computer Use. [Watch Video](https://1drv.ms/v/c/650b027c18d5a573/EehZ7RzY69ZHn-MeQHrnnR4BCj3by-cLLpUVlxMjF4O65Q?e=8LxMgX)
- [2025/1] V2 is coming. We achieve a new state-of-the-art result of 39.5% on the new grounding benchmark [Screen Spot Pro](https://github.com/likaixin2000/ScreenSpot-Pro-GUI-Grounding/tree/main) with OmniParser v2 (will be released soon)! Read more details [here](https://github.com/microsoft/OmniParser/tree/master/docs/Evaluation.md).
- [2024/11] We release an updated version, OmniParser V1.5, which features 1) more fine-grained/small icon detection and 2) prediction of whether each screen element is interactable or not. Examples are in demo.ipynb.
- [2024/10] OmniParser was the #1 trending model on the Hugging Face model hub (starting 10/29/2024).
- [2024/10] Feel free to check out our demo on [Hugging Face Space](https://huggingface.co/spaces/microsoft/OmniParser)! (Stay tuned for OmniParser + Claude Computer Use.)
- [2024/10] Both the Interactive Region Detection model and the Icon Functional Description model are released! [Hugging Face models](https://huggingface.co/microsoft/OmniParser)
- [2024/09] OmniParser achieves the best performance on [Windows Agent Arena](https://microsoft.github.io/WindowsAgentArena/)!

## Install

First clone the repo, and then set up the environment:

```bash
cd OmniParser
conda create -n "omni" python==3.12
conda activate omni
pip install -r requirements.txt
```

Ensure you have the V2 weights downloaded into the `weights` folder (the caption weights folder must be named `icon_caption_florence`). If not, download them with:

```bash
# download the model checkpoints to local directory OmniParser/weights/
for f in icon_detect/{train_args.yaml,model.pt,model.yaml} icon_caption/{config.json,generation_config.json,model.safetensors}; do huggingface-cli download microsoft/OmniParser-v2.0 "$f" --local-dir weights; done
mv weights/icon_caption weights/icon_caption_florence
```

## Examples

We put together a few simple examples in demo.ipynb.

## Gradio Demo

To run the Gradio demo, simply run:

```bash
python gradio_demo.py
```

## Model Weights License

For the model checkpoints on the Hugging Face model hub, please note that the icon_detect model is under the AGPL license, inherited from the original YOLO model, while icon_caption_blip2 and icon_caption_florence are under the MIT license. Please refer to the LICENSE file in each model's folder: https://huggingface.co/microsoft/OmniParser.

## 📚 Citation

Our technical report can be found [here](https://arxiv.org/abs/2408.00203).
If you find our work useful, please consider citing our work:

```
@misc{lu2024omniparserpurevisionbased,
      title={OmniParser for Pure Vision Based GUI Agent},
      author={Yadong Lu and Jianwei Yang and Yelong Shen and Ahmed Awadallah},
      year={2024},
      eprint={2408.00203},
      archivePrefix={arXiv},
      primaryClass={cs.CV},
      url={https://arxiv.org/abs/2408.00203},
}
```