# FlexInfer

**Repository Path**: AI52CV/FlexInfer

## Basic Information

- **Project Name**: FlexInfer
- **Description**: FlexInfer: a flexible Python front-end inference library. Original code: https://github.com/Media-Smart/flexinfer
- **Primary Language**: Unknown
- **License**: Apache-2.0
- **Default Branch**: master
- **Homepage**: None
- **GVP Project**: No

## Statistics

- **Stars**: 1
- **Forks**: 0
- **Created**: 2021-04-05
- **Last Updated**: 2021-04-05

## Categories & Tags

**Categories**: Uncategorized

**Tags**: None

## README

# FlexInfer

A flexible Python front-end inference SDK.

## Features

- **Flexible**

  FlexInfer has a Python front-end, which makes it easy to build a computer vision product prototype.

- **Efficient**

  Most of the time-consuming parts of FlexInfer are powered by C++ or CUDA, so FlexInfer is also efficient. If you are really hungry for efficiency and don't mind the trouble of C++, you can refer to [CheetahInfer](https://github.com/Media-Smart/cheetahinfer).

## License

This project is released under the [Apache 2.0 license](https://github.com/Media-Smart/flexinfer/blob/master/LICENSE).

## Installation

### Requirements

- Linux
- Python 3.6 or higher
- TensorRT 7.1.3.4 or higher
- PyTorch 1.4.0 or higher
- CUDA 10.2 or higher
- [volksdep](https://github.com/Media-Smart/volksdep.git) 3.2.0 or higher

We have tested the following versions of OS and software:

- OS: Ubuntu 16.04.6 LTS
- Python: 3.6.9
- TensorRT: 7.1.3.4
- PyTorch: 1.6.0
- CUDA: 10.2
- volksdep: 3.2.0

### Install FlexInfer

1. If your platform is x86 or x64, you can create a conda virtual environment and activate it.

   ```shell
   conda create -n flexinfer python=3.6.9 -y
   conda activate flexinfer
   ```

2. Install volksdep following the [official instructions](https://github.com/Media-Smart/volksdep).

3. Install FlexInfer.

   ```shell
   pip install "git+https://github.com/Media-Smart/flexinfer.git"
   ```

## Usage

We provide some examples for different tasks.
- [Classification](https://github.com/Media-Smart/flexinfer/tree/master/examples/classification)
- [Segmentation](https://github.com/Media-Smart/flexinfer/tree/master/examples/segmentation)
- [Object Detection](https://github.com/Media-Smart/flexinfer/tree/master/examples/object_detection)
- [Scene Text Recognition](https://github.com/Media-Smart/flexinfer/tree/master/examples/scene_text_recognition)

## Throughput benchmark

- Device: Jetson AGX Xavier
- CUDA: 10.2
| Task                                  | Framework | Version  | Input shape       | Data type | Throughput (FPS) | Latency (ms) |
| ------------------------------------- | --------- | -------- | ----------------- | --------- | ---------------- | ------------ |
| Classification (ResNet18)             | PyTorch   | 1.5.0    | (1, 3, 224, 224)  | FP16      | 172              | 6.01         |
| Classification (ResNet18)             | TensorRT  | 7.1.0.16 | (1, 3, 224, 224)  | FP16      | 754              | 1.80         |
| Segmentation (U-Net)                  | PyTorch   | 1.5.0    | (1, 3, 513, 513)  | FP16      | 15               | 63.27        |
| Segmentation (U-Net)                  | TensorRT  | 7.1.0.16 | (1, 3, 513, 513)  | FP16      | 29               | 34.03        |
| Object Detection (RetinaNet-ResNet50) | PyTorch   | 1.5.0    | (1, 3, 768, 1280) | FP16      | 8                | 118.79       |
| Object Detection (RetinaNet-ResNet50) | TensorRT  | 7.1.0.16 | (1, 3, 768, 1280) | FP16      | 15               | 68.10        |
| Scene Text Recognition (ResNet-CTC)   | PyTorch   | 1.5.0    | (1, 1, 32, 100)   | FP16      | 113              | 10.75        |
| Scene Text Recognition (ResNet-CTC)   | TensorRT  | 7.1.0.16 | (1, 1, 32, 100)   | FP16      | 308              | 3.55         |
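Numbers like those above come from timing repeated forward passes after a warm-up phase. Below is a minimal, generic timing sketch for reproducing such measurements; it is not FlexInfer's API, and the `benchmark` helper and the stand-in workload are purely illustrative. Replace `run_once` with a real model call (e.g. a TensorRT or PyTorch inference step) on your own device.

```python
import time

def benchmark(run_once, warmup=10, iters=100, batch_size=1):
    """Time a zero-argument callable and report (latency_ms, fps).

    This is a hypothetical, framework-agnostic harness, not part of
    FlexInfer. Warm-up iterations are discarded so that one-time costs
    (cache warm-up, CUDA context creation, JIT compilation) do not skew
    the averages.
    """
    for _ in range(warmup):
        run_once()
    start = time.perf_counter()
    for _ in range(iters):
        run_once()
    elapsed = time.perf_counter() - start
    latency_ms = elapsed / iters * 1000.0       # average time per call
    fps = batch_size * iters / elapsed          # images per second
    return latency_ms, fps

if __name__ == "__main__":
    # Stand-in CPU workload; substitute a real inference call here.
    latency, fps = benchmark(lambda: sum(i * i for i in range(10000)))
    print(f"latency: {latency:.2f} ms, throughput: {fps:.1f} FPS")
```

Note that when timing GPU inference you should also synchronize the device (e.g. `torch.cuda.synchronize()`) before reading the clock, since CUDA kernel launches are asynchronous.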
## [Media-Smart toolboxes](https://github.com/Media-Smart)

We provide toolboxes for different tasks, covering training, testing, and deployment.

- [x] Classification ([vedacls](https://github.com/Media-Smart/vedacls))
- [x] Segmentation ([vedaseg](https://github.com/Media-Smart/vedaseg))
- [x] Object Detection ([vedadet](https://github.com/Media-Smart/vedadet))
- [x] Scene Text Recognition ([vedastr](https://github.com/Media-Smart/vedastr))

## Contact

This repository is currently maintained by Yuxin Zou ([@YuxinZou](https://github.com/YuxinZou)), Jun Sun ([@ChaseMonsterAway](https://github.com/ChaseMonsterAway)), Hongxiang Cai ([@hxcai](http://github.com/hxcai)) and Yichao Xiong ([@mileistone](https://github.com/mileistone)).