# deepstream-yolov5

**Repository Path**: FIRC/deepstream-yolov5

## Basic Information

- **Project Name**: deepstream-yolov5
- **Description**: An implementation of YOLOv5 running on DeepStream 5
- **Primary Language**: C++
- **License**: MIT
- **Default Branch**: main
- **Homepage**: None
- **GVP Project**: No

## Statistics

- **Stars**: 0
- **Forks**: 1
- **Created**: 2022-01-10
- **Last Updated**: 2022-04-15

## Categories & Tags

- **Categories**: Uncategorized
- **Tags**: None

## README

## DeepStream YOLOv5

An implementation of YOLOv5 running on DeepStream 5.

## Requirements

```
CUDA 10.2
TensorRT 7.1
DeepStream 5
```

### or

```
CUDA 11.1
TensorRT 7.2.1.6
DeepStream 5.1
```

### How to Use

All steps are run from the $ROOT of this repository.

* Step 1: Prepare the wts file of the YOLOv5s model by following the instructions at [tensorrtx/yolov5](https://github.com/wang-xinyu/tensorrtx/tree/master/yolov5), then put the wts file into the $ROOT/data folder and rename it to `yolov5s.wts`.
* Step 2: Enter the $ROOT/source folder, modify `EXFLAGS` and `EXLIBS` in the `Makefile` to match your installed TensorRT include and library paths, then run `make` to compile the run-time library.
* Step 3: Return to the $ROOT folder and run `deepstream-app -c configs/deepstream_app_config_yolov5s.txt`.
* Step 4: Rename the generated engine file `model_b1_gpu0_fp16.engine` to `yolov5s_b1_gpu0_fp16.engine` for reuse.

## Acknowledgements

* [https://github.com/wang-xinyu/tensorrtx](https://github.com/wang-xinyu/tensorrtx)
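The four steps in "How to Use" can be sketched as a shell session. This is only an illustration under stated assumptions: `/path/to/yolov5s.wts` is a placeholder for wherever you generated the wts file with the tensorrtx tooling, and the steps require a machine with a matching CUDA/TensorRT/DeepStream install, so the commands will not run elsewhere.

```shell
# Sketch of the steps above; adapt paths to your own machine.
cd "$ROOT"

# Step 1: copy in the wts file produced with wang-xinyu/tensorrtx's
# yolov5 instructions ("/path/to/yolov5s.wts" is a placeholder).
cp /path/to/yolov5s.wts data/yolov5s.wts

# Step 2: build the run-time library. Edit EXFLAGS and EXLIBS in
# source/Makefile first so they point at your TensorRT headers and libs.
cd source
make
cd ..

# Step 3: run the pipeline; on first run this also builds the engine.
deepstream-app -c configs/deepstream_app_config_yolov5s.txt

# Step 4: rename the generated engine so later runs can reuse it.
mv model_b1_gpu0_fp16.engine yolov5s_b1_gpu0_fp16.engine
```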