This repository uses yolov5 and deepsort to track human heads and can run on Jetson Xavier NX and Jetson Nano. On Jetson Xavier NX it achieves about 10 FPS on images containing 70+ heads (if you try the pure Python version, you will find it very slow on Jetson Xavier NX; DeepSORT alone can take nearly 1 s).
Thanks to B.Q Long for providing the Windows CMakeLists.txt files. If you want to run this repo on Windows, use CMakeLists_deepsort-tensorrt_win10.txt and CMakeLists_yolov5-deepsort-tensorrt_win10.txt.
You can watch the demo videos on BILIBILI or YOUTUBE.
If you have problems with this project, see this article.
The table below shows the whole processing time from reading an image to finishing DeepSORT (including all image pre- and post-processing). Note that the number of tracked heads is 70+, not a single person or 10-20 persons. All results were measured on Jetson Xavier NX.
Backbone | before TensorRT without tracking | before TensorRT with tracking | TensorRT (detection) | TensorRT (detection + tracking) | FPS (detection + tracking) |
---|---|---|---|---|---|
Yolov5s_416 | 100ms | 0.9s | 10-15ms | 100-150ms | 8 ~ 9 |
Yolov5s_640 | 120ms | 1s | 18-20ms | 100-150ms | 8 ~ 9 |
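As a sanity check on the table, per-frame latency converts to FPS as 1000 / latency_ms. A small illustrative helper (not part of the repo):

```python
def fps_from_latency(latency_ms):
    # frames per second from per-frame end-to-end latency in milliseconds
    return 1000.0 / latency_ms

# 100-150 ms per frame (detection + tracking) corresponds to roughly 7-10 FPS,
# consistent with the 8 ~ 9 FPS reported in the table above
print(round(fps_from_latency(125.0), 1))  # -> 8.0
```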
git clone https://github.com/RichardoMrMu/yolov5-deepsort-tensorrt.git
cd yolov5-deepsort-tensorrt
// before you run cmake and make, change char* yolo_engine = ""; and char* sort_engine = ""; in ./src/main.cpp to your own engine paths
mkdir build
cd build
cmake ..
make
If you meet errors during cmake or make, see this article or the Attention section.
If you need to train your own head-detection model, you can use SCUT-HEAD; this dataset provides head bounding boxes and can be downloaded freely.
You need two models: a yolov5 model for detection, generated from tensorrtx, and a DeepSORT model for tracking. Both models are generated the same way.
For the yolov5 detection model, I chose yolov5s and follow the chain yolov5s.pt -> yolov5s.wts -> yolov5s.engine.
Note that the models used here can be obtained from yolov5 and deepsort; if you need to use your own model, follow Run Your Custom Model.
You can also see the tensorrtx official readme.
The deepsort.onnx and deepsort.engine files are available on BaiduYun and at https://github.com/RichardoMrMu/yolov5-deepsort-tensorrt/releases/tag/yolosort:
Model | Url |
---|---|
BaiduYun | BaiduYun url, passwd: z68e |
Note that the official pretrained model is used here, and I use yolov5 v5.0. So if you train your own model, make sure your yolov5 code is v5.0.
git clone -b v5.0 https://github.com/ultralytics/yolov5.git
cd yolov5
mkdir weights
cd weights
// download https://github.com/ultralytics/yolov5/releases/download/v5.0/yolov5s.pt
wget https://github.com/ultralytics/yolov5/releases/download/v5.0/yolov5s.pt
git clone https://github.com/wang-xinyu/tensorrtx
cp tensorrtx/yolov5/gen_wts.py yolov5/
cd yolov5
python3 gen_wts.py -w ./weights/yolov5s.pt -o ./weights/yolov5s.wts
// a file 'yolov5s.wts' will be generated.
You can get yolov5s.wts model in yolov5/weights/
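The .wts file produced by gen_wts.py is a plain-text dump of the PyTorch state_dict that tensorrtx parses when building the engine: a count line, then one line per tensor with its name, element count, and the values as big-endian float32 hex words. A minimal sketch of such a writer (assumed format, illustrative only; names like `write_wts` are hypothetical):

```python
import struct

def write_wts(path, weights):
    # weights: dict mapping tensor name -> flat list of float32 values
    with open(path, "w") as f:
        f.write(f"{len(weights)}\n")
        for name, values in weights.items():
            # each value is serialized as a big-endian IEEE-754 float32 hex word
            hex_words = " ".join(struct.pack(">f", v).hex() for v in values)
            f.write(f"{name} {len(values)} {hex_words}\n")

write_wts("demo.wts", {"conv1.weight": [1.0, -0.5], "conv1.bias": [0.25]})
```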
cd tensorrtx/yolov5
// update CLASS_NUM in yololayer.h if your model is trained on custom dataset
mkdir build
cd build
cp {ultralytics}/yolov5/yolov5s.wts {tensorrtx}/yolov5/build
cmake ..
make
// yolov5s
sudo ./yolov5 -s yolov5s.wts yolov5s.engine s
// test your engine file
sudo ./yolov5 -d yolov5s.engine ../samples
Then you get yolov5s.engine, and you can put it into this project's resources folder. For example:
cd {yolov5-deepsort-tensorrt}
mkdir resources
cp {tensorrtx}/yolov5/build/yolov5s.engine {yolov5-deepsort-tensorrt}/resources
git clone https://github.com/RichardoMrMu/deepsort-tensorrt.git
// follow the instructions in the deepsort-tensorrt readme
cp {deepsort-tensorrt}/exportOnnx.py {deep_sort_pytorch}/
python3 exportOnnx.py
mv {deep_sort_pytorch}/deepsort.onnx {deepsort-tensorrt}/resources
cd {deepsort-tensorrt}
mkdir build
cd build
cmake ..
make
./onnx2engine ../resources/deepsort.onnx ../resources/deepsort.engine
// test
./demo ../resources/deepsort.engine ../resources/track.txt
After all 5 steps, you have yolov5s.engine and deepsort.engine.
If you face problems generating yolov5s.engine or deepsort.engine, you can open an issue on GitHub or comment on the CSDN article.
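DeepSORT associates detections with existing tracks partly by the cosine distance between appearance embeddings, which is what the deepsort.engine produces per detection. A minimal sketch of that metric (pure Python, illustrative only):

```python
import math

def cosine_distance(a, b):
    # 1 - cosine similarity between two appearance feature vectors;
    # 0 means identical direction, 1 means orthogonal
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return 1.0 - dot / (na * nb)
```

In DeepSORT, a detection is only matched to a track if this distance is below a threshold, which is why tracking quality depends on the re-identification model baked into deepsort.engine.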
Currently, tensorrtx supports yolov5 v1.0 (yolov5s only), v2.0, v3.0, v3.1, v4.0 and v5.0.

- For yolov5 v5.0: `git clone -b v5.0 https://github.com/ultralytics/yolov5.git` and `git clone https://github.com/wang-xinyu/tensorrtx.git`, then follow how-to-run in the current page.
- For yolov5 v4.0: `git clone -b v4.0 https://github.com/ultralytics/yolov5.git` and `git clone -b yolov5-v4.0 https://github.com/wang-xinyu/tensorrtx.git`, then follow how-to-run in tensorrtx/yolov5-v4.0.
- For yolov5 v3.1: `git clone -b v3.1 https://github.com/ultralytics/yolov5.git` and `git clone -b yolov5-v3.1 https://github.com/wang-xinyu/tensorrtx.git`, then follow how-to-run in tensorrtx/yolov5-v3.1.
- For yolov5 v3.0: `git clone -b v3.0 https://github.com/ultralytics/yolov5.git` and `git clone -b yolov5-v3.0 https://github.com/wang-xinyu/tensorrtx.git`, then follow how-to-run in tensorrtx/yolov5-v3.0.
- For yolov5 v2.0: `git clone -b v2.0 https://github.com/ultralytics/yolov5.git` and `git clone -b yolov5-v2.0 https://github.com/wang-xinyu/tensorrtx.git`, then follow how-to-run in tensorrtx/yolov5-v2.0.
- For yolov5 v1.0: `git clone -b v1.0 https://github.com/ultralytics/yolov5.git` and `git clone -b yolov5-v1.0 https://github.com/wang-xinyu/tensorrtx.git`, then follow how-to-run in tensorrtx/yolov5-v1.0.

Follow How to Run first and then go to the INT8 Quantization below. You may need to train your own model and transfer the trained model to TensorRT, so follow the steps below.
Generate yolov5 model
After you get the yolov5 and tensorrtx repos, the next step is to transfer your pytorch model to TensorRT.
Before this, you need to change lines 20, 21 and 22 of yololayer.h (CLASS_NUM, INPUT_H, INPUT_W) to your own parameters:
// before
static constexpr int CLASS_NUM = 80; // 20
static constexpr int INPUT_H = 640; // 21 yolov5's input height and width must be divisible by 32.
static constexpr int INPUT_W = 640; // 22
// after
// if your model has 2 classes and your image size is 416*416
static constexpr int CLASS_NUM = 2; // 20
static constexpr int INPUT_H = 416; // 21 yolov5's input height and width must be divisible by 32.
static constexpr int INPUT_W = 416; // 22
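The divisible-by-32 constraint exists because yolov5's deepest feature map downsamples the input by a factor of 32. A quick illustrative check (hypothetical helper, not part of the repo):

```python
def check_input_size(h, w, stride=32):
    # yolov5 downsamples by 32 at its coarsest detection head,
    # so INPUT_H and INPUT_W must both be multiples of 32
    assert h % stride == 0 and w % stride == 0, "H and W must be divisible by 32"
    return h // stride, w // stride  # grid size of the coarsest head

print(check_input_size(416, 416))  # -> (13, 13)
```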
cd {tensorrtx}/yolov5/
// update CLASS_NUM in yololayer.h if your model is trained on custom dataset
mkdir build
cd build
cp {ultralytics}/yolov5/yolov5s.wts {tensorrtx}/yolov5/build
cmake ..
make
sudo ./yolov5 -s [.wts] [.engine] [s/m/l/x/s6/m6/l6/x6 or c/c6 gd gw] // serialize model to plan file
sudo ./yolov5 -d [.engine] [image folder] // deserialize and run inference, the images in [image folder] will be processed.
// For example yolov5s
sudo ./yolov5 -s yolov5s.wts yolov5s.engine s
sudo ./yolov5 -d yolov5s.engine ../samples
// For example Custom model with depth_multiple=0.17, width_multiple=0.25 in yolov5.yaml
sudo ./yolov5 -s yolov5_custom.wts yolov5.engine c 0.17 0.25
sudo ./yolov5 -d yolov5.engine ../samples
In this way, you can get your own tensorrt yolov5 model. Enjoy it!
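For the custom model case (`c`), gd and gw are the depth_multiple and width_multiple values from your model yaml: gd scales how many times each block is repeated and gw scales channel counts, rounded to a multiple of 8. A sketch of this scaling, following ultralytics yolov5's parse_model logic (illustrative; function names are hypothetical):

```python
import math

def get_depth(n, gd):
    # number of repeats of a block, scaled by depth_multiple
    return max(round(n * gd), 1) if n > 1 else n

def get_width(c, gw, divisor=8):
    # output channels scaled by width_multiple, rounded up to a multiple of 8
    return int(math.ceil(c * gw / divisor) * divisor)

# e.g. for the custom model above: depth_multiple=0.17, width_multiple=0.25
print(get_depth(9, 0.17))     # a C3 block with n=9 repeats
print(get_width(1024, 0.25))  # -> 256
```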