# StrongSORT-YOLO-main
**Repository Path**: testmodel_1/strong-sort-yolo-main
## Basic Information
- **Project Name**: StrongSORT-YOLO-main
- **Description**: Object tracking
- **Primary Language**: Unknown
- **License**: Not specified
- **Default Branch**: master
- **Homepage**: None
- **GVP Project**: No
## Statistics
- **Stars**: 0
- **Forks**: 1
- **Created**: 2023-12-21
- **Last Updated**: 2024-03-11
## Categories & Tags
**Categories**: Uncategorized
**Tags**: None
## README
# StrongSORT with OSNet for YoloV5, YoloV7, YoloV8 (Counter)
## Introduction
This repository contains a highly configurable two-stage tracker that adjusts to different deployment scenarios. The detections generated by [YOLOv5](https://github.com/ultralytics/yolov5), [YOLOv7](https://github.com/WongKinYiu/yolov7), or [YOLOv8](https://github.com/ultralytics/ultralytics), families of object detection architectures and models pretrained on the COCO dataset, are passed to [StrongSORT](https://github.com/dyhBUPT/StrongSORT) ([paper](https://arxiv.org/pdf/2202.13514.pdf)), which combines motion and appearance information based on [OSNet](https://github.com/KaiyangZhou/deep-person-reid) ([paper](https://arxiv.org/abs/1905.00953)) to track the objects. It can track any object that your YOLO model was trained to detect.
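The flow above is a per-frame loop: stage one proposes detection boxes, stage two associates them with existing tracks. The greedy nearest-centroid matcher below is an illustrative stand-in for StrongSORT's motion-plus-appearance matching, not the repository's actual API:

```python
def assign_ids(prev_tracks, detections, max_dist=50.0):
    """Toy association step: match each detection centroid to the nearest
    unmatched previous track, otherwise start a new track ID.

    prev_tracks: {track_id: (x, y)}; detections: [(x, y)] for one frame.
    """
    next_id = max(prev_tracks, default=0) + 1
    tracks = {}
    for (x, y) in detections:
        candidates = [tid for tid in prev_tracks if tid not in tracks]
        best = min(
            candidates,
            key=lambda tid: (prev_tracks[tid][0] - x) ** 2
                            + (prev_tracks[tid][1] - y) ** 2,
            default=None,
        )
        if best is not None and (
            (prev_tracks[best][0] - x) ** 2 + (prev_tracks[best][1] - y) ** 2
        ) ** 0.5 <= max_dist:
            tracks[best] = (x, y)      # existing object moved
        else:
            tracks[next_id] = (x, y)   # new object enters the scene
            next_id += 1
    return tracks

t1 = assign_ids({}, [(10.0, 10.0), (100.0, 100.0)])  # two fresh IDs
t2 = assign_ids(t1, [(12.0, 11.0), (101.0, 98.0)])   # same IDs persist
```

StrongSORT replaces the plain centroid distance with a Kalman-filter motion cost combined with OSNet appearance embeddings, which is what lets tracks survive occlusions and re-identify objects.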
## Before you run the tracker
1. Clone the repository recursively:
`git clone --recurse-submodules https://github.com/bharath5673/StrongSORT-YOLO.git `
If you already cloned without `--recurse-submodules`, run `git submodule update --init`.
2. Make sure that you fulfill all the requirements: Python 3.8 or later with all [requirements.txt](https://github.com/mikel-brostrom/Yolov5_DeepSort_Pytorch/blob/master/requirements.txt) dependencies installed, including torch>=1.7. To install, run:
`pip install -r requirements.txt`
## Tracking sources
Tracking can be run on most video formats, as well as on a live webcam stream (`--source 0`).
## Select object detectors and ReID model
### Yolov5
There is a clear trade-off between model inference speed and accuracy. To match your inference speed/accuracy needs, select a YOLOv5 family model; the weights are downloaded automatically:
```bash
$ python track_v5.py --source 0 --yolo-weights weights/yolov5n.pt --img 640
yolov5s.pt
yolov5m.pt
yolov5l.pt
yolov5x.pt --img 1280
...
```
### Yolov7
The same speed/accuracy trade-off applies here. Select a YOLOv7 family model for automatic download:
```bash
$ python track_v7.py --source 0 --yolo-weights weights/yolov7-tiny.pt --img 640
yolov7.pt
yolov7x.pt
yolov7-w6.pt
yolov7-e6.pt
yolov7-d6.pt
yolov7-e6e.pt
...
```
### StrongSORT
The above applies to StrongSORT models as well. Choose a ReID model based on your needs from this [model zoo](https://kaiyangzhou.github.io/deep-person-reid/MODEL_ZOO).
```bash
$ python track_v*.py --source 0 --strong-sort-weights osnet_x0_25_market1501.pt
osnet_x0_5_market1501.pt
osnet_x0_75_msmt17.pt
osnet_x1_0_msmt17.pt
...
```
## Filter tracked classes
By default the tracker tracks all MS COCO classes.
If you only want to track persons, we recommend [these weights](https://drive.google.com/file/d/1gglIwqxaH2iTvy6lZlXuAcMpd_U0GCUb/view?usp=sharing) for increased performance:
```bash
python track_v*.py --source 0 --yolo-weights weights/v*.pt --classes 0 # tracks persons, only
```
If you want to track a subset of the MS COCO classes, add their corresponding indices after the `--classes` flag:
```bash
python track_v*.py --source 0 --yolo-weights weights/v*.pt --classes 15 16 # tracks cats and dogs, only
```
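The integers passed to `--classes` are positions in the 80-class MS COCO name list that YOLO models are trained on. Assuming the standard YOLO class order (the truncated list and helper below are illustrative, not part of this repository), the indices can be looked up by name:

```python
# First 18 of the 80 MS COCO class names, in standard YOLO order.
COCO_NAMES = [
    "person", "bicycle", "car", "motorcycle", "airplane", "bus", "train",
    "truck", "boat", "traffic light", "fire hydrant", "stop sign",
    "parking meter", "bench", "bird", "cat", "dog", "horse",
]

def class_indices(names, wanted):
    """Map class names to the integer indices expected by --classes."""
    return [names.index(w) for w in wanted]

print(class_indices(COCO_NAMES, ["cat", "dog"]))  # → [15, 16]
```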
### Counter

#### Get real-time counts of every tracked object, without any ROIs or line intersections
```bash
$ python track_v*.py --source test.mp4 --yolo-weights weights/v*.pt --save-txt --count --show-vid
```
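Conceptually, counting without ROIs or line crossings only requires accumulating the set of unique track IDs seen per class. A minimal sketch of that idea (hypothetical names; not the repository's implementation):

```python
from collections import defaultdict

def update_counts(counts, tracks):
    """Accumulate unique track IDs per class label; counts: {label: set_of_ids}."""
    for track_id, label in tracks:
        counts[label].add(track_id)
    return counts

counts = defaultdict(set)
# Frames may repeat a track ID; the set keeps each object counted once.
update_counts(counts, [(1, "car"), (2, "car"), (1, "car"), (3, "person")])
totals = {label: len(ids) for label, ids in counts.items()}
print(totals)  # → {'car': 2, 'person': 1}
```

Because IDs are deduplicated per class, the count is robust to an object being detected in many consecutive frames, which is why no counting line or region is needed.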
### Draw Object Trajectory