# trackers

[![version](https://badge.fury.io/py/trackers.svg)](https://badge.fury.io/py/trackers) [![license](https://img.shields.io/badge/license-Apache%202.0-blue)](https://github.com/roboflow/trackers/blob/main/LICENSE.md) [![python-version](https://img.shields.io/pypi/pyversions/trackers)](https://badge.fury.io/py/trackers) [![colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/drive/1VT_FYIe3kborhWrfKKBqqfR0EjQeQNiO?usp=sharing) [![discord](https://img.shields.io/discord/1159501506232451173?logo=discord&label=discord&labelColor=fff&color=5865f2&link=https%3A%2F%2Fdiscord.gg%2FGbfgXGJ8Bk)](https://discord.gg/GbfgXGJ8Bk)
## Hello

`trackers` is a unified library offering clean-room re-implementations of leading multi-object tracking algorithms. Its modular design allows you to easily swap trackers and integrate them with object detectors from various libraries like `inference`, `ultralytics`, or `transformers`.
| Tracker | Paper | MOTA | Year | Status | Colab |
|:--------|:-----:|:----:|:----:|:------:|:-----:|
| SORT | arXiv | 74.6 | 2016 | ✅ | colab |
| DeepSORT | arXiv | 75.4 | 2017 | ✅ | colab |
| ByteTrack | arXiv | 77.8 | 2021 | 🚧 | 🚧 |
| OC-SORT | arXiv | 75.9 | 2022 | 🚧 | 🚧 |
| BoT-SORT | arXiv | 77.8 | 2022 | 🚧 | 🚧 |
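The SORT-family trackers in the table all match new detections to existing tracks by bounding-box overlap (IoU). As a rough illustration of that metric — plain Python for clarity, not code from the `trackers` library:

```python
def iou(box_a, box_b):
    """Intersection over union of two (x1, y1, x2, y2) boxes in pixels."""
    # Coordinates of the intersection rectangle.
    x1 = max(box_a[0], box_b[0])
    y1 = max(box_a[1], box_b[1])
    x2 = min(box_a[2], box_b[2])
    y2 = min(box_a[3], box_b[3])
    # Clamp to zero when the boxes do not overlap.
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0
```

Each tracker builds an IoU cost matrix between predicted track boxes and the current frame's detections, then solves the assignment to carry `tracker_id`s forward.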
https://github.com/user-attachments/assets/eef9b00a-cfe4-40f7-a495-954550e3ef1f

## Installation

Pip install the `trackers` package in a [**Python>=3.9**](https://www.python.org/) environment.

```bash
pip install trackers
```
### Install from source

By installing `trackers` from source, you can explore the most recent features and enhancements that have not yet been officially released. Please note that these updates are still in development and may not be as stable as the latest published release.

```bash
pip install git+https://github.com/roboflow/trackers.git
```
## Quickstart

With a modular design, `trackers` lets you combine object detectors from different libraries with the tracker of your choice. Here's how you can use `SORTTracker` with various detectors:

```python
import supervision as sv
from trackers import SORTTracker
from inference import get_model

tracker = SORTTracker()
model = get_model(model_id="yolov11m-640")
annotator = sv.LabelAnnotator(text_position=sv.Position.CENTER)

def callback(frame, _):
    result = model.infer(frame)[0]
    detections = sv.Detections.from_inference(result)
    detections = tracker.update(detections)
    return annotator.annotate(frame, detections, labels=detections.tracker_id)

sv.process_video(
    source_path="input.mp4",
    target_path="output.mp4",
    callback=callback,
)
```
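`sv.process_video` drives the loop: it reads the video frame by frame, calls your callback with each frame and its index, and writes whatever the callback returns. Conceptually, the contract looks like this — a simplified stand-in for illustration, not `supervision`'s actual implementation:

```python
def process_frames(frames, callback):
    # Minimal stand-in for the sv.process_video loop: pass each frame and
    # its index to the callback and collect the returned (annotated) frames.
    return [callback(frame, index) for index, frame in enumerate(frames)]
```

In the real API, frames come from `source_path` and the returned frames are encoded into `target_path`; your callback only needs to accept `(frame, index)` and return an annotated frame of the same shape.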
### Run with ultralytics

```python
import supervision as sv
from trackers import SORTTracker
from ultralytics import YOLO

tracker = SORTTracker()
model = YOLO("yolo11m.pt")
annotator = sv.LabelAnnotator(text_position=sv.Position.CENTER)

def callback(frame, _):
    result = model(frame)[0]
    detections = sv.Detections.from_ultralytics(result)
    detections = tracker.update(detections)
    return annotator.annotate(frame, detections, labels=detections.tracker_id)

sv.process_video(
    source_path="input.mp4",
    target_path="output.mp4",
    callback=callback,
)
```
### Run with transformers

```python
import torch
import supervision as sv
from trackers import SORTTracker
from transformers import RTDetrV2ForObjectDetection, RTDetrImageProcessor

tracker = SORTTracker()
image_processor = RTDetrImageProcessor.from_pretrained("PekingU/rtdetr_v2_r18vd")
model = RTDetrV2ForObjectDetection.from_pretrained("PekingU/rtdetr_v2_r18vd")
annotator = sv.LabelAnnotator(text_position=sv.Position.CENTER)

def callback(frame, _):
    inputs = image_processor(images=frame, return_tensors="pt")
    with torch.no_grad():
        outputs = model(**inputs)

    h, w, _ = frame.shape
    results = image_processor.post_process_object_detection(
        outputs, target_sizes=torch.tensor([(h, w)]), threshold=0.5
    )[0]

    detections = sv.Detections.from_transformers(
        transformers_results=results, id2label=model.config.id2label
    )
    detections = tracker.update(detections)
    return annotator.annotate(frame, detections, labels=detections.tracker_id)

sv.process_video(
    source_path="input.mp4",
    target_path="output.mp4",
    callback=callback,
)
```
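All three snippets share the same `tracker.update(detections)` call; only the detector changes. Between frames, SORT-style trackers predict where each track should be using a constant-velocity Kalman filter before matching by IoU. A stripped-down predict step looks roughly like this — illustration only, simplified from a real filter, which also propagates covariance:

```python
import numpy as np

def predict(state, dt=1.0):
    """Advance a track state [cx, cy, vx, vy] one step under constant velocity."""
    F = np.array([
        [1, 0, dt, 0],   # cx += vx * dt
        [0, 1, 0, dt],   # cy += vy * dt
        [0, 0, 1,  0],   # vx unchanged
        [0, 0, 0,  1],   # vy unchanged
    ], dtype=float)
    return F @ state
```

For example, `predict(np.array([10., 20., 2., -1.]))` yields `[12., 19., 2., -1.]`: the predicted center moves along the track's velocity, and that predicted box is what gets matched against the next frame's detections.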
## License

The code is released under the [Apache 2.0 license](https://github.com/roboflow/trackers/blob/main/LICENSE).

## Contribution

We welcome all contributions, whether it's reporting issues, suggesting features, or submitting pull requests. Please read our [contributor guidelines](https://github.com/roboflow/trackers/blob/main/CONTRIBUTING.md) to learn about our processes and best practices.