English | 简体中文
Documentation: https://mmtracking.readthedocs.io/
MMTracking is an open source video perception toolbox based on PyTorch. It is a part of the OpenMMLab project.
The master branch works with PyTorch 1.5+.
The First Unified Video Perception Platform
We are the first open source toolbox that unifies versatile video perception tasks, including video object detection, multiple object tracking, single object tracking and video instance segmentation.
Modular Design
We decompose the video perception framework into different components and one can easily construct a customized method by combining different modules.
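The registry/builder pattern behind this composition can be sketched in plain Python. This is a simplified illustration of the idea only (MMTracking actually relies on MMCV's `Registry`; the class and config below are hypothetical, not the real API):

```python
# Minimal sketch of a registry/builder pattern, in the spirit of
# MMCV's Registry (simplified; not the actual MMCV API).

class Registry:
    def __init__(self, name):
        self.name = name
        self._modules = {}

    def register_module(self, cls):
        # Used as a decorator: @TRACKERS.register_module
        self._modules[cls.__name__] = cls
        return cls

    def build(self, cfg):
        # cfg is a dict whose 'type' key names a registered class;
        # the remaining keys become constructor arguments.
        cfg = dict(cfg)
        cls = self._modules[cfg.pop('type')]
        return cls(**cfg)

TRACKERS = Registry('tracker')

@TRACKERS.register_module
class SortTracker:  # hypothetical component for illustration
    def __init__(self, match_iou_thr=0.5):
        self.match_iou_thr = match_iou_thr

# A customized method is just a different config dict:
tracker = TRACKERS.build(dict(type='SortTracker', match_iou_thr=0.7))
print(tracker.match_iou_thr)  # 0.7
```

Because every component is built from a config dict, combining different detectors, trackers and motion models reduces to editing configs rather than code.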
Simple, Fast and Strong
Simple: MMTracking interoperates with other OpenMMLab projects. It is built upon MMDetection, so any detector can be reused simply by modifying the configs.
Fast: All operations run on GPUs. The training and inference speeds are faster than or comparable to other implementations.
Strong: We reproduce state-of-the-art models and some of them even outperform the official implementations.
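As an illustration of the config-driven workflow, swapping the underlying detector typically only requires overriding a few fields in a config that inherits from a base file. The file name and field values below are illustrative, not an exact config from the repository:

```python
# Hypothetical MMTracking-style config fragment: inherit a base MOT
# config and swap the detector backbone from ResNet-50 to ResNet-101.
# (Illustrative names; see the configs/ directory for real files.)
_base_ = ['./base_mot_config.py']

model = dict(
    detector=dict(
        backbone=dict(
            depth=101,
            init_cfg=dict(type='Pretrained',
                          checkpoint='torchvision://resnet101'))))
```

Fields not overridden here are inherited unchanged from the base config.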
This project is released under the Apache 2.0 license.
Release QDTrack pretrained models.
v0.13.0 was released on 29/04/2022. Please refer to changelog.md for details and release history.
Results and models are available in the model zoo.
Supported Methods
Supported Datasets
Please refer to install.md for installation instructions.
Please see dataset.md and quick_run.md for the basic usage of MMTracking.
A Colab tutorial is provided. You may preview the notebook here or directly run it on Colab.
There are also usage tutorials covering: learning about configs; detailed descriptions of the vid, mot and sot configs; customizing datasets, data pipelines, and vid/mot/sot models; customizing runtime settings; and useful tools.
We appreciate all contributions to improve MMTracking. Please refer to CONTRIBUTING.md for the contributing guideline and this discussion for development roadmap.
MMTracking is an open source project that welcomes any contribution and feedback. We hope the toolbox and benchmark can serve the growing research community by providing a flexible as well as standardized toolkit to reimplement existing methods and develop new video perception methods.
If you find this project useful in your research, please consider citing:
@misc{mmtrack2020,
    title={{MMTracking: OpenMMLab} video perception toolbox and benchmark},
    author={MMTracking Contributors},
    howpublished={\url{https://github.com/open-mmlab/mmtracking}},
    year={2020}
}