We use distributed training.
All pytorch-style pretrained backbones on ImageNet are from PyTorch model zoo.
For fair comparison with other codebases, we report GPU memory as the maximum value of torch.cuda.max_memory_allocated() across all 8 GPUs. Note that this value is usually lower than the one shown by nvidia-smi.
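The memory-reporting convention above can be sketched as follows. This is a minimal illustration, not mmtracking's actual logging code: the per-GPU byte counts are placeholder values, whereas in a real distributed run each entry would come from calling torch.cuda.max_memory_allocated() on its own rank.

```python
# Sketch: aggregate per-GPU peak memory into the single reported number.
# In practice each entry would come from torch.cuda.max_memory_allocated()
# on one of the 8 ranks; the values below are placeholders.
per_gpu_peak_bytes = [
    3_512_000_000, 3_498_000_000, 3_505_000_000, 3_520_000_000,
    3_511_000_000, 3_499_000_000, 3_503_000_000, 3_517_000_000,
]

# The reported figure is the maximum over all GPUs, converted to MB.
reported_mb = max(per_gpu_peak_bytes) / 1024 ** 2
print(f"reported GPU memory: {reported_mb:.0f} MB")
```

Because the allocator tracks tensors it actually handed out, this number excludes PyTorch's caching overhead and the CUDA context, which is why nvidia-smi reports more.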
We report the inference time as the total time of network forwarding and post-processing, excluding data loading. Results are obtained with the script tools/analysis/benchmark.py, which computes the average time over 2000 images.
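The timing protocol can be sketched like this. It is a simplified stand-in for tools/analysis/benchmark.py, not its actual code: run_model and postprocess are hypothetical placeholders for the network forward pass and post-processing, and the key point is that only those two steps fall inside the timer, while data loading stays outside it.

```python
import time

def run_model(img):
    # Hypothetical stand-in for the network forward pass.
    return img

def postprocess(out):
    # Hypothetical stand-in for post-processing (e.g. NMS, track update).
    return out

def benchmark(images):
    """Average per-image time of forward + post-processing.

    Data loading (iterating `images`) happens outside the timed region,
    mirroring how the benchmark script measures inference time.
    """
    total = 0.0
    for img in images:  # loading/iteration is not timed
        start = time.perf_counter()
        out = run_model(img)
        postprocess(out)
        total += time.perf_counter() - start
    return total / len(images)

avg_s = benchmark(list(range(2000)))  # averaged over 2000 items
print(f"average inference time: {avg_s * 1000:.3f} ms/image")
```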
Speed benchmark environments
Hardware
Software environment
Please refer to DFF for details.
Please refer to FGFA for details.
Please refer to SELSA for details.
Please refer to Temporal RoI Align for details.
Please refer to SORT/DeepSORT for details.
Please refer to Tracktor for details.
Please refer to QDTrack for details.
Please refer to ByteTrack for details.
Please refer to SiameseRPN++ for details.
Please refer to STARK for details.
Please refer to MaskTrack R-CNN for details.