This is a PyTorch implementation of "SuperPoint: Self-Supervised Interest Point Detection and Description" by Daniel DeTone, Tomasz Malisiewicz, and Andrew Rabinovich (arXiv 2018). The code is partially based on the TensorFlow implementation https://github.com/rpautrat/SuperPoint.
Please consider starring this repo if it helps your research.
| Task | Homography estimation | | | Detector metric | | Descriptor metric | |
|---|---|---|---|---|---|---|---|
| | Epsilon = 1 | 3 | 5 | Repeatability | MLE | NN mAP | Matching Score |
| Pretrained model | 0.44 | 0.77 | 0.83 | 0.606 | 1.14 | 0.81 | 0.55 |
| SIFT (subpixel accuracy) | 0.63 | 0.76 | 0.79 | 0.51 | 1.16 | - | - |
| superpoint_coco_heat2_0_170k_hpatches_sub | 0.46 | 0.75 | 0.81 | 0.63 | 1.07 | 0.78 | 0.42 |
| superpoint_kitti_heat2_0_50k_hpatches_sub | 0.44 | 0.71 | 0.77 | 0.56 | 0.95 | 0.78 | 0.41 |
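For context on the epsilon columns: in HPatches-style evaluation, a homography estimate counts as correct at a given epsilon if the four image corners, warped by the estimated and the ground-truth homography, land within epsilon pixels of each other on average. A minimal sketch of that check (my reading of the metric, following the rpautrat/SuperPoint evaluation; not necessarily this repo's exact code):

```python
# Sketch of homography-estimation correctness: warp the image corners with the
# estimated and ground-truth homographies and threshold the mean corner error.
import cv2
import numpy as np

def homography_correct(H_est, H_gt, shape, epsilon):
    h, w = shape
    corners = np.array([[0, 0], [w - 1, 0], [w - 1, h - 1], [0, h - 1]],
                       dtype=np.float32).reshape(-1, 1, 2)
    warped_est = cv2.perspectiveTransform(corners, H_est)
    warped_gt = cv2.perspectiveTransform(corners, H_gt)
    mean_err = np.linalg.norm(warped_est - warped_gt, axis=-1).mean()
    return mean_err <= epsilon
```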
```bash
conda create --name py36-sp python=3.6
conda activate py36-sp
pip install -r requirements.txt
pip install -r requirements_torch.txt  # install pytorch
```
Paths for datasets and logs are set in `settings.py`.
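A minimal sketch of what `settings.py` may contain; the variable names `DATA_PATH` and `EXPER_PATH` here are assumptions, so check them against your checkout:

```python
# settings.py -- a minimal sketch; exact variable names may differ in the repo.
import os

DATA_PATH = os.getenv('DATA_DIR', 'datasets')  # $DATA_DIR: root of COCO/HPatches/KITTI
EXPER_PATH = 'logs'                            # where checkpoints and predictions go
```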
Datasets should be downloaded into $DATA_DIR. The Synthetic Shapes dataset will also be generated there. The folder structure should look like:
```
$DATA_DIR
|-- COCO
|   |-- train2014
|   |   |-- file1.jpg
|   |   `-- ...
|   `-- val2014
|       |-- file1.jpg
|       `-- ...
|-- HPatches
|   |-- i_ajuntament
|   `-- ...
|-- synthetic_shapes  # will be automatically created
`-- KITTI  # accumulated folders from raw data
    |-- 2011_09_26_drive_0020_sync
    |   |-- image_00/
    |   `-- ...
    |-- ...
    |-- 2011_09_28_drive_0001_sync
    |   |-- image_00/
    |   `-- ...
    |-- ...
    |-- 2011_09_29_drive_0004_sync
    |   |-- image_00/
    |   `-- ...
    |-- ...
    |-- 2011_09_30_drive_0016_sync
    |   |-- image_00/
    |   `-- ...
    |-- ...
    `-- 2011_10_03_drive_0027_sync
        |-- image_00/
        `-- ...
```
Monitor training with TensorBoard:
```bash
tensorboard --logdir=./runs/ [--host <static_ip_address>] [--port <port>]  # e.g. --port 6008
```
```bash
python train4.py train_base configs/magicpoint_shapes_pair.yaml magicpoint_synth --eval
```
You don't need to download the synthetic data; it is generated on the first run and exported to `./datasets`. You can change this path in `settings.py`.
This step runs homography adaptation (HA) to export pseudo ground truth labels for joint training (a conceptual sketch of HA follows the commands below).
Set the split to export in the config file:
```yaml
export_folder: <'train' | 'val'>  # set export for training or validation
```
Usage:
```bash
python export.py <export task> <config file> <export folder> [--outputImg]  # --outputImg saves visualization images (space inefficient)
```
Export detections on MS-COCO:
```bash
python export.py export_detector_homoAdapt configs/magicpoint_coco_export.yaml magicpoint_synth_homoAdapt_coco
```
Export detections on KITTI (the split files live in `datasets/kitti_split/`):
```bash
python export.py export_detector_homoAdapt configs/magicpoint_kitti_export.yaml magicpoint_base_homoAdapt_kitti
```
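As a conceptual reference, homography adaptation runs the base detector under many random homographies and aggregates the warped-back heatmaps into a single pseudo ground truth detection map. A rough sketch under that reading (the `sample_homography` helper and the `detect_heatmap` interface are hypothetical, not this repo's functions):

```python
import cv2
import numpy as np

def sample_homography(h, w, perturb=0.1):
    # Hypothetical helper: jitter the four image corners and fit a homography.
    src = np.float32([[0, 0], [w, 0], [w, h], [0, h]])
    offset = np.random.uniform(-perturb, perturb, (4, 2)) * [w, h]
    dst = (src + offset).astype(np.float32)
    return cv2.getPerspectiveTransform(src, dst)

def homography_adaptation(img, detect_heatmap, num_iter=100):
    # detect_heatmap(img) -> HxW detection probability map (assumed interface).
    h, w = img.shape[:2]
    acc = detect_heatmap(img).astype(np.float64)
    count = np.ones((h, w), dtype=np.float64)
    for _ in range(num_iter - 1):
        H = sample_homography(h, w)
        warped = cv2.warpPerspective(img, H, (w, h))
        prob = detect_heatmap(warped)
        H_inv = np.linalg.inv(H)
        # Warp the heatmap and a validity mask back into the original frame,
        # then average only over the iterations that actually covered a pixel.
        acc += cv2.warpPerspective(prob, H_inv, (w, h))
        count += cv2.warpPerspective(np.ones((h, w)), H_inv, (w, h))
    return acc / np.maximum(count, 1e-6)
```

Thresholding the aggregated map (optionally with non-maximum suppression) then yields the exported pseudo ground truth points.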
You need pseudo ground truth labels to train the detector. Labels can be exported from step 2) or downloaded from the link. Then, as usual, set the config file before training.
Usage:
```bash
python train4.py <train task> <config file> <export folder> --eval
```
Train on MS-COCO or KITTI:
```bash
python train4.py train_joint configs/superpoint_coco_train_heatmap.yaml superpoint_coco --eval --debug
python train4.py train_joint configs/superpoint_kitti_train_heatmap.yaml superpoint_kitti --eval --debug
```
`./run_export.sh` will run export and then evaluation. Usage:
```bash
python export.py <export task> <config file> <export folder>
```
Export descriptors on HPatches:
```bash
python export.py export_descriptor configs/magicpoint_repeatability_heatmap.yaml superpoint_hpatches_test
```
Usage:
```bash
python evaluation.py <path to npz files> [-r, --repeatibility | -o, --outputImg | -homo, --homography]
```
For example:
```bash
python evaluation.py logs/superpoint_hpatches_test/predictions --repeatibility --outputImg --homography --plotMatching
```
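As a reference for the `--repeatibility` numbers: repeatability is the fraction of detected points that have a corresponding detection within a small pixel threshold after warping with the ground-truth homography. A rough sketch of that metric as defined in the SuperPoint paper (not this repo's exact implementation):

```python
import numpy as np

def repeatability(kpts1_warped, kpts2, dist_thresh=3):
    # kpts1_warped: (N, 2) keypoints from image 1, warped into image 2's frame
    # kpts2: (M, 2) keypoints detected in image 2
    if len(kpts1_warped) == 0 or len(kpts2) == 0:
        return 0.0
    # Pairwise distances; a point is "repeated" if its nearest neighbor in the
    # other image is within dist_thresh pixels.
    d = np.linalg.norm(kpts1_warped[:, None, :] - kpts2[None, :, :], axis=2)
    count1 = (d.min(axis=1) <= dist_thresh).sum()
    count2 = (d.min(axis=0) <= dist_thresh).sum()
    return (count1 + count2) / (len(kpts1_warped) + len(kpts2))
```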
Export predictions from classical detectors/descriptors (SIFT) for comparison:
```bash
python export_classical.py export_descriptor configs/classical_descriptors.yaml sift_test --correspondence
```
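For the descriptor metrics above (NN mAP, matching score), matches are typically formed by nearest-neighbor search between the two descriptor sets. A minimal sketch of mutual nearest-neighbor matching on L2-normalized descriptors (an assumed procedure, not necessarily this repo's code):

```python
import numpy as np

def mutual_nn_matches(desc1, desc2):
    # desc1: (N, D), desc2: (M, D), both L2-normalized, so the dot product
    # is cosine similarity.
    sim = desc1 @ desc2.T
    nn12 = sim.argmax(axis=1)   # best match in image 2 for each point in image 1
    nn21 = sim.argmax(axis=0)   # best match in image 1 for each point in image 2
    idx1 = np.arange(len(desc1))
    mutual = nn21[nn12] == idx1  # keep only matches that agree in both directions
    return np.stack([idx1[mutual], nn12[mutual]], axis=1)  # (K, 2) index pairs
```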
Pretrained models:
- Model trained on MS-COCO: `logs/superpoint_coco_heat2_0/checkpoints/superPointNet_170000_checkpoint.pth.tar`
- Model trained on KITTI: `logs/superpoint_kitti_heat2_0/checkpoints/superPointNet_50000_checkpoint.pth.tar`
- Pretrained model from the original authors (MagicLeap): `pretrained/superpoint_v1.pth`
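A minimal sketch of loading one of these checkpoints for inference; the model class, import path, and state-dict key are assumptions, so verify them against the repo's loader:

```python
import torch
# Hypothetical import path -- check the models/ directory for the actual class.
from models.SuperPointNet_gauss2 import SuperPointNet_gauss2

model = SuperPointNet_gauss2()
ckpt = torch.load(
    'logs/superpoint_coco_heat2_0/checkpoints/superPointNet_170000_checkpoint.pth.tar',
    map_location='cpu')
model.load_state_dict(ckpt['model_state_dict'])  # key name assumed
model.eval()  # inference mode: disables dropout/batch-norm updates
```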
`notebooks/visualize_hpatches.ipynb` -- visualize the images saved in the output folders.
Please cite the original paper.
```
@inproceedings{detone2018superpoint,
  title={Superpoint: Self-supervised interest point detection and description},
  author={DeTone, Daniel and Malisiewicz, Tomasz and Rabinovich, Andrew},
  booktitle={Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops},
  pages={224--236},
  year={2018}
}
```
This implementation was developed by You-Yi Jau and Rui Zhu. Please contact You-Yi for any problems. Again, this work is based on the TensorFlow implementation by Rémi Pautrat and Paul-Edouard Sarlin and on the official SuperPointPretrainedNetwork. Thanks to Daniel DeTone for his help during the implementation.
What have I learned from implementing a deep learning paper?