LPD-Net: 3D Point Cloud Learning for Large-Scale Place Recognition and Environment Analysis ICCV 2019, Seoul, Korea
Zhe Liu1, Shunbo Zhou1, Chuanzhe Suo1, Peng Yin3, Wen Chen1, Hesheng Wang2,Haoang Li1, Yun-Hui Liu1
1The Chinese University of Hong Kong, 2Shanghai Jiao Tong University, 3Carnegie Mellon University
Point cloud based place recognition remains an open problem due to the difficulty of extracting local features from a raw 3D point cloud and generating a global descriptor, and it becomes even harder in large-scale dynamic environments. We develop a novel deep neural network, named LPD-Net (Large-scale Place Description Network), which can extract discriminative and generalizable global descriptors from the raw 3D point cloud. The arXiv version of LPD-Net can be found here.
@InProceedings{Liu_2019_ICCV,
author = {Liu, Zhe and Zhou, Shunbo and Suo, Chuanzhe and Yin, Peng and Chen, Wen and Wang, Hesheng and Li, Haoang and Liu, Yun-Hui},
title = {LPD-Net: 3D Point Cloud Learning for Large-Scale Place Recognition and Environment Analysis},
booktitle = {The IEEE International Conference on Computer Vision (ICCV)},
month = {October},
year = {2019}
}
The benchmark datasets introduced in this work can be downloaded here. They were created by PointNetVLAD for point cloud based retrieval for place recognition from the open-source Oxford RobotCar dataset. Details can be found in PointNetVLAD.
The code was tested with Python 3 on TensorFlow 1.4.0 with CUDA 8.0, and on TensorFlow 1.12.0 with CUDA 9.0:
sudo apt-get install python3-pip python3-dev python-virtualenv
virtualenv --system-site-packages -p python3 ~/tensorflow
source ~/tensorflow/bin/activate
pip3 install --upgrade pip
pip3 install --upgrade tensorflow-gpu==1.4.0
pip install scipy pandas scikit-learn
pip install glog
Download the zip file of the benchmark datasets found here. Extract the folder into the same directory as the project code, so that directory contains two folders: 1) benchmark_datasets/ and 2) LPD_net/
We preprocess the benchmark datasets first and store the point cloud features in bin files to save training time. The files only need to be generated once and are then used as network input. Generating these files may take a few hours.
# For pre-processing dataset to generate pointcloud with features
python prepare_data.py
# Parse Arguments: --k_start 20 --k_end 100 --k_step 10 --bin_core_num 10
# KNN neighborhood sizes from 20 to 100 in steps of 10; parallel process pool size: 10
We store the positive and negative point clouds for each anchor in pickle files, which are used by our training and evaluation code. The files only need to be generated once; generating them may take a few minutes.
cd generating_queries/
# For training tuples in LPD-Net
python generate_training_tuples_baseline.py
# For network evaluation
python generate_test_sets.py
# For network inference
python generate_inference_sets.py
# Modify the variables (folders or index_list) to specify the folder
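The generated pickles follow the PointNetVLAD convention of mapping each anchor index to its point cloud file and the indices of its positives and negatives. The key names ("query", "positives", "negatives") and the toy paths below are assumptions for illustration; inspect a generated file to confirm the actual structure.

```python
# Illustrative sketch of the assumed training-tuple pickle structure:
# anchor index -> {file path, positive indices, negative indices}.
import os
import pickle
import tempfile

queries = {
    0: {"query": "benchmark_datasets/run_a/cloud_0.bin",
        "positives": [1, 2], "negatives": [5, 9]},
    1: {"query": "benchmark_datasets/run_a/cloud_1.bin",
        "positives": [0], "negatives": [7]},
}

path = os.path.join(tempfile.gettempdir(), "training_queries_demo.pickle")
with open(path, "wb") as f:
    pickle.dump(queries, f)        # what generate_training_tuples_* writes

with open(path, "rb") as f:
    loaded = pickle.load(f)        # what the training code reads back
print(loaded[0]["positives"])      # [1, 2]
```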
To train our network, run the following command:
python train_lpdnet.py
# Parse Arguments: --model lpd_FNSF --log_dir log/ --restore
# For example, Train lpd_FNSF network from scratch
python train_lpdnet.py --model lpd_FNSF
# Retrain lpd_FNSF network with pretrained model
python train_lpdnet.py --log_dir log/<model_folder, e.g. lpd_FNSF-18-11-16-12-03> --restore
To evaluate the model, run the following command:
python evaluate.py --log_dir log/<model_folder, e.g. lpd_FNSF-18-11-16-12-03>
# The results.txt will be saved in results/<model_folder>
To infer the model, run the following command to get global descriptors:
python inference.py --log_dir log/<model_folder, e.g. lpd_FNSF-18-11-16-12-03>
# The inference_vectors.bin will be saved in LPD_net folder
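The saved descriptors can be read back with numpy. The sketch below assumes the file stores float32 values with one fixed-size descriptor per point cloud; the 256 dimension is an assumption for illustration, so check the inference code for the actual dtype and output size. The demo writes its own file first so it is self-contained.

```python
# Illustrative sketch: reading global descriptors back from a .bin file.
# Assumes float32 storage and a 256-dim descriptor (both assumptions).
import os
import tempfile
import numpy as np

DESC_DIM = 256  # assumed descriptor size; verify against the network output

# Simulate an inference_vectors.bin with 4 descriptors for the demo.
demo = np.random.rand(4, DESC_DIM).astype(np.float32)
path = os.path.join(tempfile.gettempdir(), "inference_vectors.bin")
demo.tofile(path)

# Read the flat float32 buffer and restore one row per point cloud.
vecs = np.fromfile(path, dtype=np.float32).reshape(-1, DESC_DIM)
print(vecs.shape)  # (4, 256)
```

Each row is then a global place descriptor, and place retrieval reduces to a nearest-neighbor search over these rows.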
The pre-trained models for the lpd_FNSF network can be downloaded here. Put them under the log/ folder.
This repository is released under MIT License (see LICENSE file for details).