This is a Jittor implementation of PCT: Point Cloud Transformer.
Paper link: https://arxiv.org/pdf/2012.09688.pdf
The irregular domain and lack of ordering make it challenging to design deep neural networks for point cloud processing. This paper presents a novel framework named Point Cloud Transformer (PCT) for point cloud learning. PCT is based on the Transformer, which has achieved huge success in natural language processing and shows great potential in image processing. It is inherently permutation invariant when processing a sequence of points, making it well-suited for point cloud learning. To better capture local context within the point cloud, we enhance the input embedding with the support of farthest point sampling and nearest neighbor search. Extensive experiments demonstrate that PCT achieves state-of-the-art performance on shape classification, part segmentation, and normal estimation tasks.
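The local-context step above combines farthest point sampling (to pick well-spread anchor points) with nearest neighbor search (to group each anchor's neighborhood). The repo's actual implementation uses Jittor; the following is only a minimal NumPy sketch of the two operations, with hypothetical function names, not the code released here.

```python
import numpy as np

def farthest_point_sample(points, n_samples):
    """Greedy FPS: points is (N, 3); returns indices of n_samples
    points, each chosen to be farthest from those already selected."""
    n = points.shape[0]
    selected = np.zeros(n_samples, dtype=int)
    # dist[i] = squared distance from point i to the nearest selected point
    dist = np.full(n, np.inf)
    farthest = 0  # start from an arbitrary point
    for i in range(n_samples):
        selected[i] = farthest
        d = np.sum((points - points[farthest]) ** 2, axis=1)
        dist = np.minimum(dist, d)
        farthest = int(np.argmax(dist))  # point farthest from the selected set
    return selected

def knn(points, centers, k):
    """For each center (M, 3), return indices of its k nearest
    neighbors among points (N, 3), shape (M, k)."""
    d = np.sum((centers[:, None, :] - points[None, :, :]) ** 2, axis=-1)
    return np.argsort(d, axis=1)[:, :k]
```

Sampling anchors with FPS and then grouping k neighbors around each anchor yields the local patches whose features are aggregated into PCT's input embedding.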
Jittor is a high-performance deep learning framework that is easy to learn and use. It provides interfaces similar to PyTorch's.
You can learn how to use Jittor from the following links:
Jittor homepage: https://cg.cs.tsinghua.edu.cn/jittor/
Jittor github: https://github.com/Jittor/jittor
If you have any questions about Jittor, you can ask in the Jittor developer QQ group: 761222083
For now, we release only the core code of our paper. All code and pretrained models will be available soon.
If this work is helpful to you, please cite our paper:
@misc{guo2021pct,
title={Pct: Point cloud transformer},
author={Meng-Hao Guo and Jun-Xiong Cai and Zheng-Ning Liu and Tai-Jiang Mu and Ralph R. Martin and Shi-Min Hu},
year={2021},
eprint={2012.09688},
archivePrefix={arXiv},
primaryClass={cs.CV}
}