# LightGlue-ONNX
**Repository Path**: aircraft-is-design/LightGlue-ONNX
## Basic Information
- **Project Name**: LightGlue-ONNX
- **Description**: ONNX-compatible LightGlue: Local Feature Matching at Light Speed. Supports TensorRT, OpenVINO
- **Primary Language**: Unknown
- **License**: Apache-2.0
- **Default Branch**: main
- **Homepage**: None
- **GVP Project**: No
## Statistics
- **Stars**: 1
- **Forks**: 0
- **Created**: 2024-06-17
- **Last Updated**: 2025-07-20
## Categories & Tags
**Categories**: Uncategorized
**Tags**: None
## README
[ONNX](https://onnx.ai/) · [TensorRT](https://developer.nvidia.com/tensorrt) · [GitHub Stars](https://github.com/fabio-sim/LightGlue-ONNX/stargazers) · [Releases](https://github.com/fabio-sim/LightGlue-ONNX/releases) · [Blog Post](https://fabio-sim.github.io/blog/accelerating-lightglue-inference-onnx-runtime-tensorrt/)
# LightGlue ONNX
Open Neural Network Exchange (ONNX) compatible implementation of [LightGlue: Local Feature Matching at Light Speed](https://github.com/cvg/LightGlue). The ONNX model format allows for interoperability across different platforms with support for multiple execution providers, and removes Python-specific dependencies such as PyTorch. Supports TensorRT and OpenVINO.
> ✨ ***What's New***: End-to-end parallel dynamic batch size support. Read more in this [blog post](https://fabio-sim.github.io/blog/accelerating-lightglue-inference-onnx-runtime-tensorrt/).

⏱️ **Inference Time Comparison**
## Changelog
- **17 July 2024**: End-to-end parallel dynamic batch size support. Revamp script UX. Add [blog post](https://fabio-sim.github.io/blog/accelerating-lightglue-inference-onnx-runtime-tensorrt/).
- **02 November 2023**: Introduce TopK-trick to optimize out ArgMax for about 30% speedup.
- **27 October 2023**: LightGlue-ONNX added to [Kornia](https://kornia.readthedocs.io/en/latest/feature.html#kornia.feature.OnnxLightGlue)!
- **04 October 2023**: Fused LightGlue ONNX models with support for FlashAttention-2 via `onnxruntime>=1.16.0`, up to 80% faster inference on long sequence lengths (number of keypoints).
- **04 October 2023**: Multihead-attention fusion optimization.
- **19 July 2023**: Add support for TensorRT.
- **13 July 2023**: Add support for Flash Attention.
- **11 July 2023**: Add support for mixed precision.
- **04 July 2023**: Add inference time comparisons.
- **01 July 2023**: Add support for extractor `max_num_keypoints`.
- **30 June 2023**: Add support for DISK extractor.
- **28 June 2023**: Add end-to-end SuperPoint+LightGlue export & inference pipeline.
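
The TopK trick mentioned in the 02 November 2023 entry replaces ONNX `ArgMax` nodes with `TopK` at `k=1`, which several runtimes execute faster while yielding the same indices. A minimal NumPy illustration of the equivalence (not the actual ONNX graph rewrite):

```python
import numpy as np

# Simulated match-score matrix: rows are keypoints, columns are candidates.
scores = np.array([[0.1, 0.9, 0.3],
                   [0.7, 0.2, 0.5]])

# ArgMax along the last axis...
argmax_idx = scores.argmax(axis=-1)

# ...matches TopK with k=1 (emulated here by argsort on negated scores).
topk_idx = np.argsort(-scores, axis=-1)[..., 0]

assert (argmax_idx == topk_idx).all()
print(argmax_idx)  # [1 0]
```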
## Install
```shell
conda create -n ml python=3.11
conda activate ml
# Install cudatoolkit
conda install cudatoolkit
# Install the latest PyTorch build
conda install pytorch torchvision torchaudio pytorch-cuda -c pytorch -c nvidia
pip install onnx onnxruntime-gpu opencv-python
pip install typer
```
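
After installation, it can be worth sanity-checking that ONNX Runtime was built with GPU support before running anything heavy. This uses the real `onnxruntime.get_available_providers()` API; the `try/except` keeps the check harmless on CPU-only or partial installs:

```python
def cuda_ep_available() -> bool:
    """Return True if ONNX Runtime reports the CUDA execution provider."""
    try:
        import onnxruntime as ort
    except ImportError:
        return False  # onnxruntime is not installed
    return "CUDAExecutionProvider" in ort.get_available_providers()

print(cuda_ep_available())
```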
## ONNX Inference
With ONNX models in hand, one can perform inference in Python using ONNX Runtime (see `requirements-onnx.txt`).
The LightGlue inference pipeline has been encapsulated into a runner class:
```python
from onnx_runner import LightGlueRunner, load_image, rgb_to_grayscale
image0, scales0 = load_image("assets/sacre_coeur1.jpg", resize=512)
image1, scales1 = load_image("assets/sacre_coeur2.jpg", resize=512)
image0 = rgb_to_grayscale(image0) # only needed for SuperPoint
image1 = rgb_to_grayscale(image1) # only needed for SuperPoint
# Create ONNXRuntime runner
runner = LightGlueRunner(
    extractor_path="weights/superpoint.onnx",
    lightglue_path="weights/superpoint_lightglue_fused.onnx",
    providers=["CUDAExecutionProvider", "CPUExecutionProvider"],
    # or TensorrtExecutionProvider, OpenVINOExecutionProvider
)
# Run inference
m_kpts0, m_kpts1 = runner.run(image0, image1, scales0, scales1)
print(m_kpts0.shape, m_kpts1.shape)
```
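
If you preprocess images yourself instead of using the bundled `load_image`/`rgb_to_grayscale` helpers, the grayscale conversion SuperPoint expects can be sketched in plain NumPy. This assumes a channel-last RGB array in `[0, 1]` and ITU-R BT.601 luma weights; the repository's actual helper may differ in layout or weights:

```python
import numpy as np

def rgb_to_grayscale_np(image: np.ndarray) -> np.ndarray:
    """Convert an (H, W, 3) RGB array to a (1, H, W) grayscale array."""
    weights = np.array([0.299, 0.587, 0.114], dtype=np.float32)
    gray = image.astype(np.float32) @ weights  # weighted sum over channels
    return gray[None, ...]  # add the channel dim SuperPoint expects

# A pure-red image maps every pixel to the red luma weight:
gray = rgb_to_grayscale_np(np.ones((4, 4, 3), dtype=np.float32) * [1.0, 0.0, 0.0])
print(gray.shape)  # (1, 4, 4)
```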
Note:
If the `libcudnn.so` library cannot be found at runtime, specify its path manually:
```shell
export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/home/bushuhui/anaconda3/envs/ml/lib/python3.11/site-packages/torch/lib
```
Demo inference:
```bash
python infer.py --img_paths assets/sacre_coeur1.jpg assets/sacre_coeur2.jpg --lightglue_path weights/superpoint_lightglue_end2end_fused.onnx --extractor_type superpoint
```
## ⭐ ONNX Export & Inference
We provide a [typer](https://github.com/tiangolo/typer) CLI [`dynamo.py`](dynamo.py) to easily export LightGlue to ONNX and perform inference using ONNX Runtime. If you would like to try out inference right away, you can download ONNX models that have already been exported [here](https://github.com/fabio-sim/LightGlue-ONNX/releases).
```shell
$ python dynamo.py --help
Usage: dynamo.py [OPTIONS] COMMAND [ARGS]...
LightGlue Dynamo CLI
╭─ Commands ───────────────────────────────────────╮
│ export    Export LightGlue to ONNX.              │
│ infer     Run inference for LightGlue ONNX model.│
│ trtexec   Run pure TensorRT inference using      │
│           Polygraphy.                            │
╰──────────────────────────────────────────────────╯
```
Pass `--help` to see the available options for each command. The CLI will export the full extractor-matcher pipeline so that you don't have to worry about orchestrating intermediate steps.
## 📖 Example Commands
### 🔥 ONNX Export

```shell
python dynamo.py export superpoint \
  --num-keypoints 1024 \
  -b 2 -h 1024 -w 1024 \
  -o weights/superpoint_lightglue_pipeline.onnx
```

### ⚡ ONNX Runtime Inference (CUDA)

```shell
python dynamo.py infer \
  weights/superpoint_lightglue_pipeline.onnx \
  assets/sacre_coeur1.jpg assets/sacre_coeur2.jpg \
  superpoint \
  -h 1024 -w 1024 \
  -d cuda
```

### 🚀 ONNX Runtime Inference (TensorRT)

```shell
python dynamo.py infer \
  weights/superpoint_lightglue_pipeline.trt.onnx \
  assets/sacre_coeur1.jpg assets/sacre_coeur2.jpg \
  superpoint \
  -h 1024 -w 1024 \
  -d tensorrt --fp16
```

### 🧩 TensorRT Inference

```shell
python dynamo.py trtexec \
  weights/superpoint_lightglue_pipeline.trt.onnx \
  assets/sacre_coeur1.jpg assets/sacre_coeur2.jpg \
  superpoint \
  -h 1024 -w 1024 \
  --fp16
```

### 🟣 ONNX Runtime Inference (OpenVINO)

```shell
python dynamo.py infer \
  weights/superpoint_lightglue_pipeline.onnx \
  assets/sacre_coeur1.jpg assets/sacre_coeur2.jpg \
  superpoint \
  -h 512 -w 512 \
  -d openvino
```
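
The export command above fixes the batch size and resolution (`-b 2 -h 1024 -w 1024`), so batched inference needs inputs stacked to exactly that shape. A minimal sketch of assembling such a batch, assuming channel-first float32 input (verify the exported model's actual input layout, e.g. with Netron, before relying on this):

```python
import numpy as np

def make_batch(images, height=1024, width=1024):
    """Stack per-image (H, W, C) arrays into one (B, C, H, W) float32 batch."""
    batch = []
    for img in images:
        if img.shape[:2] != (height, width):
            raise ValueError("resize images to the export resolution first")
        batch.append(np.transpose(img, (2, 0, 1)).astype(np.float32))
    return np.stack(batch, axis=0)

batch = make_batch([np.zeros((1024, 1024, 3))] * 2)
print(batch.shape)  # (2, 3, 1024, 1024)
```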
## Credits
If you use any ideas from the papers or code in this repository, please consider citing the authors of [LightGlue](https://arxiv.org/abs/2306.13643), [SuperPoint](https://arxiv.org/abs/1712.07629), and [DISK](https://arxiv.org/abs/2006.13566). Lastly, if the ONNX versions helped you in any way, please also consider starring this repository.
```txt
@inproceedings{lindenberger23lightglue,
  author    = {Philipp Lindenberger and
               Paul-Edouard Sarlin and
               Marc Pollefeys},
  title     = {{LightGlue}: Local Feature Matching at Light Speed},
  booktitle = {ArXiv PrePrint},
  year      = {2023}
}
```
```txt
@article{DBLP:journals/corr/abs-1712-07629,
  author     = {Daniel DeTone and
                Tomasz Malisiewicz and
                Andrew Rabinovich},
  title      = {SuperPoint: Self-Supervised Interest Point Detection and Description},
  journal    = {CoRR},
  volume     = {abs/1712.07629},
  year       = {2017},
  url        = {http://arxiv.org/abs/1712.07629},
  eprinttype = {arXiv},
  eprint     = {1712.07629},
  timestamp  = {Mon, 13 Aug 2018 16:47:29 +0200},
  biburl     = {https://dblp.org/rec/journals/corr/abs-1712-07629.bib},
  bibsource  = {dblp computer science bibliography, https://dblp.org}
}
```
```txt
@article{DBLP:journals/corr/abs-2006-13566,
  author     = {Michal J. Tyszkiewicz and
                Pascal Fua and
                Eduard Trulls},
  title      = {{DISK:} Learning local features with policy gradient},
  journal    = {CoRR},
  volume     = {abs/2006.13566},
  year       = {2020},
  url        = {https://arxiv.org/abs/2006.13566},
  eprinttype = {arXiv},
  eprint     = {2006.13566},
  timestamp  = {Wed, 01 Jul 2020 15:21:23 +0200},
  biburl     = {https://dblp.org/rec/journals/corr/abs-2006-13566.bib},
  bibsource  = {dblp computer science bibliography, https://dblp.org}
}
```