# FE-LSD
**Repository Path**: WLLwssy/FE-LSD
## Basic Information
- **Project Name**: FE-LSD
- **Description**: 《Detecting Line Segments in Motion-blurred Images with Events》
设计了一个通用的帧-事件特征融合网络,该网络由基于通道注意力的浅融合模块和基于自注意力的双沙漏模块组成。然后,利用两个最先进的线框解析网络来检测融合特征图上的线段。还贡献 FE-Wireframe 和 FE-Blurframe,具有成对的运动模糊图像和事件。
- **Primary Language**: Unknown
- **License**: GPL-3.0
- **Default Branch**: main
- **Homepage**: None
- **GVP Project**: No
## Statistics
- **Stars**: 0
- **Forks**: 0
- **Created**: 2024-06-18
- **Last Updated**: 2024-06-18
## Categories & Tags
**Categories**: Uncategorized
**Tags**: None
## README
FE-LSD
===========================================================================================================================================
This repository contains the official PyTorch implementation of the paper: [Detecting Line Segments in Motion-blurred Images with Events](https://levenberg.github.io/FE-LSD/).
# Introduction
[FE-LSD](https://levenberg.github.io/FE-LSD/) is an event-enhanced line segment detection framework for motion-blurred images, built on thoughtful information fusion of the two modalities and an advanced wireframe parsing network. Extensive results on both synthetic and realistic datasets demonstrate the effectiveness of the proposed method for handling motion blur in line segment detection.
# Network Architecture
*Network architecture figure (see the [project page](https://levenberg.github.io/FE-LSD/)).*
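The fusion front end combines frame and event features with a channel-attention based shallow fusion module before the self-attention dual-hourglass backbone. The snippet below is a minimal, illustrative sketch of such a channel-attention fusion step; the class name, channel counts, and reduction ratio are assumptions for illustration, and the actual layers are defined in this repository's model code.
```python
import torch
import torch.nn as nn

class ChannelAttentionFusion(nn.Module):
    """Illustrative channel-attention fusion of frame and event features.

    This is a sketch in the spirit of the paper's shallow fusion module,
    not the repository's exact implementation.
    """

    def __init__(self, channels: int, reduction: int = 4):
        super().__init__()
        self.gate = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),                       # global context per channel
            nn.Conv2d(2 * channels, channels // reduction, 1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, 2 * channels, 1),
            nn.Sigmoid(),                                  # per-channel weights in (0, 1)
        )
        self.project = nn.Conv2d(2 * channels, channels, 1)

    def forward(self, frame_feat: torch.Tensor, event_feat: torch.Tensor) -> torch.Tensor:
        x = torch.cat([frame_feat, event_feat], dim=1)     # stack both modalities
        x = x * self.gate(x)                               # reweight channels by attention
        return self.project(x)                             # fuse back to a single feature map

# Example: fuse two (B, 64, H, W) feature maps into one.
# fused = ChannelAttentionFusion(64)(frame_feat, event_feat)
```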
# Results
## FE-Wireframe Dataset
* Quantitative Comparisons
| Method | sAP5 | sAP10 | sAP15 | msAP | mAPJ | APH | FH | FPS |
| :--- | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: |
| LSD | 0.1 | 0.6 | 1.1 | 0.6 | 3.0 | 19.5 | 42.6 | 76.7 |
| FBSD | 0.2 | 0.4 | 0.9 | 0.5 | 2.9 | 24.9 | 47.0 | 21.7 |
| L-CNN | 3.4 | 5.1 | 6.2 | 4.9 | 7.0 | 22.7 | 38.8 | 28.8 |
| HAWP | 3.5 | 5.1 | 6.3 | 5.0 | 6.8 | 21.7 | 40.2 | 36.6 |
| ULSD | 3.5 | 5.3 | 6.8 | 5.2 | 7.5 | 20.2 | 40.3 | 39.7 |
| LETR | 2.8 | 5.0 | 6.5 | 4.8 | 7.3 | 21.9 | 41.9 | 4.2 |
| L-CNN (Retrained) | 40.6 | 45.8 | 48.2 | 44.8 | 45.6 | 70.5 | 71.1 | 10.6 |
| HAWP (Retrained) | 45.1 | 50.4 | 52.9 | 49.5 | 46.8 | 75.0 | 73.2 | 26.8 |
| ULSD (Retrained) | 47.0 | 52.7 | 55.2 | 51.7 | 48.8 | 72.2 | 73.7 | 32.2 |
| LETR (Retrained) | 24.7 | 34.7 | 39.7 | 33.1 | 25.4 | 66.1 | 71.5 | 3.9 |
| FE-HAWP | 48.7 | 53.9 | 56.2 | 53.0 | 49.4 | 77.1 | 75.1 | 22.2 |
| FE-ULSD | 50.9 | 56.5 | 58.8 | 55.4 | 51.1 | 75.3 | 75.9 | 24.2 |
* Qualitative Comparisons
## FE-Blurframe Dataset
* Quantitative Comparisons
| Method | sAP5 | sAP10 | sAP15 | msAP | mAPJ | APH | FH | FPS |
| :--- | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: |
| LSD | 1.1 | 2.8 | 4.1 | 2.7 | 5.1 | 29.4 | 48.1 | 61.0 |
| FBSD | 0.9 | 1.9 | 2.7 | 1.8 | 5.1 | 34.2 | 53.2 | 15.9 |
| L-CNN | 7.5 | 11.5 | 13.7 | 10.9 | 12.4 | 27.9 | 45.2 | 29.7 |
| HAWP | 8.4 | 12.8 | 15.3 | 12.2 | 12.4 | 32.0 | 48.2 | 38.1 |
| ULSD | 6.8 | 10.8 | 13.0 | 10.2 | 11.8 | 26.7 | 45.6 | 40.6 |
| LETR | 7.1 | 13.0 | 16.8 | 12.3 | 12.1 | 30.2 | 51.1 | 3.6 |
| L-CNN (Retrained) | 34.0 | 40.3 | 43.0 | 39.1 | 40.3 | 66.0 | 67.1 | 17.7 |
| HAWP (Retrained) | 37.0 | 43.9 | 46.9 | 42.6 | 41.6 | 67.9 | 69.6 | 29.0 |
| ULSD (Retrained) | 42.0 | 47.8 | 50.4 | 46.7 | 48.5 | 67.0 | 69.3 | 32.2 |
| LETR (Retrained) | 22.6 | 33.8 | 38.8 | 31.7 | 23.2 | 57.7 | 65.4 | 3.3 |
| FE-HAWP | 47.5 | 53.0 | 55.4 | 52.0 | 50.9 | 74.0 | 73.9 | 19.3 |
| FE-ULSD | 47.3 | 52.9 | 55.2 | 51.8 | 52.2 | 72.9 | 73.7 | 19.7 |
| FE-HAWP (Fine-tuned) | 59.8 | 64.2 | 65.9 | 63.3 | 60.1 | 82.0 | 79.7 | 21.1 |
| FE-ULSD (Fine-tuned) | 59.3 | 63.8 | 65.7 | 62.9 | 61.0 | 77.8 | 77.1 | 21.6 |
* Qualitative Comparisons
# Requirements
* torch>=1.6.0
* torchvision>=0.7.0
* CUDA>=10.1
* lh_tool, matplotlib, numpy, opencv_python, Pillow, scikit_learn, scipy, setuptools, tensorboardX, timm, tqdm, yacs (see `requirements.txt`)
# Step-by-step installation
```shell
conda create --name FE-LSD python=3.8
conda activate FE-LSD
cd
git clone https://github.com/lh9171338/FE-LSD.git
cd FE-LSD
pip install -r requirements.txt
python setup.py build_ext --inplace
```
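After installation, a quick check from Python confirms that the environment satisfies the PyTorch/CUDA requirements listed above:
```python
import torch

# Print the installed PyTorch version, the CUDA version it was built with,
# and whether a CUDA device is visible to this environment.
print(torch.__version__, torch.version.cuda, torch.cuda.is_available())
```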
# Quickstart with the pretrained model
* Pretrained models are available on [Google Drive](https://drive.google.com/drive/folders/1WGSftMoUgdAFjYjJtMP-JQN0CXiMmKXq?usp=sharing) and [Baiduyun](https://pan.baidu.com/s/19nWYeWQMn9qbvLErHsOyYw?pwd=spth). Download them and put them in the **model/** folder.
* Put your test data in the **dataset/** folder and generate the `test.json` file.
```shell
python image2json.py --dataset_name
```
* The file structure is as follows:
```
|-- dataset
|-- events
|-- 000001.npz
|-- ...
|-- images-blur
|-- 000001.png
|-- ...
|-- test.json
```
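* Optionally, the sketch below (a hypothetical sanity check, assuming the folder layout above; the arrays stored inside each `.npz` archive depend on how the events were exported) confirms that every blurred image has a matching event file:
```python
from pathlib import Path

import cv2
import numpy as np

blur_dir = Path("dataset/images-blur")
event_dir = Path("dataset/events")

for image_path in sorted(blur_dir.glob("*.png")):
    # Each blurred frame "000001.png" should have events in "000001.npz".
    event_path = event_dir / (image_path.stem + ".npz")
    assert event_path.exists(), f"missing events for {image_path.name}"
    image = cv2.imread(str(image_path))   # H x W x 3 blurred frame
    events = np.load(event_path)          # archive of event arrays
    print(image_path.name, image.shape, events.files)
```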
* Test with the pretrained model. The results are saved in the **output/** folder.
```shell
python test.py --arch --dataset_name --model_name --save_image
```
# Training & Testing
## Data Preparation
* Download the dataset from [Baiduyun](https://pan.baidu.com/s/19nWYeWQMn9qbvLErHsOyYw?pwd=spth).
* Unzip the dataset to the **dataset/** folder.
* Convert the event streams into synchronous frames using the Event Spike Tensor (EST) representation.
```shell
python event2frame.py --dataset_name --representation EST
ln -s events-EST-10 events
```
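`event2frame.py` performs this conversion. For intuition only, the sketch below shows a simplified binned (voxel-grid style) event representation; the repository's EST implementation may differ in details such as the temporal kernel and polarity handling, and the function name, bin count, and sensor resolution here are placeholders.
```python
import numpy as np

def events_to_voxel_grid(x, y, t, p, num_bins=10, height=260, width=346):
    """Accumulate an event stream (x, y, t, polarity) into a (num_bins, H, W) tensor.

    Simplified illustration of a binned event representation, not the exact EST code.
    """
    voxel = np.zeros((num_bins, height, width), dtype=np.float32)
    x = np.asarray(x, dtype=int)
    y = np.asarray(y, dtype=int)
    t = np.asarray(t, dtype=float)
    # Normalize timestamps to [0, 1] and assign each event to a temporal bin.
    t = (t - t.min()) / max(t.max() - t.min(), 1e-9)
    bins = np.clip((t * num_bins).astype(int), 0, num_bins - 1)
    # Signed contribution: +1 for positive polarity, -1 for negative.
    polarity = np.where(np.asarray(p) > 0, 1.0, -1.0).astype(np.float32)
    np.add.at(voxel, (bins, y, x), polarity)
    return voxel
```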
## Train
```shell
python train.py --arch FE-HAWP --dataset_name --model_name [--gpu ] # FE-HAWP
python train.py --arch FE-ULSD --dataset_name --model_name [--gpu ] # FE-ULSD
```
## Test
```shell
python test.py --arch FE-HAWP --dataset_name --model_name --save_image --with_clear [--gpu ] # FE-HAWP
python test.py --arch FE-ULSD --dataset_name --model_name --save_image --with_clear [--gpu ] # FE-ULSD
```
## Evaluation
To evaluate mAPJ, sAP, and FPS:
```shell
python test.py --arch FE-HAWP --dataset_name --model_name --evaluate [--gpu ] # FE-HAWP
python test.py --arch FE-ULSD --dataset_name --model_name --evaluate [--gpu ] # FE-ULSD
```
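For intuition, sAP scores predictions by matching each one, in descending score order, to the nearest unmatched ground-truth segment under an endpoint-distance threshold, then taking the area under the precision-recall curve. The sketch below is a simplified illustration of that idea under assumed input shapes, not the repository's evaluation code:
```python
import numpy as np

def structural_ap(pred_lines, pred_scores, gt_lines, threshold=10.0):
    """Simplified sAP sketch.

    pred_lines, gt_lines: (N, 2, 2) arrays of segment endpoints (x, y);
    threshold: squared endpoint-distance threshold (e.g. 10 for sAP10).
    """
    gt_lines = np.asarray(gt_lines, dtype=float)
    if len(gt_lines) == 0:
        return 0.0
    order = np.argsort(-np.asarray(pred_scores))
    pred_lines = np.asarray(pred_lines, dtype=float)[order]
    matched = np.zeros(len(gt_lines), dtype=bool)
    tp = np.zeros(len(pred_lines))
    for i, line in enumerate(pred_lines):
        # Sum of squared endpoint distances under both endpoint orderings.
        d1 = ((line - gt_lines) ** 2).sum(axis=(1, 2))
        d2 = ((line[::-1] - gt_lines) ** 2).sum(axis=(1, 2))
        dist = np.minimum(d1, d2)
        j = int(np.argmin(dist))
        if dist[j] < threshold and not matched[j]:
            matched[j] = True
            tp[i] = 1
    recall = np.cumsum(tp) / len(gt_lines)
    precision = np.cumsum(tp) / np.arange(1, len(pred_lines) + 1)
    return float(np.trapz(precision, recall))  # area under the PR curve
```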
To evaluate APH and FH (MATLAB is required):
```shell
cd metric
python eval_APH.py --arch FE-HAWP --dataset_name --model_name # FE-HAWP
python eval_APH.py --arch FE-ULSD --dataset_name --model_name # FE-ULSD
```
# Citation
```
@ARTICLE{10323537,
author={Yu, Huai and Li, Hao and Yang, Wen and Yu, Lei and Xia, Gui-Song},
journal={IEEE Transactions on Pattern Analysis and Machine Intelligence},
title={Detecting Line Segments in Motion-Blurred Images With Events},
year={2023},
pages={1-16},
doi={10.1109/TPAMI.2023.3334877}
}
```