
TTPLA: An Aerial-Image Dataset for Detection and Segmentation of Transmission Towers and Power Lines

TTPLA is a public dataset of aerial images of Transmission Towers (TTs) and Power Lines (PLs). This is the official repository of the paper TTPLA: An Aerial-Image Dataset for Detection and Segmentation of Transmission Towers and Power Lines.

[Screenshot]

The repository includes:

  • The original images of the TTPLA dataset with pixel-level annotations in COCO format. The dataset images are available here (updated March 2021).
  • Split files (train.txt, validate.txt, and test.txt) listing the image names in each split (see the sketch after this list).
  • Weights of models trained with two different backbones and three different image sizes.
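
A minimal sketch of reading these split files, assuming each one simply lists one image file name per line:

```python
# Minimal sketch: read the split lists (train.txt, validate.txt, test.txt),
# assuming each file contains one image file name per line.
from pathlib import Path

def read_split(split_file):
    """Return the image names listed in a split file, skipping blank lines."""
    return [line.strip() for line in Path(split_file).read_text().splitlines() if line.strip()]

train_names = read_split("train.txt")
val_names = read_split("validate.txt")
test_names = read_split("test.txt")
print(len(train_names), len(val_names), len(test_names))
```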

Preparation data:

  1. Modify resize_image_and_annotation-final.py to set the target image dimension (line 10). Then run the script with python resize_image_and_annotation-final.py -t <images_path>. It produces a new folder called sized_data (a minimal sketch of the resizing idea follows this list).

  2. If you would like to remove the void label, run remove_void.py: python remove_void.py -t <sized_images_path>. It produces a new folder called newjsons, which you may rename to whatever fits.

  3. Based on the three lists train.txt, test.txt, and val.txt, split_jsons.py splits the created newjsons into three folders, train, val, and test, in preparation for building the COCO json files. You can use the following command: python split_jsons.py -t newjsons/. It produces a new folder called splitting_jsons, which you may rename to whatever fits.

  4. Use labelme2coco_2.py to generate the COCO json used by Yolact: python labelme2coco_2.py splitting_jsons/train_jsons/. Repeat this step for the three folders train_jsons, val_jsons, and test_jsons.
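
For reference, the idea behind step 1 is to scale the image and its polygon annotations by the same factors. The following is a minimal illustration, not the repository's resize_image_and_annotation-final.py itself; it assumes standard LabelMe JSON with imageWidth, imageHeight, and polygon points under shapes:

```python
# Illustration of the resizing idea: resize an image and scale its LabelMe
# polygon annotations by the same factors (not the repository's script).
import json
from PIL import Image

TARGET_W, TARGET_H = 700, 700  # target dimension, analogous to line 10 of the script

def resize_pair(image_path, json_path, out_image_path, out_json_path):
    img = Image.open(image_path)
    sx, sy = TARGET_W / img.width, TARGET_H / img.height
    img.resize((TARGET_W, TARGET_H)).save(out_image_path)

    with open(json_path) as f:
        ann = json.load(f)  # LabelMe-style annotation
    for shape in ann.get("shapes", []):
        shape["points"] = [[x * sx, y * sy] for x, y in shape["points"]]
    ann["imageWidth"], ann["imageHeight"] = TARGET_W, TARGET_H
    with open(out_json_path, "w") as f:
        json.dump(ann, f)
```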

Tips to use our files directly

  • Install Yolact.
  • Rename the yolact folder to yolact700. For the other sizes, rename it to yolact550 or yolact640 instead.
  • In step 1 of Preparation data, rename the generated sized_data folder to data_700x700 and place it at yolact700/data/data_700x700. For the other sizes, the folders are named data_550x550 and data_640x360.
  • Pick the suitable configuration from the table below according to image size and backbone. Rename the chosen config file to config.py and place it in yolact700/data/.
  • Rename the json files generated in step 4 of Preparation data to train_coco_700x700, 2_test_json700, and 2_val_json700 and put them into yolact700/data/ if you would like to use our config files directly; otherwise you can use any names and adjust the paths in the config file (a quick sanity check for these files is sketched after this list).
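
As a quick sanity check, the generated COCO jsons can be loaded with pycocotools. This sketch assumes the renamed files live in yolact700/data/ and carry a .json extension (the exact extension is not stated above):

```python
# Sanity check of a generated COCO annotation file using pycocotools.
# Assumption: the file is yolact700/data/train_coco_700x700.json.
from pycocotools.coco import COCO

coco = COCO("yolact700/data/train_coco_700x700.json")
print("categories:", [c["name"] for c in coco.loadCats(coco.getCatIds())])
print("images:", len(coco.getImgIds()), "annotations:", len(coco.getAnnIds()))
```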

Train Model:

To train, for example with image size 700x700:

python train.py --config=yolact_img700_val_config --batch_size=8 --resume=weights/yolact_img550_108_12253_interrupt.pth

For evaluation:

python eval.py --config=yolact_img550_secondtest_config --mask_proto_debug --trained_model=weights/weights_img550_resnet50/yolact_img550_400_30061_resnet50_sep7_2217.pth --fast_nms=false
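
Yolact's eval.py reports AP on its own. Purely for illustration, mask AP over a COCO-style ground-truth file and a detection results file can also be computed with pycocotools; the file names below are assumptions:

```python
# Illustration only: computing mask AP with pycocotools, assuming a COCO-style
# ground-truth file and a detection results file (both names are hypothetical).
from pycocotools.coco import COCO
from pycocotools.cocoeval import COCOeval

gt = COCO("yolact700/data/2_test_json700.json")  # assumed .json extension
dt = gt.loadRes("mask_detections.json")          # hypothetical detections file
ev = COCOeval(gt, dt, iouType="segm")
ev.evaluate()
ev.accumulate()
ev.summarize()  # prints the AP/AR summary
```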

Evaluation:

| Image Size | Backbone | Config | Weights |
| --- | --- | --- | --- |
| 640 x 360 | ResNet50 | config_img640_resnet50_aspect.py | yolact_img640_secondval_399_30000_resnet50.pth |
| 550 x 550 | ResNet50 | config_img550_resnet50.py | yolact_img550_399_30000_resnet50.pth |
| 700 x 700 | ResNet50 | config_img700_resnet50.py | yolact_img700_399_30000_resnet50.pth |
| 640 x 360 | ResNet101 | config_img640_resnet101_aspect.py | yolact_img640_secondval_399_45100_resnet101.pth |
| 550 x 550 | ResNet101 | config_img550_resnet101.py | yolact_img550_399_45100_resnet101_b8.pth |
| 700 x 700 | ResNet101 | config_img700_resnet101.py | yolact_img700_399_45100_resnet101_b8.pth |

Results:

Average Precision for different deep learning models on TTPLA is reported in the following table:

[Results table]

Citation:

@inproceedings{abdelfattah2020ttpla,
  title={TTPLA: An Aerial-Image Dataset for Detection and Segmentation of Transmission Towers and Power Lines},
  author={Abdelfattah, Rabab and Wang, Xiaofeng and Wang, Song},
  booktitle={Proceedings of the Asian Conference on Computer Vision},
  year={2020}
}

Contact:

For questions about our paper or code, please contact Rabab Abdelfattah.
