# Tiny-DSOD
**Repository Path**: hedilong/Tiny-DSOD
## Basic Information
- **Project Name**: Tiny-DSOD
- **Description**: Tiny-DSOD: Lightweight Object Detection for Resource-Restricted Usage
- **Primary Language**: C++
- **License**: Not specified
- **Default Branch**: master
- **Homepage**: None
- **GVP Project**: No
## Statistics
- **Stars**: 1
- **Forks**: 1
- **Created**: 2020-01-18
- **Last Updated**: 2020-12-19
## Categories & Tags
**Categories**: Uncategorized
**Tags**: None
## README
# Tiny-DSOD: Lightweight Object Detection for Resource Restricted Usage
This repository releases the code for our paper
**Tiny-DSOD: Lightweight Object Detection for Resource Restricted Usage (BMVC2018)**
**Yuxi Li, [Jianguo Li](https://sites.google.com/site/leeplus/), Jiuwei Li and Weiyao Lin**
The code is based on the [SSD](https://github.com/weiliu89/caffe/tree/ssd) and [DSOD](https://github.com/szq0214/DSOD) framework.
## Introduction
Tiny-DSOD tackles the trade-off between detection accuracy and computational resource consumption. Our tiny model outperforms other small-sized detection networks (Pelee, MobileNet-SSD, and Tiny-YOLO) in terms of FLOPs, parameter size, and accuracy. Specifically, on PASCAL VOC2007, Tiny-DSOD achieves an mAP of 72.1% with fewer than 1 million parameters (0.95M).


## Preparation
1. Install the dependencies our Caffe framework needs. Visit the [caffe official website](http://caffe.berkeleyvision.org/installation.html) and follow the instructions there to install the required libraries and drivers.
2. Clone this repository and compile the code
```Shell
git clone https://github.com/lyxok1/Tiny-DSOD.git
cd Tiny-DSOD
# edit the Makefile to adjust the compile options and library paths for your environment
make -j8
```
3. Prepare the corresponding dataset (if training is needed). Please see the documentation in [SSD](https://github.com/weiliu89/caffe/tree/ssd) for details.
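As a rough sketch, VOC data preparation typically follows the SSD repository's helper scripts; the paths below follow that repo's conventions and are assumptions here, so adjust them to your setup.

```shell
# Hedged sketch of VOC2007/2012 data preparation, following the SSD repo's
# conventions (script names and paths are assumptions borrowed from that repo).
cd $CAFFE_ROOT  # the root of this compiled Caffe tree

# Download and extract the VOC tarballs into $HOME/data first, then:
./data/VOC0712/create_list.sh   # builds trainval/test image-annotation lists
./data/VOC0712/create_data.sh   # converts the lists into LMDB databases
```

If the scripts succeed, the resulting LMDBs are what the generated training prototxt expects as its data source.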
## Train a model from scratch
Suppose the code is running under the main directory of caffe.
First, generate the model prototxt files:
```Shell
python examples/DCOD/DCOD_pascal.py # for voc training
python examples/DCOD/DCOD_kitti.py # for kitti training
python examples/DCOD/DCOD_coco.py # for coco training
```
Then use the binary `./build/tools/caffe` to train the generated network:
```Shell
./jobs/DCOD300/${DATASET}/DCOD300_300x300/DCOD300_${DATASET}_DCOD300_300x300.sh
# Alternatively, you can directly use the binary to train in command line
./build/tools/caffe train -solver models/DCOD300/$DATASET/DCOD300_300x300/solver.prototxt -gpu all 2>&1 | tee models/DCOD300/$DATASET/DCOD300_300x300/train.log
```
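If a run is interrupted, Caffe's `train` action can restore the solver state from a snapshot via `-snapshot`. The sketch below assumes VOC0712 as the dataset name and a hypothetical iteration-20000 snapshot file; substitute your own paths.

```shell
# Hedged sketch: resume an interrupted training run from a solver snapshot.
# DATASET and the snapshot filename are assumptions for illustration.
DATASET=VOC0712
MODEL_DIR=models/DCOD300/${DATASET}/DCOD300_300x300

./build/tools/caffe train \
    -solver ${MODEL_DIR}/solver.prototxt \
    -snapshot ${MODEL_DIR}/DCOD300_iter_20000.solverstate \
    -gpu all 2>&1 | tee -a ${MODEL_DIR}/train.log   # -a appends to the existing log
```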
## Deploy a pre-trained model
If you want to deploy a pre-trained model directly, you can use the demo scripts we provide in the `examples/DCOD/` directory.
- For image-input detection, use the following command:
```Shell
python examples/DCOD/image_detection_demo.py