This is a PyTorch implementation of YOLOv2. This project is mainly based on darkflow and darknet.
For details about YOLO and YOLOv2, please refer to their project page and the paper *YOLO9000: Better, Faster, Stronger* by Joseph Redmon and Ali Farhadi.
Postprocessing is done with a Cython extension, and image preprocessing runs in a `multiprocessing.Pool`. Testing an image from VOC2007 takes about 13~20 ms.
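As an illustration, parallel image preprocessing with `multiprocessing.Pool` can be sketched as follows. The `preprocess` function here is a hypothetical stand-in for the repo's actual preprocessing (which I assume resizes and normalizes the input):

```python
import numpy as np
from multiprocessing import Pool

def preprocess(image):
    """Stand-in for the real preprocessing: scale pixels to [0, 1].

    The actual pipeline would also resize to the network input size
    (e.g. 416x416 for YOLOv2).
    """
    return image.astype(np.float32) / 255.0

if __name__ == "__main__":
    # A fake batch of uint8 images, preprocessed in 4 worker processes.
    images = [np.random.randint(0, 256, (416, 416, 3), dtype=np.uint8)
              for _ in range(8)]
    with Pool(processes=4) as pool:
        batch = pool.map(preprocess, images)
    print(len(batch), batch[0].dtype)  # 8 float32
```

Because the workers only touch NumPy arrays, this overlaps preprocessing with GPU inference on the previous image.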
TODO: Build the loss function for training.
Clone this repository:

```bash
git clone git@github.com:longcw/yolo2-pytorch.git
```
Build the reorg layer (the equivalent of `tf.extract_image_patches`):

```bash
cd yolo2-pytorch
./make.sh
```
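For reference, the reorg layer is essentially a space-to-depth operation: it moves each `stride x stride` spatial block into the channel dimension so the pass-through features can be concatenated with a lower-resolution feature map. A NumPy sketch of that idea (the repo implements it as a compiled extension, and darknet's exact channel ordering may differ from this sketch):

```python
import numpy as np

def reorg(x, stride=2):
    """Space-to-depth sketch: (C, H, W) -> (C*s*s, H//s, W//s)."""
    c, h, w = x.shape
    assert h % stride == 0 and w % stride == 0
    # Split each spatial axis into (blocks, stride), then move the
    # stride axes in front of the channel axis.
    x = x.reshape(c, h // stride, stride, w // stride, stride)
    x = x.transpose(2, 4, 0, 1, 3)
    return x.reshape(c * stride * stride, h // stride, w // stride)

x = np.arange(2 * 4 * 4).reshape(2, 4, 4).astype(np.float32)
y = reorg(x)
print(y.shape)  # (8, 2, 2)
```

No values are created or dropped; only the layout changes, which is why the layer has no learnable parameters.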
Download the trained model `yolo-voc.weights.h5` and set the model path in `demo.py`, then run the demo:

```bash
python demo.py
```
Follow this project (TFFRCNN) to download and prepare the training, validation, and test data.
Since the program loads the data from `yolo2-pytorch/data` by default, you can set the data path as follows:

```bash
cd yolo2-pytorch
mkdir data
cd data
ln -s $VOCdevkit VOCdevkit2007
```
Set the path of the `trained_model` in `yolo2-pytorch/cfgs/config.py`, then run:

```bash
cd yolo2-pytorch
mkdir output
python test.py
```
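As a hypothetical illustration of the kind of setting expected in `cfgs/config.py` (the variable name `trained_model` comes from the step above, but the real file's layout may differ):

```python
import os

# Hypothetical sketch only: the actual cfgs/config.py may be organized
# differently. `trained_model` should point at the downloaded weights file.
trained_model = os.path.join('models', 'yolo-voc.weights.h5')

print(trained_model)
```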
### Discuss

I am confused about the difference between YOLO and RPN (region proposal network).
YOLO divides the image into a grid and predicts bounding boxes and probabilities for each cell in the grid. I think this is what RPN does as well, especially in YOLOv2, which uses a set of anchors for each cell.
One of the main differences between YOLO and RPN is the loss function. During training and testing, YOLO constrains each predicted box center to stay within its grid cell, while RPN associates predicted ROIs with ground-truth boxes without such a constraint. Is this enough to turn a region proposal method into a detector, or have I missed something important? Feel free to discuss with me.
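To make that constraint concrete, here is my sketch of the two center parameterizations as I read the papers (not code from this repo): YOLOv2 squashes the predicted x-offset with a sigmoid so the center stays inside its cell, whereas the RPN regression offset is unbounded.

```python
import math

def yolo2_center(tx, cell_x):
    # YOLOv2: bx = cell_x + sigmoid(tx). The sigmoid keeps the center
    # inside its grid cell, i.e. bx is always in [cell_x, cell_x + 1).
    return cell_x + 1.0 / (1.0 + math.exp(-tx))

def rpn_center(tx, anchor_x, anchor_w):
    # RPN (Faster R-CNN): x = anchor_x + tx * anchor_w. The offset is
    # unbounded, so the center can land arbitrarily far from the anchor.
    return anchor_x + tx * anchor_w

print(yolo2_center(10.0, 3))    # stays just below 4.0
print(rpn_center(10.0, 3, 16))  # 163.0, far from the anchor
```

So even with an extreme raw prediction, a YOLOv2 box cannot leave its cell, while an RPN box can drift across the whole image; the matching of predictions to ground truth differs accordingly.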