# Optical-Flow-Guided-Feature

**Repository Path**: QingYuBaiLu_admin/Optical-Flow-Guided-Feature

## Basic Information

- **Project Name**: Optical-Flow-Guided-Feature
- **Description**: Implementation code of the paper Optical Flow Guided Feature, CVPR 2018
- **Primary Language**: Unknown
- **License**: MIT
- **Default Branch**: master
- **Homepage**: None
- **GVP Project**: No

## Statistics

- **Stars**: 0
- **Forks**: 0
- **Created**: 2022-02-04
- **Last Updated**: 2022-02-04

## Categories & Tags

**Categories**: Uncategorized

**Tags**: None

## README

# Optical Flow Guided Feature: A Fast and Robust Motion Representation for Video Action Recognition

This repo holds the implementation code of the paper [Optical Flow Guided Feature: A Fast and Robust Motion Representation for Video Action Recognition](http://openaccess.thecvf.com/content_cvpr_2018/papers/Sun_Optical_Flow_Guided_CVPR_2018_paper.pdf), [Shuyang Sun](https://kevin-ssy.github.io/), [Zhanghui Kuang](http://jeffreykuang.github.io/index.html), [Lu Sheng](http://www.ee.cuhk.edu.hk/~lsheng/), [Wanli Ouyang](https://wlouyang.github.io/), [Wei Zhang](http://www.statfe.com/), CVPR 2018.

### Prerequisites

- OpenCV 2.4.12
- OpenMPI 1.8.5 (enable thread-multiple support when installing)
- CUDA 7.5
- cuDNN 5
- [Caffe Dependencies](http://caffe.berkeleyvision.org/install_apt.html)

### Data Preparation

You may refer to the documentation of the [TSN](https://github.com/yjxiong/temporal-segment-networks#construct-file-lists-for-training-and-validation) project to prepare the UCF-101 and HMDB-51 data.

### How to Build

For training, first fill in your own library paths in ```make_train.sh```, then run ```sh make_train.sh```; the script will build Caffe for you automatically.

For testing, simply run ```make pycaffe``` to get everything prepared.

### Training

You need to make two folders before you launch your training.
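For example, to train ```RGB_OFF``` on UCF101 split 1 (the dataset, method, and split names here are purely illustrative), the required folder layout can be created with a short sketch:

```python
import os

# Illustrative layout for training RGB_OFF on UCF101 split 1:
# "logs" sits at the project root; "model" sits under models/DATASET/METHOD/SPLIT/.
for path in ("logs", os.path.join("models", "ucf101", "RGB_OFF", "1", "model")):
    os.makedirs(path, exist_ok=True)
```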
One is ```logs``` under the root of this project; the other is ```model``` under the folder ```models/DATASET/METHOD/SPLIT/```. For instance, if you want to train ```RGB_OFF``` on UCF101 split 1, your ```model``` directory should be made under the path ```models/ucf101/RGB_OFF/1/```. The network structure for training is defined in ```train.prototxt```, and the hyperparameters are defined in ```solver.prototxt```. For detailed training strategies and observations not included in the paper, please refer to our **[training recipes](https://kevin-ssy.github.io/off/)**.

### Testing

You need to create another directory, ```proto_splits```, in the same folder as ```model```. Our test code uses pycaffe to call functions defined in C++; therefore, it writes some temporary files for synchronization. Remember to clean these temporary files every time you launch a new test.

Run the script ```test.sh``` with your ```METHOD```, ```MODEL_NAME```, ```SPLIT``` and ```NUM_GPU``` specified. The ```deploy_tpl.prototxt``` defines the network for inference. To transfer the network defined in ```train.prototxt``` into ```deploy_tpl.prototxt```, you may need to copy all layers except the data layer and the layers after each fully connected layer. As there are dynamic parameters in ```deploy_tpl.prototxt```, e.g. ```$SOURCE $OVERSAMPLE_ID_PATH```, its format is slightly different from a normal prototxt file.

After testing, an aggregation step is needed to fuse the scores from different sources. The script ```ensemble_test.sh``` can help you aggregate results with manually searched weights (inelegant, admittedly). You can find those weight settings in the comments of ```ensemble_test.sh```.

### Results

Due to an unexpected server migration, our original models trained on all 3 splits of UCF101 and HMDB51 were lost.
Therefore, we re-train the models on the first split of UCF101:

| RGB | OFF(RGB) | RGB DIFF | OFF(RGB DIFF) | FLOW | OFF(FLOW) | Acc. (Acc. in Paper) |
| :-: | :-: | :-: | :-: | :-: | :-: | :-: |
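The aggregation performed by ```ensemble_test.sh``` amounts to a weighted sum of the per-class scores produced by each stream, followed by an argmax per video. A minimal Python sketch of that idea (the stream names, scores, and weights below are made up for illustration, not taken from the script):

```python
def fuse_scores(score_lists, weights):
    """Weighted per-class sum of scores from several streams, then argmax.

    score_lists: one score matrix (num_videos x num_classes) per stream.
    weights: one manually chosen weight per stream.
    """
    preds = []
    for per_video in zip(*score_lists):  # one row per stream for the same video
        fused = [sum(w * stream[c] for w, stream in zip(weights, per_video))
                 for c in range(len(per_video[0]))]
        preds.append(fused.index(max(fused)))  # predicted class for this video
    return preds

# Two hypothetical streams scoring 2 videos over 3 classes
rgb = [[0.7, 0.2, 0.1], [0.3, 0.4, 0.3]]
off = [[0.1, 0.1, 0.8], [0.1, 0.8, 0.1]]
print(fuse_scores([rgb, off], weights=[1.0, 2.0]))  # [2, 1]
```

Here the second stream is weighted twice as heavily, so it dominates the fused prediction; finding such weights by manual search is exactly the inelegant part the script's comments document.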