# evaluate

**Repository Path**: Im34v/evaluate

## Basic Information

- **Project Name**: evaluate
- **Description**: A study of metrics for assessing videos of invisible light.
- **Primary Language**: Python
- **License**: MIT
- **Default Branch**: master
- **Homepage**: None
- **GVP Project**: No

## Statistics

- **Stars**: 0
- **Forks**: 0
- **Created**: 2022-07-12
- **Last Updated**: 2023-02-13

## Categories & Tags

**Categories**: Uncategorized
**Tags**: None

## README

# evaluate

This evaluate repository draws primarily on three GitHub projects: [deepmind](https://github.com/deepmind)/**[kinetics-i3d](https://github.com/deepmind/kinetics-i3d)**, [hassony2](https://github.com/hassony2)/**[kinetics_i3d_pytorch](https://github.com/hassony2/kinetics_i3d_pytorch)** and [miracleyoo](https://github.com/miracleyoo)/**[Trainable-i3d-pytorch](https://github.com/miracleyoo/Trainable-i3d-pytorch)**.

## dataset

### data directory

```
├── images                          // if you use image series instead of video as raw input
│   ├── pre-processed
│   │   ├── class1
│   │   │   ├── Series1
│   │   │   │   ├── rgb-SampleNum_xx.npy
│   │   │   │   └── flow-SampleNum_xx.npy
│   │   │   └── ...
│   │   └── ...
│   └── raw
│       ├── class1
│       │   ├── Series1
│       │   │   ├── pic1.png
│       │   │   ├── pic2.jpg
│       │   │   ├── pic3.jpg
│       │   │   ├── ...
│       │   │   └── picx.jpg
│       │   └── ...
│       └── ...
└── videos                          // if you use video as raw input
    ├── pre-processed
    │   ├── train
    │   │   ├── class1
    │   │   │   ├── Series1
    │   │   │   │   ├── rgb-FPS_xx.npy
    │   │   │   │   └── flow-FPS_xx.npy
    │   │   │   └── ...
    │   │   └── ...
    │   └── val
    │       ├── class1
    │       │   ├── Series1
    │       │   │   ├── rgb-FPS_xx.npy
    │       │   │   └── flow-FPS_xx.npy
    │       │   └── ...
    │       └── ...
    └── raw
        ├── class1
        │   ├── Series1.mp4
        │   ├── Series2.mp4
        │   └── ...
        └── ...
```

### Kinetics-400

You may need the Kinetics-400 dataset during testing. If you do not already have it, you can download it [here](https://pan.baidu.com/s/1ISqYffUJO53jaFdQuzZ0aA) (password: wwwe). After downloading you will see multiple `compress.tar.gz.xx` segments in the raw part; merge and unpack them:

```
# merge the split archive segments
cat compress.tar.gz.* > compress.tar.gz
# unpack the archive
tar xvf compress.tar.gz
```

## how to run

### pre-process

You can use either pictures or videos as the pre-processing input. The main parameters are:

```
mass:          Compute RGBs and Flows massively.
init_dir:      Initialize the pre-processed data folder tree.
input_path:    Path to the input video or images folder.
out_path:      Where you want to save the output rgb and flow files.
resize:        Resize the pictures to this size.
random_choice: Whether to choose frames randomly or uniformly.
rgb:           Whether to generate rgb data.
flow:          Whether to generate flow data.
```

If you use pictures as input, pay particular attention to these parameters:

```
is_image:    Use a series of images (its folder) as input.
sample_num:  The number of output frames after sampling.
sample_type: The sampling method. If you use images, set this to "num".

For example:
python pre_process.py --rgb --is_image --sample_num=xx --sample_type=num --resize=xx(default: 224) --input_path=xxx
```

If you use videos as input directly, pay particular attention to these parameters:

```
in_fps:  The video's FPS, read directly from the video. The default value is 30.
out_fps: The FPS of the output video. The default value is 5.

In this case: sample_num = frames_sum * out_fps / in_fps

For example:
python pre_process.py --rgb --sample_type=fps --input_path=xx
```
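For intuition, here is a minimal sketch of what fps-based sampling implies. It is not the repository's `pre_process.py`; the function name and the OpenCV-based reading are assumptions used only to illustrate the `sample_num = frames_sum * out_fps / in_fps` relation above:

```python
import cv2
import numpy as np

def sample_frames(video_path, out_fps=5, resize=224):
    """Uniformly sample frames so a clip recorded at in_fps is reduced to out_fps."""
    cap = cv2.VideoCapture(video_path)
    frames_sum = int(cap.get(cv2.CAP_PROP_FRAME_COUNT))
    in_fps = cap.get(cv2.CAP_PROP_FPS) or 30.0           # fall back to the default of 30

    # The relation described above: sample_num = frames_sum * out_fps / in_fps
    sample_num = max(1, int(frames_sum * out_fps / in_fps))
    keep = set(np.linspace(0, frames_sum - 1, num=sample_num, dtype=int).tolist())

    frames = []
    for idx in range(frames_sum):
        ok, frame = cap.read()
        if not ok:
            break
        if idx in keep:
            frames.append(cv2.resize(frame, (resize, resize)))
    cap.release()
    return np.stack(frames)                               # (sample_num, resize, resize, 3)

# e.g. a 300-frame clip at in_fps=30 with out_fps=5 keeps 300 * 5 / 30 = 50 frames
```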
### predict

This part is relatively simple; look at the `predict.py` code to see how prediction works.

### retrain

To be continued ...

### compute FVD

For convenience, put each pair of videos for which you want to compute the FVD value under the directory `evaluate\i3d\data\videos\raw\test`. Inside `test`, create a folder named after the original video, and place the original video and the generated video in that folder. The expected directory layout is shown below.

```
├── origin1
│   ├── origin1.mp4
│   ├── blur_1.mp4
│   └── ...
├── origin2
│   ├── origin2.mp4
│   ├── blur_2.mp4
│   └── ...
└── ...
```
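For reference, FVD is the Fréchet distance between Gaussian statistics of I3D features extracted from the original videos and the generated videos. The sketch below assumes the pre-processed clips have already been run through the I3D network and pooled into one feature vector per video; the function name and array shapes are illustrative, not part of this repository's API:

```python
import numpy as np
from scipy import linalg

def frechet_distance(feats_real, feats_gen):
    """Fréchet distance between two sets of I3D feature vectors.

    feats_real, feats_gen: arrays of shape (num_videos, feature_dim),
    one pooled I3D feature vector per video.
    """
    mu_r, mu_g = feats_real.mean(axis=0), feats_gen.mean(axis=0)
    sigma_r = np.cov(feats_real, rowvar=False)
    sigma_g = np.cov(feats_gen, rowvar=False)

    # Matrix square root of the product of the two covariance matrices.
    covmean, _ = linalg.sqrtm(sigma_r @ sigma_g, disp=False)
    if np.iscomplexobj(covmean):
        covmean = covmean.real          # drop tiny imaginary parts from numerical error

    diff = mu_r - mu_g
    return float(diff @ diff + np.trace(sigma_r + sigma_g - 2.0 * covmean))
```

With the features of the original clips as `feats_real` and the features of the generated clips as `feats_gen`, the returned value is the FVD score (lower is better).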