# nerf_pl
**Repository Path**: feed69/nerf_pl
## Basic Information
- **Project Name**: nerf_pl
- **License**: MIT
- **Default Branch**: master
## Statistics
- **Stars**: 0
- **Forks**: 1
- **Created**: 2023-10-31
- **Last Updated**: 2024-06-01
## README
### Update: NVIDIA open-sourced a lightning-fast version of NeRF: [NGP](https://github.com/NVlabs/instant-ngp). I re-implemented it in pytorch [here](https://github.com/kwea123/ngp_pl). That version is ~100x faster than this repo, with better quality as well!
### Update: an improved [NSFF](https://www.cs.cornell.edu/~zl548/NSFF/) implementation that handles dynamic scenes is [open-sourced](https://github.com/kwea123/nsff_pl)!
### Update: a [NeRF-W](https://nerf-w.github.io/) (NeRF in the Wild) implementation has been added to the [nerfw](https://github.com/kwea123/nerf_pl/tree/nerfw) branch!
### Update: the latest code (using the latest libraries) is maintained on the [dev](https://github.com/kwea123/nerf_pl/tree/dev) branch. The master branch is kept as-is to support the colab files. If you don't use colab, switching to the dev branch is recommended. Currently, only issues against the dev and nerfw branches will be considered.
### :gem: [**Project page**](https://kwea123.github.io/nerf_pl/) (live demo!)
Unofficial implementation of [NeRF](https://arxiv.org/pdf/2003.08934.pdf) (Neural Radiance Fields) using pytorch ([pytorch-lightning](https://github.com/PyTorchLightning/pytorch-lightning)). This repo doesn't aim at exact reproducibility, but at providing a simpler and faster training procedure (and simpler code, with detailed comments to help understand the work). Moreover, I try to open up more possibilities by integrating this algorithm into game engines like Unity.
Official implementation: [nerf](https://github.com/bmild/nerf). Reference pytorch implementation: [nerf-pytorch](https://github.com/yenchenlin/nerf-pytorch).
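For orientation, the heart of NeRF is numerical quadrature of the volume rendering integral: points sampled along each camera ray are fed to the network, and the predicted densities and colors are alpha-composited into a pixel color. Below is a minimal pytorch sketch of that compositing step (a simplified illustration of Eq. 3 in the paper, not this repo's exact code):
```
import torch

def composite(sigmas, rgbs, deltas):
    """Alpha-composite per-sample densities/colors along each ray.

    sigmas: (N_rays, N_samples)    predicted densities
    rgbs:   (N_rays, N_samples, 3) predicted colors
    deltas: (N_rays, N_samples)    distances between consecutive samples
    """
    alphas = 1 - torch.exp(-sigmas * deltas)  # opacity of each ray segment
    # transmittance T_i = prod_{j<i} (1 - alpha_j): light surviving up to sample i
    trans = torch.cumprod(
        torch.cat([torch.ones_like(alphas[:, :1]), 1 - alphas + 1e-10], -1), -1)[:, :-1]
    weights = alphas * trans                      # contribution of each sample
    return (weights.unsqueeze(-1) * rgbs).sum(1)  # (N_rays, 3) pixel colors
```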
### Recommended reading: [awesome-NeRF](https://github.com/yenchenlin/awesome-NeRF), a detailed list of NeRF extensions
## :milky_way: Features
* Multi-GPU training: training on 8 GPUs finishes within 1 hour for the synthetic dataset!
* [Colab](#mortar_board-colab) notebooks to allow easy usage!
* [Reconstruct](#ribbon-mesh) **colored** mesh!
* [Mixed Reality](https://youtu.be/S5phWFTs2iM) in Unity!
* [REAL TIME volume rendering](https://youtu.be/w9qTbVzCdWk) in Unity!
* [Portable Scenes](#portable-scenes) to let you play with other people's scenes!
### You can find the Unity project including mesh, mixed reality and volume rendering [here](https://github.com/kwea123/nerf_Unity)! See [README_Unity](README_Unity.md) for generating your own data for Unity rendering!
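A note on how the colored mesh above is obtained: the standard recipe is to query the trained model's density on a regular 3D grid and run marching cubes on the result. A minimal sketch, assuming a hypothetical `query_density` wrapper around the trained model and a scene bounded by `[-1.2, 1.2]^3`:
```
import numpy as np
import torch
import mcubes  # PyMCubes

N = 128  # grid resolution per axis
t = np.linspace(-1.2, 1.2, N, dtype=np.float32)
xyz = np.stack(np.meshgrid(t, t, t), -1).reshape(-1, 3)

with torch.no_grad():
    # query_density: hypothetical wrapper returning sigma for (N, 3) points
    sigma = query_density(torch.from_numpy(xyz)).reshape(N, N, N).cpu().numpy()

# Extract the iso-surface; the density threshold is a tunable hyperparameter
vertices, triangles = mcubes.marching_cubes(sigma, 20.0)
mcubes.export_obj(vertices, triangles, 'scene.obj')
```
Coloring the vertices (e.g. by also querying the model's predicted color at each vertex) is a further step; see the mesh link above for the full procedure.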
## :beginner: Tutorial
### What can NeRF do?
### Tutorial videos
# :computer: Installation
## Hardware
* OS: Ubuntu 18.04
* NVIDIA GPU with **CUDA>=10.1** (tested with 1 RTX2080Ti)
## Software
* Clone this repo by `git clone --recursive https://github.com/kwea123/nerf_pl`
* Python>=3.6 (installation via [anaconda](https://www.anaconda.com/distribution/) is recommended; use `conda create -n nerf_pl python=3.6` to create a conda environment, and activate it with `conda activate nerf_pl`)
* Python libraries
* Install core requirements by `pip install -r requirements.txt`
* Install `torchsearchsorted` with `cd torchsearchsorted`, then `pip install .` (it provides the GPU `searchsorted` used by hierarchical sampling; see the sketch below)
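`torchsearchsorted` supplies a fast GPU `searchsorted`, which the hierarchical sampling step relies on: fine samples are drawn by inverting the CDF of the coarse-pass weights. A minimal sketch of that use, assuming the tensor shapes noted in the comments (PyTorch >= 1.6 also ships this operation as `torch.searchsorted`, which the sketch calls for illustration):
```
import torch

def sample_pdf(bins, weights, n_fine, eps=1e-8):
    """Inverse-CDF sampling: draw fine depths proportionally to coarse weights.

    bins:    (N_rays, N_coarse+1) depth bin edges along each ray
    weights: (N_rays, N_coarse)   weights from the coarse pass
    """
    pdf = weights / (weights.sum(-1, keepdim=True) + eps)
    cdf = torch.cumsum(pdf, -1)
    cdf = torch.cat([torch.zeros_like(cdf[:, :1]), cdf], -1)  # (N_rays, N_coarse+1)

    u = torch.rand(cdf.shape[0], n_fine, device=cdf.device)  # uniform draws in [0, 1)
    # The searchsorted call: find the CDF bin each u falls into
    idx = torch.searchsorted(cdf, u, right=True).clamp(1, bins.shape[-1] - 1)

    cdf_lo, cdf_hi = torch.gather(cdf, 1, idx - 1), torch.gather(cdf, 1, idx)
    bin_lo, bin_hi = torch.gather(bins, 1, idx - 1), torch.gather(bins, 1, idx)
    t = (u - cdf_lo) / (cdf_hi - cdf_lo + eps)  # fractional position inside the bin
    return bin_lo + t * (bin_hi - bin_lo)       # (N_rays, n_fine) fine sample depths
```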
# :key: Training
Please see each subsection for training on different datasets. Available training datasets:
* [Blender](#blender) (Realistic Synthetic 360)
* [LLFF](#llff) (Real Forward-Facing)
* [Your own data](#your-own-data) (Forward-Facing/360 inward-facing)
## Blender
Steps
### Data download
Download `nerf_synthetic.zip` from [here](https://drive.google.com/drive/folders/128yBriW1IG_3NJ5Rp7APSTZsJqdJdfc1)
### Training model
Run (example)
```
python train.py \
--dataset_name blender \
--root_dir $BLENDER_DIR \
--N_importance 64 --img_wh 400 400 --noise_std 0 \
--num_epochs 16 --batch_size 1024 \
--optimizer adam --lr 5e-4 \
--lr_scheduler steplr --decay_step 2 4 8 --decay_gamma 0.5 \
--exp_name exp
```
These parameters are chosen to best mimic the training settings in the original repo. See [opt.py](opt.py) for all configurations.
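To make the learning-rate flags concrete: `--lr_scheduler steplr --decay_step 2 4 8 --decay_gamma 0.5` halves the learning rate at epochs 2, 4 and 8, so it decays 5e-4 → 2.5e-4 → 1.25e-4 → 6.25e-5. In plain pytorch this corresponds roughly to the following (a sketch with a placeholder parameter, not the repo's actual training loop):
```
import torch
from torch.optim.lr_scheduler import MultiStepLR

params = [torch.nn.Parameter(torch.zeros(1))]  # placeholder for the NeRF weights
optimizer = torch.optim.Adam(params, lr=5e-4)  # --optimizer adam --lr 5e-4
scheduler = MultiStepLR(optimizer, milestones=[2, 4, 8], gamma=0.5)

for epoch in range(16):  # --num_epochs 16
    ...                  # one epoch over batches of 1024 rays (--batch_size 1024)
    scheduler.step()
```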
NOTE: the above configuration doesn't work for some scenes like `drums` and `ship`. In that case, consider increasing the `batch_size` or changing the `optimizer` to `radam`. I managed to train on all scenes with these modifications.
You can monitor the training process with `tensorboard --logdir logs/`, then go to `localhost:6006` in your browser.
## LLFF
Steps
### Data download
Download `nerf_llff_data.zip` from [here](https://drive.google.com/drive/folders/128yBriW1IG_3NJ5Rp7APSTZsJqdJdfc1)
### Training model
Run (example)
```
python train.py \
--dataset_name llff \
--root_dir $LLFF_DIR \
--N_importance 64 --img_wh 504 378 \
--num_epochs 30 --batch_size 1024 \
--optimizer adam --lr 5e-4 \
--lr_scheduler steplr --decay_step 10 20 --decay_gamma 0.5 \
--exp_name exp
```
These parameters are chosen to best mimic the training settings in the original repo. See [opt.py](opt.py) for all configurations.
You can monitor the training process with `tensorboard --logdir logs/`, then go to `localhost:6006` in your browser.
## Your own data
Steps
1. Install [COLMAP](https://github.com/colmap/colmap) following the [installation guide](https://colmap.github.io/install.html)
2. Prepare your images in a folder (around 20 to 30 for forward-facing scenes, and 40 to 50 for 360 inward-facing scenes)
3. Clone [LLFF](https://github.com/Fyusion/LLFF) and run `python img2poses.py $your-images-folder`
4. Train the model using the same command as in [LLFF](#llff). If the scene is captured in a 360 inward-facing manner, add the `--spheric` argument.
For more details on training a good model, please see the video [here](#colab).