# nfvs-ms

**Repository Path**: sjtuyc/nfvs-ms

## Basic Information

- **Project Name**: nfvs-ms
- **Description**: Code for NeRFVS: Neural Radiance Fields for Free View Synthesis via Geometry Scaffolds
- **Primary Language**: Unknown
- **License**: MIT
- **Default Branch**: master
- **Homepage**: None
- **GVP Project**: No

## Statistics

- **Stars**: 0
- **Forks**: 0
- **Created**: 2023-04-25
- **Last Updated**: 2023-08-17

## Categories & Tags

**Categories**: Uncategorized
**Tags**: None

## README

# NeRFVS: Neural Radiance Fields for Free View Synthesis via Geometry Scaffolds

## Abstract

We present NeRFVS, a novel neural radiance fields (NeRF) based method to enable free navigation in a room. NeRF achieves impressive performance when rendering novel views similar to the input views, but degrades for novel views that differ significantly from the training views. To address this issue, we utilize holistic priors, including pseudo depth maps and view coverage information, from neural reconstruction to guide the learning of implicit neural representations of 3D indoor scenes. Concretely, an off-the-shelf neural reconstruction method is leveraged to generate a geometry scaffold. Then, two loss functions based on the holistic priors are proposed to improve the learning of NeRF: 1) a robust depth loss that tolerates errors in the pseudo depth map while guiding the geometry learning of NeRF; 2) a variance loss that regularizes the variance of implicit neural representations to reduce geometry and color ambiguity during learning. These two loss functions are modulated during NeRF optimization according to the view coverage information, reducing the negative influence of view coverage imbalance. Extensive results demonstrate that our NeRFVS outperforms state-of-the-art view synthesis methods quantitatively and qualitatively on indoor scenes, achieving high-fidelity free navigation results.
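The two losses described above can be illustrated schematically. The sketch below is NOT the released implementation: the Huber-style robustness, the weight-variance formulation, and the `(1 - coverage)` modulation are plausible assumptions for illustration only, and all function and parameter names are hypothetical.

```python
# Schematic sketch (not the released code) of a robust depth loss, a
# variance regularizer, and coverage-based modulation. All names and the
# exact formulations are illustrative assumptions.
import numpy as np

def robust_depth_loss(pred_depth, pseudo_depth, delta=0.2):
    """Huber-style loss: quadratic for small errors, linear for large
    ones, so outliers in the pseudo depth map are tolerated."""
    err = np.abs(pred_depth - pseudo_depth)
    quad = 0.5 * err ** 2 / delta
    lin = err - 0.5 * delta
    return np.where(err <= delta, quad, lin)

def variance_loss(weights, t_vals):
    """Variance of the rendering weights along each ray; penalizing it
    encourages a concentrated, less ambiguous density distribution."""
    mean_t = np.sum(weights * t_vals, axis=-1, keepdims=True)
    return np.sum(weights * (t_vals - mean_t) ** 2, axis=-1)

def modulated_loss(pred_depth, pseudo_depth, weights, t_vals, coverage):
    """Down-weight both priors for well-covered rays: rays seen by many
    training views need less prior supervision than poorly covered ones."""
    w = 1.0 - coverage
    return w * (robust_depth_loss(pred_depth, pseudo_depth)
                + variance_loss(weights, t_vals))
```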
![Teaser Image](images/demo.jpg)

***

## Environment

We recommend using [Anaconda](https://www.anaconda.com/products/individual) to set up the environment. First, create a new `nfvs` environment:

```bash
conda create -n nfvs python=3.8.13
```

Next, activate the environment:

```bash
conda activate nfvs
```

You can then install the dependencies:

```bash
pip install -r requirements.txt
```

Finally, install MindSpore with the correct CUDA version. For example, if you have CUDA 11.1 installed, you can run:

```bash
pip install --upgrade pip
pip install https://ms-release.obs.cn-north-4.myhuaweicloud.com/1.7.1/MindSpore/gpu/x86_64/cuda-11.1/mindspore_gpu-1.7.1-cp38-cp38-linux_x86_64.whl --trusted-host ms-release.obs.cn-north-4.myhuaweicloud.com -i https://pypi.tuna.tsinghua.edu.cn/simple
```

Refer to the official MindSpore installation guide to ensure that the installation dependencies and environment variables are correctly configured.

***

## Tested environments

* Ubuntu 22 with CUDA 11.1 & MindSpore 1.7.1.

***

## Usage

We mainly support the ScanNet dataset, processed as in [Manhattan_sdf](https://github.com/zju3dv/manhattan_sdf). We provide the [preprocessed data](https://drive.google.com/file/d/1wk_cejMyZCPY7_tlmB-FEcv2yYaaaDJM/view?usp=sharing) for inference. Please download the preprocessed data and put it under the `data` folder.

We also provide pretrained models of [NeRF](https://drive.google.com/file/d/13rfS0YuAMvNitRQq4jCmwnXN3M_BfIIC/view?usp=sharing) and our [NeRFVS](https://drive.google.com/file/d/1rcdq8vpxe6twCGoy8m6KGG3xA0BhLOvZ/view?usp=sharing) for inference. Please download the pretrained models and put them under the `ckpts` folder.
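With the downloads in place, the evaluation commands below assume roughly the following layout. This sketch is inferred from the command-line paths, not verified against the released archives; the `{scene_id}` and `{scene_id_ckpt}` names are placeholders.

```shell
# Create the top-level folders the evaluation commands expect.
mkdir -p data/scannet ckpts/nerf ckpts/nerfvs results
# Illustrative contents after downloading:
#   data/scannet/{scene_id}/       preprocessed ScanNet scenes
#   ckpts/nerf/{scene_id_ckpt}     pretrained NeRF checkpoints
#   ckpts/nerfvs/{scene_id_ckpt}   pretrained NeRFVS checkpoints
ls -d data/scannet ckpts/nerf ckpts/nerfvs
```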
To infer the results, please run the following commands:

```bash
# assume that the checkpoints are in `ckpts` and the data is in `data`:
# for NeRFVS results
python src/eval_scannet.py --expname nerfvs_render --data_dir data/scannet/{scene_id} --ckpt ckpts/nerfvs/{scene_id_ckpt} --output_dir results/nerfvs --render_test
# for NeRF results
python src/eval_scannet.py --expname nerf_render --data_dir data/scannet/{scene_id} --ckpt ckpts/nerf/{scene_id_ckpt} --output_dir results/nerf --render_test
```

***

## Roaming

To roam in the scene, please run the following commands:

```bash
# assume that the checkpoints are in `ckpts` and the data is in `data`:
# for NeRFVS results
python src/eval_scannet.py --expname nerfvs_roam --data_dir data/scannet/{scene_id} --ckpt ckpts/nerfvs/{scene_id_ckpt} --output_dir results/nerfvs --roaming
# for NeRF results
python src/eval_scannet.py --expname nerf_roam --data_dir data/scannet/{scene_id} --ckpt ckpts/nerf/{scene_id_ckpt} --output_dir results/nerf --roaming
```

***

## Citation

If you find our work useful in your research, please consider citing our paper:

```bibtex
@article{yang2023nerfvs,
  title={NeRFVS: Neural Radiance Fields for Free View Synthesis via Geometry Scaffolds},
  author={Yang, Chen and Li, Peihao and Zhou, Zanwei and Yuan, Shanxin and Liu, Bingbing and Yang, Xiaokang and Qiu, Weichao and Shen, Wei},
  journal={arXiv preprint arXiv:2304.06287},
  year={2023}
}
```

***

## Acknowledgement

1. Thanks to the brilliant framework [MindSpore](https://www.mindspore.cn/), which makes our work more efficient.
2. Thanks to the open-source project [nerf-pytorch](https://github.com/yenchenlin/nerf-pytorch).
3. Thanks to [Manhattan_sdf](https://github.com/zju3dv/manhattan_sdf).