# nerfms_mindspore

**Repository Path**: nerfms/nerfms_mindspore

## Basic Information

- **Project Name**: nerfms_mindspore
- **Description**: Code for NeRF-MS: Neural Radiance Fields with Multi-Sequence
- **Primary Language**: Unknown
- **License**: Apache-2.0
- **Default Branch**: master
- **Homepage**: None
- **GVP Project**: No

## Statistics

- **Stars**: 2
- **Forks**: 0
- **Created**: 2024-01-02
- **Last Updated**: 2024-09-25

## Categories & Tags

**Categories**: Uncategorized
**Tags**: None

## README

# NeRF-MS with MindSpore

### [Project Page](https://nerf-ms.github.io/) | [Paper](https://openaccess.thecvf.com/content/ICCV2023/html/Li_NeRF-MS_Neural_Radiance_Fields_with_Multi-Sequence_ICCV_2023_paper.html)

MindSpore implementation of NeRF-MS, which optimizes NeRF with multi-sequence inputs.

***

## Abstract

Neural radiance fields (NeRF) achieve impressive performance in novel view synthesis when trained on single-sequence data. However, leveraging multiple sequences captured by different cameras at different times is essential for better reconstruction performance. Multi-sequence data poses two main challenges: appearance variation due to different lighting conditions, and non-static objects such as pedestrians. To address these issues, we propose NeRF-MS, a novel approach to training NeRF with multi-sequence data. Specifically, we utilize a triplet loss to regularize the distribution of per-image appearance codes, which leads to better high-frequency texture and consistent appearance, such as specular reflections. Then, we explicitly model non-static objects to reduce floaters. Extensive results demonstrate that NeRF-MS not only outperforms state-of-the-art view synthesis methods on outdoor and synthetic scenes, but also achieves 3D-consistent rendering and robust appearance control.

![Teaser Image](images/teaser.png)

***

## Environment

First, the code requires the deep learning framework [MindSpore==2.2.10](https://www.mindspore.cn/install/). If you have CUDA 11.1 installed, you can run:

```shell
wget https://gitee.com/mindspore/mindspore/raw/r2.2/scripts/install/ubuntu-gpu-conda.sh
# installs Python 3.7, CUDA 11.6 and the latest MindSpore by default
bash -i ./ubuntu-gpu-conda.sh
# to specify the Python, CUDA and MindSpore versions (e.g. Python 3.9, CUDA 10.1 and MindSpore 1.6.0), run instead:
# PYTHON_VERSION=3.9 CUDA_VERSION=10.1 MINDSPORE_VERSION=1.6.0 bash -i ./ubuntu-gpu-conda.sh
```

Then install the dependencies:

```shell
conda activate mindspore_py37
pip install -r requirements.txt
```

***

## QuickStart

You can train or evaluate your own model with a command like this:

```shell
python {train,eval}.py --config ./configs/lego.txt \
    [--ckpt /path/to/checkpoint] [--no_reload] [--gpu 0] \
    [--render_test]
```

***

## Citation

If you find our work useful in your research, please consider citing our paper:

```
@InProceedings{Li_2023_ICCV,
    author    = {Li, Peihao and Wang, Shaohui and Yang, Chen and Liu, Bingbing and Qiu, Weichao and Wang, Haoqian},
    title     = {NeRF-MS: Neural Radiance Fields with Multi-Sequence},
    booktitle = {Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV)},
    month     = {October},
    year      = {2023},
    pages     = {18591-18600}
}
```

***

## Acknowledgement

Thanks to the open-source project [nerf-mindspore](https://gitee.com/mindspore/course/tree/master/application_example/nerf).
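
***

The abstract above mentions a triplet loss that regularizes per-image appearance codes so that images from the same capture sequence share a consistent appearance. For readers unfamiliar with the idea, here is a minimal NumPy sketch of a standard triplet loss applied to appearance-code vectors. It is illustrative only and is not the repository's actual implementation: the anchor/positive/negative sampling, the margin value, and the 48-dimensional code size are all assumptions for the example.

```python
import numpy as np

def triplet_loss(anchor, positive, negative, margin=1.0):
    """Standard triplet loss: pull the anchor toward the positive code
    and push it away from the negative code by at least `margin`.

    anchor, positive, negative: (D,) appearance-code vectors.
    margin: illustrative value, not necessarily the paper's setting.
    """
    d_pos = np.sum((anchor - positive) ** 2)  # squared distance, same sequence
    d_neg = np.sum((anchor - negative) ** 2)  # squared distance, different sequence
    return max(d_pos - d_neg + margin, 0.0)

# Toy example: per-image appearance codes grouped by capture sequence.
# The 48-dim code size and the grouping scheme are assumptions for illustration.
rng = np.random.default_rng(0)
codes = {
    "seq0": rng.normal(0.0, 0.1, size=(4, 48)),  # 4 images from sequence 0
    "seq1": rng.normal(1.0, 0.1, size=(4, 48)),  # 4 images from sequence 1
}
anchor, positive = codes["seq0"][0], codes["seq0"][1]  # same sequence
negative = codes["seq1"][0]                            # different sequence
print(triplet_loss(anchor, positive, negative))
```

Minimizing such a term over many sampled triplets encourages appearance codes within one sequence to cluster together while staying separated from other sequences, which is the regularization effect described in the abstract.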