MindSpore/models

Contents

RefSR-NeRF Description

RefSR-NeRF is an end-to-end framework that first learns a low-resolution NeRF representation and then reconstructs the high-frequency details with the help of a high-resolution reference image. We observe that simply introducing pre-trained models from the literature tends to produce unsatisfactory artifacts due to the divergence in the degradation model. To this end, we design a novel lightweight RefSR model to learn the inverse degradation process from NeRF renderings to the target HR ones. Extensive experiments on multiple benchmarks demonstrate that our method exhibits an impressive trade-off among rendering quality, speed, and memory usage, outperforming or on par with NeRF and its variants while achieving a 52× speedup with minor extra memory usage.

Paper: Xudong Huang, Wei Li, Jie Hu, Hanting Chen, Yunhe Wang. CVPR, 2023.

Dataset

Note that you can run the scripts on the dataset mentioned in the original paper or on a dataset widely used in the relevant domain/network architecture. The following sections describe how to run the scripts using the dataset below.

  • LLFF ("Real Forward-Facing"): real images of complex scenes captured in roughly forward-facing orientations. The dataset consists of 8 scenes captured with a handheld cellphone (5 taken from the LLFF paper and 3 newly captured), each with 20 to 62 images, of which 1/8 are held out for the test set. All images are 1008×756 pixels.
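The 1/8 holdout described above follows the convention of NeRF-style LLFF loaders, where every 8th image is reserved for testing. A minimal sketch (the helper name is hypothetical, not the repository's API):

```python
# Hypothetical helper mirroring the standard LLFF 1/8 holdout:
# every 8th image goes to the test set, the rest to the train set.
def llff_split(num_images, holdout=8):
    test_ids = [i for i in range(num_images) if i % holdout == 0]
    train_ids = [i for i in range(num_images) if i % holdout != 0]
    return train_ids, test_ids

train_ids, test_ids = llff_split(40)
print(len(train_ids), len(test_ids))  # 35 5
```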

Environment Requirements

  • Download the datasets and place them in some folder /path/to/the/dataset:

  • Install the requirements:

Use the requirements.txt file with the following command:

pip install -r requirements.txt

Or you can install the following packages manually:

matplotlib
mindspore-dev==2.0.0.dev20230226
numpy
opencv-python
PyMCubes==0.1.2
scikit-learn
scipy
tqdm
trimesh==3.20.2
imageio[ffmpeg]

Quick Start

Prepare the model

All necessary config examples can be found in the project directory src/configs.

Evaluation stage:

  • dataset-based settings config (*_ds_config.json)
  • NeRF architecture config (the default one is used)

Note: * stands for the dataset type (currently only llff).

  1. Prepare the model directory: copy the necessary configs for the chosen dataset to some directory /path/to/model_cfgs/ for future script launches.

  2. Hyperparameters recommended to be changed for the training stage in the dataset-based train config:

    • train_rays_batch_size
    • val_rays_batch_size
    • epochs
    • val_epochs_freq
    • lr
    • lr_decay_rate
    • precrop_train_epochs
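For illustration, a dataset-based train config containing these fields might look like the following sketch (all values are hypothetical placeholders, not the shipped defaults):

```json
{
  "train_rays_batch_size": 1024,
  "val_rays_batch_size": 1024,
  "epochs": 20,
  "val_epochs_freq": 5,
  "lr": 5e-4,
  "lr_decay_rate": 0.1,
  "precrop_train_epochs": 1
}
```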

Run the scripts

After preparing the directory with configs /path/to/model_cfgs/, you can start training and evaluation:

  • running on GPU
# run eval on GPU
bash scripts/run_eval.sh [DATA_PATH] [DATA_CONFIG] [CHECKPOINT_PATH] [OUT_PATH] [RAYS_BATCH]

# run export 3d mesh
bash scripts/run_export_3d.sh [SCENE_CONFIG] [CHECKPOINT_PATH] [OUT_STL]

# run export video
bash scripts/run_export_video.sh [POSES] [SCENE_CONFIG] [CHECKPOINT_PATH] [OUT_PATH] [RAYS_BATCH]

Script Description

Script and Sample Code

├── cv
│   └── RefSR-NeRF
│       ├── convert_weights.py                     ## weights conversion script
│       ├── eval.py                                ## evaluation script
│       ├── export_3d_mesh.py                      ## 3d model exporting
│       ├── README.md                              ## NeRF description
│       ├── requirements.txt                       ## requirements
│       ├── scripts
│       │   ├── run_eval_gpu.sh                    ## bash script for evaluation
│       │   ├── run_export_3d.sh                   ## bash script for 3d mesh exporting
│       │   └── run_export_video.sh                ## bash script for video exporting
│       └── src
│           ├── configs
│           │   ├── inference
│           │   │   ├── blend_poses.json           ## blender dataset one-pose config
│           │   │   ├── blend_scene_config.json    ## blender dataset scene config
│           │   │   ├── blend_video_poses.json     ## blender dataset poses for video exporting
│           │   │   ├── llff_poses.json            ## LLFF dataset one-pose config
│           │   │   ├── llff_scene_config.json     ## LLFF dataset scene config
│           │   │   └── llff_video_poses.json      ## LLFF dataset poses for video exporting
│           │   ├── llff_ds_config.json            ## LLFF dataset settings config
│           │   ├── llff_train_config.json         ## LLFF dataset train config
│           │   └── nerf_config.json               ## NeRF architecture config
│           ├── data
│           │   ├── data_loader.py                 ## module with dataset loader func
│           │   ├── dataset.py                     ## mindspore-based datasets
│           │   ├── ds_loaders
│           │   │   └── llff.py                    ## LLFF dataset loader
│           │   └── __init__.py                    ## init file
│           ├── model
│           │   ├── deg.py                         ## degradation branch module
│           │   ├── sr.py                          ## SR branch module
│           │   ├── unet_parts.py                  ## refine module parts
│           │   └── unet.py                        ## refine module
│           ├── __init__.py                        ## init file
│           ├── tools
│           │   ├── callbacks.py                   ## custom callbacks
│           │   ├── common.py                      ## auxiliary funcs
│           │   ├── __init__.py                    ## init file
│           │   ├── mlflow_funcs.py                ## mlflow auxiliary funcs
│           │   └── rays.py                        ## rays sampling
│           └── volume_rendering
│               ├── coordinates_samplers.py        ## NeRF coordinates sampling
│               ├── __init__.py                    ## init file
│               ├── scene_representation.py        ## NeRF scene representation
│               └── volume_rendering.py            ## NeRF volume rendering pipeline

Script Parameters

Dataset settings config parameters differ based on the dataset, but the common settings are:

{
  "data_type": "llff",                # dataset type: one of "blender" or "llff"
  "white_background": false,          # render a white background after image loading (useful for synthetic scenes)
  "is_ndc": true,                     # sample rays in normalized device coordinate (NDC) space
  "linear_disparity_sampling": false  # sample points linearly in disparity along each ray
}

Evaluation

Evaluation process

Evaluation on GPU

bash scripts/run_eval_gpu.sh [DATA_PATH] [DATA_CONFIG] [CHECKPOINT_PATH] [OUT_PATH] [RAYS_BATCH]

Script parameters:

  • DATA_PATH: the path to the blender or llff dataset scene.
  • DATA_CONFIG: the dataset scene loading settings config.
  • CHECKPOINT_PATH: the path to the pretrained checkpoint file.
  • OUT_PATH: the output directory for storing the evaluation results.
  • SCALE: the super-resolution scale.
  • RAYS_BATCH: the rays batch size for NeRF evaluation; must evenly divide image height × width.
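Since RAYS_BATCH must evenly divide the per-image pixel count, a quick check before launching can save a failed run. A minimal sketch for the 1008×756 LLFF images (the candidate values are illustrative):

```python
# RAYS_BATCH must evenly divide the number of pixels per image (H * W).
# For the 1008x756 LLFF images the pixel count is 762048 = 2^6 * 3^5 * 7^2,
# so powers of two above 64 do not qualify.
H, W = 756, 1008
num_pixels = H * W

candidates = [504, 1008, 1024, 4096]
valid = [b for b in candidates if num_pixels % b == 0]
print(valid)  # [504, 1008]
```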

Evaluation result

Result:

  • GT and predicted images are stored with the PSNR metric in the file names.
  • A CSV file with the per-image PSNR values and the total PSNR is stored.
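The PSNR values written into the file names and the CSV are the standard peak signal-to-noise ratio; a minimal NumPy sketch (an illustrative helper, not the repository's implementation):

```python
import numpy as np

def psnr(gt, pred, max_val=1.0):
    # Peak signal-to-noise ratio between a ground-truth and a predicted image,
    # assuming pixel values in [0, max_val].
    mse = np.mean((gt.astype(np.float64) - pred.astype(np.float64)) ** 2)
    return float(10.0 * np.log10(max_val ** 2 / mse))

gt = np.zeros((4, 4))
pred = np.full((4, 4), 0.1)  # constant error of 0.1 -> MSE = 0.01
print(round(psnr(gt, pred), 1))  # 20.0
```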

Model Description

Performance

| Parameters        | GPU                                                     |
| ----------------- | ------------------------------------------------------- |
| Model Version     | RefSR-NeRF                                              |
| Resource          | 1x NVIDIA V100 GPU; CPU 2.1 GHz, 64 cores; Memory 256 GB |
| Uploaded Date     | 12/25/2023 (month/day/year)                             |
| MindSpore Version | 2.0.0.dev20230226                                       |
| Dataset           | LLFF (realistic)                                        |

ModelZoo Homepage

Please check the official homepage.
