3DUNet implemented with PyTorch

Introduction

This repository is a 3D UNet implemented with PyTorch, referring to this project. I have redesigned the code structure and used the model to perform liver and tumor segmentation on the LiTS2017 dataset.
Paper: 3D U-Net: Learning Dense Volumetric Segmentation from Sparse Annotation

Requirements:

PyTorch >= 1.1.0
torchvision
SimpleITK
TensorBoard
SciPy

Code Structure

├── dataset              # Training and testing datasets
│   ├── dataset_lits_train.py
│   ├── dataset_lits_val.py
│   ├── dataset_lits_test.py
│   └── transforms.py
├── models               # Model definitions
│   ├── nn
│   │   └── module.py
│   ├── ResUNet.py       # ResUNet model
│   ├── Unet.py          # 3D UNet model
│   ├── SegNet.py        # SegNet model
│   └── KiUNet.py        # KiUNet model
├── experiments          # Trained models
├── utils                # Related tools
│   ├── common.py
│   ├── weights_init.py
│   ├── logger.py
│   ├── metrics.py
│   └── loss.py
├── preprocess_LiTS.py   # Preprocessing for raw data
├── test.py              # Test code
├── train.py             # Standard training code
└── config.py            # Configuration for training and testing
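
For orientation, the core building block of a 3D UNet is a stack of 3D convolutions with normalization and a nonlinearity. The snippet below is a minimal, self-contained sketch of such a block; it is illustrative only and not copied from models/nn/module.py, whose actual contents may differ.

```python
# Illustrative 3D double-convolution block (Conv3d -> BatchNorm3d -> ReLU, twice),
# the typical building block of a 3D UNet. Not taken from models/nn/module.py.
import torch
import torch.nn as nn

class DoubleConv3d(nn.Module):
    def __init__(self, in_channels, out_channels):
        super().__init__()
        self.block = nn.Sequential(
            nn.Conv3d(in_channels, out_channels, kernel_size=3, padding=1),
            nn.BatchNorm3d(out_channels),
            nn.ReLU(inplace=True),
            nn.Conv3d(out_channels, out_channels, kernel_size=3, padding=1),
            nn.BatchNorm3d(out_channels),
            nn.ReLU(inplace=True),
        )

    def forward(self, x):  # x: (batch, channel, depth, height, width)
        return self.block(x)

# Example: a small dummy CT patch
x = torch.randn(1, 1, 32, 64, 64)
print(DoubleConv3d(1, 32)(x).shape)  # torch.Size([1, 32, 32, 64, 64])
```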

Quick Start

1) LiTS2017 dataset preprocessing:

  1. Download the dataset from Google Drive: Liver Tumor Segmentation Challenge.
    Or from my Baidu share: https://pan.baidu.com/s/1WgP2Ttxn_CV-yRT4UyqHWw, extraction code: hfl8 (the dataset consists of two parts: batch1 and batch2).
  2. Then decompress and merge batch1 and batch2 into one folder. It is recommended to use 20 samples (27~46) of the LiTS dataset as the test set and 111 samples (0~26 and 47~131) as the training set. Put the volume data and segmentation labels of the training set and test set into separate local folders, such as:
raw_dataset:
    ├── test   # 20 samples (27~46)
    │   ├── ct
    │   │   ├── volume-27.nii
    │   │   ├── volume-28.nii
    │   │   └── ...
    │   └── label
    │       ├── segmentation-27.nii
    │       ├── segmentation-28.nii
    │       └── ...
    └── train  # 111 samples (0~26 and 47~131)
        ├── ct
        │   ├── volume-0.nii
        │   ├── volume-1.nii
        │   └── ...
        └── label
            ├── segmentation-0.nii
            ├── segmentation-1.nii
            └── ...
  3. Finally, change the root paths of the volume data and segmentation labels in ./preprocess_LiTS.py, for example:
    row_dataset_path = './raw_dataset/train/'  # path of the original dataset
    fixed_dataset_path = './fixed_data/'  # path of the fixed (preprocessed) dataset
  4. Run python ./preprocess_LiTS.py.
    If nothing goes wrong, you will see the following files in the directory ./fixed_data (a quick sanity-check sketch follows the listing below):
    ├── train_path_list.txt
    ├── val_path_list.txt
    ├── ct
    │   ├── volume-0.nii
    │   ├── volume-1.nii
    │   ├── volume-2.nii
    │   └── ...
    └── label
        ├── segmentation-0.nii
        ├── segmentation-1.nii
        ├── segmentation-2.nii
        └── ...
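
After preprocessing finishes, you can sanity-check the generated volumes with SimpleITK (already listed in the requirements). The snippet below is a minimal sketch assuming the default ./fixed_data layout shown above; the file names are just examples.

```python
# Minimal sanity check of a preprocessed volume/label pair with SimpleITK.
# Paths follow the ./fixed_data layout above; adjust them to your setup.
import numpy as np
import SimpleITK as sitk

ct = sitk.ReadImage('./fixed_data/ct/volume-0.nii')
seg = sitk.ReadImage('./fixed_data/label/segmentation-0.nii')

ct_array = sitk.GetArrayFromImage(ct)    # numpy array, shape (slices, height, width)
seg_array = sitk.GetArrayFromImage(seg)

print('CT shape    :', ct_array.shape, ' spacing:', ct.GetSpacing())
print('HU range    :', ct_array.min(), 'to', ct_array.max())
print('Label values:', np.unique(seg_array))  # LiTS labels: 0 background, 1 liver, 2 tumor
```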

2) Training 3DUNet

  1. First, change the parameters in config.py as needed; in particular, set --dataset_path to ./fixed_data.
    All parameters are commented in config.py.
  2. Second, run python train.py --save model_name.
  3. You can monitor the Dice score and loss during training in your browser via tensorboard --logdir ./output/model_name (a sketch of the Dice metric is shown below).
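
For reference, the Dice score monitored during training is typically computed per foreground class as 2|X∩Y| / (|X|+|Y|). The snippet below is a minimal PyTorch sketch of such a metric; it is illustrative and not necessarily identical to the implementation in utils/metrics.py.

```python
# Minimal Dice-coefficient sketch for 3D segmentation outputs (illustrative;
# the project's actual metric lives in utils/metrics.py and may differ).
import torch

def dice_coefficient(logits, target, num_classes=2, eps=1e-5):
    """logits: (N, C, D, H, W) raw network output; target: (N, D, H, W) integer labels."""
    pred = torch.argmax(logits, dim=1)          # hard per-voxel predictions
    dices = []
    for c in range(1, num_classes):             # skip background class 0
        pred_c = (pred == c).float()
        target_c = (target == c).float()
        inter = (pred_c * target_c).sum()
        union = pred_c.sum() + target_c.sum()
        dices.append(((2 * inter + eps) / (union + eps)).item())
    return dices

# Example with random tensors:
logits = torch.randn(1, 2, 16, 64, 64)
labels = torch.randint(0, 2, (1, 16, 64, 64))
print(dice_coefficient(logits, labels))
```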

3) Testing 3DUNet

Run python test.py.
Pay attention to the path of the trained model in test.py.
(Since full-volume 3D convolution is too memory- and compute-intensive, the input tensor is split into blocks with a sliding window before prediction, and the per-block results are then stitched together to obtain the final segmentation. The sliding-window size can be set in config.py; a sketch of this idea follows below.)
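
The following is a minimal sketch of sliding-window inference along the slice (depth) axis with averaging of overlapping predictions. It only illustrates the idea and does not mirror the exact blocking/stitching logic in test.py; the patch depth, stride, and class count are example values.

```python
# Minimal sliding-window inference sketch along the slice (depth) axis.
# Illustrative only; the repository's blocking/stitching logic lives in test.py
# and is configured via config.py. Assumes CPU tensors for simplicity.
import torch

def sliding_window_predict(model, volume, patch_depth=48, stride=24, num_classes=2):
    """volume: (1, 1, D, H, W) tensor. Returns per-voxel class probabilities (1, C, D, H, W)."""
    _, _, D, H, W = volume.shape
    probs = torch.zeros(1, num_classes, D, H, W)
    counts = torch.zeros(1, 1, D, H, W)
    model.eval()
    with torch.no_grad():
        starts = list(range(0, max(D - patch_depth, 0) + 1, stride))
        if D > patch_depth and starts[-1] != D - patch_depth:
            starts.append(D - patch_depth)            # make sure the tail of the volume is covered
        for s in starts:
            e = min(s + patch_depth, D)
            patch = volume[:, :, s:e]
            out = torch.softmax(model(patch), dim=1)  # (1, C, e-s, H, W)
            probs[:, :, s:e] += out
            counts[:, :, s:e] += 1
    return probs / counts.clamp(min=1)                # average overlapping predictions

# Example with a tiny stand-in model:
model = torch.nn.Conv3d(1, 2, kernel_size=3, padding=1)
vol = torch.randn(1, 1, 100, 64, 64)
print(sliding_window_predict(model, vol).shape)       # torch.Size([1, 2, 100, 64, 64])
```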

After testing, the results are saved in the corresponding folder: ./experiments/model_name/result

You can also read my Chinese introduction to this 3D UNet project here. However, I no longer update the blog; I will try my best to keep the GitHub code up to date.
If you have any suggestions or questions, feel free to open an issue to communicate with me.

MIT License

Copyright (c) 2025 AI-Hao

Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions:

The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software.

THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
