# tmp.bonnet

**Repository Path**: tyloeng/tmp.bonnet

## Basic Information

- **Project Name**: tmp.bonnet
- **Description**: https://github.com/HanjiangZhu/Bonnet
- **Primary Language**: Unknown
- **License**: MIT
- **Default Branch**: main
- **Homepage**: None
- **GVP Project**: No

## Statistics

- **Stars**: 0
- **Forks**: 0
- **Created**: 2026-02-21
- **Last Updated**: 2026-02-21

## Categories & Tags

**Categories**: Uncategorized
**Tags**: None

## README

# iTom's Notes

- environment setup: [env.sh](env.sh)
- download pretrained weights: [download.sh](download.sh)
- link RibSegV2 data: [link.sh](link.sh)
- inference with the pretrained model: [run_ribsegv2.sh](run_ribsegv2.sh)
- my adapted inference code: [src/test.py](src/test.py)

## Adaptations

Added [src/utils/bin_packing.py](src/utils/bin_packing.py): this file is required but missing from the original repository. I asked Claude to write it; I am not sure how this affects model performance.

For the RibSegV2 dataset:

- `Ribsegv2Dataset` in [src/data/ctdataset.py](src/data/ctdataset.py): mainly adapts the `create_sample` function to change how data is loaded.
- `Ribsegv2` in [src/data/data_utils.py](src/data/data_utils.py): metadata and helper functions.

## Notes

### 20 Feb 2026

Testing the downloaded weights directly on RibSegV2 gives very poor results. Training on RibSegV2 is probably needed to reach reasonable quality.

# Bonnet: Ultra-Fast Whole-Body Bone Segmentation from CT Scans

Bonnet is an ultra-fast whole-body bone segmentation pipeline for CT scans. It runs in seconds per scan on a single commodity GPU while maintaining reliable segmentation quality across different datasets.

## Train

1. Set dataset / output paths and other options in:
   - `Bonnet/conf/config_eva.yaml`
2. Run:

   ```
   python main.py
   ```

## Evaluate only

Model checkpoint (Hugging Face): [https://huggingface.co/hanjiangjiang123/Bonnet](https://huggingface.co/hanjiangjiang123/Bonnet)

1. Open:
   - `Bonnet/conf/eval/eval_on_test.yaml`
2. Set:

   ```
   eval_only: True
   ```
3. Run:

   ```
   python main.py
   ```
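
## Appendix: bin packing stand-in

The `src/utils/bin_packing.py` stand-in mentioned in the adaptations above was generated rather than recovered, so its exact contents and interface are unknown. As a purely illustrative sketch, the classic first-fit-decreasing heuristic is the kind of routine such a utility might contain; the function name and signature here are assumptions, not the original API:

```python
def first_fit_decreasing(sizes, capacity):
    """Pack items into bins of fixed capacity using the
    first-fit-decreasing heuristic (fast, but not guaranteed optimal).

    NOTE: illustrative sketch only -- not the original bin_packing.py.
    """
    bins = []       # each bin is a list of item sizes
    remaining = []  # free space left in the corresponding bin
    for size in sorted(sizes, reverse=True):
        if size > capacity:
            raise ValueError(f"item {size} exceeds bin capacity {capacity}")
        # place the item into the first existing bin with enough space
        for i, free in enumerate(remaining):
            if free >= size:
                bins[i].append(size)
                remaining[i] -= size
                break
        else:
            # no existing bin fits: open a new one
            bins.append([size])
            remaining.append(capacity - size)
    return bins
```

If the real file packs variable-length CT sub-volumes into fixed-size batches, its logic is likely a variant of this idea, which is why a generated replacement may still change batching behavior and hence model performance.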