# DOPE-Uncertainty

This is the code base for ensemble-based uncertainty quantification in deep object pose estimation. For more details, see our [project website](https://sites.google.com/view/fastuq), [ICRA 2021 paper](https://arxiv.org/abs/2011.07748), and [video](https://youtu.be/7g91v6yAYwA).

## Dependencies

* The `add` (average distance) metric is computed with [`visii`](https://github.com/owl-project/NVISII/), which therefore needs to be installed.
* Download the [neural network weights](https://drive.google.com/drive/folders/1mN4kCqVZOnr1xRvMc8M4xWuHcp-yl3X7?usp=sharing) (`.pth` files) and save them to the `content` folder, replacing the proxy files there. Note that for now we only provide the weights and models (already in `content/models/grocery`) for the `Corn` object.

## Running Examples

We provide demo images in `uncertainty_quantification/output/test` for testing and demonstration. These demo images were generated with the [NVISII](https://github.com/owl-project/NVISII/) renderer. There are two example scripts:

1. `uncertainty_quantification/run.py` requires ground-truth poses for its statistics. The script first performs pose estimation based on [DOPE](https://github.com/NVlabs/Deep_Object_Pose) (you do not need to install DOPE or ROS) and then performs post-inference uncertainty quantification.
The expected result is that this script generates all files in `uncertainty_quantification/output/test_result`, including inference results, a confidence plot, the most-confident frame selection, an uncertainty-quantification correlation analysis, etc.

2. `uncertainty_quantification/run_realworld.py` is similar but does not need ground-truth poses. The expected result is that this script generates all files in `uncertainty_quantification/output/test_result_realworld`. This script corresponds to the real-world grasping experiment in [our paper](https://arxiv.org/abs/2011.07748), where no ground-truth poses are available.
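For intuition, the `add` (average distance) metric used above can be illustrated with a small self-contained sketch. This uses only NumPy rather than the repository's `visii`-based implementation, and the function name `add_metric` and the toy point cloud are hypothetical:

```python
import numpy as np

def add_metric(points, R_gt, t_gt, R_est, t_est):
    """Average distance (ADD): mean Euclidean distance between the model
    points transformed by the ground-truth pose (R_gt, t_gt) and by the
    estimated pose (R_est, t_est)."""
    pts_gt = points @ R_gt.T + t_gt
    pts_est = points @ R_est.T + t_est
    return np.linalg.norm(pts_gt - pts_est, axis=1).mean()

# Toy example: identical rotations, estimate offset by a pure 1 cm translation,
# so every point is displaced by exactly 0.01 and the mean distance is 0.01.
pts = np.random.rand(500, 3)
I = np.eye(3)
err = add_metric(pts, I, np.zeros(3), I, np.array([0.01, 0.0, 0.0]))
print(round(err, 4))  # prints 0.01
```

A lower ADD means the estimated pose places the object's surface closer to where the ground-truth pose places it.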
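One common way to summarize the ensemble disagreement behind this kind of uncertainty quantification is the mean pairwise ADD between the poses predicted by the different ensemble members: if the members agree, the score is near zero; if they scatter, it grows. The sketch below is a generic illustration of that idea, not the repository's API, and the helper name `pairwise_add_uncertainty` is hypothetical:

```python
from itertools import combinations

import numpy as np

def pairwise_add_uncertainty(points, poses):
    """Ensemble disagreement: mean pairwise ADD over all pairs of predicted
    poses. Each pose is an (R, t) tuple (3x3 rotation, 3-vector translation)."""
    dists = []
    for (R1, t1), (R2, t2) in combinations(poses, 2):
        p1 = points @ R1.T + t1
        p2 = points @ R2.T + t2
        dists.append(np.linalg.norm(p1 - p2, axis=1).mean())
    return float(np.mean(dists))

pts = np.random.rand(200, 3)
I = np.eye(3)
# Three ensemble members that agree exactly -> zero uncertainty.
agree = [(I, np.zeros(3))] * 3
print(pairwise_add_uncertainty(pts, agree))  # prints 0.0
```

A frame whose ensemble predictions yield a low disagreement score is a natural candidate for "most confident frame" selection as described above.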