Python scripts performing stereo depth estimation using the CREStereo model in ONNX.
Stereo depth estimation on the cones images from the Middlebury dataset (https://vision.middlebury.edu/stereo/data/scenes2003/)
git clone https://github.com/ibaiGorordo/ONNX-CREStereo-Depth-Estimation.git
cd ONNX-CREStereo-Depth-Estimation
pip install -r requirements.txt
For Nvidia GPU computers:
pip install onnxruntime-gpu
Otherwise:
pip install onnxruntime
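After installing onnxruntime, the available execution providers can be checked from Python to confirm whether the GPU build is being picked up (a minimal sketch using the standard onnxruntime API):

import onnxruntime as ort

# Lists the execution providers available in the installed onnxruntime build.
# "CUDAExecutionProvider" should appear when onnxruntime-gpu is installed on a
# working CUDA setup; otherwise only "CPUExecutionProvider" (and possibly others)
# will be reported.
print(ort.get_available_providers())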
pip install youtube_dl
pip install git+https://github.com/zizo-pro/pafy@b8976f22c19e4ab5515cacbfae0a3970370c102b
pip install depthai
You might need additional installations; check the depthai reference below for more details.
The models were converted from the Pytorch implementation below by PINTO0309. Download the models with the download script in his repository and save them into the models folder.
The original model was trained in the MegEngine framework; see the original repository in the references below.
The original MegEngine model was converted to Pytorch with this repository: https://github.com/ibaiGorordo/CREStereo-Pytorch
python image_depth_estimation.py
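The script loads the ONNX model and the cones stereo pair and estimates the disparity map. The sketch below shows the same idea using onnxruntime directly; the model filename, image paths, NCHW preprocessing and the assumption of an init model with exactly two image inputs are illustrative, since the script in this repository wraps these details:

import cv2
import numpy as np
import onnxruntime as ort

# Model and image paths are illustrative; use the files downloaded into the models folder.
session = ort.InferenceSession("models/crestereo_init_iter2_240x320.onnx",
                               providers=["CPUExecutionProvider"])
inputs = session.get_inputs()            # query input names/shapes instead of hard-coding them
_, _, height, width = inputs[0].shape    # assumed fixed NCHW input shape

def preprocess(path):
    img = cv2.imread(path)
    img = cv2.resize(img, (width, height))
    return img.transpose(2, 0, 1)[np.newaxis].astype(np.float32)  # HWC -> NCHW

left = preprocess("images/cones_left.png")       # illustrative paths
right = preprocess("images/cones_right.png")

# Assumes an "init" model with exactly two image inputs (left, right).
outputs = session.run(None, {inputs[0].name: left, inputs[1].name: right})
flow = np.squeeze(outputs[0])                    # raw network output
disparity = flow[0] if flow.ndim == 3 else flow  # horizontal component is the disparity
print(disparity.shape, float(disparity.min()), float(disparity.max()))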
python video_depth_estimation.py
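This script streams a stereo driving video from YouTube via pafy/youtube_dl and runs the model frame by frame. A minimal capture sketch is below; the video URL and the assumption that each frame contains a side-by-side stereo pair are illustrative placeholders:

import cv2
import pafy

# The URL is an illustrative placeholder for a side-by-side stereo driving video.
stream = pafy.new("https://youtu.be/VIDEO_ID").getbest(preftype="mp4")
cap = cv2.VideoCapture(stream.url)

while cap.isOpened():
    ret, frame = cap.read()
    if not ret:
        break
    # Assumes the frame holds the stereo pair side by side; split into left/right halves.
    half = frame.shape[1] // 2
    left_img, right_img = frame[:, :half], frame[:, half:]
    # left_img / right_img would then be passed to the CREStereo ONNX model.

cap.release()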
python driving_stereo_test.py
Original video: Driving Stereo dataset (reference below).
python driving_stereo_point_cloud.py
Original video: Driving Stereo dataset (reference below).
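The point cloud is obtained from the estimated disparity with the standard stereo relation depth = focal_length * baseline / disparity, followed by back-projecting every pixel through the pinhole model. A minimal sketch is below; the calibration values are placeholders, since the actual Driving Stereo calibration is handled inside the script:

import numpy as np

def disparity_to_point_cloud(disparity, fx, baseline, cx, cy):
    # Depth from the standard stereo relation: Z = fx * baseline / disparity.
    h, w = disparity.shape
    valid = disparity > 0
    z = np.zeros_like(disparity, dtype=np.float32)
    z[valid] = fx * baseline / disparity[valid]

    # Back-project every pixel through the pinhole camera model.
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    x = (u - cx) * z / fx
    y = (v - cy) * z / fx
    points = np.stack([x, y, z], axis=-1)
    return points[valid]                 # (N, 3) array of 3D points

# Placeholder calibration values, for illustration only:
# points = disparity_to_point_cloud(disparity, fx=1000.0, baseline=0.54, cx=640.0, cy=360.0)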
python depthai_host_depth_estimation.py
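On the host side, the left and right mono streams are grabbed from the OAK-D camera with depthai and fed to the ONNX model. A minimal capture sketch is below; the stream names and queue settings are illustrative, so check the depthai reference below for camera-specific details:

import depthai as dai

# Build a pipeline that streams both mono cameras to the host.
pipeline = dai.Pipeline()
for socket, name in [(dai.CameraBoardSocket.LEFT, "left"),
                     (dai.CameraBoardSocket.RIGHT, "right")]:
    cam = pipeline.create(dai.node.MonoCamera)
    cam.setBoardSocket(socket)
    xout = pipeline.create(dai.node.XLinkOut)
    xout.setStreamName(name)
    cam.out.link(xout.input)

with dai.Device(pipeline) as device:
    q_left = device.getOutputQueue("left", maxSize=4, blocking=False)
    q_right = device.getOutputQueue("right", maxSize=4, blocking=False)
    left_frame = q_left.get().getCvFrame()      # grayscale numpy frames
    right_frame = q_right.get().getCvFrame()
    # left_frame / right_frame can now be passed to the CREStereo ONNX model.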
In the graph below, the different model options, i.e. input shape, version (init or combined) and number of iterations, are compared. Each configuration is evaluated against the results obtained with the largest model (720x1280, combined, 20 iterations), as it is expected to provide the best results.