sdboy/tensorrtx

tutorials/check_fp16_int8_support.md

# Check if Your GPU Supports FP16/INT8

## 1. Check your GPU's compute capability

Visit https://developer.nvidia.com/cuda-gpus#compute and look up the compute capability of your GPU.

For example, the GTX 1080 is 6.1 and the Tesla T4 is 7.5.

## 2. Check the hardware precision matrix

Visit https://docs.nvidia.com/deeplearning/tensorrt/support-matrix/index.html#hardware-precision-matrix and find the row for your compute capability.

For example, compute capability 6.1 supports FP32 and INT8, while 7.5 supports FP32, FP16, INT8, FP16 Tensor Cores, and more.
