<!DOCTYPE html>
<html>
<head>
<meta charset="utf-8"/>
<meta content="width=device-width, initial-scale=1.0" name="viewport"/>
<meta content="ie=edge" http-equiv="x-ua-compatible"/>
<meta content="Copy to clipboard" name="lang:clipboard.copy"/>
<meta content="Copied to clipboard" name="lang:clipboard.copied"/>
<meta content="en" name="lang:search.language"/>
<meta content="True" name="lang:search.pipeline.stopwords"/>
<meta content="True" name="lang:search.pipeline.trimmer"/>
<meta content="No matching documents" name="lang:search.result.none"/>
<meta content="1 matching document" name="lang:search.result.one"/>
<meta content="# matching documents" name="lang:search.result.other"/>
<meta content="[\s\-]+" name="lang:search.tokenizer"/>
<link crossorigin="" href="https://fonts.gstatic.com/" rel="preconnect"/>
<link href="https://fonts.googleapis.com/css?family=Roboto+Mono:400,500,700|Roboto:300,400,400i,700&amp;display=fallback" rel="stylesheet"/>
<style>
body,
input {
font-family: "Roboto", "Helvetica Neue", Helvetica, Arial, sans-serif
}
code,
kbd,
pre {
font-family: "Roboto Mono", "Courier New", Courier, monospace
}
</style>
<link href="../_static/stylesheets/application.css" rel="stylesheet"/>
<link href="../_static/stylesheets/application-palette.css" rel="stylesheet"/>
<link href="../_static/stylesheets/application-fixes.css" rel="stylesheet"/>
<link href="../_static/fonts/material-icons.css" rel="stylesheet"/>
<meta content="84bd00" name="theme-color"/>
<script src="../_static/javascripts/modernizr.js">
</script>
<title>
Installation — Torch-TensorRT v1.1.1 documentation
</title>
<link href="../_static/material.css" rel="stylesheet" type="text/css"/>
<link href="../_static/pygments.css" rel="stylesheet" type="text/css"/>
<link href="../_static/collapsible-lists/css/tree_view.css" rel="stylesheet" type="text/css"/>
<script data-url_root="../" id="documentation_options" src="../_static/documentation_options.js">
</script>
<script src="../_static/jquery.js">
</script>
<script src="../_static/underscore.js">
</script>
<script src="../_static/doctools.js">
</script>
<script src="../_static/language_data.js">
</script>
<script src="../_static/collapsible-lists/js/CollapsibleLists.compressed.js">
</script>
<script src="../_static/collapsible-lists/js/apply-collapsible-lists.js">
</script>
<script crossorigin="anonymous" integrity="sha256-Ae2Vz/4ePdIu6ZyI/5ZGsYnb+m0JlOmKPjt6XZ9JJkA=" src="https://cdnjs.cloudflare.com/ajax/libs/require.js/2.3.4/require.min.js">
</script>
<script async="async" src="https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.7/latest.js?config=TeX-AMS-MML_HTMLorMML">
</script>
<script type="text/x-mathjax-config">
MathJax.Hub.Config({"tex2jax": {"inlineMath": [["$", "$"], ["\\(", "\\)"]], "processEscapes": true, "ignoreClass": "document", "processClass": "math|output_area"}})
</script>
<link href="../genindex.html" rel="index" title="Index"/>
<link href="../search.html" rel="search" title="Search"/>
<link href="getting_started_with_cpp_api.html" rel="next" title="Getting Started with C++"/>
<link href="../index.html" rel="prev" title="Torch-TensorRT"/>
</head>
<body data-md-color-accent="light-green" data-md-color-primary="light-green" dir="ltr">
<svg class="md-svg">
<defs data-children-count="0">
<svg height="448" id="__github" viewbox="0 0 416 448" width="416" xmlns="http://www.w3.org/2000/svg">
<path d="M160 304q0 10-3.125 20.5t-10.75 19T128 352t-18.125-8.5-10.75-19T96 304t3.125-20.5 10.75-19T128 256t18.125 8.5 10.75 19T160 304zm160 0q0 10-3.125 20.5t-10.75 19T288 352t-18.125-8.5-10.75-19T256 304t3.125-20.5 10.75-19T288 256t18.125 8.5 10.75 19T320 304zm40 0q0-30-17.25-51T296 232q-10.25 0-48.75 5.25Q229.5 240 208 240t-39.25-2.75Q130.75 232 120 232q-29.5 0-46.75 21T56 304q0 22 8 38.375t20.25 25.75 30.5 15 35 7.375 37.25 1.75h42q20.5 0 37.25-1.75t35-7.375 30.5-15 20.25-25.75T360 304zm56-44q0 51.75-15.25 82.75-9.5 19.25-26.375 33.25t-35.25 21.5-42.5 11.875-42.875 5.5T212 416q-19.5 0-35.5-.75t-36.875-3.125-38.125-7.5-34.25-12.875T37 371.5t-21.5-28.75Q0 312 0 260q0-59.25 34-99-6.75-20.5-6.75-42.5 0-29 12.75-54.5 27 0 47.5 9.875t47.25 30.875Q171.5 96 212 96q37 0 70 8 26.25-20.5 46.75-30.25T376 64q12.75 25.5 12.75 54.5 0 21.75-6.75 42 34 40 34 99.5z" fill="currentColor">
</path>
</svg>
</defs>
</svg>
<input class="md-toggle" data-md-toggle="drawer" id="__drawer" type="checkbox"/>
<input class="md-toggle" data-md-toggle="search" id="__search" type="checkbox"/>
<label class="md-overlay" data-md-component="overlay" for="__drawer">
</label>
<a class="md-skip" href="#tutorials/installation" tabindex="1">
Skip to content
</a>
<header class="md-header" data-md-component="header">
<nav class="md-header-nav md-grid">
<div class="md-flex navheader">
<div class="md-flex__cell md-flex__cell--shrink">
<a class="md-header-nav__button md-logo" href="../index.html" title="Torch-TensorRT v1.1.1 documentation">
<i class="md-icon">
</i>
</a>
</div>
<div class="md-flex__cell md-flex__cell--shrink">
<label class="md-icon md-icon--menu md-header-nav__button" for="__drawer">
</label>
</div>
<div class="md-flex__cell md-flex__cell--stretch">
<div class="md-flex__ellipsis md-header-nav__title" data-md-component="title">
<span class="md-header-nav__topic">
Torch-TensorRT
</span>
<span class="md-header-nav__topic">
Installation
</span>
</div>
</div>
<div class="md-flex__cell md-flex__cell--shrink">
<label class="md-icon md-icon--search md-header-nav__button" for="__search">
</label>
<div class="md-search" data-md-component="search" role="dialog">
<label class="md-search__overlay" for="__search">
</label>
<div class="md-search__inner" role="search">
<form action="../search.html" class="md-search__form" method="get" name="search">
<input autocapitalize="off" autocomplete="off" class="md-search__input" data-md-component="query" data-md-state="active" name="q" placeholder="Search" spellcheck="false" type="text"/>
<label class="md-icon md-search__icon" for="__search">
</label>
<button class="md-icon md-search__icon" data-md-component="reset" tabindex="-1" type="reset">
</button>
</form>
<div class="md-search__output">
<div class="md-search__scrollwrap" data-md-scrollfix="">
<div class="md-search-result" data-md-component="result">
<div class="md-search-result__meta">
Type to start searching
</div>
<ol class="md-search-result__list">
</ol>
</div>
</div>
</div>
</div>
</div>
</div>
<div class="md-flex__cell md-flex__cell--shrink">
<div class="md-header-nav__source">
<a class="md-source" data-md-source="github" href="https://github.com/nvidia/Torch-TensorRT/" title="Go to repository">
<div class="md-source__icon">
<svg height="28" viewbox="0 0 24 24" width="28" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink">
<use height="24" width="24" xlink:href="#__github">
</use>
</svg>
</div>
<div class="md-source__repository">
Torch-TensorRT
</div>
</a>
</div>
</div>
<div class="md-flex__cell md-flex__cell--shrink dropdown">
<button class="dropdownbutton">
Versions
</button>
<div class="dropdown-content md-hero">
<a href="https://nvidia.github.io/Torch-TensorRT/" title="master">
master
</a>
<a href="https://nvidia.github.io/Torch-TensorRT/v1.1.1/" title="v1.1.1">
v1.1.1
</a>
<a href="https://nvidia.github.io/Torch-TensorRT/v1.1.0/" title="v1.1.0">
v1.1.0
</a>
<a href="https://nvidia.github.io/Torch-TensorRT/v1.0.0/" title="v1.0.0">
v1.0.0
</a>
<a href="https://nvidia.github.io/Torch-TensorRT/v0.4.1/" title="v0.4.1">
v0.4.1
</a>
<a href="https://nvidia.github.io/Torch-TensorRT/v0.4.0/" title="v0.4.0">
v0.4.0
</a>
<a href="https://nvidia.github.io/Torch-TensorRT/v0.3.0/" title="v0.3.0">
v0.3.0
</a>
<a href="https://nvidia.github.io/Torch-TensorRT/v0.2.0/" title="v0.2.0">
v0.2.0
</a>
<a href="https://nvidia.github.io/Torch-TensorRT/v0.1.0/" title="v0.1.0">
v0.1.0
</a>
<a href="https://nvidia.github.io/Torch-TensorRT/v0.0.3/" title="v0.0.3">
v0.0.3
</a>
<a href="https://nvidia.github.io/Torch-TensorRT/v0.0.2/" title="v0.0.2">
v0.0.2
</a>
<a href="https://nvidia.github.io/Torch-TensorRT/v0.0.1/" title="v0.0.1">
v0.0.1
</a>
</div>
</div>
</div>
</nav>
</header>
<div class="md-container">
<nav class="md-tabs" data-md-component="tabs">
<div class="md-tabs__inner md-grid">
<ul class="md-tabs__list">
<li class="md-tabs__item">
<a class="md-tabs__link" href="../index.html">
Torch-TensorRT v1.1.1 documentation
</a>
</li>
</ul>
</div>
</nav>
<main class="md-main">
<div class="md-main__inner md-grid" data-md-component="container">
<div class="md-sidebar md-sidebar--primary" data-md-component="navigation">
<div class="md-sidebar__scrollwrap">
<div class="md-sidebar__inner">
<nav class="md-nav md-nav--primary" data-md-level="0">
<label class="md-nav__title md-nav__title--site" for="__drawer">
<a class="md-nav__button md-logo" href="../index.html" title="Torch-TensorRT v1.1.1 documentation">
<i class="md-icon">
</i>
</a>
<a href="../index.html" title="Torch-TensorRT v1.1.1 documentation">
Torch-TensorRT
</a>
</label>
<div class="md-nav__source">
<a class="md-source" data-md-source="github" href="https://github.com/nvidia/Torch-TensorRT/" title="Go to repository">
<div class="md-source__icon">
<svg height="28" viewbox="0 0 24 24" width="28" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink">
<use height="24" width="24" xlink:href="#__github">
</use>
</svg>
</div>
<div class="md-source__repository">
Torch-TensorRT
</div>
</a>
</div>
<ul class="md-nav__list">
<li class="md-nav__item">
<span class="md-nav__link caption">
<span class="caption-text">
Getting Started
</span>
</span>
</li>
<li class="md-nav__item">
<input class="md-toggle md-nav__toggle" data-md-toggle="toc" id="__toc" type="checkbox"/>
<label class="md-nav__link md-nav__link--active" for="__toc">
Installation
</label>
<a class="md-nav__link md-nav__link--active" href="#">
Installation
</a>
<nav class="md-nav md-nav--secondary">
<label class="md-nav__title" for="__toc">
Contents
</label>
<ul class="md-nav__list" data-md-scrollfix="">
<li class="md-nav__item">
<a class="md-nav__link" href="#tutorials-installation--page-root">
Installation
</a>
<nav class="md-nav">
<ul class="md-nav__list">
<li class="md-nav__item">
<a class="md-nav__link" href="#precompiled-binaries">
Precompiled Binaries
</a>
<nav class="md-nav">
<ul class="md-nav__list">
<li class="md-nav__item">
<a class="md-nav__link" href="#dependencies">
Dependencies
</a>
</li>
<li class="md-nav__item">
<a class="md-nav__link" href="#python-package">
Python Package
</a>
</li>
<li class="md-nav__item">
<a class="md-nav__link" href="#c-binary-distribution">
C++ Binary Distribution
</a>
</li>
</ul>
</nav>
</li>
<li class="md-nav__item">
<a class="md-nav__link" href="#compiling-from-source">
Compiling From Source
</a>
<nav class="md-nav">
<ul class="md-nav__list">
<li class="md-nav__item">
<a class="md-nav__link" href="#dependencies-for-compilation">
Dependencies for Compilation
</a>
<nav class="md-nav">
<ul class="md-nav__list">
<li class="md-nav__item">
<a class="md-nav__link" href="#choosing-the-right-abi">
Choosing the Right ABI
</a>
</li>
</ul>
</nav>
</li>
<li class="md-nav__item">
<a class="md-nav__link" href="#building-using-cudnn-tensorrt-tarball-distributions">
<strong>
Building using cuDNN &amp; TensorRT tarball distributions
</strong>
</a>
<nav class="md-nav">
<ul class="md-nav__list">
<li class="md-nav__item">
<a class="md-nav__link" href="#release-build">
Release Build
</a>
</li>
<li class="md-nav__item">
<a class="md-nav__link" href="#debug-build">
Debug Build
</a>
</li>
<li class="md-nav__item">
<a class="md-nav__link" href="#pre-cxx11-abi-build">
Pre CXX11 ABI Build
</a>
</li>
</ul>
</nav>
</li>
<li class="md-nav__item">
<a class="md-nav__link" href="#building-using-locally-installed-cudnn-tensorrt">
<strong>
Building using locally installed cuDNN &amp; TensorRT
</strong>
</a>
<nav class="md-nav">
<ul class="md-nav__list">
<li class="md-nav__item">
<a class="md-nav__link" href="#id6">
Release Build
</a>
</li>
<li class="md-nav__item">
<a class="md-nav__link" href="#build-from-local-debug">
Debug Build
</a>
</li>
<li class="md-nav__item">
<a class="md-nav__link" href="#id8">
Pre CXX11 ABI Build
</a>
</li>
</ul>
</nav>
</li>
<li class="md-nav__item">
<a class="md-nav__link" href="#building-the-python-package">
<strong>
Building the Python package
</strong>
</a>
<nav class="md-nav">
<ul class="md-nav__list">
<li class="md-nav__item">
<a class="md-nav__link" href="#id9">
Debug Build
</a>
</li>
</ul>
</nav>
</li>
<li class="md-nav__item">
<a class="md-nav__link" href="#building-natively-on-aarch64-jetson">
<strong>
Building Natively on aarch64 (Jetson)
</strong>
</a>
<nav class="md-nav">
<ul class="md-nav__list">
<li class="md-nav__item">
<a class="md-nav__link" href="#prerequisites">
Prerequisites
</a>
</li>
<li class="md-nav__item">
<a class="md-nav__link" href="#enviorment-setup">
Environment Setup
</a>
</li>
<li class="md-nav__item">
<a class="md-nav__link" href="#compile-c-library-and-compiler-cli">
Compile C++ Library and Compiler CLI
</a>
</li>
<li class="md-nav__item">
<a class="md-nav__link" href="#compile-python-api">
Compile Python API
</a>
</li>
</ul>
</nav>
</li>
</ul>
</nav>
</li>
</ul>
</nav>
</li>
<li class="md-nav__item">
<a class="md-nav__extra_link" href="../_sources/tutorials/installation.rst.txt">
Show Source
</a>
</li>
</ul>
</nav>
</li>
<li class="md-nav__item">
<a class="md-nav__link" href="getting_started_with_cpp_api.html">
Getting Started with C++
</a>
</li>
<li class="md-nav__item">
<a class="md-nav__link" href="getting_started_with_python_api.html">
Using Torch-TensorRT in Python
</a>
</li>
<li class="md-nav__item">
<a class="md-nav__link" href="creating_torchscript_module_in_python.html">
Creating a TorchScript Module
</a>
</li>
<li class="md-nav__item">
<a class="md-nav__link" href="creating_torchscript_module_in_python.html#working-with-torchscript-in-python">
Working with TorchScript in Python
</a>
</li>
<li class="md-nav__item">
<a class="md-nav__link" href="creating_torchscript_module_in_python.html#saving-torchscript-module-to-disk">
Saving TorchScript Module to Disk
</a>
</li>
<li class="md-nav__item">
<a class="md-nav__link" href="ptq.html">
Post Training Quantization (PTQ)
</a>
</li>
<li class="md-nav__item">
<a class="md-nav__link" href="torchtrtc.html">
torchtrtc
</a>
</li>
<li class="md-nav__item">
<a class="md-nav__link" href="use_from_pytorch.html">
Using Torch-TensorRT Directly From PyTorch
</a>
</li>
<li class="md-nav__item">
<a class="md-nav__link" href="runtime.html">
Deploying Torch-TensorRT Programs
</a>
</li>
<li class="md-nav__item">
<a class="md-nav__link" href="using_dla.html">
DLA
</a>
</li>
<li class="md-nav__item">
<span class="md-nav__link caption">
<span class="caption-text">
Notebooks
</span>
</span>
</li>
<li class="md-nav__item">
<a class="md-nav__link" href="../_notebooks/CitriNet-example.html">
Torch-TensorRT Getting Started - CitriNet
</a>
</li>
<li class="md-nav__item">
<a class="md-nav__link" href="../_notebooks/dynamic-shapes.html">
Torch-TensorRT - Using Dynamic Shapes
</a>
</li>
<li class="md-nav__item">
<a class="md-nav__link" href="../_notebooks/EfficientNet-example.html">
Torch-TensorRT Getting Started - EfficientNet-B0
</a>
</li>
<li class="md-nav__item">
<a class="md-nav__link" href="../_notebooks/Hugging-Face-BERT.html">
Masked Language Modeling (MLM) with Hugging Face BERT Transformer
</a>
</li>
<li class="md-nav__item">
<a class="md-nav__link" href="../_notebooks/lenet-getting-started.html">
Torch-TensorRT Getting Started - LeNet
</a>
</li>
<li class="md-nav__item">
<a class="md-nav__link" href="../_notebooks/Resnet50-example.html">
Torch-TensorRT Getting Started - ResNet 50
</a>
</li>
<li class="md-nav__item">
<a class="md-nav__link" href="../_notebooks/ssd-object-detection-demo.html">
Object Detection with Torch-TensorRT (SSD)
</a>
</li>
<li class="md-nav__item">
<a class="md-nav__link" href="../_notebooks/vgg-qat.html">
Deploying Quantization Aware Trained models in INT8 using Torch-TensorRT
</a>
</li>
<li class="md-nav__item">
<span class="md-nav__link caption">
<span class="caption-text">
Python API Documentation
</span>
</span>
</li>
<li class="md-nav__item">
<a class="md-nav__link" href="../py_api/torch_tensorrt.html">
torch_tensorrt
</a>
</li>
<li class="md-nav__item">
<a class="md-nav__link" href="../py_api/logging.html">
torch_tensorrt.logging
</a>
</li>
<li class="md-nav__item">
<a class="md-nav__link" href="../py_api/ptq.html">
torch_tensorrt.ptq
</a>
</li>
<li class="md-nav__item">
<a class="md-nav__link" href="../py_api/ts.html">
torch_tensorrt.ts
</a>
</li>
<li class="md-nav__item">
<span class="md-nav__link caption">
<span class="caption-text">
C++ API Documentation
</span>
</span>
</li>
<li class="md-nav__item">
<a class="md-nav__link" href="../_cpp_api/torch_tensort_cpp.html">
Torch-TensorRT C++ API
</a>
</li>
<li class="md-nav__item">
<a class="md-nav__link" href="../_cpp_api/namespace_torch_tensorrt.html">
Namespace torch_tensorrt
</a>
</li>
<li class="md-nav__item">
<a class="md-nav__link" href="../_cpp_api/namespace_torch_tensorrt__logging.html">
Namespace torch_tensorrt::logging
</a>
</li>
<li class="md-nav__item">
<a class="md-nav__link" href="../_cpp_api/namespace_torch_tensorrt__torchscript.html">
Namespace torch_tensorrt::torchscript
</a>
</li>
<li class="md-nav__item">
<a class="md-nav__link" href="../_cpp_api/namespace_torch_tensorrt__ptq.html">
Namespace torch_tensorrt::ptq
</a>
</li>
<li class="md-nav__item">
<span class="md-nav__link caption">
<span class="caption-text">
Contributor Documentation
</span>
</span>
</li>
<li class="md-nav__item">
<a class="md-nav__link" href="../contributors/system_overview.html">
System Overview
</a>
</li>
<li class="md-nav__item">
<a class="md-nav__link" href="../contributors/writing_converters.html">
Writing Converters
</a>
</li>
<li class="md-nav__item">
<a class="md-nav__link" href="../contributors/useful_links.html">
Useful Links for Torch-TensorRT Development
</a>
</li>
<li class="md-nav__item">
<span class="md-nav__link caption">
<span class="caption-text">
Indices
</span>
</span>
</li>
<li class="md-nav__item">
<a class="md-nav__link" href="../indices/supported_ops.html">
Operators Supported
</a>
</li>
</ul>
</nav>
</div>
</div>
</div>
<div class="md-sidebar md-sidebar--secondary" data-md-component="toc">
<div class="md-sidebar__scrollwrap">
<div class="md-sidebar__inner">
<nav class="md-nav md-nav--secondary">
<label class="md-nav__title" for="__toc">
Contents
</label>
<ul class="md-nav__list" data-md-scrollfix="">
<li class="md-nav__item">
<a class="md-nav__link" href="#tutorials-installation--page-root">
Installation
</a>
<nav class="md-nav">
<ul class="md-nav__list">
<li class="md-nav__item">
<a class="md-nav__link" href="#precompiled-binaries">
Precompiled Binaries
</a>
<nav class="md-nav">
<ul class="md-nav__list">
<li class="md-nav__item">
<a class="md-nav__link" href="#dependencies">
Dependencies
</a>
</li>
<li class="md-nav__item">
<a class="md-nav__link" href="#python-package">
Python Package
</a>
</li>
<li class="md-nav__item">
<a class="md-nav__link" href="#c-binary-distribution">
C++ Binary Distribution
</a>
</li>
</ul>
</nav>
</li>
<li class="md-nav__item">
<a class="md-nav__link" href="#compiling-from-source">
Compiling From Source
</a>
<nav class="md-nav">
<ul class="md-nav__list">
<li class="md-nav__item">
<a class="md-nav__link" href="#dependencies-for-compilation">
Dependencies for Compilation
</a>
<nav class="md-nav">
<ul class="md-nav__list">
<li class="md-nav__item">
<a class="md-nav__link" href="#choosing-the-right-abi">
Choosing the Right ABI
</a>
</li>
</ul>
</nav>
</li>
<li class="md-nav__item">
<a class="md-nav__link" href="#building-using-cudnn-tensorrt-tarball-distributions">
<strong>
Building using cuDNN &amp; TensorRT tarball distributions
</strong>
</a>
<nav class="md-nav">
<ul class="md-nav__list">
<li class="md-nav__item">
<a class="md-nav__link" href="#release-build">
Release Build
</a>
</li>
<li class="md-nav__item">
<a class="md-nav__link" href="#debug-build">
Debug Build
</a>
</li>
<li class="md-nav__item">
<a class="md-nav__link" href="#pre-cxx11-abi-build">
Pre CXX11 ABI Build
</a>
</li>
</ul>
</nav>
</li>
<li class="md-nav__item">
<a class="md-nav__link" href="#building-using-locally-installed-cudnn-tensorrt">
<strong>
Building using locally installed cuDNN &amp; TensorRT
</strong>
</a>
<nav class="md-nav">
<ul class="md-nav__list">
<li class="md-nav__item">
<a class="md-nav__link" href="#id6">
Release Build
</a>
</li>
<li class="md-nav__item">
<a class="md-nav__link" href="#build-from-local-debug">
Debug Build
</a>
</li>
<li class="md-nav__item">
<a class="md-nav__link" href="#id8">
Pre CXX11 ABI Build
</a>
</li>
</ul>
</nav>
</li>
<li class="md-nav__item">
<a class="md-nav__link" href="#building-the-python-package">
<strong>
Building the Python package
</strong>
</a>
<nav class="md-nav">
<ul class="md-nav__list">
<li class="md-nav__item">
<a class="md-nav__link" href="#id9">
Debug Build
</a>
</li>
</ul>
</nav>
</li>
<li class="md-nav__item">
<a class="md-nav__link" href="#building-natively-on-aarch64-jetson">
<strong>
Building Natively on aarch64 (Jetson)
</strong>
</a>
<nav class="md-nav">
<ul class="md-nav__list">
<li class="md-nav__item">
<a class="md-nav__link" href="#prerequisites">
Prerequisites
</a>
</li>
<li class="md-nav__item">
<a class="md-nav__link" href="#enviorment-setup">
Environment Setup
</a>
</li>
<li class="md-nav__item">
<a class="md-nav__link" href="#compile-c-library-and-compiler-cli">
Compile C++ Library and Compiler CLI
</a>
</li>
<li class="md-nav__item">
<a class="md-nav__link" href="#compile-python-api">
Compile Python API
</a>
</li>
</ul>
</nav>
</li>
</ul>
</nav>
</li>
</ul>
</nav>
</li>
<li class="md-nav__item">
<a class="md-nav__extra_link" href="../_sources/tutorials/installation.rst.txt">
Show Source
</a>
</li>
<li class="md-nav__item" id="searchbox">
</li>
</ul>
</nav>
</div>
</div>
</div>
<div class="md-content">
<article class="md-content__inner md-typeset" role="main">
<span id="id1">
</span>
<h1 id="tutorials-installation--page-root">
Installation
<a class="headerlink" href="#tutorials-installation--page-root" title="Permalink to this headline">
</a>
</h1>
<h2 id="precompiled-binaries">
Precompiled Binaries
<a class="headerlink" href="#precompiled-binaries" title="Permalink to this headline">
</a>
</h2>
<h3 id="dependencies">
Dependencies
<a class="headerlink" href="#dependencies" title="Permalink to this headline">
</a>
</h3>
<p>
You need to have either PyTorch or LibTorch installed, depending on whether you are using Python or C++,
and you must have CUDA, cuDNN and TensorRT installed.
</p>
<blockquote>
<div>
<ul class="simple">
<li>
<p>
<a class="reference external" href="https://www.pytorch.org">
https://www.pytorch.org
</a>
</p>
</li>
<li>
<p>
<a class="reference external" href="https://developer.nvidia.com/cuda">
https://developer.nvidia.com/cuda
</a>
</p>
</li>
<li>
<p>
<a class="reference external" href="https://developer.nvidia.com/cudnn">
https://developer.nvidia.com/cudnn
</a>
</p>
</li>
<li>
<p>
<a class="reference external" href="https://developer.nvidia.com/tensorrt">
https://developer.nvidia.com/tensorrt
</a>
</p>
</li>
</ul>
</div>
</blockquote>
<h3 id="python-package">
Python Package
<a class="headerlink" href="#python-package" title="Permalink to this headline">
</a>
</h3>
<p>
You can install the Python package using:
</p>
<div class="highlight-sh notranslate">
<div class="highlight">
<pre><span></span>pip3 install torch-tensorrt -f https://github.com/NVIDIA/Torch-TensorRT/releases
</pre>
</div>
</div>
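After installing, you can sanity-check the package by printing its version (a quick probe, assuming a CUDA-capable environment where the wheel's dependencies are satisfied):

```shell
# Confirm Torch-TensorRT is importable and report the installed version
python3 -c "import torch_tensorrt; print(torch_tensorrt.__version__)"
```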
<span id="bin-dist">
</span>
<h3 id="c-binary-distribution">
C++ Binary Distribution
<a class="headerlink" href="#c-binary-distribution" title="Permalink to this headline">
</a>
</h3>
<p>
Precompiled tarballs for releases are provided here:
<a class="reference external" href="https://github.com/NVIDIA/Torch-TensorRT/releases">
https://github.com/NVIDIA/Torch-TensorRT/releases
</a>
</p>
<span id="compile-from-source">
</span>
<h2 id="compiling-from-source">
Compiling From Source
<a class="headerlink" href="#compiling-from-source" title="Permalink to this headline">
</a>
</h2>
<span id="installing-deps">
</span>
<h3 id="dependencies-for-compilation">
Dependencies for Compilation
<a class="headerlink" href="#dependencies-for-compilation" title="Permalink to this headline">
</a>
</h3>
<p>
Torch-TensorRT is built with Bazel, so begin by installing it.
</p>
<blockquote>
<div>
<ul class="simple">
<li>
<p>
The easiest way is to install bazelisk using the method of your choosing
<a class="reference external" href="https://github.com/bazelbuild/bazelisk">
https://github.com/bazelbuild/bazelisk
</a>
</p>
</li>
<li>
<p>
Otherwise you can use the following instructions to install binaries
<a class="reference external" href="https://docs.bazel.build/versions/master/install.html">
https://docs.bazel.build/versions/master/install.html
</a>
</p>
</li>
<li>
<p>
Finally, if you need to compile Bazel from source (e.g. on aarch64, until Bazel distributes binaries for that architecture), you can use these instructions
</p>
</li>
</ul>
<div class="highlight-shell notranslate">
<div class="highlight">
<pre><span></span><span class="nb">export</span> <span class="nv">BAZEL_VERSION</span><span class="o">=</span><span class="k">$(</span>cat &lt;PATH_TO_TORCHTRT_ROOT&gt;/.bazelversion<span class="k">)</span>
mkdir bazel
<span class="nb">cd</span> bazel
curl -fSsL -O https://github.com/bazelbuild/bazel/releases/download/<span class="nv">$BAZEL_VERSION</span>/bazel-<span class="nv">$BAZEL_VERSION</span>-dist.zip
unzip bazel-<span class="nv">$BAZEL_VERSION</span>-dist.zip
bash ./compile.sh
cp output/bazel /usr/local/bin/
</pre>
</div>
</div>
</div>
</blockquote>
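<p>
As a sanity check before building, you can confirm that the bazel on your PATH matches the version pinned in the repository. The sketch below is a hypothetical helper, not part of Torch-TensorRT; it only parses version strings, and the commented usage assumes bazel is installed and you are in a Torch-TensorRT checkout.
</p>

```python
import re

def parse_bazel_version(output: str) -> str:
    """Extract the version number from `bazel --version` output, e.g. 'bazel 5.2.0'."""
    match = re.search(r"bazel\s+(\d+\.\d+\.\d+)", output)
    if not match:
        raise ValueError(f"unrecognized bazel version output: {output!r}")
    return match.group(1)

def matches_pinned(version_output: str, pinned: str) -> bool:
    """Compare `bazel --version` output against the contents of .bazelversion."""
    return parse_bazel_version(version_output) == pinned.strip()

# Typical usage from a Torch-TensorRT checkout (assumes bazel is on PATH):
#   import subprocess
#   out = subprocess.run(["bazel", "--version"], capture_output=True, text=True).stdout
#   assert matches_pinned(out, open(".bazelversion").read())
```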
<p>
You will also need CUDA installed on the system (or, if running in a container, the system must have
the CUDA driver installed and the container must have CUDA).
</p>
<p>
The correct LibTorch version will be pulled down for you by bazel.
</p>
<blockquote>
<div>
<p>
NOTE: For best compatibility with official PyTorch, use torch==1.10.0+cu113, TensorRT 8.0 and cuDNN 8.2 with CUDA 11.3. However, Torch-TensorRT itself supports
TensorRT and cuDNN for other CUDA versions, for use cases such as NVIDIA-compiled distributions of PyTorch that use other versions of CUDA
(e.g. on aarch64) or custom-compiled versions of PyTorch.
</p>
</div>
</blockquote>
<span id="abis">
</span>
<h4 id="choosing-the-right-abi">
Choosing the Right ABI
<a class="headerlink" href="#choosing-the-right-abi" title="Permalink to this headline">
</a>
</h4>
<p>
Likely the most complicated part of compiling Torch-TensorRT is selecting the correct ABI. There are two options,
which are incompatible with each other: the pre-cxx11-abi and the cxx11-abi. The complexity comes from the fact that while
the most popular distribution of PyTorch (wheels downloaded from pytorch.org/PyPI directly) uses the pre-cxx11-abi, most
other distributions you might encounter (e.g. ones from NVIDIA, such as NGC containers and builds for Jetson, as well as certain
libtorch builds, and likely PyTorch if you build it from source) use the cxx11-abi. It is important that you compile Torch-TensorRT
with the correct ABI for it to function properly. Below is a table with general pairings of PyTorch distribution sources and the
recommended commands:
</p>
<table>
<colgroup>
<col style="width: 33%"/>
<col style="width: 31%"/>
<col style="width: 36%"/>
</colgroup>
<thead>
<tr class="row-odd">
<th class="head">
<p>
PyTorch Source
</p>
</th>
<th class="head">
<p>
Recommended C++ Compilation Command
</p>
</th>
<th class="head">
<p>
Recommended Python Compilation Command
</p>
</th>
</tr>
</thead>
<tbody>
<tr class="row-even">
<td>
<p>
PyTorch whl file from PyTorch.org
</p>
</td>
<td>
<p>
bazel build //:libtorchtrt -c opt --config pre_cxx11_abi
</p>
</td>
<td>
<p>
python3 setup.py bdist_wheel
</p>
</td>
</tr>
<tr class="row-odd">
<td>
<p>
libtorch-shared-with-deps-*.zip from PyTorch.org
</p>
</td>
<td>
<p>
bazel build //:libtorchtrt -c opt --config pre_cxx11_abi
</p>
</td>
<td>
<p>
python3 setup.py bdist_wheel
</p>
</td>
</tr>
<tr class="row-even">
<td>
<p>
libtorch-cxx11-abi-shared-with-deps-*.zip from PyTorch.org
</p>
</td>
<td>
<p>
bazel build //:libtorchtrt -c opt
</p>
</td>
<td>
<p>
python3 setup.py bdist_wheel --use-cxx11-abi
</p>
</td>
</tr>
<tr class="row-odd">
<td>
<p>
PyTorch preinstalled in an NGC container
</p>
</td>
<td>
<p>
bazel build //:libtorchtrt -c opt
</p>
</td>
<td>
<p>
python3 setup.py bdist_wheel --use-cxx11-abi
</p>
</td>
</tr>
<tr class="row-even">
<td>
<p>
PyTorch from the NVIDIA Forums for Jetson
</p>
</td>
<td>
<p>
bazel build //:libtorchtrt -c opt
</p>
</td>
<td>
<p>
python3 setup.py bdist_wheel --jetpack-version 4.6 --use-cxx11-abi
</p>
</td>
</tr>
<tr class="row-odd">
<td>
<p>
PyTorch built from Source
</p>
</td>
<td>
<p>
bazel build //:libtorchtrt -c opt
</p>
</td>
<td>
<p>
python3 setup.py bdist_wheel --use-cxx11-abi
</p>
</td>
</tr>
</tbody>
</table>
<blockquote>
<div>
<p>
NOTE: For all of the above cases you must correctly declare the source of PyTorch you intend to use in your WORKSPACE file for both Python and C++ builds. See below for more information
</p>
</div>
</blockquote>
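<p>
To make the pairing concrete, the helper below encodes the rule from the table: builds against a cxx11-abi PyTorch use the plain build command, while pre-cxx11-abi builds add the pre_cxx11_abi config. The function itself is a hypothetical sketch; for an installed PyTorch, the ABI can be queried with torch.compiled_with_cxx11_abi(), which is part of the PyTorch Python API.
</p>

```python
def libtorchtrt_build_command(cxx11_abi: bool) -> str:
    """Recommended C++ build command for a given PyTorch ABI (see table above)."""
    cmd = "bazel build //:libtorchtrt -c opt"
    if not cxx11_abi:
        cmd += " --config pre_cxx11_abi"
    return cmd

# For an installed PyTorch (requires torch):
#   import torch
#   libtorchtrt_build_command(torch.compiled_with_cxx11_abi())
```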
<p>
You then have two compilation options:
</p>
<span id="build-from-archive">
</span>
<h3 id="building-using-cudnn-tensorrt-tarball-distributions">
<strong>
Building using cuDNN &amp; TensorRT tarball distributions
</strong>
<a class="headerlink" href="#building-using-cudnn-tensorrt-tarball-distributions" title="Permalink to this headline">
</a>
</h3>
<blockquote>
<div>
<p>
This is recommended, as it builds Torch-TensorRT hermetically and ensures that any compilation errors are not caused by version issues
</p>
<p>
Make sure when running Torch-TensorRT that these versions of the libraries are prioritized in your
<code class="docutils literal notranslate">
<span class="pre">
$LD_LIBRARY_PATH
</span>
</code>
</p>
</div>
</blockquote>
<dl class="simple">
<dt>
You need to download the tarball distributions of TensorRT and cuDNN from the NVIDIA website.
</dt>
<dd>
<ul class="simple">
<li>
<p>
<a class="reference external" href="https://developer.nvidia.com/cudnn">
https://developer.nvidia.com/cudnn
</a>
</p>
</li>
<li>
<p>
<a class="reference external" href="https://developer.nvidia.com/tensorrt">
https://developer.nvidia.com/tensorrt
</a>
</p>
</li>
</ul>
</dd>
</dl>
<p>
Place these files in a directory (the directories
<code class="docutils literal notranslate">
<span class="pre">
third_party/distdir/[x86_64-linux-gnu
</span>
<span class="pre">
|
</span>
<span class="pre">
aarch64-linux-gnu]
</span>
</code>
exist for this purpose)
</p>
<p>
Then compile referencing the directory with the tarballs
</p>
<blockquote>
<div>
<p>
If you get errors regarding the packages, check their sha256 hashes and make sure they match the ones listed in
<code class="docutils literal notranslate">
<span class="pre">
WORKSPACE
</span>
</code>
</p>
</div>
</blockquote>
<h4 id="release-build">
Release Build
<a class="headerlink" href="#release-build" title="Permalink to this headline">
</a>
</h4>
<div class="highlight-shell notranslate">
<div class="highlight">
<pre><span></span>bazel build //:libtorchtrt -c opt --distdir third_party/distdir/<span class="o">[</span>x86_64-linux-gnu <span class="p">|</span> aarch64-linux-gnu<span class="o">]</span>
</pre>
</div>
</div>
<p>
A tarball with the include files and library can then be found in
<code class="docutils literal notranslate">
<span class="pre">
bazel-bin
</span>
</code>
</p>
<span id="build-from-archive-debug">
</span>
<h4 id="debug-build">
Debug Build
<a class="headerlink" href="#debug-build" title="Permalink to this headline">
</a>
</h4>
<p>
To build with debug symbols use the following command
</p>
<div class="highlight-shell notranslate">
<div class="highlight">
<pre><span></span>bazel build //:libtorchtrt -c dbg --distdir third_party/distdir/<span class="o">[</span>x86_64-linux-gnu <span class="p">|</span> aarch64-linux-gnu<span class="o">]</span>
</pre>
</div>
</div>
<p>
A tarball with the include files and library can then be found in
<code class="docutils literal notranslate">
<span class="pre">
bazel-bin
</span>
</code>
</p>
<h4 id="pre-cxx11-abi-build">
Pre CXX11 ABI Build
<a class="headerlink" href="#pre-cxx11-abi-build" title="Permalink to this headline">
</a>
</h4>
<p>
To build using the pre-CXX11 ABI use the
<code class="docutils literal notranslate">
<span class="pre">
pre_cxx11_abi
</span>
</code>
config
</p>
<div class="highlight-shell notranslate">
<div class="highlight">
<pre><span></span>bazel build //:libtorchtrt --config pre_cxx11_abi -c <span class="o">[</span>dbg/opt<span class="o">]</span> --distdir third_party/distdir/<span class="o">[</span>x86_64-linux-gnu <span class="p">|</span> aarch64-linux-gnu<span class="o">]</span>
</pre>
</div>
</div>
<p>
A tarball with the include files and library can then be found in
<code class="docutils literal notranslate">
<span class="pre">
bazel-bin
</span>
</code>
</p>
<span id="build-from-local">
</span>
<h3 id="building-using-locally-installed-cudnn-tensorrt">
<strong>
Building using locally installed cuDNN &amp; TensorRT
</strong>
<a class="headerlink" href="#building-using-locally-installed-cudnn-tensorrt" title="Permalink to this headline">
</a>
</h3>
<blockquote>
<div>
<p>
If you encounter bugs and you compiled using this method please disclose that you used local sources in the issue (an ldd dump would be nice too)
</p>
</div>
</blockquote>
<p>
Install TensorRT, CUDA and cuDNN on the system before starting to compile.
</p>
<p>
In WORKSPACE comment out:
</p>
<div class="highlight-python notranslate">
<div class="highlight">
<pre><span></span><span class="c1"># Downloaded distributions to use with --distdir</span>
<span class="n">http_archive</span><span class="p">(</span>
<span class="n">name</span> <span class="o">=</span> <span class="s2">"cudnn"</span><span class="p">,</span>
<span class="n">urls</span> <span class="o">=</span> <span class="p">[</span><span class="s2">"&lt;URL&gt;"</span><span class="p">,],</span>
<span class="n">build_file</span> <span class="o">=</span> <span class="s2">"@//third_party/cudnn/archive:BUILD"</span><span class="p">,</span>
<span class="n">sha256</span> <span class="o">=</span> <span class="s2">"&lt;TAR SHA256&gt;"</span><span class="p">,</span>
<span class="n">strip_prefix</span> <span class="o">=</span> <span class="s2">"cuda"</span>
<span class="p">)</span>
<span class="n">http_archive</span><span class="p">(</span>
<span class="n">name</span> <span class="o">=</span> <span class="s2">"tensorrt"</span><span class="p">,</span>
<span class="n">urls</span> <span class="o">=</span> <span class="p">[</span><span class="s2">"&lt;URL&gt;"</span><span class="p">,],</span>
<span class="n">build_file</span> <span class="o">=</span> <span class="s2">"@//third_party/tensorrt/archive:BUILD"</span><span class="p">,</span>
<span class="n">sha256</span> <span class="o">=</span> <span class="s2">"&lt;TAR SHA256&gt;"</span><span class="p">,</span>
<span class="n">strip_prefix</span> <span class="o">=</span> <span class="s2">"TensorRT-&lt;VERSION&gt;"</span>
<span class="p">)</span>
</pre>
</div>
</div>
<p>
and uncomment
</p>
<div class="highlight-python notranslate">
<div class="highlight">
<pre><span></span><span class="c1"># Locally installed dependencies</span>
<span class="n">new_local_repository</span><span class="p">(</span>
<span class="n">name</span> <span class="o">=</span> <span class="s2">"cudnn"</span><span class="p">,</span>
<span class="n">path</span> <span class="o">=</span> <span class="s2">"/usr/"</span><span class="p">,</span>
<span class="n">build_file</span> <span class="o">=</span> <span class="s2">"@//third_party/cudnn/local:BUILD"</span>
<span class="p">)</span>
<span class="n">new_local_repository</span><span class="p">(</span>
<span class="n">name</span> <span class="o">=</span> <span class="s2">"tensorrt"</span><span class="p">,</span>
<span class="n">path</span> <span class="o">=</span> <span class="s2">"/usr/"</span><span class="p">,</span>
<span class="n">build_file</span> <span class="o">=</span> <span class="s2">"@//third_party/tensorrt/local:BUILD"</span>
<span class="p">)</span>
</pre>
</div>
</div>
<h4 id="id6">
Release Build
<a class="headerlink" href="#id6" title="Permalink to this headline">
</a>
</h4>
<p>
Compile using:
</p>
<div class="highlight-shell notranslate">
<div class="highlight">
<pre><span></span>bazel build //:libtorchtrt -c opt
</pre>
</div>
</div>
<p>
A tarball with the include files and library can then be found in
<code class="docutils literal notranslate">
<span class="pre">
bazel-bin
</span>
</code>
</p>
<span id="id7">
</span>
<h4 id="build-from-local-debug">
Debug Build
<a class="headerlink" href="#build-from-local-debug" title="Permalink to this headline">
</a>
</h4>
<p>
To build with debug symbols use the following command
</p>
<div class="highlight-shell notranslate">
<div class="highlight">
<pre><span></span>bazel build //:libtorchtrt -c dbg
</pre>
</div>
</div>
<p>
A tarball with the include files and library can then be found in
<code class="docutils literal notranslate">
<span class="pre">
bazel-bin
</span>
</code>
</p>
<h4 id="id8">
Pre CXX11 ABI Build
<a class="headerlink" href="#id8" title="Permalink to this headline">
</a>
</h4>
<p>
To build using the pre-CXX11 ABI use the
<code class="docutils literal notranslate">
<span class="pre">
pre_cxx11_abi
</span>
</code>
config
</p>
<div class="highlight-shell notranslate">
<div class="highlight">
<pre><span></span>bazel build //:libtorchtrt --config pre_cxx11_abi -c <span class="o">[</span>dbg/opt<span class="o">]</span>
</pre>
</div>
</div>
<h3 id="building-the-python-package">
<strong>
Building the Python package
</strong>
<a class="headerlink" href="#building-the-python-package" title="Permalink to this headline">
</a>
</h3>
<p>
Begin by installing
<code class="docutils literal notranslate">
<span class="pre">
ninja
</span>
</code>
</p>
<p>
You can build the Python package using
<code class="docutils literal notranslate">
<span class="pre">
setup.py
</span>
</code>
(this will also build the correct version of
<code class="docutils literal notranslate">
<span class="pre">
libtorchtrt.so
</span>
</code>
)
</p>
<div class="highlight-shell notranslate">
<div class="highlight">
<pre><span></span>python3 setup.py <span class="o">[</span>install/bdist_wheel<span class="o">]</span>
</pre>
</div>
</div>
<h4 id="id9">
Debug Build
<a class="headerlink" href="#id9" title="Permalink to this headline">
</a>
</h4>
<div class="highlight-shell notranslate">
<div class="highlight">
<pre><span></span>python3 setup.py develop <span class="o">[</span>--user<span class="o">]</span>
</pre>
</div>
</div>
<p>
This also compiles a debug build of
<code class="docutils literal notranslate">
<span class="pre">
libtorchtrt.so
</span>
</code>
</p>
<h3 id="building-natively-on-aarch64-jetson">
<strong>
Building Natively on aarch64 (Jetson)
</strong>
<a class="headerlink" href="#building-natively-on-aarch64-jetson" title="Permalink to this headline">
</a>
</h3>
<h4 id="prerequisites">
Prerequisites
<a class="headerlink" href="#prerequisites" title="Permalink to this headline">
</a>
</h4>
<p>
Install or compile a build of PyTorch/LibTorch for aarch64
</p>
<p>
NVIDIA hosts builds of the latest release branch for Jetson here:
</p>
<blockquote>
<div>
<p>
<a class="reference external" href="https://forums.developer.nvidia.com/t/pytorch-for-jetson-version-1-10-now-available/72048">
https://forums.developer.nvidia.com/t/pytorch-for-jetson-version-1-10-now-available/72048
</a>
</p>
</div>
</blockquote>
<h4 id="enviorment-setup">
Environment Setup
<a class="headerlink" href="#enviorment-setup" title="Permalink to this headline">
</a>
</h4>
<p>
To build natively on aarch64-linux-gnu platform, configure the
<code class="docutils literal notranslate">
<span class="pre">
WORKSPACE
</span>
</code>
with locally available dependencies.
</p>
<ol class="arabic simple">
<li>
<p>
Disable the rules with
<code class="docutils literal notranslate">
<span class="pre">
http_archive
</span>
</code>
for x86_64 by commenting out the following rules:
</p>
</li>
</ol>
<div class="highlight-shell notranslate">
<div class="highlight">
<pre><span></span><span class="c1">#http_archive(</span>
<span class="c1"># name = "libtorch",</span>
<span class="c1"># build_file = "@//third_party/libtorch:BUILD",</span>
<span class="c1"># strip_prefix = "libtorch",</span>
<span class="c1"># urls = ["https://download.pytorch.org/libtorch/cu102/libtorch-cxx11-abi-shared-with-deps-1.5.1.zip"],</span>
<span class="c1"># sha256 = "cf0691493d05062fe3239cf76773bae4c5124f4b039050dbdd291c652af3ab2a"</span>
<span class="c1">#)</span>
<span class="c1">#http_archive(</span>
<span class="c1"># name = "libtorch_pre_cxx11_abi",</span>
<span class="c1"># build_file = "@//third_party/libtorch:BUILD",</span>
<span class="c1"># strip_prefix = "libtorch",</span>
<span class="c1"># sha256 = "818977576572eadaf62c80434a25afe44dbaa32ebda3a0919e389dcbe74f8656",</span>
<span class="c1"># urls = ["https://download.pytorch.org/libtorch/cu102/libtorch-shared-with-deps-1.5.1.zip"],</span>
<span class="c1">#)</span>
<span class="c1"># Download these tarballs manually from the NVIDIA website</span>
<span class="c1"># Either place them in the distdir directory in third_party and use the --distdir flag</span>
<span class="c1"># or modify the urls to "file:///&lt;PATH TO TARBALL&gt;/&lt;TARBALL NAME&gt;.tar.gz"</span>
<span class="c1">#http_archive(</span>
<span class="c1"># name = "cudnn",</span>
<span class="c1"># urls = ["https://developer.nvidia.com/compute/machine-learning/cudnn/secure/8.0.1.13/10.2_20200626/cudnn-10.2-linux-x64-v8.0.1.13.tgz"],</span>
<span class="c1"># build_file = "@//third_party/cudnn/archive:BUILD",</span>
<span class="c1"># sha256 = "0c106ec84f199a0fbcf1199010166986da732f9b0907768c9ac5ea5b120772db",</span>
<span class="c1"># strip_prefix = "cuda"</span>
<span class="c1">#)</span>
<span class="c1">#http_archive(</span>
<span class="c1"># name = "tensorrt",</span>
<span class="c1"># urls = ["https://developer.nvidia.com/compute/machine-learning/tensorrt/secure/7.1/tars/TensorRT-7.1.3.4.Ubuntu-18.04.x86_64-gnu.cuda-10.2.cudnn8.0.tar.gz"],</span>
<span class="c1"># build_file = "@//third_party/tensorrt/archive:BUILD",</span>
<span class="c1"># sha256 = "9205bed204e2ae7aafd2e01cce0f21309e281e18d5bfd7172ef8541771539d41",</span>
<span class="c1"># strip_prefix = "TensorRT-7.1.3.4"</span>
<span class="c1">#)</span>
<span class="c1"># NOTE: You may also need to configure the CUDA version to 10.2 by setting the path for the cuda new_local_repository</span>
</pre>
</div>
</div>
<ol class="arabic" start="2">
<li>
<p>
Configure the correct paths to directory roots containing local dependencies in the
<code class="docutils literal notranslate">
<span class="pre">
new_local_repository
</span>
</code>
rules:
</p>
<blockquote>
<div>
<p>
NOTE: If you installed PyTorch using a pip package, the correct path is the path to the root of the python torch package.
In the case that you installed with
<code class="docutils literal notranslate">
<span class="pre">
sudo
</span>
<span class="pre">
pip
</span>
<span class="pre">
install
</span>
</code>
this will be
<code class="docutils literal notranslate">
<span class="pre">
/usr/local/lib/python3.6/dist-packages/torch
</span>
</code>
.
In the case you installed with
<code class="docutils literal notranslate">
<span class="pre">
pip
</span>
<span class="pre">
install
</span>
<span class="pre">
--user
</span>
</code>
this will be
<code class="docutils literal notranslate">
<span class="pre">
$HOME/.local/lib/python3.6/site-packages/torch
</span>
</code>
.
</p>
</div>
</blockquote>
</li>
</ol>
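<p>
If you are unsure where pip installed the torch package, you can ask Python directly rather than guessing between dist-packages and site-packages. The helper below is a hypothetical sketch; the directory it returns is what goes in the path attribute of the libtorch new_local_repository rules.
</p>

```python
import importlib.util
import os

def package_root(name: str) -> str:
    """Return the installed root directory of a Python package (e.g. 'torch')."""
    spec = importlib.util.find_spec(name)
    if spec is None or spec.origin is None:
        raise ModuleNotFoundError(f"{name} is not installed")
    # spec.origin points at the package's __init__.py; its directory is the root.
    return os.path.dirname(spec.origin)

# Usage (requires PyTorch installed):
#   package_root("torch")  # e.g. /usr/local/lib/python3.6/dist-packages/torch
```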
<p>
In the case you are using NVIDIA-compiled pip packages, set the path for both libtorch sources to the same path. This is because, unlike
PyTorch on x86_64, NVIDIA aarch64 PyTorch uses the CXX11 ABI. If you compiled from source using the pre-cxx11 ABI and only want to
use that library, set both paths to the same path, but when you compile make sure to add the flag
<code class="docutils literal notranslate">
<span class="pre">
--config=pre_cxx11_abi
</span>
</code>
</p>
<div class="highlight-shell notranslate">
<div class="highlight">
<pre><span></span>new_local_repository<span class="o">(</span>
<span class="nv">name</span> <span class="o">=</span> <span class="s2">"libtorch"</span>,
<span class="nv">path</span> <span class="o">=</span> <span class="s2">"/usr/local/lib/python3.6/dist-packages/torch"</span>,
<span class="nv">build_file</span> <span class="o">=</span> <span class="s2">"third_party/libtorch/BUILD"</span>
<span class="o">)</span>
new_local_repository<span class="o">(</span>
<span class="nv">name</span> <span class="o">=</span> <span class="s2">"libtorch_pre_cxx11_abi"</span>,
<span class="nv">path</span> <span class="o">=</span> <span class="s2">"/usr/local/lib/python3.6/dist-packages/torch"</span>,
<span class="nv">build_file</span> <span class="o">=</span> <span class="s2">"third_party/libtorch/BUILD"</span>
<span class="o">)</span>
new_local_repository<span class="o">(</span>
<span class="nv">name</span> <span class="o">=</span> <span class="s2">"cudnn"</span>,
<span class="nv">path</span> <span class="o">=</span> <span class="s2">"/usr/"</span>,
<span class="nv">build_file</span> <span class="o">=</span> <span class="s2">"@//third_party/cudnn/local:BUILD"</span>
<span class="o">)</span>
new_local_repository<span class="o">(</span>
<span class="nv">name</span> <span class="o">=</span> <span class="s2">"tensorrt"</span>,
<span class="nv">path</span> <span class="o">=</span> <span class="s2">"/usr/"</span>,
<span class="nv">build_file</span> <span class="o">=</span> <span class="s2">"@//third_party/tensorrt/local:BUILD"</span>
<span class="o">)</span>
</pre>
</div>
</div>
<h4 id="compile-c-library-and-compiler-cli">
Compile C++ Library and Compiler CLI
<a class="headerlink" href="#compile-c-library-and-compiler-cli" title="Permalink to this headline">
</a>
</h4>
<blockquote>
<div>
<p>
NOTE: Due to shifting dependency locations between Jetpack 4.5 and 4.6 there is now a flag to inform bazel of the Jetpack version
</p>
<div class="highlight-shell notranslate">
<div class="highlight">
<pre><span></span>--platforms //toolchains:jetpack_4.x
</pre>
</div>
</div>
</div>
</blockquote>
<p>
Compile Torch-TensorRT library using bazel command:
</p>
<div class="highlight-shell notranslate">
<div class="highlight">
<pre><span></span>bazel build //:libtorchtrt --platforms //toolchains:jetpack_4.6
</pre>
</div>
</div>
<h4 id="compile-python-api">
Compile Python API
<a class="headerlink" href="#compile-python-api" title="Permalink to this headline">
</a>
</h4>
<blockquote>
<div>
<p>
NOTE: Due to shifting dependency locations between Jetpack 4.5 and Jetpack 4.6 there is now a flag for
<code class="docutils literal notranslate">
<span class="pre">
setup.py
</span>
</code>
which sets the jetpack version (default: 4.6)
</p>
</div>
</blockquote>
<p>
Compile the Python API using the following command from the
<code class="docutils literal notranslate">
<span class="pre">
//py
</span>
</code>
directory:
</p>
<div class="highlight-shell notranslate">
<div class="highlight">
<pre><span></span>python3 setup.py install --use-cxx11-abi
</pre>
</div>
</div>
<p>
If you have a build of PyTorch that uses Pre-CXX11 ABI drop the
<code class="docutils literal notranslate">
<span class="pre">
--use-cxx11-abi
</span>
</code>
flag
</p>
<p>
If you are building for Jetpack 4.5 add the
<code class="docutils literal notranslate">
<span class="pre">
--jetpack-version
</span>
<span class="pre">
4.5
</span>
</code>
flag
</p>
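<p>
Putting the Jetson-specific flags together, the sketch below (a hypothetical helper mirroring the flags described above) assembles the setup.py invocation from the ABI of your PyTorch build and the target Jetpack version.
</p>

```python
def jetson_setup_command(cxx11_abi: bool = True, jetpack_version: str = "4.6") -> str:
    """Assemble the Python API build command for Jetson from the flags above."""
    args = ["python3", "setup.py", "install"]
    if cxx11_abi:
        args.append("--use-cxx11-abi")
    if jetpack_version != "4.6":  # setup.py defaults to Jetpack 4.6
        args += ["--jetpack-version", jetpack_version]
    return " ".join(args)
```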
</article>
</div>
</div>
</main>
</div>
<footer class="md-footer">
<div class="md-footer-nav">
<nav class="md-footer-nav__inner md-grid">
<a class="md-flex md-footer-nav__link md-footer-nav__link--prev" href="../index.html" rel="prev" title="Torch-TensorRT">
<div class="md-flex__cell md-flex__cell--shrink">
<i class="md-icon md-icon--arrow-back md-footer-nav__button">
</i>
</div>
<div class="md-flex__cell md-flex__cell--stretch md-footer-nav__title">
<span class="md-flex__ellipsis">
<span class="md-footer-nav__direction">
Previous
</span>
Torch-TensorRT
</span>
</div>
</a>
<a class="md-flex md-footer-nav__link md-footer-nav__link--next" href="getting_started_with_cpp_api.html" rel="next" title="Getting Started with C++">
<div class="md-flex__cell md-flex__cell--stretch md-footer-nav__title">
<span class="md-flex__ellipsis">
<span class="md-footer-nav__direction">
Next
</span>
Getting Started with C++
</span>
</div>
<div class="md-flex__cell md-flex__cell--shrink">
<i class="md-icon md-icon--arrow-forward md-footer-nav__button">
</i>
</div>
</a>
</nav>
</div>
<div class="md-footer-meta md-typeset">
<div class="md-footer-meta__inner md-grid">
<div class="md-footer-copyright">
<div class="md-footer-copyright__highlight">
© Copyright 2021, NVIDIA Corporation.
</div>
Created using
<a href="http://www.sphinx-doc.org/">
Sphinx
</a>
3.1.2.
and
<a href="https://github.com/bashtage/sphinx-material/">
Material for
Sphinx
</a>
</div>
</div>
</div>
</footer>
<script src="../_static/javascripts/application.js">
</script>
<script>
app.initialize({version: "1.0.4", url: {base: ".."}})
</script>
</body>
</html>