TIM-VX is a software integration module provided by VeriSilicon to facilitate the deployment of neural networks on VeriSilicon ML accelerators. It serves as the backend binding for runtime frameworks such as Android NN, TensorFlow-Lite, MLIR, TVM, and more.
Main Features
Feel free to raise a GitHub issue if you would like TIM-VX support added for other frameworks.
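As a quick orientation before the build steps, the snippet below sketches the typical TIM-VX C++ flow: create a context and a graph, bind tensors to an operation, compile, then run. It is a minimal illustration (a ReLU over four floats) assuming the public headers under include/tim/vx; the code in samples/lenet is the authoritative reference, and the program must be linked against libtim-vx.so and the OpenVX driver to actually run.

```cpp
// Minimal sketch (not taken from the samples): run a ReLU over four floats.
#include <array>
#include <iostream>

#include "tim/vx/context.h"
#include "tim/vx/graph.h"
#include "tim/vx/ops/activations.h"
#include "tim/vx/tensor.h"

int main() {
  // A context owns the device/driver handles; graphs are created inside it.
  auto context = tim::vx::Context::Create();
  auto graph = context->CreateGraph();

  // Describe the input/output tensors: float32, shape {4}.
  tim::vx::ShapeType shape({4});
  tim::vx::TensorSpec input_spec(tim::vx::DataType::FLOAT32, shape,
                                 tim::vx::TensorAttribute::INPUT);
  tim::vx::TensorSpec output_spec(tim::vx::DataType::FLOAT32, shape,
                                  tim::vx::TensorAttribute::OUTPUT);
  auto input = graph->CreateTensor(input_spec);
  auto output = graph->CreateTensor(output_spec);

  // Add a ReLU op and bind its I/O tensors.
  auto relu = graph->CreateOperation<tim::vx::ops::Relu>();
  (*relu).BindInput(input).BindOutput(output);

  // Compile once, then copy data in, run, and copy results out.
  if (!graph->Compile()) return -1;
  std::array<float, 4> in_data = {-1.0f, 0.0f, 0.5f, 2.0f};
  std::array<float, 4> out_data = {};
  input->CopyDataToTensor(in_data.data(), in_data.size() * sizeof(float));
  if (!graph->Run()) return -1;
  output->CopyDataFromTensor(out_data.data());

  for (float v : out_data) std::cout << v << " ";
  std::cout << std::endl;
  return 0;
}
```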
TIM-VX supports both Bazel and CMake builds.
To build TIM-VX for x86_64 Linux with the prebuilt SDK:

```bash
mkdir host_build
cd host_build
cmake ..
make -j8
make install
```
All installed files (both headers and *.so) are located in `host_build/install`.
CMake options:

| Option name | Summary | Default |
| --- | --- | --- |
| TIM_VX_ENABLE_TEST | Enable unit test cases for public APIs and ops | OFF |
| TIM_VX_ENABLE_LAYOUT_INFER | Build with tensor data layout inference support | ON |
| TIM_VX_USE_EXTERNAL_OVXLIB | Replace the internal ovxlib with a prebuilt libovxlib library | OFF |
| OVXLIB_LIB | Full path to libovxlib.so (including the .so name); required if TIM_VX_USE_EXTERNAL_OVXLIB=ON | Not set |
| OVXLIB_INC | ovxlib's include path; required if TIM_VX_USE_EXTERNAL_OVXLIB=ON | Not set |
| EXTERNAL_VIV_SDK | Path to external Vivante OpenVX driver libraries | Not set |
| TIM_VX_BUILD_EXAMPLES | Build example applications | OFF |
| TIM_VX_ENABLE_40BIT | Enable large-memory (over 4 GB) support in the NPU driver | OFF |
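For example, a build that enables the unit tests and swaps in an external ovxlib could be configured as follows; the paths in angle brackets are placeholders:

```bash
cd host_build
cmake .. \
    -DTIM_VX_ENABLE_TEST=ON \
    -DTIM_VX_USE_EXTERNAL_OVXLIB=ON \
    -DOVXLIB_LIB=<path to>/libovxlib.so \
    -DOVXLIB_INC=<path to>/ovxlib/include
make -j8
make install
```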
Run the unit tests:

```bash
cd host_build/src/tim
export LD_LIBRARY_PATH=`pwd`/../../../prebuilt-sdk/x86_64_linux/lib:<path to libgtest_main.so>:$LD_LIBRARY_PATH
export VIVANTE_SDK_DIR=`pwd`/../../../prebuilt-sdk/x86_64_linux/
export VSIMULATOR_CONFIG=<hardware name, obtained from the chip vendor>
# if you want to debug with gdb, also set
export DISABLE_IDE_DEBUG=1
./unit_test
```
To build the unit tests with a local googletest checkout instead of letting CMake fetch it:

```bash
cd <wksp_root>
git clone --depth 1 -b release-1.10.0 git@github.com:google/googletest.git
cd <root_tim_vx>/build/
cmake ../ -DTIM_VX_ENABLE_TEST=ON -DFETCHCONTENT_SOURCE_DIR_GOOGLETEST=<wksp_root>/googletest <add other cmake defines here>
```
To build against a locally built OpenVX driver instead of the prebuilt SDK, add `-DEXTERNAL_VIV_SDK=<low-level-driver/out/sdk>` to the CMake definitions. When cross compiling for an embedded Linux target (e.g. Buildroot), also remember `-DCMAKE_TOOLCHAIN_FILE=<Toolchain_Config>` together with `-DCONFIG=BUILDROOT -DCMAKE_SYSROOT=${CMAKE_SYSROOT} -DEXTERNAL_VIV_SDK=${BUILDROOT_SYSROOT}`.
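Putting these together, a full cross-compile configuration could look like the sketch below; the toolchain file, sysroot, and driver SDK paths are placeholders for your board's Buildroot output:

```bash
cd <root_tim_vx>/build/
cmake .. \
    -DCMAKE_TOOLCHAIN_FILE=<Toolchain_Config> \
    -DCONFIG=BUILDROOT \
    -DCMAKE_SYSROOT=${CMAKE_SYSROOT} \
    -DEXTERNAL_VIV_SDK=${BUILDROOT_SYSROOT}
make -j8
make install
```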
If you want to build TIM-VX as a static library and link it into your shared library or application, be careful with the linker: `-Wl,--whole-archive` is required.
See samples/lenet/CMakeLists.txt for reference.
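For illustration, a hypothetical CMakeLists.txt fragment that links the static archive into a shared library could look like this, assuming the library target is named `tim-vx` and `my_backend` is your own target:

```cmake
# Hypothetical fragment: wrap the static tim-vx archive in --whole-archive so
# the linker keeps all of its objects (e.g. op registrations).
add_library(my_backend SHARED my_backend.cc)
target_link_libraries(my_backend PRIVATE
    -Wl,--whole-archive tim-vx -Wl,--no-whole-archive)
```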
Install Bazel to get started.
TIM-VX needs to be compiled and linked against the VeriSilicon OpenVX SDK, which provides the required header files and pre-compiled libraries. A default linux-x86_64 SDK containing the PC simulation environment is provided; platform-specific SDKs can be obtained from the respective SoC vendors.
To build TIM-VX:

```bash
bazel build libtim-vx.so
```
To run the LeNet sample:

```bash
# set VIVANTE_SDK_DIR for the runtime compilation environment
export VIVANTE_SDK_DIR=`pwd`/prebuilt-sdk/x86_64_linux
bazel build //samples/lenet:lenet_asymu8_cc
bazel run //samples/lenet:lenet_asymu8_cc
```
To build and run TensorFlow-Lite with TIM-VX, please see its README.
To build and run TVM with TIM-VX, please see the TVM README.
| Chip | Vendor | References | Success Stories |
| --- | --- | --- | --- |
| i.MX 8M Plus | NXP | ML Guide, BSP | SageMaker with 8MP |
| A311D | Khadas - VIM3 | A311D datasheet, BSP | Paddle-Lite demo |
| S905D3 | Khadas - VIM3L | S905D3, BSP | |
Create an issue on GitHub or email ML_Support at verisilicon dot com.