Aidge Export: TensorRT
The aim of this module is to provide an export to the TensorRT SDK via the Aidge framework.
Requirements
To compile the export on your machine, make sure one of these two conditions is met:
- Docker is installed (the export compilation chain can use Docker)
- The packages required to support TensorRT 8.6 are installed
Install
To install the aidge_export_tensorrt module, go to your aidge/aidge/ directory, clone the module, and install it:
git clone https://gitlab.eclipse.org/eclipse/aidge/aidge_export_tensorrt.git
cd aidge_export_tensorrt/
pip install .
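Once the installation completes, a quick sanity check is to import the module in Python and confirm that the export entry point described in the next section is available:

```python
import aidge_export_tensorrt

# The module should expose the export() function documented below.
print(hasattr(aidge_export_tensorrt, "export"))
```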
Usage
To use the aidge_export_tensorrt module, import it in Python and call the export function. This function takes as arguments the name of the export folder and either the ONNX file or the GraphView of your model.
import aidge_export_tensorrt
aidge_export_tensorrt.export("export_trt", "model.onnx")
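The export function also accepts a GraphView directly. Below is a minimal sketch of that variant, assuming the aidge_onnx module is used to load the ONNX model into a GraphView first; the loader name may differ in your Aidge version:

```python
import aidge_onnx
import aidge_export_tensorrt

# Load the ONNX model into an Aidge GraphView
# (assumes the aidge_onnx.load_onnx loader).
graph_view = aidge_onnx.load_onnx("model.onnx")

# Export the GraphView instead of the raw ONNX file.
aidge_export_tensorrt.export("export_trt", graph_view)
```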
The export provides a Makefile with several options for using it on your machine. You can generate either a C++ export or a Python export.
Additionally, you have the option to compile the export and/or the Python library using Docker if your host machine lacks the necessary packages.
The available commands are summarized in the following table:
Command | Description |
---|---|
make / make help | Display the different options available |
make build_cpp | Compile the export on the host for C++ apps (generates an executable in build/bin) |
make build_lib_python | Compile the export on the host for Python apps (generates a Python lib in build/lib) |
make build_image_docker | Generate the Docker image of the TensorRT compiler |
make build_cpp_docker | Compile the export in a container for C++ apps (generates an executable in build/bin) |
make test_cpp_docker | Test the executable for C++ apps in a container |
make build_lib_python_docker | Compile the export in a container for Python apps (generates a Python lib in build/lib) |
make test_lib_python_docker | Test the lib for Python apps in a container |
make clean | Clean up the build and bin folders |
Here's an example to compile and test the export Python library using Docker:
cd export_trt/
make build_lib_python_docker
make test_lib_python_docker
This will execute the test.py file within the Docker container, initializing and profiling the selected model.
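If you prefer to drive the generated library yourself rather than going through test.py, a script of the same shape can be used. The sketch below is purely illustrative: the aidge_trt module name and the Graph, initialize, and profile members are hypothetical stand-ins; check the generated test.py and build/lib for the actual API:

```python
# Hypothetical sketch: module and member names are illustrative only.
import aidge_trt  # assumed name of the lib generated in build/lib

# Build a TensorRT engine from the exported model (hypothetical API).
graph = aidge_trt.Graph("model.onnx")
graph.initialize()

# Profile the model over a number of inference runs (hypothetical API).
graph.profile(10)
```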
Known issues
Generation side
Issues related to generating the TensorRT export.
Export side
Issues related to using the TensorRT export.
No CMAKE_CUDA_COMPILER could be found
CMake Error at CMakeLists.txt:21 (enable_language):
No CMAKE_CUDA_COMPILER could be found.
Tell CMake where to find the compiler by setting either the environment
variable "CUDACXX" or the CMake cache entry CMAKE_CUDA_COMPILER to the full
path to the compiler, or to the compiler name if it is in the PATH.
This error occurs when you try to compile the project without having nvcc in your PATH.
To fix this, add nvcc to your PATH:
export PATH=<NVCC_PATH>:$PATH;
Where <NVCC_PATH> is the path to the nvcc compiler.
On recent Orin devices, nvcc is installed at /usr/local/cuda/bin, so the command becomes:
export PATH=/usr/local/cuda/bin:$PATH