Commit 001ffa6e authored by Cyril Moineau

Merge branch 'updatedoc' into 'master'

Updatedoc

See merge request !5

Parents: 607f23c0 0a07e8a1
@@ -2,6 +2,62 @@
This module provides an export to the [TensorRT SDK](https://developer.nvidia.com/tensorrt) via the Aidge framework.
## Table of Contents
- [Requirement](#requirement)
- [Install](#install)
- [Usage](#usage)
- [Known issue](#known-issue)
## Requirement
To compile the export on your machine, make sure you meet one of these two conditions:
- [Docker](https://docs.docker.com/get-docker/) is installed (the export compilation chain can run inside Docker)
- The packages required to support TensorRT 8.6 are installed
## Install
To install the `aidge_export_tensorrt` module, go to your `aidge/aidge/` directory, clone the module, and install it:
```bash
git clone https://gitlab.eclipse.org/eclipse/aidge/aidge_export_tensorrt.git
cd aidge_export_tensorrt/
pip install .
```
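A quick way to confirm the installation is to import the package from Python (a trivial smoke test, nothing more):

```python
# Verify that the freshly installed package is importable.
import aidge_export_tensorrt

print("aidge_export_tensorrt is installed")
```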
## Usage
To use the `aidge_export_tensorrt` module, import it in Python and call its `export` function. This function takes the name of the export folder and either an ONNX file or the GraphView of your model.
```python
import aidge_export_tensorrt

aidge_export_tensorrt.export("export_trt", "model.onnx")
```
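Since `export` also accepts a GraphView, a model already loaded into Aidge can be exported directly. A minimal sketch, assuming the `aidge_onnx` package and its `load_onnx` helper (neither is documented in this README):

```python
import aidge_onnx
import aidge_export_tensorrt

# Load the ONNX model into an Aidge GraphView first
# (assumes aidge_onnx.load_onnx is available).
graph_view = aidge_onnx.load_onnx("model.onnx")

# Pass the GraphView instead of the ONNX file path.
aidge_export_tensorrt.export("export_trt", graph_view)
```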
The export provides a Makefile with several options for using the export on your machine. You can generate either a C++ export or a Python export.
Additionally, you can compile the export and/or the Python library using Docker if your host machine lacks the necessary packages.
The available commands are summarized in the following table:
| Command | Description |
|--------------------------------|--------------------------------------------------------------------------------------------|
| `make` / `make help` | Display the available options |
| `make build_cpp` | Compile the export on the host for C++ apps (generates an executable in `build/bin`) |
| `make build_lib_python` | Compile the export on the host for Python apps (generates a Python lib in `build/lib`) |
| `make build_image_docker` | Generate the Docker image of the TensorRT compiler |
| `make build_cpp_docker` | Compile the export in a container for C++ apps (generates an executable in `build/bin`) |
| `make test_cpp_docker` | Test the executable for C++ apps in a container |
| `make build_lib_python_docker` | Compile the export in a container for Python apps (generates a Python lib in `build/lib`) |
| `make test_lib_python_docker` | Test the lib for Python apps in a container |
| `make clean` | Clean up the `build` and `bin` folders |
Here is an example of compiling and testing the export's Python library using Docker:
```bash
cd export_trt/
make build_lib_python_docker
make test_lib_python_docker
```
This will execute the `test.py` file within the Docker container, initializing and profiling the selected model.
## Known issue
@@ -23,7 +23,7 @@ void init_Graph(py::module& m)
py::arg("device_id") = 0,
py::arg("nb_bits") = -32)
.def("device", &Graph::device, py::arg("id"))
.def("device", &Graph::device, py::arg("id"))
.def("load", &Graph::load, py::arg("filepath"))
.def("save", &Graph::save, py::arg("filepath"))
.def("calibrate", &Graph::calibrate, py::arg("calibration_folder_path") = "./calibration_folder/", py::arg("cache_file_path") = "./calibration_cache", py::arg("batch_size") = 1)
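The bindings above hint at how the Python library produced by `make build_lib_python` can be driven. A minimal sketch, assuming the built module is named `aidge_trt` and that the `Graph` constructor's leading arguments (truncated in this diff) can be omitted; both are assumptions, not confirmed by this commit:

```python
# Hedged sketch of the Python API exposed by the bindings above.
# "aidge_trt" is an assumed module name; adjust it to the library
# actually generated in build/lib.
import aidge_trt

# Bound constructor defaults shown in the diff: device_id=0, nb_bits=-32.
graph = aidge_trt.Graph(device_id=0, nb_bits=-32)

graph.device(0)           # select the GPU (Graph::device)
graph.load("model.onnx")  # load a model from file (Graph::load)

# Calibrate with the bound default arguments, then save the result.
graph.calibrate(
    calibration_folder_path="./calibration_folder/",
    cache_file_path="./calibration_cache",
    batch_size=1,
)
graph.save("model_trt")
```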