add support for TensorRT 10.10
Context
This MR adds support for the newer TensorRT version 10.10.
Modified files
Added a `tensorrt_10.10` folder, which is a copy of the `tensorrt_8.6` folder in the `static/` directory. The following modifications have been applied to match the TensorRT 10.10 API:
- Modified `tools/tensorrt10.10_compiler.Dockerfile` & `Makefile` to use the correct version of TensorRT
- Updated the unsupported `dims32` → `dims` in `src/Graph.cpp` to match the TRT 10 API
- Updated the deprecated `PlatformHasFast*()` → custom `cudaHasFast*()` for the `datamode()` mapping in `include/Utils.cpp` and `src/Graph.cpp` (a sketch of these helpers is shown after this list)
- Updated `trt_version = 10.10` by default in `export()` in `__init__.py`
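For reference, here is a minimal sketch of what the custom `cudaHasFast*()` helpers could look like; the exact code in this MR may differ, and the helper name `cudaComputeCapability()` and the SM thresholds (FP16 from SM 5.3, DP4A INT8 from SM 6.1) are assumptions used for illustration:

```cpp
// Minimal sketch (names and thresholds are illustrative assumptions, the MR's
// actual helpers may differ) of custom cudaHasFast*() replacements for the
// deprecated IBuilder::platformHasFastFp16()/platformHasFastInt8() queries.
#include <cuda_runtime_api.h>

// Compute capability of the current device, encoded as major * 10 + minor.
static int cudaComputeCapability()
{
    int device = 0;
    cudaGetDevice(&device);
    cudaDeviceProp prop{};
    cudaGetDeviceProperties(&prop, device);
    return prop.major * 10 + prop.minor;
}

// Fast FP16 arithmetic is available from SM 5.3 onwards.
static bool cudaHasFastFp16() { return cudaComputeCapability() >= 53; }

// Fast INT8 (DP4A) arithmetic is available from SM 6.1 onwards.
static bool cudaHasFastInt8() { return cudaComputeCapability() >= 61; }
```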
Detailed major modifications
Added a `static bool cudaSupportsDatatype(nvinfer1::DataType datatype)` helper to confirm that a datatype is supported by the CUDA SM of the target device. The support matrix is available in the Hardware & Precision Support Matrix documentation.
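A hedged sketch of how `cudaSupportsDatatype()` might map datatypes to minimum compute capabilities is shown below; the thresholds are assumptions taken from the public hardware & precision support matrix, and the actual implementation in this MR may differ:

```cpp
// Sketch of cudaSupportsDatatype(); the SM thresholds are illustrative
// assumptions and the MR's actual implementation may differ.
#include <NvInfer.h>
#include <cuda_runtime_api.h>

static bool cudaSupportsDatatype(nvinfer1::DataType datatype)
{
    int device = 0;
    cudaGetDevice(&device);
    cudaDeviceProp prop{};
    cudaGetDeviceProperties(&prop, device);
    const int sm = prop.major * 10 + prop.minor;

    switch (datatype) {
        case nvinfer1::DataType::kFLOAT:
        case nvinfer1::DataType::kINT32:
            return true;      // supported on every SM targeted by TensorRT
        case nvinfer1::DataType::kHALF:
            return sm >= 53;  // FP16 from SM 5.3
        case nvinfer1::DataType::kINT8:
            return sm >= 61;  // DP4A INT8 from SM 6.1
        case nvinfer1::DataType::kBF16:
            return sm >= 80;  // BF16 from Ampere (SM 8.0)
        case nvinfer1::DataType::kFP8:
            return sm >= 89;  // FP8 from Ada (SM 8.9) and Hopper
        default:
            return false;     // be conservative for anything not listed here
    }
}
```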
Acknowledged deprecations
- The implicit-quantization API (`IInt8Calibrator`) is deprecated in TensorRT 10.10; while aidge integrates ONNX Q/DQ layer support for the recommended explicit-quantization workflow, we will keep implicit quantization for now and migrate in a future refactor (an illustrative Q/DQ sketch is shown after this list). More information is available in the Quantized Types documentation.
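For context, a minimal illustrative sketch (not code from this MR) of the explicit-quantization workflow that would eventually replace the `IInt8Calibrator` path: a Q/DQ pair inserted around a tensor with the TensorRT C++ network definition API. The tensor shape and scale value are placeholder assumptions.

```cpp
// Illustrative sketch (not code from this MR): explicit quantization inserts
// Q/DQ layers directly in the network instead of relying on the deprecated
// IInt8Calibrator implicit path. Shapes and the scale value are placeholders.
#include <NvInfer.h>
#include <iostream>

namespace {
class Logger : public nvinfer1::ILogger {
    void log(Severity severity, const char* msg) noexcept override {
        if (severity <= Severity::kWARNING) { std::cout << msg << std::endl; }
    }
};
} // namespace

int main()
{
    Logger logger;
    auto* builder = nvinfer1::createInferBuilder(logger);
    auto* network = builder->createNetworkV2(0);

    // FP32 activation tensor that we want to quantize explicitly.
    auto* input = network->addInput("input", nvinfer1::DataType::kFLOAT,
                                    nvinfer1::Dims4{1, 3, 224, 224});

    // Per-tensor scale (a scalar constant), normally taken from the
    // ONNX Q/DQ nodes produced by the aidge export.
    static const float scaleValue = 0.05F;
    auto* scale = network->addConstant(
        nvinfer1::Dims{0, {}},
        nvinfer1::Weights{nvinfer1::DataType::kFLOAT, &scaleValue, 1});

    // Explicit Q/DQ pair: quantize to INT8, then dequantize back to FP32.
    auto* q  = network->addQuantize(*input, *scale->getOutput(0));
    auto* dq = network->addDequantize(*q->getOutput(0), *scale->getOutput(0));
    network->markOutput(*dq->getOutput(0));

    delete network;
    delete builder;
    return 0;
}
```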