[Add] benchmark mechanism
What commit version of aidge do you use?
- aidge bundle: 0.5.1_dev
Issue Summary
Aidge currently lacks a standardized and easy-to-use benchmarking mechanism for evaluating the performance and correctness of different modules. To ensure efficient model execution and accurate inference across various backends, we need a flexible benchmarking script.
Motivation
Aidge offers multiple modules for running deep learning models. A proper benchmarking framework will:
- Help compare inference time across different execution backends.
- Verify that inference outputs are consistent across different configurations.
- Assist in optimizing implementations by identifying performance bottlenecks.
Expected Features
The benchmarking script should (a minimal sketch is given after this list):
- Measure inference time
- Compare inference results between the assessed modules and ONNXRuntime.
- Support testing of
  - individual ONNX operators
    - custom input sizes
    - custom attributes
  - Aidge operators
    - custom input sizes
    - custom attributes
  - multi-node ONNX models
- Include configurable options for
  - the number of warm-up and timed runs
  - running a full set of tests or an individual model
- Support JSON output so that benchmark results can be consumed by other scripts.
- Allow users to specify the library to assess (Aidge module, PyTorch, ONNXRuntime).
- Be usable both
  - in Aidge unit tests
  - as an independent script
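A minimal sketch of what such a script could look like, assuming a plain Python entry point. The function names, the `conv.onnx` file, and the way an Aidge (or PyTorch) runner would plug in are illustrative only, not an existing Aidge API; only the ONNXRuntime and NumPy calls are real:

```python
import json
import time

import numpy as np
import onnxruntime as ort


def time_inference(run_fn, nb_warmup=10, nb_runs=50):
    """Call `run_fn` several times after warm-up and return per-run times (seconds)."""
    for _ in range(nb_warmup):
        run_fn()                              # warm-up runs, not measured
    timings = []
    for _ in range(nb_runs):
        start = time.perf_counter()
        run_fn()
        timings.append(time.perf_counter() - start)
    return timings


def run_onnxruntime(model_path, inputs):
    """Reference backend: return (run_fn, outputs) for an ONNX model and given inputs."""
    session = ort.InferenceSession(model_path)
    feed = {arg.name: data for arg, data in zip(session.get_inputs(), inputs)}
    run_fn = lambda: session.run(None, feed)
    return run_fn, run_fn()


def outputs_match(reference, candidate, rtol=1e-4, atol=1e-6):
    """Check that the assessed module produces the same outputs as ONNXRuntime."""
    return all(np.allclose(ref, out, rtol=rtol, atol=atol)
               for ref, out in zip(reference, candidate))


if __name__ == "__main__":
    # Example: a single-operator ONNX model with a custom input size.
    inputs = [np.random.rand(1, 3, 224, 224).astype(np.float32)]
    ort_run, ort_outputs = run_onnxruntime("conv.onnx", inputs)
    ort_times = time_inference(ort_run)

    # The assessed library (an Aidge backend, PyTorch, ...) would plug in here
    # with its own run function, e.g.:
    # aidge_times = time_inference(run_aidge_model)
    # assert outputs_match(ort_outputs, aidge_outputs)

    # JSON output so that results can be reused by other scripts or unit tests.
    results = {"model": "conv.onnx",
               "onnxruntime": {"median_ms": 1e3 * float(np.median(ort_times))}}
    print(json.dumps(results, indent=2))
```

Timing a plain callable keeps the measurement and comparison logic independent of the library being assessed, which is what would let the same loop be reused both inside Aidge unit tests and as a standalone script.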