Draft: [Add][WIP] benchmark scripts
The aim is to add a universal system to measure the accuracy and timing performance of each Aidge module, whether it is a backend or an export module.
`generate_graph.py`

```sh
python benchmarks/generate_graph.py --operator-config benchmarks/operator_config/add_config.json --ref benchmarks/results/add_onnxruntime.json --libs benchmarks/results/add_torch.json benchmarks/results/add_aidge_backend_cpu.json
```
This will create a bar plot comparing the `onnxruntime`, `torch`, and `aidge_backend_cpu` libraries, with the `onnxruntime` timings as the reference.
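For context, here is a minimal sketch of the CLI surface implied by the command above; the actual `generate_graph.py` may be organized differently, and `plot_comparison` is a hypothetical name.

```python
import argparse

def main():
    # Sketch of the flags shown in the example command above.
    parser = argparse.ArgumentParser(description="Plot benchmark comparisons")
    parser.add_argument("--operator-config", required=True,
                        help="Operator configuration JSON describing test cases")
    parser.add_argument("--ref", required=True,
                        help="Result JSON used as the reference library")
    parser.add_argument("--libs", nargs="+", required=True,
                        help="Result JSONs of the libraries to compare")
    args = parser.parse_args()
    # plot_comparison(args.operator_config, args.ref, args.libs)  # hypothetical

if __name__ == "__main__":
    main()
```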
The result JSON files should have the following structure; see this example for the `Add` operator with the `aidge_backend_cpu` module:
```json
{
    "library": "aidge_backend_cpu",
    "compare": {},
    "time": {
        "dim size": {
            "1": [
                1.1659999999968917e-05, // <- one iteration time measured
                ...
            ],
            "4": [...],
            "16": [...],
            "64": [...],
            "128": [...]
        },
        "one dim broadcasted (idx)": {
            ...
        },
        "two dims broadcasted (idx)": {
            ...
        },
        "nb missing axis 1st input": {
            ...
        }
    }
}
```
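As an illustration, a script consuming these files could aggregate the per-iteration timings as in the sketch below. This is only a sketch under the assumption that each test case maps parameter values to lists of iteration times, not the actual implementation.

```python
import json
import statistics
import matplotlib.pyplot as plt

def mean_times(result_path: str, test_case: str) -> dict:
    """Average the per-iteration timings of one test case, e.g. "dim size"."""
    with open(result_path) as f:
        result = json.load(f)
    return {
        param: statistics.mean(iterations)
        for param, iterations in result["time"][test_case].items()
    }

# Hypothetical usage: relative timing of a library vs. the reference.
ref = mean_times("benchmarks/results/add_onnxruntime.json", "dim size")
lib = mean_times("benchmarks/results/add_aidge_backend_cpu.json", "dim size")
ratios = {param: lib[param] / ref[param] for param in ref}

plt.bar(list(ratios), list(ratios.values()))
plt.ylabel("time relative to onnxruntime")
plt.show()
```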
Extract of the operator-config file for the `Add` operator:
```json
{
    "operator": "Add",
    "opset_version": 21,
    "initializer_rank": 2,
    "base_configuration": {
        "input_shapes": [
            ["input_0", [64, 64, 64, 64]],
            ["input_1", [64, 64, 64, 64]]
        ],
        "attributes": {}
    },
    "test_configuration": {
        "main_parameters": {
            "dim size": [1, 4, 16, 64, 128],
            "one dim broadcasted (idx)": [...],
            "two dims broadcasted (idx)": [...],
            "nb missing axis 1st input": [...]
        },
        "other_parameters": {
            "dim size": {
                "1": {
                    "attributes": {},
                    "input_shapes": [
                        ["input_0", [1, 1, 1, 1]],
                        ["input_1", [1, 1, 1, 1]]
                    ]
                },
                "4": {
                    "attributes": {},
                    "input_shapes": [
                        ["input_0", [4, 4, 4, 4]],
                        ["input_1", [4, 4, 4, 4]]
                    ]
                },
                ...
```
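To make the config semantics concrete, here is a sketch of how a benchmark driver could expand such a file into concrete test cases. It assumes that each value listed under `main_parameters` has a matching override entry under `other_parameters` that takes precedence over `base_configuration`; this is an interpretation of the extract above, not the actual Aidge code, and `run_benchmark` is hypothetical.

```python
import json

def expand_test_cases(config_path: str):
    """Yield (test_case, value, input_shapes, attributes) for each benchmark run.

    Assumed semantics: overrides in other_parameters replace the
    corresponding fields of base_configuration.
    """
    with open(config_path) as f:
        config = json.load(f)
    base = config["base_configuration"]
    tests = config["test_configuration"]
    for test_case, values in tests["main_parameters"].items():
        for value in values:
            override = tests["other_parameters"][test_case][str(value)]
            yield (
                test_case,
                value,
                override.get("input_shapes", base["input_shapes"]),
                override.get("attributes", base["attributes"]),
            )

# Hypothetical usage:
# for case, value, shapes, attrs in expand_test_cases(
#         "benchmarks/operator_config/add_config.json"):
#     run_benchmark(case, value, shapes, attrs)
```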