
Add output-wise comparison to benchmark

Cyril Moineau requested to merge OutputWiseComp into dev

Context

This MR will fix:

In the context of the refactoring of the ONNX unit tests, I updated the layer-comparison script made by @gregkub.

In the end, the script was fully rewritten in order to better split the logic and integrate it into the benchmark made by @pineapple.

This MR is linked to aidge_onnx!140 (merged).

Modified files

  • output_wise_comparison.py: defines the data structure OutputTensorMap, which holds the intermediate result produced for each output of a graph. The function run_aidge_inferences generates an OutputTensorMap with any Aidge backend. The module also defines functions to compare two OutputTensorMap instances, producing for each output an OutputComparison that records whether or not the outputs match. The individual comparisons are aggregated into a ComparisonResult list, which can be parsed to render the result of the test in a tabular fashion (see the sketch after this list).
  • __init__.py: adds output_wise_comparison.py to the module.
  • pybind_GraphView.cpp: fixes minor mistakes in the docstrings...
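
To make the data flow concrete, here is a minimal, self-contained sketch of how these pieces could fit together. Every signature, field name, and tolerance below is an assumption made for the example (run_aidge_inferences is omitted since it depends on the Aidge backend API); the authoritative definitions are in output_wise_comparison.py.

```python
from dataclasses import dataclass
from typing import Dict, List

import numpy as np

# An OutputTensorMap maps each graph output name to the tensor produced
# for it during one inference run (type alias assumed for illustration).
OutputTensorMap = Dict[str, np.ndarray]


@dataclass
class OutputComparison:
    """Verdict for a single graph output (field names are assumptions)."""
    output_name: str
    is_equal: bool
    max_abs_error: float


# A ComparisonResult aggregates one OutputComparison per compared output.
ComparisonResult = List[OutputComparison]


def compare_outputs(reference: OutputTensorMap,
                    candidate: OutputTensorMap,
                    atol: float = 1e-5) -> ComparisonResult:
    """Compare every output present in both maps within a tolerance."""
    result: ComparisonResult = []
    for name in sorted(reference.keys() & candidate.keys()):
        error = float(np.max(np.abs(reference[name] - candidate[name])))
        result.append(OutputComparison(name, error <= atol, error))
    return result


# Parse the ComparisonResult to render the outcome in a tabular fashion.
reference = {"conv_0": np.ones((2, 2)), "relu_0": np.zeros((2, 2))}
candidate = {"conv_0": np.ones((2, 2)) * 1.0001, "relu_0": np.zeros((2, 2))}
for comparison in compare_outputs(reference, candidate):
    print(f"{comparison.output_name:10s} equal={comparison.is_equal!s:5s} "
          f"max_abs_error={comparison.max_abs_error:.2e}")
```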

Detailed major modifications

TODO

  • Finish the documentation.
  • Add unit tests for the new functionalities:
    • run_aidge_inferences()
    • merge_topological_orders() (see the sketch after this list)
    • compare_outputs()
  • run_aidge_inferences may not be a very good name out of context; a better name will be proposed.
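
This MR does not describe merge_topological_orders in detail; assuming it merges the node orderings observed in two runs into a single order consistent with both (e.g. to align the rows of the comparison table), a hypothetical sketch could look like this:

```python
from graphlib import TopologicalSorter  # Python >= 3.9
from typing import List


def merge_topological_orders(first: List[str], second: List[str]) -> List[str]:
    """Hypothetical reconstruction: merge two node orderings into a single
    ordering that preserves the relative order of both inputs.
    Raises graphlib.CycleError if the two orders contradict each other."""
    sorter = TopologicalSorter()
    for order in (first, second):
        for node in order:
            sorter.add(node)  # register nodes even if they have no edges
        for predecessor, successor in zip(order, order[1:]):
            sorter.add(successor, predecessor)
    return list(sorter.static_order())


# 'b' only appears in the first order, 'c' only in the second one.
print(merge_topological_orders(["a", "b", "d"], ["a", "c", "d"]))
# A valid merged order, e.g. ['a', 'b', 'c', 'd']
```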
