feat: Add missing operators for AIDGE model benchmarking
I am working on benchmarking AIDGE models by building them entirely in AIDGE, setting their weights randomly, and exporting them to C++. The goal is to verify whether the outputs of the exported C++ code match the forward-pass results computed in AIDGE. You can find it here.
To achieve this, I have added several operators that were missing from the export and are needed to run these models.
These operators are now available in the export pipeline to ensure proper benchmarking.
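For context, the verification amounts to comparing the exported C++ outputs against the AIDGE forward-pass results element-wise within a tolerance. A minimal sketch of such a check is shown below; the function name and default tolerance are illustrative assumptions, and the actual comparison may well live in the Python tests rather than in C++.

```cpp
#include <cmath>
#include <cstdio>

// Minimal sketch of an element-wise output check for the benchmark
// (function name and default tolerance are illustrative assumptions).
bool outputs_match(const float* cpp_output, const float* aidge_reference,
                   int size, float atol = 1e-5f)
{
    for (int i = 0; i < size; ++i) {
        if (std::fabs(cpp_output[i] - aidge_reference[i]) > atol) {
            std::printf("Mismatch at index %d: %f vs %f\n",
                        i, cpp_output[i], aidge_reference[i]);
            return false;
        }
    }
    return true;
}
```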
Operators & Kernels
- aidge_export_cpp/kernels/batchnorm.hpp: Added missing batch normalization operator (see the sketch after this list).
- aidge_export_cpp/kernels/concat.hpp: Added concatenation operator.
- aidge_export_cpp/kernels/pad.hpp: Added padding operator.
- aidge_export_cpp/kernels/pooling.hpp: Added AvgPooling2D.
- aidge_export_cpp/kernels/softmax.hpp: Added softmax operator.
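As a rough illustration of what these kernels compute, here is a minimal sketch of inference-time batch normalization over an NCHW feature map for one batch. It is not the actual batchnorm.hpp kernel, just the per-channel arithmetic y = (x - mean) / sqrt(var + eps) * scale + bias that such a kernel implements.

```cpp
#include <cmath>

// Sketch only (not the actual batchnorm.hpp kernel): inference-time batch
// normalization applied per channel over an NCHW feature map, single batch.
// y = (x - mean) / sqrt(variance + epsilon) * scale + bias
void batchnorm2d_forward_sketch(const float* input, float* output,
                                const float* scale, const float* bias,
                                const float* mean, const float* variance,
                                int channels, int height, int width,
                                float epsilon)
{
    const int plane = height * width;
    for (int c = 0; c < channels; ++c) {
        const float inv_std = 1.0f / std::sqrt(variance[c] + epsilon);
        for (int i = 0; i < plane; ++i) {
            const int idx = c * plane + i;
            output[idx] = (input[idx] - mean[c]) * inv_std * scale[c] + bias[c];
        }
    }
}
```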
Python Script

- aidge_export_cpp/operators.py: Updated operator handling to include the new additions.
Configuration Templates

- aidge_export_cpp/templates/configuration/*: Added configurations for batchnorm, concat, pad, and softmax (an illustrative generated header is sketched below).
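To make the role of these templates concrete, a generated per-layer configuration header looks roughly like the following. Every macro name and value here is hypothetical, chosen only to illustrate the idea, not copied from the actual template output.

```cpp
// Hypothetical example of a generated per-layer configuration header; the
// real templates may use different macro names and additional fields.
#ifndef BATCHNORM0_LAYER_H
#define BATCHNORM0_LAYER_H

#define BATCHNORM0_NB_CHANNELS     64
#define BATCHNORM0_CHANNEL_HEIGHT  32
#define BATCHNORM0_CHANNEL_WIDTH   32
#define BATCHNORM0_EPSILON         1e-5f

#endif  // BATCHNORM0_LAYER_H
```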
Kernel Forward Templates

- aidge_export_cpp/templates/kernel_forward/*: Added missing forward implementations for batchnorm, concat, pad, and softmax (see the illustrative call below).
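Roughly speaking, a kernel_forward template emits the call that wires a layer's generated configuration and buffers into the corresponding kernel. Using the hypothetical macros above and the batch normalization sketch from earlier, the generated code could look like this; all identifiers below are assumptions for illustration, not the real template output.

```cpp
// Hypothetical generated forward call for the layer configured above.
// Parameter arrays are assumed to be emitted alongside the exported weights.
extern const float batchnorm0_scale[BATCHNORM0_NB_CHANNELS];
extern const float batchnorm0_bias[BATCHNORM0_NB_CHANNELS];
extern const float batchnorm0_mean[BATCHNORM0_NB_CHANNELS];
extern const float batchnorm0_variance[BATCHNORM0_NB_CHANNELS];

static float batchnorm0_output[BATCHNORM0_NB_CHANNELS *
                               BATCHNORM0_CHANNEL_HEIGHT *
                               BATCHNORM0_CHANNEL_WIDTH];

void run_batchnorm0(const float* batchnorm0_input)
{
    batchnorm2d_forward_sketch(batchnorm0_input, batchnorm0_output,
                               batchnorm0_scale, batchnorm0_bias,
                               batchnorm0_mean, batchnorm0_variance,
                               BATCHNORM0_NB_CHANNELS,
                               BATCHNORM0_CHANNEL_HEIGHT,
                               BATCHNORM0_CHANNEL_WIDTH,
                               BATCHNORM0_EPSILON);
}
```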
Unit Tests

- aidge_export_cpp/unit_tests/test_export.py: Updated tests to validate the newly added operators.