Add missing operators for basic ONNX model export
While trying to export a Hugging Face ONNX model to C++, some operators were missing. In this merge request, I add a Reshape operator and add back the MatMul operator (previously removed because it used the old API) to the export_cpp module, so that a basic MLP ONNX model can be exported to C++ correctly. The ONNX model can be found at: https://huggingface.co/EclipseAidge/mobilenet_v1
Changed files:

- `reshape.h`, `reshape.cpp` and a matching Jinja kernel: add the Reshape operator to the export
- `operator.py`: add the Reshape and MatMul operators and refactor some duplicated code
- `matmul.h`, `matmul.cpp`: adapt the operator to the new API interface

Created a Reshape operator for the export (a basic memcpy of the input buffer to the output buffer).
Adapted the MatMul kernel and its Jinja template to the new operator interface (for example, `inputs_dims` -> `in_dims`), then added the MatMul operator back to the export.
In `operator.py`, refactored the code shared by all the composed operators:

- `setupConv2D` for both Conv2D and PaddedConv2D
- `setupElemWiseOp` for all the elementwise operators (Add, Sub, Mul, ...)
- `setupPooling` for all the pooling operators (Max, Avg, PaddedMax, ...)