Add missing operators for basic ONNX model export
Context
While trying to export a Hugging Face ONNX model to C++, some operators were missing. In this merge request, I add a Reshape operator and add back the MatMul operator (previously removed because it relied on the old API) to the export_cpp module, so that a basic MLP ONNX model can be exported correctly to C++. The ONNX model can be found at: https://huggingface.co/EclipseAidge/mobilenet_v1
Modified files
- Added reshape.h, reshape.cpp and the matching Jinja kernel to add the Reshape operator to the export
- operator.py: adds the Reshape and MatMul operators and refactors some duplicated code
- matmul.cpp, matmul.h: adapts the operator to the new API interface
Detailed major modifications
Created a Reshape operator for the export (a basic memcpy of the input buffer to the output buffer).
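A minimal sketch of what such a memcpy-based Reshape kernel can look like; the function name, template parameters and __restrict qualifiers are illustrative, not the exact code generated by the Jinja template:

```cpp
#include <cstring>  // std::memcpy

// Reshape does not modify the data, only the logical dimensions, so the
// kernel reduces to a copy of the whole buffer. NB_ELTS (total number of
// elements) would be resolved at export time from the Jinja template.
template <int NB_ELTS, typename T>
inline void reshape_forward(const T* __restrict inputs,
                            T* __restrict outputs)
{
    std::memcpy(outputs, inputs, NB_ELTS * sizeof(T));
}
```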
Adapted the MatMul kernel and its Jinja template to the new operator interface (for example, inputs_dims -> in_dims).
Then added this MatMul operator back to the export.
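For reference, a naive sketch of a MatMul forward kernel with dimensions passed in the new in_dims style; the signature, dimension layout and names are assumptions made for illustration, not the exact exported kernel:

```cpp
// Naive matrix multiplication: A (M x K) times B (K x N) gives an output
// of size M x N. The dimensions would come from the operator's in_dims /
// out_dims attributes, resolved at export time by the Jinja template.
template <int M, int K, int N, typename Input_T, typename Output_T>
inline void matmul_forward(const Input_T* __restrict input_a,
                           const Input_T* __restrict input_b,
                           Output_T* __restrict outputs)
{
    for (int m = 0; m < M; ++m) {
        for (int n = 0; n < N; ++n) {
            Output_T sum = 0;
            for (int k = 0; k < K; ++k) {
                sum += input_a[m * K + k] * input_b[k * N + n];
            }
            outputs[m * N + n] = sum;
        }
    }
}
```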
In operator.py, refactored all the shared code for the composed operators:
- setupConv2D for both Conv2D and PaddedConv2D
- setupElemWiseOp for all the element-wise operators (Add, Sub, Mul, ...)
- setupPooling for all the pooling operators (Max, Avg, PaddedMax, ...)
TODO
- (ref discussion !31 (comment 3330814)) Refactor the Reshape operator as a no-op on the C++ side: Reshape does not affect the buffer contents, only its forward dimensions, and those are already computed by the scheduler in Python.
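A rough sketch of the no-op idea, assuming the export memory planner can let the Reshape output alias its input buffer (buffer names and sizes below are purely illustrative, not actual generated identifiers):

```cpp
// Since Reshape leaves the data untouched and only the logical dimensions
// change, the generated forward code could reuse the input buffer directly
// instead of emitting a copy kernel.
static float dense_output[128];

// Copying version (what this MR currently emits):
//   reshape_forward<128>(dense_output, reshape_output);
//
// No-op version (this TODO): no kernel call at all, the next layer just
// reads the same buffer under its new dimensions.
static float* const reshape_output = dense_output;
```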