Fix some operators
Several operators are currently not functioning or not fully supported on the ARM Cortex-M backend. Below is a categorization of the issues observed:
Operators not fully supported:

- Div
- Reshape
- Sigmoid
- BatchNorm

Operators without any Jinja implementation:

- MatMul
- Softmax

Operators requiring correction:

- Atan
@cmoineau @wboussella @pierregaillard
In this merge request, I fixed the operators that were not working, as identified previously: Div, MatMul, Softmax, Sigmoid, Reshape, BatchNorm, and Atan.
All the operators I modified now compile correctly, except for BatchNorm and Reshape.
The current blocking point with the BatchNorm operator is that some variable names used in the `.jinja` template are outdated (for example, `output_name` → `out_name`, `input_name` → `in_name`, etc.). I haven't found variable names for the following elements:
- running_mean_name
- running_var_name
- bias_name
- weight_name
These are the only missing declarations needed to complete the full support for BatchNorm.
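For reference, the kernel side would need something like the following prototype; the identifiers here are illustrative, not the template's actual variable names:

```c
// Hypothetical prototype only: the four pointers correspond to the node
// inputs behind the missing template variables.
void batchnorm_forward_f32(const float *in, float *out,
                           const float *weight,       /* weight_name */
                           const float *bias,         /* bias_name */
                           const float *running_mean, /* running_mean_name */
                           const float *running_var,  /* running_var_name */
                           int channels, int spatial_size, float epsilon);
```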
Regarding the Reshape operator: I was not able to confirm whether it is working, because I couldn't create a `config_reshape.json` to generate, compile, and run the benchmark. The code is in place, but I would need a working example JSON configuration to test and validate it properly.
Aside from that, all other operators are now functional.
Can you explain, for each operator, the changes you made to make it work?
For BatchNorm these variables are inputs of the node.
@pineapple I'll let you check `config_reshape.json`.
Here are the modifications I made:
Operators:
Div:

- Improved the kernel and renamed it for easier usage (see the sketch below)
- Added `.jinja` templates for `configuration` and `forward_call`
- Declared the corresponding class in `operators.py`
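A minimal sketch of what an elementwise Div kernel of this shape could look like, assuming float32 inputs of identical size with no broadcasting; the function name and signature are illustrative, not the actual identifiers in the backend:

```c
// Elementwise division, float32, equal-shape inputs (illustrative sketch).
static inline void div_forward_f32(const float *in1, const float *in2,
                                   float *out, int nb_elements)
{
    for (int i = 0; i < nb_elements; ++i) {
        out[i] = in1[i] / in2[i];
    }
}
```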
MatMul:

- Fixed the kernel prototype (see the sketch below)
- Updated the `.jinja` templates
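A sketch of the kind of prototype involved, assuming row-major float32 matrices; names and argument order are illustrative only:

```c
// Row-major MatMul: in1 is [m][k], in2 is [k][n], out is [m][n] (illustrative).
static inline void matmul_forward_f32(const float *in1, const float *in2,
                                      float *out, int m, int k, int n)
{
    for (int i = 0; i < m; ++i) {
        for (int j = 0; j < n; ++j) {
            float acc = 0.0f;
            for (int p = 0; p < k; ++p) {
                acc += in1[i * k + p] * in2[p * n + j];
            }
            out[i * n + j] = acc;
        }
    }
}
```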
Softmax:

- Fixed the kernel prototype (see the sketch below)
- Updated the `.jinja` templates
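For illustration, a minimal numerically stable Softmax kernel over a float32 vector, assuming `size >= 1`; the actual kernel name and signature in the repo may differ:

```c
#include <math.h>

// Softmax over a vector of length `size` (illustrative sketch).
static inline void softmax_forward_f32(const float *in, float *out, int size)
{
    float max_val = in[0];
    for (int i = 1; i < size; ++i) {
        if (in[i] > max_val) max_val = in[i];
    }
    float sum = 0.0f;
    for (int i = 0; i < size; ++i) {
        out[i] = expf(in[i] - max_val); // subtract max to avoid overflow
        sum += out[i];
    }
    for (int i = 0; i < size; ++i) {
        out[i] /= sum;
    }
}
```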
Sigmoid:

- Added a new, improved kernel (see the sketch below)
- Added `.jinja` templates for `configuration` and `forward_call`
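A minimal sketch of such a Sigmoid kernel for float32, with illustrative naming:

```c
#include <math.h>

// Sigmoid: out = 1 / (1 + exp(-in)), float32 (illustrative sketch).
static inline void sigmoid_forward_f32(const float *in, float *out, int size)
{
    for (int i = 0; i < size; ++i) {
        out[i] = 1.0f / (1.0f + expf(-in[i]));
    }
}
```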
Atan:

- Modified the `output_size` value
Reshape:

- Added and improved the kernel (see the sketch below)
- Added `.jinja` templates for `configuration` and `forward_call`
- Declared the corresponding class in `operators.py`
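For contiguous row-major tensors, Reshape reduces to a plain copy; a minimal sketch under that assumption, with illustrative names:

```c
#include <string.h>

// Reshape of contiguous data: a pure copy, or a no-op when done in place
// (illustrative sketch).
static inline void reshape_forward(const void *in, void *out, size_t nb_bytes)
{
    if (in != out) {
        memcpy(out, in, nb_bytes);
    }
}
```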
BatchNorm:

- Added the `.h` file in the kernel directory (see the kernel sketch below)
- Fixed the Jinja templates for `configuration` and `forward_call`
- Declared the corresponding class in `operators.py`
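A sketch of the BatchNorm inference arithmetic over channel-major (NCHW) float32 data, using the same illustrative signature as the prototype above; names are not the repo's actual identifiers:

```c
#include <math.h>

// BatchNorm inference:
// out = weight[c] * (in - running_mean[c]) / sqrt(running_var[c] + eps) + bias[c]
static inline void batchnorm_forward_f32(const float *in, float *out,
                                         const float *weight, const float *bias,
                                         const float *running_mean,
                                         const float *running_var,
                                         int channels, int spatial_size,
                                         float epsilon)
{
    for (int c = 0; c < channels; ++c) {
        // Fold mean, variance, and weight into one scale/shift per channel.
        const float scale = weight[c] / sqrtf(running_var[c] + epsilon);
        const float shift = bias[c] - running_mean[c] * scale;
        for (int i = 0; i < spatial_size; ++i) {
            out[c * spatial_size + i] = in[c * spatial_size + i] * scale + shift;
        }
    }
}
```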
Other adjustments:

`data_conversion.py`:

- Fixed the mapping for `float32`, which was incorrectly converted to `data<-32>`
- This incorrect mapping was causing compilation errors.
- Resolved by Cyril Moineau
assigned to @rboumbar
Hi, thanks for the fix.
Since !MR9, many changes have been made, but some operators have not been updated and no longer work. Could you please integrate these changes as soon as possible to help resolve the functionality regression in the current version?
Hello @oantoni,
Do you have an example script that you can send me (you can transfer it by mail)?
@rboumbar will not be able to make those updates.
Also, with the benchmark tools developed recently, we are going to plug an STM32 board into our test server, which will allow us to add unit tests and avoid regressions like these in the future.
Regards, Cyril
changed milestone to %aidge v0.7.0
mentioned in issue #39 (closed)