New operators merge request
New files
Added the kernels, the configuration templates, and the kernel files needed to export these operators:
- Sigmoid
- Softmax
- BatchNorm2D
- MatMul
- Reshape
- Gather
- Transpose
Modified files
- All element-wise operators now support array broadcasting.
- `aidge_export_arm_cortexm/_Aidge_Arm/kernels/FullyConnected/aidge_fc_float32.c`, to better handle differently shaped inputs;
- `aidge_export_arm_cortexm/_Aidge_Arm/templates/configuration/fullyconnected.jinja`, in accordance with the kernel;
- `operators.py`, to include the new operators and to change the shape of the input passed to the FC operator, reflecting the changes made to the kernel.
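To illustrate the broadcasting support mentioned above, here is a minimal sketch (not the actual exported kernel; the function name and 3-D restriction are assumptions) of NumPy-style broadcasting for a binary element-wise add: a dimension of size 1 is given a stride of 0 so its single value is reused along the broadcast axis.

```c
// Hypothetical sketch of element-wise broadcasting for a 3-D add.
// A size-1 dimension gets stride 0, so the same element is reused
// along that axis; out_dims is assumed to be the broadcast shape.
static void add_broadcast_3d(const float *a, const int a_dims[3],
                             const float *b, const int b_dims[3],
                             float *out, const int out_dims[3])
{
    int a_str[3], b_str[3];
    int sa = 1, sb = 1;
    // Compute element strides from the innermost dimension outwards.
    for (int d = 2; d >= 0; --d) {
        a_str[d] = (a_dims[d] == 1) ? 0 : sa;
        b_str[d] = (b_dims[d] == 1) ? 0 : sb;
        sa *= a_dims[d];
        sb *= b_dims[d];
    }
    for (int i = 0; i < out_dims[0]; ++i)
        for (int j = 0; j < out_dims[1]; ++j)
            for (int k = 0; k < out_dims[2]; ++k)
                out[(i * out_dims[1] + j) * out_dims[2] + k] =
                    a[i * a_str[0] + j * a_str[1] + k * a_str[2]] +
                    b[i * b_str[0] + j * b_str[1] + k * b_str[2]];
}
```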
Major modifications
- For the FC operator, the `nb_inputs` parameter is now computed as `nb_channels * channel_height * channel_width`, with the input always reshaped as `[batch_size (usually == 1), nb_channels, feature_height, feature_width]`. `nb_outputs` is now equal to `outputs_dims[0][1]`, with the output dims staying unchanged. Before, `nb_inputs` was equal to `nb_channels` and the input was reshaped as `[nb_channels, H, W]` (essentially removing the batch dimension). This caused incorrect results for inputs where `nb_channels == 1` and `H, W > 1`, since only the first item of the input array was used in the computation. This could have been solved by only changing the way `nb_inputs` is computed and continuing to discard the batch size, but keeping the batch dimension could be useful if someone one day wants to do batched inference.
- Some operations need to know the dimensions of their inputs and outputs (array broadcasting, Transpose, etc.), while others take arrays as arguments/attributes (the permutation indexes for Transpose, the indexes for Gather, etc.). Those arrays aren't graph nodes; they are parameters of the operator. In order to store them and pass them to functions in C, they are saved in a new folder named `dimensions`, where a file `"layer_name_{nature_of_array}.h"` is created for each array, with `nature_of_array = {DIMS, INDEXES, PERMUTATIONS}`.
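The new FC shape handling can be sketched as follows. This is a simplified illustration, not the actual `aidge_fc_float32.c` kernel; the function names and the plain row-major weight layout are assumptions.

```c
// Hypothetical sketch of the new FC shape handling.
// The input is always shaped [batch_size, nb_channels, feature_height,
// feature_width]; the kernel flattens everything but the batch dimension.
static int fc_nb_inputs(const int in_dims[4])
{
    // New behaviour: nb_inputs = nb_channels * channel_height * channel_width.
    return in_dims[1] * in_dims[2] * in_dims[3];
}

// Plain FC forward over the flattened input (batch_size == 1):
// out[o] = bias[o] + sum_i weights[o][i] * in[i].
static void fc_forward(const float *in, const int in_dims[4],
                       const float *weights, const float *bias,
                       float *out, int nb_outputs)
{
    int nb_inputs = fc_nb_inputs(in_dims);
    for (int o = 0; o < nb_outputs; ++o) {
        float acc = bias ? bias[o] : 0.0f;
        for (int i = 0; i < nb_inputs; ++i)
            acc += weights[o * nb_inputs + i] * in[i];
        out[o] = acc;
    }
}
```

With the old behaviour (`nb_inputs = nb_channels`), an input shaped `[1, 1, H, W]` would have given `nb_inputs == 1`, which is exactly the bug described above.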
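As an illustration of the `dimensions` folder, a generated header for a Transpose layer might look like the sketch below. The layer name, guard macro, and array name are hypothetical; only the `layer_name_{nature_of_array}.h` naming scheme comes from the description above.

```c
// Hypothetical content of "dimensions/conv1_transpose_PERMUTATIONS.h"
// (the exact naming and code style produced by the export may differ).
#ifndef CONV1_TRANSPOSE_PERMUTATIONS_H
#define CONV1_TRANSPOSE_PERMUTATIONS_H

// Permutation applied by the Transpose operator: NCHW -> NHWC.
static const int conv1_transpose_permutations[4] = {0, 2, 3, 1};

#endif
```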
TO-DO
So far, the Reshape operator simply copies the data from one pointer to another. This is not optimal.
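The current approach amounts to the sketch below (function name assumed, not the actual kernel): since Reshape does not change the memory layout, the kernel just copies the elements, whereas a zero-copy implementation could alias the output pointer to the input buffer instead.

```c
#include <string.h>

// Sketch of the current, suboptimal Reshape: the data layout is
// unchanged, so the kernel only copies nb_elements floats from the
// input buffer to the output buffer.
static void reshape_copy_float32(const float *in, float *out, int nb_elements)
{
    memcpy(out, in, (size_t)nb_elements * sizeof(float));
}
```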
Edited by Ilona Lazrak