
Fix some operators


Several operators are currently not functioning or not fully supported on the ARM Cortex-M backend. Below is a categorization of the issues observed:

Operators not fully supported:

  • Div
  • Reshape
  • Sigmoid
  • BatchNorm

Operators without any jinja implementation:

  • MatMul
  • Softmax

Operators requiring correction:

  • Atan


Activity

    • @cmoineau @wboussella @pierregaillard

      In this merge request, I fixed the operators that were not working, as previously identified: Div, MatMul, Softmax, Sigmoid, Reshape, BatchNorm, and Atan.

      All the operators I modified now compile correctly, except for BatchNorm and Reshape.

      The current blocking point with the BatchNorm operator is that some variable names used in the .jinja template are outdated (for example, output_name → out_name, input_name → in_name, etc.). I haven’t found variable names for the following elements:

      • running_mean_name
      • running_var_name
      • bias_name
      • weight_name

      These are the only missing declarations needed to complete the full support for BatchNorm.
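
      To make the renaming concrete, here is a minimal sketch of how the forward_call template could render once those names are declared, assuming BatchNorm follows the same in_name/out_name convention as the other operators (the batchnorm_forward function name and the rendered values are hypothetical placeholders, not the actual template):

      from jinja2 import Template

      # Hypothetical sketch, not the actual template: a BatchNorm forward call
      # rendered with the renamed variables; the four extra names are the node
      # inputs listed above.
      FORWARD_CALL = Template(
          "batchnorm_forward({{ in_name }}, {{ out_name }}, {{ weight_name }}, "
          "{{ bias_name }}, {{ running_mean_name }}, {{ running_var_name }});"
      )

      print(FORWARD_CALL.render(
          in_name="bn1_input",
          out_name="bn1_output",
          weight_name="bn1_weight",
          bias_name="bn1_bias",
          running_mean_name="bn1_running_mean",
          running_var_name="bn1_running_var",
      ))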

      Regarding the Reshape operator: I was not able to confirm whether it works, because I couldn't create a config_reshape.json to generate, compile, and run the benchmark. The code is in place, but I would need a working example JSON configuration to test and validate it properly (a purely illustrative sketch follows below).

      Aside from that, all other operators are now functional.
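
      As a starting point, a purely illustrative sketch of what a config_reshape.json could contain, written as a Python snippet; all keys below are hypothetical, and the real schema should be taken from an existing operator config in the benchmark tooling:

      import json

      # Hypothetical config sketch -- the actual schema must match the other
      # config_*.json files used by the benchmark generator.
      config = {
          "operator": "Reshape",
          "input_dims": [1, 2, 3, 4],
          "output_dims": [1, 24],  # a reshape must preserve the element count
      }

      with open("config_reshape.json", "w") as f:
          json.dump(config, f, indent=2)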

    • Can you explain, for each operator, the changes you made to make them work?

      For BatchNorm, these variables are inputs of the node.

      @pineapple I'll let you check config_reshape.json

    • Here are the modifications I made:

      Operators:

      Div:

      • Improved the kernel and renamed it for easier usage
      • Added .jinja templates for configuration and forward_call
      • Declared the corresponding class in operators.py

      MatMul:

      • Fixed the kernel prototype
      • Updated the .jinja templates

      Softmax:

      • Fixed the kernel prototype
      • Updated the .jinja templates

      Sigmoid:

      • Added a new, improved kernel
      • Added .jinja templates for configuration and forward_call

      Atan:

      • Modified the output_size value

      Reshape:

      • Added the kernel and improved it
      • Added .jinja templates for configuration and forward_call
      • Declared the corresponding class in operators.py

      BatchNorm:

      • Added the .h file in the kernel directory
      • Fixed the Jinja templates for configuration and forward_call
      • Declared the corresponding class in operators.py

      Other adjustments:

      data_conversion.py:

      • Fixed the mapping for float32, which was incorrectly converted to data<-32>
      • This incorrect mapping was causing compilation errors.
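
      For illustration, a minimal sketch of the kind of dtype-to-C-type mapping involved, assuming a dictionary-based lookup (the names below are hypothetical, not the actual data_conversion.py code):

      # Hypothetical sketch of a dtype -> C type mapping; the actual
      # data_conversion.py differs. The point of the fix: "float32" must map
      # to a valid type instead of the malformed "data<-32>" string.
      DTYPE_TO_C = {
          "int8": "int8_t",
          "int16": "int16_t",
          "int32": "int32_t",
          "float32": "float",  # previously mis-mapped
      }

      def to_c_type(dtype: str) -> str:
          if dtype not in DTYPE_TO_C:
              raise ValueError(f"Unsupported dtype: {dtype}")
          return DTYPE_TO_C[dtype]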
  • Racim Boumbar added 1 commit


    • Hi, thanks for the fix.

      Since !MR9, many changes have been made, but some operators have not been updated and no longer work. Could you please integrate these changes as soon as possible to help resolve the functionality regression in the current version?

    • Hello @oantoni,

      Do you have an example script that you can send me (you can transfer it by mail)?

      @rboumbar will not be able to make those updates.

      Also, with the benchmark tools developed recently, we are going to plug an STM32 board into our test server, which will allow us to add unit tests and avoid regressions like these in the future.

      Regards, Cyril

  • changed milestone to %aidge v0.7.0

  • mentioned in issue #39 (closed)
