
Aidge cleaner add recipes

This issue tracks the list of recipes to add to the aidge cleaner modules; ticked recipes are implemented and tested:

  • Constant/Shape folding
  • Adjust MatMul slice: a Slice placed before a MatMul can prevent folding? The exact reason is still unclear.
  • EliminateConsecutiveIdempotentOps: idempotent_ops defined by the framework {"Ceil", "Floor", "Round", "Relu", "Reshape"} (see the idempotent-chain sketch after this list)
  • Eliminate Duplicate Initializer: remove initializers that hold the same values, keeping a single copy.
  • Eliminate Identity: remove identity nodes
  • Eliminate Dropout nodes (they should be removed for inference performance)
  • Eliminate If with constant condition: inline the branch selected by the constant
  • nop_cast: Remove Cast if the target dtype is already the input dtype
  • nop_concat: Remove concat with one input
  • nop dropout: Remove dropout with a ratio of 0
  • nop expand: Remove Expand whose target shape is all 1s (the output equals the input)
  • nop flatten: Remove flatten if the input is already flat
  • nop monotone argmax: remove the operator before ArgMax if it is a monotone operator. There are two kinds of monotone operators: ones that depend on the axis (Softmax, LogSoftmax) and ones that do not (Log, Exp, Sqrt); see the legality check after this list
  • nop pad: Remove Pad whose padding amounts are all 0
  • nop reshape: Remove Reshape whose target shape equals the input shape
  • nop split: Remove Split with a single output
  • nop transpose: Remove Transpose whose permutation is the identity order
  • nop with unit (see the unit-element sketch after this list):
    • Remove Add, Or when one input tensor is 0
    • Remove Mul, And when one input tensor is 1
    • Remove Sub if the input at index 1 has value 0
    • Remove Div, Pow if the input at index 1 has value 1
  • Slice after shape: Precompute sliced shape
  • Unused initializer: Remove initializers connected to nothing (automatically done at aidge import?)
  • Fuse Add bias into Conv: a Conv followed by an Add is fused into the Conv bias
  • Fuse BatchNorm into Conv (see the folding sketch after this list)
  • Fuse consecutive concat
  • Fuse Softmax followed by Log into LogSoftmax
  • Fuse consecutive Reduce/Unsqueeze: if keepdims is false and an Unsqueeze restores the reduced axes, set keepdims to true and drop the Unsqueeze (see the keepdims equivalence check after this list)
  • Fuse consecutive Slices
  • Fuse consecutive Squeeze
  • Fuse consecutive Transpose
  • Fuse consecutive Unsqueeze
  • Fuse MatMul Add => Gemm (see the Gemm sketch after this list)
  • Fuse Gelu
  • Fuse LayerNorm
  • Fuse pad with conv
  • Fuse pad with pool
  • Fuse QKV
  • Fuse transpose into gemm: transpose the weights instead of the input (covered in the Gemm sketch after this list)
  • Convert reduce_mean to global_avg_pool (see the GlobalAveragePool check after this list)
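
A minimal idempotent-chain sketch of EliminateConsecutiveIdempotentOps, assuming a hypothetical linear chain of node objects with an `op_type` attribute (not the actual aidge graph API):

```python
IDEMPOTENT_OPS = {"Ceil", "Floor", "Round", "Relu", "Reshape"}

def eliminate_consecutive_idempotent(chain):
    """chain: nodes in execution order, each feeding the next.
    For an idempotent op f, f(f(x)) == f(x), so the second of two
    identical consecutive applications can be dropped. For Reshape
    the two target shapes must also match for this to hold."""
    result = []
    for node in chain:
        if (result
                and node.op_type in IDEMPOTENT_OPS
                and result[-1].op_type == node.op_type):
            continue  # drop the redundant second application
        result.append(node)
    return result
```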
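
For nop monotone argmax, a sketch of the legality check; operator names and the axis convention follow ONNX and are illustrative:

```python
# Strictly increasing element-wise ops preserve the argmax along any axis.
MONOTONE_ANY_AXIS = {"Log", "Exp", "Sqrt"}
# Softmax-like ops are order-preserving only within their own
# normalization axis, so the ArgMax must run along that same axis.
MONOTONE_ON_AXIS = {"Softmax", "LogSoftmax"}

def removable_before_argmax(op_type, op_axis, argmax_axis):
    if op_type in MONOTONE_ANY_AXIS:
        return True
    if op_type in MONOTONE_ON_AXIS:
        return op_axis == argmax_axis
    return False
```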
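
For nop with unit, a unit-element sketch of the value test, assuming the constant operand has already been resolved to a numpy array; the rewrite is only valid when the constant also broadcasts to the other input's shape without changing it:

```python
import numpy as np

def is_nop_with_unit(op_type, const_value, const_idx):
    """const_idx is the input slot holding the constant operand."""
    # Commutative ops: the unit element can sit on either input.
    if op_type in ("Add", "Or") and np.all(const_value == 0):
        return True
    if op_type in ("Mul", "And") and np.all(const_value == 1):
        return True
    # Non-commutative ops: only the second operand may be the unit.
    if op_type == "Sub" and const_idx == 1 and np.all(const_value == 0):
        return True
    if op_type in ("Div", "Pow") and const_idx == 1 and np.all(const_value == 1):
        return True
    return False
```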
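
For the BatchNorm-into-Conv fusion, the folded parameters can be computed directly; a numpy folding sketch of the arithmetic, assuming OIHW weights and per-output-channel BN parameters:

```python
import numpy as np

def fold_bn_into_conv(W, b, gamma, beta, mean, var, eps=1e-5):
    """BN(Conv(x)) == Conv'(x) with rescaled weights and bias.
    W: (out_ch, in_ch, kh, kw); b, gamma, beta, mean, var: (out_ch,)."""
    scale = gamma / np.sqrt(var + eps)          # per-output-channel scale
    W_folded = W * scale[:, None, None, None]   # rescale each filter
    b_folded = (b - mean) * scale + beta
    return W_folded, b_folded
```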
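
The Reduce/Unsqueeze fusion rests on a simple identity: reducing with keepdims=false and then unsqueezing the reduced axes equals reducing with keepdims=true. A numpy keepdims equivalence check:

```python
import numpy as np

x = np.random.rand(2, 3, 4)
axes = (1,)

# Reduce (keepdims=false) followed by Unsqueeze on the reduced axes...
separate = np.expand_dims(x.mean(axis=axes, keepdims=False), axis=axes)
# ...equals a single Reduce with keepdims=true.
fused = x.mean(axis=axes, keepdims=True)
assert np.array_equal(separate, fused)
```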
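
MatMul followed by Add maps onto a single Gemm (alpha * A' @ B' + beta * C in the ONNX definition), and the transpose fusion uses Gemm's transB attribute instead of an explicit Transpose node; a numpy Gemm sketch checking both:

```python
import numpy as np

def gemm(A, B, C, alpha=1.0, beta=1.0, transB=False):
    """Gemm as defined by ONNX: alpha * A @ B' + beta * C."""
    if transB:
        B = B.T
    return alpha * (A @ B) + beta * C

A = np.random.rand(2, 3)
W = np.random.rand(3, 4)
bias = np.random.rand(4)

# MatMul followed by Add collapses into one Gemm with alpha = beta = 1.
assert np.allclose(A @ W + bias, gemm(A, W, bias))

# Transpose-into-Gemm: store the weights transposed and set transB,
# instead of keeping a separate Transpose node in the graph.
assert np.allclose(A @ W + bias, gemm(A, W.T, bias, transB=True))
```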
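
Finally, ReduceMean over the spatial axes with keepdims is exactly a GlobalAveragePool on an NCHW feature map; a numpy GlobalAveragePool check:

```python
import numpy as np

x = np.random.rand(1, 8, 5, 5)                 # NCHW feature map

# ReduceMean over H and W with keepdims...
reduced = x.mean(axis=(2, 3), keepdims=True)

# ...has GlobalAveragePool's (N, C, 1, 1) output shape and values.
assert reduced.shape == (1, 8, 1, 1)
assert np.allclose(reduced[0, :, 0, 0], x[0].reshape(8, -1).mean(axis=1))
```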