Aidge cleaner add recipes
This issue tracks the list of recipes to add to the aidge cleaner modules; ticked recipes are implemented and tested:
- Constant/Shape folding
- Adjust add: TensorRT has issues fusing MatMul + Add depending on the operand order
- Adjust matmul slice: if Slice comes before MatMul it can prevent folding? Not sure why
- EliminateConsecutiveIdempotentOps: idempotent_ops defined by the framework {"Ceil", "Floor", "Round", "Relu", "Reshape"} (see the idempotent-ops sketch after this list)
- RemoveDeadEnd: remove ops that are not tagged as outputs but sit at a leaf of the graph
- Eliminate duplicate initializer: remove initializers holding the same values
- Eliminate Identity: remove Identity nodes
- Eliminate Dropout nodes (should be removed for inference perf)
- Eliminate If with constant condition
- eliminate_nop_cast: remove Cast if the target dtype is already the input dtype
- nop concat: remove Concat with a single input
- nop dropout: remove Dropout with a ratio of 0
- nop expand: remove Expand with shape = 1
- nop flatten: remove Flatten if the input is already flat
- nop monotone argmax: remove the operator before ArgMax if it is monotone. There are two kinds of monotone operators: ones that depend on the axis (Softmax, LogSoftmax) and ones that don't (Log, Exp, Sqrt) (see the ArgMax sketch after this list)
- nop pad: remove Pad whose pads are all 0
- nop reshape: remove Reshape whose target shape equals the input shape
- nop split: remove Split with a single output
- nop transpose: remove Transpose whose permutation is the identity order
- nop cast: remove Cast when input.dtype = to()
- nop with unit (see the identity-element sketch after this list):
  - Remove Add, Or with an input tensor of 0
  - Remove Mul, And with an input tensor of 1
  - Remove Sub if input idx = 1 with value = 0
  - Remove Div, Pow if input idx = 1 with value = 1
  - Remove Concat if one of the inputs is empty
- Shape gather: replace Shape and Gather by a constant (condition for their constant folding; see the shape-folding sketch after this list)
- shape op: replace the Shape operator by a constant holding the shape (condition for their graph constant folding...)
- Slice after shape: precompute the sliced shape
- Unused initializer: remove initializers connected to nothing (automatically done at aidge import?)
- Constant to initializer: transform Constant nodes into initializers (is there really a gain?)
- Fuse Add bias into Conv: a Conv followed by Add is fused into the Conv bias
- Fuse BN into Conv
- Fuse concat reshape: precompute the Reshape (constant folding)
- Fuse consecutive Concat
- Fuse Log -> Softmax into LogSoftmax
- Fuse consecutive Reduce + Unsqueeze: if keepdims is false and the result is followed by an Unsqueeze, set keepdims to true (see the Reduce/Unsqueeze sketch after this list)
- Fuse consecutive Slice
- Fuse consecutive Squeeze
- Fuse consecutive Transpose
- Fuse consecutive Unsqueeze
- Fuse consecutive Reshape
- Fuse MatMul + Add => Gemm (see the Gemm sketch after this list)
- Fuse Gelu
- Fuse LayerNorm
- Fuse Pad with Conv
- Fuse Pad with Pool
- Fuse QKV
- Fuse transpose into Gemm: transpose the weights instead of the input
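A few minimal Python sketches of the rewrite conditions follow. They use deliberately simplified, hypothetical graph representations (plain lists, tuples and NumPy arrays), not the Aidge API, and only illustrate the matching rule each recipe relies on.

EliminateConsecutiveIdempotentOps: since f(f(x)) == f(x) for the listed operators, the second of two identical consecutive nodes is redundant.

```python
IDEMPOTENT_OPS = {"Ceil", "Floor", "Round", "Relu", "Reshape"}

def eliminate_consecutive_idempotent_ops(op_types):
    """Drop the second of two identical consecutive idempotent ops.

    The graph is modelled here as a plain list of operator type names;
    in a real pass the nodes' attributes (e.g. the Reshape target shape)
    would also have to match.
    """
    cleaned = []
    for op in op_types:
        if cleaned and op in IDEMPOTENT_OPS and cleaned[-1] == op:
            continue  # applying the same idempotent op twice changes nothing
        cleaned.append(op)
    return cleaned

# The duplicated Relu and Floor are removed, everything else is kept.
assert eliminate_consecutive_idempotent_ops(
    ["Conv", "Relu", "Relu", "Floor", "Floor", "Add"]
) == ["Conv", "Relu", "Floor", "Add"]
```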
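nop monotone argmax: a strictly increasing operator does not change which index holds the maximum, so ArgMax(f(x)) == ArgMax(x) and f can be dropped when ArgMax is its only consumer. A sketch of the condition, again on hypothetical arguments rather than Aidge nodes:

```python
AXIS_FREE_MONOTONE = {"Log", "Exp", "Sqrt"}        # monotone element-wise
AXIS_BOUND_MONOTONE = {"Softmax", "LogSoftmax"}    # preserve order along their axis only

def can_drop_before_argmax(op_type, op_axis, argmax_axis):
    """True when the operator feeding ArgMax cannot change the ArgMax result."""
    if op_type in AXIS_FREE_MONOTONE:
        return True
    if op_type in AXIS_BOUND_MONOTONE:
        # Softmax/LogSoftmax keep the ordering only along the axis they normalise.
        return op_axis == argmax_axis
    return False

assert can_drop_before_argmax("Exp", None, 1)
assert can_drop_before_argmax("Softmax", 1, 1)
assert not can_drop_before_argmax("Softmax", 0, 1)
```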
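nop with unit: the node applies the identity element of its operation, so it can be replaced by its non-constant input. A sketch of the value check (the Concat-with-empty-input case is omitted); the `(op_type, const_value, const_index)` signature is hypothetical:

```python
import numpy as np

def is_nop_with_unit(op_type, const_value, const_index):
    """True when one operand is the identity element of the operation,
    assuming broadcasting does not change the output shape."""
    c = np.asarray(const_value)
    if op_type in ("Add", "Or"):
        return bool(np.all(c == 0))                        # x + 0 == x, x or false == x
    if op_type in ("Mul", "And"):
        return bool(np.all(c == 1))                        # x * 1 == x, x and true == x
    if op_type == "Sub":
        return const_index == 1 and bool(np.all(c == 0))   # x - 0 == x, but not 0 - x
    if op_type in ("Div", "Pow"):
        return const_index == 1 and bool(np.all(c == 1))   # x / 1 == x, x ** 1 == x
    return False

assert is_nop_with_unit("Add", np.zeros((1, 8)), 0)
assert is_nop_with_unit("Sub", 0.0, 1) and not is_nop_with_unit("Sub", 0.0, 0)
assert is_nop_with_unit("Pow", 1.0, 1)
```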
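Shape gather / shape op: once the producer's output shape is statically known, Shape is just an int64 constant, and a Gather indexing into it is a smaller constant, which is what makes the downstream graph constant-foldable. Sketch with plain NumPy:

```python
import numpy as np

def fold_shape(static_shape):
    """Shape(x) -> constant int64 tensor holding x's dimensions."""
    return np.asarray(static_shape, dtype=np.int64)

def fold_shape_gather(static_shape, gather_indices):
    """Shape(x) followed by Gather(indices) -> constant with the selected dims."""
    return fold_shape(static_shape)[np.asarray(gather_indices)]

# For a (1, 3, 224, 224) input, Shape folds to [1, 3, 224, 224] and
# Gather(indices=[2, 3]) folds to the constant [224, 224].
assert fold_shape_gather((1, 3, 224, 224), [2, 3]).tolist() == [224, 224]
```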
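Fuse consecutive Reduce + Unsqueeze: when the Unsqueeze reinserts exactly the axes that the reduction removed, the pair is equivalent to the same Reduce with keepdims = true. Sketch of the matching condition (hypothetical arguments, not Aidge nodes):

```python
def can_fuse_reduce_unsqueeze(reduce_keepdims, reduce_axes, unsqueeze_axes):
    """True when Reduce(keepdims=0) + Unsqueeze can become Reduce(keepdims=1)."""
    return not reduce_keepdims and sorted(reduce_axes) == sorted(unsqueeze_axes)

# ReduceMean over axis 2 with keepdims=0, then Unsqueeze on axis 2: fuse.
assert can_fuse_reduce_unsqueeze(False, [2], [2])
# The Unsqueeze does not undo the squeeze: keep both nodes.
assert not can_fuse_reduce_unsqueeze(False, [2], [1])
```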
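Fuse MatMul + Add => Gemm: the pair computes exactly what a single Gemm computes with alpha = beta = 1 (given a 2D MatMul and a broadcastable bias). A NumPy equivalence check, not an Aidge recipe:

```python
import numpy as np

def matmul_add(a, b, c):
    return a @ b + c                      # two nodes: MatMul then Add

def gemm(a, b, c, alpha=1.0, beta=1.0):
    return alpha * (a @ b) + beta * c     # single fused Gemm node

rng = np.random.default_rng(0)
a = rng.standard_normal((4, 8))
b = rng.standard_normal((8, 16))
c = rng.standard_normal((16,))            # bias broadcast over the rows
assert np.allclose(matmul_add(a, b, c), gemm(a, b, c))
```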