Require ScalingMode::FIXED_MULT16 mode (MUL INT16 + BitShift)
What commit version of aidge do you use
- aidge_x: all aidge modules on the dev branch, updated yesterday
Problem description
I am working on an integer 8-bit (int8) export, in which no floating-point operation can be performed. In the metaop defined below, I am concerned with the Mul node that follows the Add operation. The ScalingMode::FIXED_MULT16 mode is required for this node, but its scaling is stored as a float value instead. The Mul + Quantizer nodes should be replaced by a Mul (INT16) + BitShift pair to be compliant with an int8 export; a sketch of the expected integer-only computation follows, and the metaop definition comes right after it.
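For clarity, here is a minimal sketch (plain Python, hypothetical helper names) of what FIXED_MULT16 boils down to: the float scale is decomposed into an int16 multiplier and a right bitshift, so the rescaling runs with integer arithmetic only.

def to_fixed_mult16(scale: float, shift: int = 15):
    # Decompose a float scale into an int16 multiplier plus a right shift,
    # so that x * scale ~= (x * mult) >> shift with integers only.
    mult = round(scale * (1 << shift))
    assert 0 < mult < (1 << 15), "scale not representable with this shift"
    return mult, shift

def rescale_int(x: int, mult: int, shift: int) -> int:
    # Mul INT16 followed by BitShift: the pattern expected in the export
    return (x * mult) >> shift

mult, shift = to_fixed_mult16(0.0478)  # hypothetical per-tensor scale
print(rescale_int(1000, mult, shift))  # 47, vs. 1000 * 0.0478 = 47.8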
import aidge_core

# Fuse the Add->Mul->Quantizer(->ReLU) pattern into a single MetaAdd metaop
aidge_core.fuse_to_metaops(model, "Add->Mul->Quantizer->ReLU?", "MetaAdd")
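To show the issue, I list the operator types inside each fused MetaAdd. This is a rough sketch: I assume the aidge_core bindings expose get_nodes(), type() and get_micro_graph() here, but the exact names may differ on dev.

# With a correct int8 lowering I would expect Mul (INT16) + BitShift
# inside the micro-graph, instead of a float Mul + Quantizer pair.
for node in model.get_nodes():
    if node.type() == "MetaAdd":
        micro_graph = node.get_operator().get_micro_graph()
        print([n.type() for n in micro_graph.get_nodes()])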
Reproducible example code
You can find attached a MobileNetV2 model as an example to reproduce this: model.onnx
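For reference, the steps I run on it, as a sketch (aidge_onnx.load_onnx is assumed to be the loader; the file name matches the attachment):

import aidge_core
import aidge_onnx

model = aidge_onnx.load_onnx("model.onnx")  # the attached MobileNetV2
aidge_core.fuse_to_metaops(model, "Add->Mul->Quantizer->ReLU?", "MetaAdd")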