Quantization severely degrades performance on an acoustic classifier
What commit version of aidge do you use
- aidge_core: 0.7.0
- aidge_quantization: 0.4.2
Problem description
I am trying to quantize my model, and I have produced a concise script adapted to my use case, with a small test set.
Quantization severely degrades the performance of my model, and I don't know how to debug it.
You can easily reproduce it with:
Degraded performance
So I first evaluate the base model on the small test_dataset (9938 samples), and then I evaluate the quantized model on the same dataset.
What I print is the class_accuracy, i.e. the accuracy per class for the 4 output classes of the model (the classifier head has 4 neurons).
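For reference, here is what I mean by class_accuracy (an illustrative sketch only, not my exact code; it assumes the predictions and ground-truth labels are plain integer numpy arrays):

```python
import numpy as np

def class_accuracy(predictions, labels, nb_classes=4):
    """Per-class accuracy: for each class, the fraction of its samples
    that are predicted correctly."""
    accs = []
    for c in range(nb_classes):
        mask = (labels == c)
        if mask.sum() == 0:
            accs.append(float("nan"))  # class absent from the test set
        else:
            accs.append(float((predictions[mask] == c).mean()))
    return accs

# predictions = np.argmax(model_outputs, axis=1)  # shape (N,), argmax over the 4 logits
# labels      = test_labels                       # shape (N,)
# print(class_accuracy(predictions, labels))
```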
Here is what I get:
# Base model class accuracy
[0.5577956989247311, 0.7319840539711745, 0.721298495645289, 0.9957142857142857]
# Quantized model class accuracy:
[0.8870967741935484, 0.31002759889604414, 0.0, 0.0]
We see that the results for the quantized model are severely degraded: class_acc is 0.31 for class 1 and 0.0 for classes 2 and 3.
How can I debug this behavior?
Quantization setup details
I've tried changing the values of NB_CALIB and NB_BITS, and using the float32 target type, but the results are still bad.
# Global variables
USE_CUDA = False
BACKEND = "cpu"
## PTQ Variables
NB_CALIB = 500
NB_BITS = 16
TARGET_TYPE = aidge_core.dtype.int32 # or aidge_core.dtype.float32
CLIPPING = aidge_quantization.Clipping.MSE # 'MAX'
NO_QUANT = False
OPTIM_SIGNS = True
FOLD_GRAPH = True
SINGLE_SHIFT = False
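For context, my script passes these variables to the PTQ entry point roughly like this (a sketch following the aidge PTQ tutorial; `aidge_model` and `calibration_tensors` are placeholders for my imported model and calibration inputs, and the exact keyword names may differ slightly between aidge_quantization versions, so check the signature of `quantize_network` in 0.4.2):

```python
# Sketch of the PTQ call (placeholder names, tutorial-style keywords).
aidge_quantization.quantize_network(
    network         = aidge_model,                    # the float GraphView to quantize
    nb_bits         = NB_BITS,
    calibration_set = calibration_tensors[:NB_CALIB], # list of aidge_core.Tensor
    target_type     = TARGET_TYPE,
    clipping_mode   = CLIPPING,
    no_quant        = NO_QUANT,
    optimize_signs  = OPTIM_SIGNS,
    single_shift    = SINGLE_SHIFT,
    use_cuda        = USE_CUDA,
    fold_graph      = FOLD_GRAPH,
)
```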
Model details
Visualize the model with mermaid
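The mermaid file is generated from the GraphView; a minimal sketch, assuming the imported model is held in a GraphView named `model`:

```python
# GraphView.save() writes a mermaid (.mmd) file describing the graph topology,
# which can then be rendered with any mermaid viewer.
model.save("acoustic_classifier")
```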
Native operators: 42 (6 types)
- Conv1D: 9
- FC: 1
- Producer: 21
- ReLU: 9
- Reshape: 1
- Softmax: 1
Generic operators: 0 (0 types)
Native types coverage: 100.0% (6/6)
Native operators coverage: 100.0% (42/42)