Aidge Quantization Module
This folder contains the library that implements the quantization algorithms. For the moment, only Post-Training Quantization (PTQ) is available. Its implementation supports multi-branch architectures.
The requirements for installing the library are the following:
- GCC, Make and CMake for the compilation pipeline
- The AIDGE modules aidge_core, aidge_onnx and aidge_backend_cpu
- Python (> 3.7) if you intend to use the pybind wrapper
Pip installation
In an environment that satisfies the previous requirements, run:
```bash
pip install . -v
```
User guide
In order to perform a quantization, you will need an AIDGE model (which can be loaded from an ONNX file). You will then have to provide a calibration dataset consisting of AIDGE tensors (which can be created from NumPy arrays). Finally, you will have to specify the number of quantization bits.
Performing the PTQ on your model is then a one-liner:
```python
aidge_quantization.quantize_network(aidge_model, nb_of_bits, calibration_set)
```
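For context, here is a minimal end-to-end sketch. It assumes the usual aidge_onnx.load_onnx loader, that aidge_core.Tensor can wrap a NumPy array, and an arbitrary input shape; adapt these to your own model and data.

```python
import numpy as np
import aidge_core
import aidge_onnx
import aidge_quantization

# Load the model to quantize (assuming the standard aidge_onnx loader).
aidge_model = aidge_onnx.load_onnx("model.onnx")

# Build the calibration set from NumPy arrays (random data here for
# illustration only; use representative input samples in practice).
calibration_set = [
    aidge_core.Tensor(np.random.rand(1, 3, 224, 224).astype(np.float32))
    for _ in range(100)
]

# Run the PTQ over 8 bits.
nb_of_bits = 8
aidge_quantization.quantize_network(aidge_model, nb_of_bits, calibration_set)
```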
Technical insights
The PTQ algorithm consists of three main steps:
- Normalization of the parameters, so that each node's weights fit in the [-1, 1] range.
- Normalization of the activations, so that each node's output values fit in the [-1, 1] range.
- Quantization of the scaling nodes inserted during the previous steps.
To achieve those steps, the scaling factors must be propagated through the network, and the different branches must be balanced where they merge. Particular care is needed when rescaling the biases at each step.
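To illustrate the parameter normalization step, here is a hypothetical, standalone NumPy sketch of the max-abs rescaling idea (not the library's internal code): the weights are divided by their largest absolute value, and the resulting scaling factor must then be propagated downstream (and folded into the biases) so that the network output is unchanged.

```python
import numpy as np

def normalize_weights(weights):
    """Rescale weights into [-1, 1]; return them with the scaling factor."""
    scaling = np.max(np.abs(weights))
    return weights / scaling, scaling

W = np.array([[0.5, -2.0], [1.0, 4.0]])
W_norm, s = normalize_weights(W)
print(W_norm)  # values now lie in [-1, 1]
print(s)       # 4.0, to be propagated to downstream nodes and biases
```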
Doing quantization step by step
It is possible to perform the PTQ step by step, thanks to the functions exposed by the API. In that case, the standard pipeline is the following (a code sketch follows the list):
- remove the Flatten and Dropout nodes
- expand the meta-operators (if any)
- insert the scaling nodes
- perform the parameter normalization
- perform the output value normalization over a calibration dataset
- quantize the normalized network
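The snippet below chains these steps in order. The function names are illustrative placeholders mirroring the steps above, not confirmed API; check the aidge_quantization bindings for the exact names and signatures.

```python
import aidge_quantization

# Illustrative step-by-step pipeline; the function names below are
# placeholders mirroring the steps above, not the confirmed API.
aidge_quantization.remove_flatten(aidge_model)        # drop Flatten/Dropout nodes
aidge_quantization.expand_metaops(aidge_model)        # expand meta-operators, if any
aidge_quantization.insert_scaling_nodes(aidge_model)  # insert the scaling nodes
aidge_quantization.normalize_parameters(aidge_model)  # parameter normalization
aidge_quantization.normalize_activations(aidge_model, calibration_set)
aidge_quantization.quantize_normalized_network(aidge_model, nb_of_bits)
```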
Further work
- add smart clipping methods for the normalizations.
- add Quantization Aware Training (QAT).