Draft: Add QAT
Implement QAT in Aidge. This MR aims to reach iso-functionality with N2D2. Beyond simple iso-functionality, it includes extensive code cleanup, factorization and optimization compared to N2D2, and makes heavy use of graph matching. The eventual goal is to make QAT development in Aidge simpler and more powerful than ever and, most importantly, to provide direct bit-accurate true quantization with optimized scaling strategies for hardware designers.
- Enable CUDA build;
- Implement QAT operators:
  - Base LSQ operators;
  - Base SAT operators;
  - LSQ CPU implementations;
  - SAT CPU implementations (for half, float and double precision);
  - LSQ CUDA implementations;
  - SAT CUDA implementations (for half, float and double precision).
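As a reference point for reviewers, the LSQ forward pass boils down to scale, round, clip and rescale with a learned step size (per Esser et al.'s LSQ formulation). This is a minimal NumPy sketch of that math only, not the actual Aidge operator API; the function name and signature are hypothetical.

```python
import numpy as np

def lsq_quantize(x, step_size, n_bits=8, signed=True):
    """Forward pass of LSQ-style fake quantization (sketch, not Aidge code).

    q = clip(round(x / s), qmin, qmax) * s, where s is the learned step size.
    The backward pass (not shown) passes gradients straight through the
    rounding and also learns s via a scaled straight-through estimator.
    """
    if signed:
        qmin, qmax = -(2 ** (n_bits - 1)), 2 ** (n_bits - 1) - 1
    else:
        qmin, qmax = 0, 2 ** n_bits - 1
    # Scale to integer grid, round to nearest level, clip to the range.
    v = np.clip(np.round(x / step_size), qmin, qmax)
    # Rescale back to the input domain (fake quantization).
    return v * step_size
```

Values inside the range snap to the nearest multiple of the step size; values outside saturate at `qmax * step_size` (or `qmin * step_size`), which is what the clipping behavior of the real operators must reproduce.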
- Implement LSQ graph transformation recipes for QAT;
- Implement SAT graph transformation recipes for QAT;
- Implement LSQ QAT training recipe examples ➡ Requires working CUDA training first;
- Implement SAT QAT training recipe examples ➡ Requires working CUDA training first;
- Implement LSQ graph transformation recipes for export 🛑 Did not exist in N2D2;
- Implement SAT graph transformation recipes for export 🚧 WIP:
  - Scaling Operator adaptation;
  - Clipping Operator.
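For context on the export-side scaling work: integer-only hardware cannot apply a floating-point rescaling factor directly, so a common strategy (used e.g. in gemmlowp-style pipelines) is to approximate it as an integer multiplier plus a right shift. The sketch below illustrates that idea only; the function names are hypothetical and not part of the Aidge API.

```python
def float_to_fixed_point_scale(scale, mult_bits=16):
    """Approximate a positive float scale as (mult, shift) such that
    scale ~= mult / 2**shift, with mult fitting in mult_bits signed bits.
    """
    assert scale > 0
    shift = 0
    # Grow the shift while the multiplier still fits, to maximize precision.
    while round(scale * (1 << (shift + 1))) < (1 << (mult_bits - 1)) and shift < 31:
        shift += 1
    return round(scale * (1 << shift)), shift

def rescale(acc, mult, shift):
    """Integer-only rescaling of an accumulator, with round-to-nearest."""
    return (acc * mult + (1 << (shift - 1))) >> shift
```

For example, a rescaling factor of 1/256 applied to an accumulator of 25600 yields 100 using only integer multiply, add and shift, which is the kind of bit-accurate behavior the export recipes need to reproduce.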
Tests and validation:
- Pre-req: CUDA MNIST training on LeNet;
- Validation on LeNet;
- Pre-req: CUDA ImageNet training on MobileNetv1;
- Validation on MobileNetv1;
- Pre-req: CUDA ImageNet training on MobileNetv2;
- Validation on MobileNetv2;
- Pre-req: CUDA ImageNet training on ResNet18;
- Validation on ResNet18.
Edited by Olivier BICHLER