
Integration of I-ViT operators

Context

Implementation of the three I-ViT operators ShiftMax, ShiftGELU and ILayerNorm, which are integer-only, shift-based arithmetic approximations of SoftMax, GELU and LayerNorm. Both the forward and backward operators are implemented, so that networks containing these layers (notably Transformers) can undergo quantization-aware training. Unit tests illustrate how these operators work and measure the deviation between the approximations and the original operators.
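For intuition, here is a minimal host-side sketch of the core trick behind ShiftMax (and reused by ShiftGELU): following the I-ViT paper, exp(x) is rewritten as 2^(x * log2(e)), the constant log2(e) ~ 1.4375 = 1 + 1/2 - 1/16 is applied with bit-shifts, and the power of two is split into an integer shift and a linearly interpolated fractional part. The names, the Q16 fixed-point format and the final floating-point division are illustrative assumptions, not the actual kernel code (which stays in integer arithmetic end to end):

```cpp
#include <algorithm>
#include <cstdint>
#include <vector>

// Illustrative Q16 fixed point: 1.0 is represented as 1 << FRAC.
constexpr int FRAC = 16;

// Shift-based exponential for non-positive fixed-point x:
// exp(x) = 2^(x * log2(e)), with log2(e) ~ 1.4375 = 1 + 1/2 - 1/16.
int64_t shift_exp(int64_t x) {
    int64_t p = x + (x >> 1) - (x >> 4);      // p ~ x * log2(e), still <= 0
    int64_t q = (-p) >> FRAC;                 // integer part: divide by 2^q later
    int64_t r = p + (q << FRAC);              // fractional residue in (-1, 0]
    int64_t two_pow_r = (int64_t{1} << FRAC) + (r >> 1); // 2^r ~ 1 + r/2
    return two_pow_r >> q;                    // 2^r / 2^q
}

// ShiftMax over one row: exp is replaced by shift_exp after the usual
// max-subtraction. The float division is for readability only.
std::vector<double> shift_max(const std::vector<int64_t>& x) {
    const int64_t m = *std::max_element(x.begin(), x.end());
    std::vector<int64_t> e(x.size());
    int64_t sum = 0;
    for (std::size_t i = 0; i < x.size(); ++i) {
        e[i] = shift_exp(x[i] - m);
        sum += e[i];
    }
    std::vector<double> out(x.size());
    for (std::size_t i = 0; i < x.size(); ++i)
        out[i] = static_cast<double>(e[i]) / static_cast<double>(sum);
    return out;
}
```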

Modified files

  • include/aidge/backend/cuda.hpp: Import of the ShiftMax, ShiftGELU and ILayerNorm operator headers (see the sketch below);
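The change presumably amounts to three new includes, with paths taken from the Added files list below (their exact placement within cuda.hpp is an assumption):

```cpp
#include "aidge/backend/cuda/operator/ILayerNormImpl.hpp"
#include "aidge/backend/cuda/operator/ShiftGELUImpl.hpp"
#include "aidge/backend/cuda/operator/ShiftMaxImpl.hpp"
```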

Added files

  • src/operator/ShiftMaxImpl.cpp and include/aidge/backend/cuda/operator/ShiftMaxImpl.hpp: Definition of the forward and backward passes of the ShiftMax operator (closely modelled on the ReLU implementation);

  • src/operator/ShiftMaxImpl_CUDA_kernels.cu and include/aidge/backend/cuda/operator/ShiftMaxImpl_CUDA_kernels.hpp: Implementation of the CUDA kernels that compute the ShiftMax operation; the CUDA code follows the original Python implementation of I-ViT (the underlying shift-based exponential is sketched in the Context section above);

  • unit_tests/Test_ShiftMaxImpl.cpp: Unit test that illustrates how the forward and backward functions of ShiftMax work and compares the result with SoftMax (a standalone comparison in the same spirit is sketched after this list);

  • src/operator/ShiftGELUImpl.cpp and include/aidge/backend/cuda/operator/ShiftGELUImpl.hpp: Definition of the forward and backward passes of the ShiftGELU operator (closely modelled on the ReLU implementation);

  • src/operator/ShiftGELUImpl_CUDA_kernels.cu and include/aidge/backend/cuda/operator/ShiftGELUImpl_CUDA_kernels.hpp: Implementation of the CUDA kernels that compute the ShiftGELU operation; the CUDA code follows the original Python implementation of I-ViT (see the ShiftGELU sketch after this list);

  • unit_tests/Test_ShiftGELUImpl.cpp: Unit test that illustrates how the forward and backward functions of ShiftGELU work and compares the result with GELU;

  • src/operator/ILayerNormImpl.cpp and include/aidge/backend/cuda/operator/ILayerNormImpl.hpp: Definition of the forward and backward passes of the ILayerNorm operator (closely modelled on the FC implementation);

  • src/operator/ILayerNormImpl_CUDA_kernels.cu and include/aidge/backend/cuda/operator/ILayerNormImpl_CUDA_kernels.hpp: Implementation of the CUDA kernels that compute the ILayerNorm operation; the CUDA code follows the original Python implementation of I-ViT (see the ILayerNorm sketch after this list);

  • unit_tests/Test_IlayerNormImpl.cpp: Unit test that illustrates how the forward and backward functions of ILayerNorm work.
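As referenced in the ShiftGELU items above, a minimal sketch of the idea: following I-ViT, GELU(x) is approximated by x * sigmoid(1.702 * x), the constant 1.702 ~ 1.6875 = 1 + 1/2 + 1/8 + 1/16 is applied with shifts, and the sigmoid is evaluated with the same shift-based exponential as ShiftMax. shift_exp and FRAC are assumed to come from the ShiftMax sketch in the Context section; the float division is again only for readability:

```cpp
#include <cstdint>

constexpr int FRAC = 16;        // Q16 fixed point, as in the ShiftMax sketch
int64_t shift_exp(int64_t x);   // shift-based exponential from the ShiftMax sketch

// ShiftGELU on one fixed-point value: x * sigmoid(1.702 * x).
int64_t shift_gelu(int64_t x) {
    // 1.702 * x ~ x + x/2 + x/8 + x/16 = 1.6875 * x, shifts and adds only
    int64_t z = x + (x >> 1) + (x >> 3) + (x >> 4);
    // sigmoid(z) = e^z / (e^z + 1); subtract max(z, 0) so that both
    // exponents are non-positive, as shift_exp requires
    int64_t m  = z > 0 ? z : 0;
    int64_t en = shift_exp(z - m);   // e^(z - m)
    int64_t ed = shift_exp(0 - m);   // e^(0 - m)
    double sig = static_cast<double>(en) / static_cast<double>(en + ed);
    return static_cast<int64_t>(static_cast<double>(x) * sig);
}
```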
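For the ILayerNorm items, the distinctive piece is the integer-only standard deviation: I-ViT replaces the floating-point square root with a few integer Newton iterations. The learned weight/bias step of the real operator is omitted here, and all names and the Q16 output format are illustrative:

```cpp
#include <cstdint>
#include <vector>

constexpr int FRAC = 16; // Q16 output format, as in the other sketches

// Integer Newton iteration for floor(sqrt(n)): x <- (x + n / x) / 2.
int64_t int_sqrt(int64_t n) {
    if (n < 2) return n;
    int64_t x = n; // any start >= sqrt(n) converges monotonically
    while (true) {
        int64_t next = (x + n / x) >> 1;
        if (next >= x) return x; // integer fixpoint reached
        x = next;
    }
}

// LayerNorm on one row of integers: (x - mean) / std in Q16,
// without the learned weight and bias of the full operator.
std::vector<int64_t> ilayernorm_row(const std::vector<int64_t>& x) {
    const int64_t n = static_cast<int64_t>(x.size());
    int64_t sum = 0;
    for (int64_t v : x) sum += v;
    const int64_t mean = sum / n;
    int64_t var = 0;
    for (int64_t v : x) var += (v - mean) * (v - mean);
    var /= n;
    const int64_t stddev = int_sqrt(var);
    std::vector<int64_t> out(x.size());
    for (std::size_t i = 0; i < x.size(); ++i)
        out[i] = stddev ? ((x[i] - mean) << FRAC) / stddev : 0;
    return out;
}
```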
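Finally, in the spirit of the ShiftMax unit test listed above (the actual test uses the Aidge tensor API; this standalone check is an assumption, to be linked together with the ShiftMax sketch), the deviation between ShiftMax and a float SoftMax can be measured like so:

```cpp
#include <algorithm>
#include <cmath>
#include <cstdint>
#include <cstdio>
#include <vector>

// shift_max from the ShiftMax sketch in the Context section
std::vector<double> shift_max(const std::vector<int64_t>& x);

int main() {
    // A toy row of logits, quantized to Q16 fixed point
    const std::vector<double> logits = {-1.0, 0.0, 0.5, 2.0};
    std::vector<int64_t> q(logits.size());
    for (std::size_t i = 0; i < logits.size(); ++i)
        q[i] = static_cast<int64_t>(std::lround(logits[i] * (1 << 16)));

    // Reference float softmax
    double denom = 0.0;
    std::vector<double> ref(logits.size());
    for (double v : logits) denom += std::exp(v);
    for (std::size_t i = 0; i < logits.size(); ++i)
        ref[i] = std::exp(logits[i]) / denom;

    // Maximum absolute deviation between the two implementations
    const std::vector<double> approx = shift_max(q);
    double max_err = 0.0;
    for (std::size_t i = 0; i < ref.size(); ++i)
        max_err = std::max(max_err, std::abs(approx[i] - ref[i]));
    std::printf("max |ShiftMax - SoftMax| = %f\n", max_err);
    return 0;
}
```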

TODO

Merge into main if the pipeline is successful

