Add knowledge distillation (KD) loss to aidge_learning
3 files changed, +11 −0
```diff
@@ -34,6 +34,17 @@ Tensor MSE(std::shared_ptr<Tensor>& prediction,
```
Adds the Knowledge Distillation (KD) loss to Aidge for incremental learning.

- `aidge_learning/src/loss/distillation/KD.cpp`: definition of the KD loss, heavily inspired by the BCE loss (a sketch of the underlying computation is given below)
- `aidge_learning/include/aidge/loss/LossList.hpp`: adds the KD loss to the loss list
- `aidge_learning/python_binding/learning/loss/pybind_Loss.cpp`: adds the Python binding for the KD loss (see the binding sketch at the end)

Integrate this merge request into the main branch.
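For reference, the standard KD loss (Hinton et al., 2015) is the cross-entropy between temperature-softened teacher and student softmax distributions, scaled by T². Below is a minimal standalone sketch of that computation; it deliberately uses plain `std::vector` instead of the Aidge `Tensor` API, and every name in it is illustrative, not what this MR actually introduces:

```cpp
#include <algorithm>
#include <cmath>
#include <cstddef>
#include <vector>

// Illustrative only: temperature-softened softmax. The real KD.cpp operates
// on Aidge Tensors; this just shows the math being added.
std::vector<float> softmaxT(const std::vector<float>& logits, float T) {
    std::vector<float> out(logits.size());
    float maxLogit = *std::max_element(logits.begin(), logits.end());
    float sum = 0.0f;
    for (std::size_t i = 0; i < logits.size(); ++i) {
        // Subtract the max logit for numerical stability before scaling by T.
        out[i] = std::exp((logits[i] - maxLogit) / T);
        sum += out[i];
    }
    for (float& v : out) v /= sum;
    return out;
}

// KD loss for one sample: cross-entropy H(softmaxT(teacher), softmaxT(student)),
// scaled by T^2 as in Hinton et al. so gradient magnitudes stay comparable
// across temperatures.
float kdLoss(const std::vector<float>& studentLogits,
             const std::vector<float>& teacherLogits, float T) {
    const auto p = softmaxT(teacherLogits, T);  // soft targets from the teacher
    const auto q = softmaxT(studentLogits, T);  // student distribution
    float loss = 0.0f;
    for (std::size_t i = 0; i < p.size(); ++i) {
        loss -= p[i] * std::log(q[i] + 1e-12f);  // epsilon avoids log(0)
    }
    return loss * T * T;
}
```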
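And a matching sketch of how such a function could be exposed through pybind11, in the spirit of `pybind_Loss.cpp`. This binds the toy `kdLoss` above (assumed visible in the same translation unit); the module name, argument names, and default temperature are assumptions, not the actual Aidge binding:

```cpp
#include <pybind11/pybind11.h>
#include <pybind11/stl.h>  // automatic std::vector <-> Python list conversion

namespace py = pybind11;

// Hypothetical module: the real pybind_Loss.cpp registers the Tensor-based
// KD function next to the existing MSE and BCE bindings.
PYBIND11_MODULE(aidge_learning_kd_sketch, m) {
    m.def("KD", &kdLoss,
          py::arg("student_logits"),
          py::arg("teacher_logits"),
          py::arg("temperature") = 2.0f,
          "Knowledge Distillation loss (illustrative sketch).");
}
```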