Gradient compilation
The recipe function compile_gradient()
was removed in commit ecc96977 by @olivierbichler.
It was removed because initGrad()
had also been removed: the code was changed to create the gradient Tensor with lazy initialization, in order to fix aidge_backend_cuda#15 (closed)
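The lazy-initialization approach can be sketched as follows. This is a minimal illustration, not the actual Aidge API: the class, member names, and `grad()`/`hasGrad()` methods are hypothetical. The idea is that the gradient tensor is allocated on first access instead of being created eagerly by an `initGrad()`/`compile_gradient()` pass:

```cpp
#include <cstddef>
#include <memory>
#include <vector>

// Hypothetical sketch: the gradient tensor is only allocated the first
// time grad() is called, rather than eagerly during graph compilation.
class Tensor {
public:
    explicit Tensor(std::size_t size) : mData(size, 0.0f) {}

    std::size_t size() const { return mData.size(); }

    // Lazy initialization: allocate the gradient on first access.
    std::shared_ptr<Tensor> grad() {
        if (!mGrad) {
            mGrad = std::make_shared<Tensor>(mData.size());
        }
        return mGrad;
    }

    bool hasGrad() const { return mGrad != nullptr; }

private:
    std::vector<float> mData;
    std::shared_ptr<Tensor> mGrad;  // null until first requested
};
```

With this pattern, code that never touches gradients (pure inference, for example) pays no memory cost for them, which is what made a separate gradient-compilation recipe unnecessary.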
However, initGrad(),
and even more so compile_gradient(),
were created as a first step towards separating the gradient graph from the forward graph, and the gradient Tensor from the Tensor itself. The link between the two would have been handled by the Optimizer.
These were the first steps of an unfinished effort, inspired by https://mxnet.apache.org/versions/1.0.0/architecture/note_memory.html and following discussions with @thibaultallenet, to allow for graph optimizers that would not rely on gradient-descent techniques.