Allow in-place operations
In !20 (merged), the case of the Reshape forward kernel was discussed by @olivierbichler.
I think the Reshape operator should not copy any memory! In `forward()`, we could use `setRawPtr()` on the output's Tensor to point it at the input Tensor's `rawPtr`.
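The proposal above can be sketched as follows. This is a minimal illustration only, not the actual Aidge API: `Tensor`, `raw_ptr`, `set_raw_ptr`, and `reshape_forward` are simplified stand-ins for the C++ `Tensor`, `rawPtr`, `setRawPtr`, and the kernel's `forward()`. The point is that the output adopts the input's buffer instead of copying it:

```python
# Simplified stand-ins to illustrate the proposal -- not the Aidge API.
class Tensor:
    def __init__(self, dims, data=None):
        self.dims = dims
        self.data = data  # the "raw" buffer (a plain list here)

    def raw_ptr(self):
        return self.data

    def set_raw_ptr(self, buf):
        # Alias another tensor's buffer instead of copying it.
        self.data = buf


def reshape_forward(inp, out):
    # In-place Reshape: the output adopts the input's storage, no copy.
    out.set_raw_ptr(inp.raw_ptr())


x = Tensor((2, 3), data=[0.0, 1.0, 2.0, 3.0, 4.0, 5.0])
y = Tensor((3, 2))
reshape_forward(x, y)

assert y.raw_ptr() is x.raw_ptr()  # same buffer object, nothing copied
x.raw_ptr()[0] = 42.0
assert y.raw_ptr()[0] == 42.0      # a write through the input is visible in the output
```

Since both Tensors alias one buffer, this only works when no later consumer still needs the input's original layout; scheduling would have to account for that.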
In several places, Aidge currently wastes memory because input and output Tensors are always allocated separately. Allowing effective in-place computation would be very useful for optimizing computations.
Here is a web page that discusses this topic in more depth.