My bad, I updated the code since there are multiple Reshape GenericOperators:
```python
for node in graph.get_nodes():
    print(node.type())
    if node.type() == "Reshape":
        reshape_op = node.get_operator()
        reshape_op.set_compute_output_dims("""????""")
        reshape_op.set_impl(ReshapeImpl(reshape_op))
```
The thing is, I don't know what argument to pass to the `set_compute_output_dims` method for a Reshape op.
At the `graph.forward_dims()` call, the kernel crashes.
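For reference, on a GenericOperator whose output dims are a pure function of its input dims, the expected argument would be a callable mapping the list of input dims to the list of output dims. A minimal sketch, assuming that signature (it is not confirmed in this thread):

```python
# Assumed signature: a callable taking the list of input dims
# (one dims list per input) and returning the list of output dims
# (one dims list per output). For an identity-shaped op:
op = node.get_operator()
op.set_compute_output_dims(lambda input_dims: [input_dims[0]])
```

For Reshape, however, the target shape is the *data* of the second input, not its dims, so no such static function can exist, which is exactly the problem discussed below.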
Diego Barragan changed the description
Diego Barragan changed title from "Error while implementing Reshape op: `false && "GenericOperator cannot forward dims"'" to "Kernel crashes with forward_dims method - Which argument in the set_compute_output_dims method?"
Indeed, the output dims computation is missing, but since the output dims are actually computed in forward(), there is a problem: the static computeOutputDims() mechanism won't work for this operator.
We are currently working on a small refactoring to specifically address this issue...
You can check the `forwarddims` branch, which should solve your issue:
`graph.forward_dims()` becomes optional, and it won't fail if no compute output dims function is provided for a GenericOperator.
In this case, you should be aware that token-based scheduling will be used from this point on, instead of consumed-produced data quantities. If you are only using operators with a full tensor consumption-production implementation (which is currently the case in Aidge), this does not change anything for you.
You can therefore directly set the output tensor dims in your implementation!
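For illustration, here is a minimal, hypothetical sketch of what such an implementation could look like. The base class `aidge_core.OperatorImpl`, its constructor signature, and the accessor names (`get_input`, `get_output`, `resize`) are assumptions about the Python API, not something confirmed in this thread:

```python
import aidge_core
import numpy as np

class ReshapeImpl(aidge_core.OperatorImpl):
    """Hypothetical sketch: sets the output dims at forward() time,
    once the target shape (the data of input #1) is known."""

    def __init__(self, op):
        aidge_core.OperatorImpl.__init__(self, op, "cpu")  # assumed signature
        self.op = op

    def forward(self):
        # Assumed accessors; exact names depend on the Aidge version.
        data = np.array(self.op.get_input(0))
        target_shape = [int(d) for d in np.array(self.op.get_input(1))]
        out = data.reshape(target_shape)
        # The output dims are only known here, so set them now instead of
        # relying on a static compute-output-dims function:
        self.op.get_output(0).resize(list(out.shape))
        # ...then write `out` into the output tensor with your backend's setter.
```

Since the dims are set inside forward() rather than through set_compute_output_dims, this pattern relies on the `forwarddims` branch behavior described above, where `graph.forward_dims()` is optional.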