This issue is still unresolved. Cleaning the input tensor does not seem to work for @axelfarr either. However, adding a casting node before all model inputs successfully prevents the error.
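For reference, here is a rough sketch of that casting workaround. This is only an illustration, not verified code: it assumes the `Aidge::Cast` operator factory and `GraphView::insertParent` exist with the signatures shown (check them against your aidge_core version), and that `GraphView::inputs()` returns the list of free input connections:

```cpp
#include <memory>
#include <string>

#include "aidge/graph/GraphView.hpp"
#include "aidge/operator/Cast.hpp"

// Sketch: insert a Cast node in front of every free graph input so that a
// Float64 input tensor is converted to Float32 before reaching the first
// operator. Signatures of Cast() and insertParent() are assumptions here.
void castAllInputs(std::shared_ptr<Aidge::GraphView> graphView) {
    int i = 0;
    for (const auto& [inputNode, inputIdx] : graphView->inputs()) {
        auto castNode = Aidge::Cast(Aidge::DataType::Float32,
                                    "input_cast_" + std::to_string(i++));
        // Reroute the graph input through the cast: castNode becomes the new
        // parent of inputNode on input inputIdx (cast input 0, output 0).
        graphView->insertParent(inputNode, castNode, inputIdx, 0, 0);
    }
}
```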
When I debug, the input tensor is indeed Float64. However, calling `resetInput()` does not seem to delete the input:
```cpp
std::shared_ptr<Aidge::OperatorTensor> opTensor =
    std::static_pointer_cast<Aidge::OperatorTensor>(input_node->getOperator());
Aidge::DataType dt = opTensor->getInput(0)->dataType();
Log::debug("node type is {} and name is {} and input 0 datatype is {}",
           input_node->type(), input_node->name(), dt);
opTensor->resetInput(0);
// Re-read the datatype after the reset instead of reusing the cached dt.
dt = opTensor->getInput(0)->dataType();
Log::debug("node type is {} and name is {} and input 0 datatype is {}",
           input_node->type(), input_node->name(), dt);
```
This gives me the following output:
```
[DEBUG] - node type is PaddedConv2D and name is resnetv15_conv0_fwd_1 and input 0 datatype is
[DEBUG] Float64
[DEBUG] - node type is PaddedConv2D and name is resnetv15_conv0_fwd_1 and input 0 datatype is
[DEBUG] Float64
```
Maybe the issue only occurs when the first node of the graph is a metaOp.
I can quantize a LeNet (first node is Conv2D) without triggering this issue; however, running a ResNet18 (first node is PaddedConv2D) will trigger the issue on the Pad operator...
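One way to test that hypothesis would be to check whether the graph's first node wraps an inner micro-graph. A minimal sketch, assuming `MetaOperator_Op` from aidge_core's `aidge/operator/MetaOperator.hpp`:

```cpp
#include <memory>

#include "aidge/graph/Node.hpp"
#include "aidge/operator/MetaOperator.hpp"

// Sketch: a node is a meta-operator (e.g. PaddedConv2D) if its operator
// can be downcast to MetaOperator_Op.
bool isMetaOp(const std::shared_ptr<Aidge::Node>& node) {
    return std::dynamic_pointer_cast<Aidge::MetaOperator_Op>(
               node->getOperator()) != nullptr;
}
```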