Problem with GlobalAveragePool operator inference
Required prerequisites
- Make sure you've read the documentation. Your issue may be addressed there.
- Search the issue tracker and discussions to verify that this hasn't already been reported. +1 or comment there if it has.
What commit version of aidge do you use
- aidge: 0.5.0
- aidge_backend_cpu: 0.5.0
- aidge_backend_cuda: 0.5.0
- aidge_core: 0.5.1
- aidge_export_cpp: 0.2.1
- aidge_onnx: 0.4.1
- aidge_quantization: 0.3.0
Problem description
The AvgPool2D and GlobalAveragePooling operators should be equivalent when the AvgPool2D kernel size equals the feature-map size. During a simple test on a 7x7 map, GlobalAveragePooling did not return the correct result, while AvgPool2D did (see the Python code below). The value computed by hand is 26.
Input map:
{{ 0, 8, 26, 35, 49, 45, 22},
{ 2, 24, 48, 66, 60, 46, 26},
{ 8, 41, 64, 68, 39, 18, 9},
{ 10, 48, 72, 76, 42, 14, 9},
{ 6, 29, 52, 65, 27, 7, 3},
{ 1, 9, 24, 31, 18, 7, 1},
{ 0, 0, 4, 6, 7, 1, 1}}
GlobalAveragePooling result:
{{ 25}}
AvgPool2D result:
{{ 26}}
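For reference, the expected value can be checked independently with plain NumPy (a minimal verification sketch, not using Aidge): the map sums to 1274, and 1274 / 49 = 26 exactly.
import numpy as np

# 7x7 input map from the description above
x = np.array([[0, 8, 26, 35, 49, 45, 22],
              [2, 24, 48, 66, 60, 46, 26],
              [8, 41, 64, 68, 39, 18, 9],
              [10, 48, 72, 76, 42, 14, 9],
              [6, 29, 52, 65, 27, 7, 3],
              [1, 9, 24, 31, 18, 7, 1],
              [0, 0, 4, 6, 7, 1, 1]])

print(x.sum())    # 1274
print(x.mean())   # 26.0 -> global average pooling should return 26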
Reproducible example code
Python code for AvgPool2D operator
import aidge_core
import aidge_backend_cpu
import numpy as np
# Define a list of tensors
tensors = [
    np.array([[[[0, 8, 26, 35, 49, 45, 22],
                [2, 24, 48, 66, 60, 46, 26],
                [8, 41, 64, 68, 39, 18, 9],
                [10, 48, 72, 76, 42, 14, 9],
                [6, 29, 52, 65, 27, 7, 3],
                [1, 9, 24, 31, 18, 7, 1],
                [0, 0, 4, 6, 7, 1, 1]]]]),
]

# Loop through each tensor and perform the same operations
for tensor_array in tensors:
    tensor = aidge_core.Tensor(tensor_array)
    tensor.set_datatype(aidge_core.dtype.int32)

    op = aidge_core.AvgPooling2DOp([7, 7])
    print(op)
    op.set_datatype(aidge_core.dtype.int32)
    op.set_backend("cpu")
    op.associate_input(0, tensor)
    op.forward_dims()
    print(tensor.dims())
    print(tensor)
    op.forward()
    print(op.get_output(0))
Python code for GlobalAveragePooling operator
import aidge_core
import aidge_backend_cpu
import numpy as np
# Define a list of tensors
tensors = [
    np.array([[[[0, 8, 26, 35, 49, 45, 22],
                [2, 24, 48, 66, 60, 46, 26],
                [8, 41, 64, 68, 39, 18, 9],
                [10, 48, 72, 76, 42, 14, 9],
                [6, 29, 52, 65, 27, 7, 3],
                [1, 9, 24, 31, 18, 7, 1],
                [0, 0, 4, 6, 7, 1, 1]]]]),
]

# Loop through each tensor and perform the same operations
for tensor_array in tensors:
    tensor = aidge_core.Tensor(tensor_array)
    tensor.set_datatype(aidge_core.dtype.int32)

    node = aidge_core.globalaveragepooling()
    op = node.get_operator()
    print(op)
    op.set_datatype(aidge_core.dtype.int32)
    op.set_backend("cpu")
    op.associate_input(0, tensor)
    op.forward_dims()
    print(tensor.dims())
    print(tensor)
    op.forward()
    print(op.get_output(0))