# Create new operators for Aidge Export_Arm_CortexM

Main author: @wboussella
## Intro
If you want to create a new operator for Aidge Export_Arm_CortexM, this tutorial is for you. We call a "new operator" an operator that is not yet supported by Aidge, or a custom operator needed for your export.

Aidge is open source, so feel free to contribute if we don't support an operator.

In this example, Aidge doesn't support the custom operator Normalize, so an error will be raised if you try to export this model:
```mermaid
graph LR
A[Input] --> B((Conv))
B --> D(Normalize)
D --> C[Output]
```
We need to create a new Aidge operator, and there are only 3 steps*:
- Create the C/C++ kernel
- Write a new template to generate the code
- Add an export configuration
\* Four if we also create a unit test.
## C Kernel
This part is the most difficult one: developing the C algorithm for your new operator. Most of the functions look like this:
```c
void functionNormalize(float* input,
                       unsigned int input_height,   // input dims
                       unsigned int input_width,
                       unsigned int input_channel,
                       int power,                   // parameters needed for the kernel (attributes)
                       float* output)
{
    // algorithm
}
```
You need to put the results into the output tensor.
## Templates
To generate the code of the forward.c, two templates are needed. They are written in Jinja, a powerful templating tool. In our case, we almost only need `{{ X }}`, where `X` will be replaced by a variable during the export. The first template generates the call to the function:
```jinja
functionNormalize({{ input_name }},
                  {{ name|upper }}_HEIGHT,
                  {{ name|upper }}_WIDTH,
                  {{ name|upper }}_NB_CHANNELS,
                  {{ name|upper }}_POW,
                  {{ output_name }}
                  );
```
The second one defines the layer configuration variables in a `.h`:
```jinja
#ifndef {{ name|upper }}_LAYER_H
#define {{ name|upper }}_LAYER_H
{# For layer configuration -#}
#define {{ name|upper }}_NB_CHANNELS {{ input_dims[0] }}
#define {{ name|upper }}_HEIGHT {{ input_dims[1] }}
#define {{ name|upper }}_WIDTH {{ input_dims[2] }}
#define {{ name|upper }}_POW {{ pow }}
#endif /* {{ name|upper }}_LAYER_H */
```
As you can see, it's possible to use arrays as variables in Jinja.
If we provide:
- [1,7,8] as input_dims
- pow=2
- input_name = "output_conv"
- output_name = "output_normalize"
- name = "normalize"
The result for the forward will be:
```c
functionNormalize(output_conv,
                  NORMALIZE_HEIGHT,
                  NORMALIZE_WIDTH,
                  NORMALIZE_NB_CHANNELS,
                  NORMALIZE_POW,
                  output_normalize
                  );
```
And for the `.h`:
```c
#ifndef NORMALIZE_LAYER_H
#define NORMALIZE_LAYER_H
#define NORMALIZE_NB_CHANNELS 1
#define NORMALIZE_HEIGHT 7
#define NORMALIZE_WIDTH 8
#define NORMALIZE_POW 2
#endif /* NORMALIZE_LAYER_H */
```
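To see how the two rendered snippets fit together in the generated forward.c, here is a minimal self-contained sketch: the `#define`s are the rendered header, the call is the rendered kernel template, and the kernel body is only an identity placeholder (the real body lives in normalize.c).

```c
/* Rendered configuration constants (from the configuration template). */
#define NORMALIZE_NB_CHANNELS 1
#define NORMALIZE_HEIGHT 7
#define NORMALIZE_WIDTH 8
#define NORMALIZE_POW 2

/* Placeholder kernel: simply copies input to output.
 * The real algorithm is provided by the C kernel file. */
void functionNormalize(float* input,
                       unsigned int input_height,
                       unsigned int input_width,
                       unsigned int input_channel,
                       int power,
                       float* output)
{
    (void)power;
    unsigned int size = input_height * input_width * input_channel;
    for (unsigned int i = 0; i < size; ++i)
        output[i] = input[i];
}

/* Buffers named after the Jinja variables input_name / output_name. */
float output_conv[NORMALIZE_NB_CHANNELS * NORMALIZE_HEIGHT * NORMALIZE_WIDTH];
float output_normalize[NORMALIZE_NB_CHANNELS * NORMALIZE_HEIGHT * NORMALIZE_WIDTH];

void forward(void)
{
    /* The call, exactly as rendered by the kernel template. */
    functionNormalize(output_conv,
                      NORMALIZE_HEIGHT,
                      NORMALIZE_WIDTH,
                      NORMALIZE_NB_CHANNELS,
                      NORMALIZE_POW,
                      output_normalize);
}
```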
The file tree can look like this:
```mermaid
graph LR
A[aidge_export_arm_cortex_m] --> B[Kernel]
B --> C(normalize.c)
A --> D[Templates]
D --> E[Kernel] --> F(normalize.jinja)
D --> G[Configuration] --> H(normalize.jinja)
```
## Export config
We will add a new class in `operator.py` so the export can detect and process our new operator.
### Step 1
The first step is to register the operator and create a new class:
```python
@operator_register("Normalize")
class Normalize_ARMCortexM(ExportNode):
```
### Step 2
In the second step, you need to initialize your class. As you saw before, Normalize needs the attribute power, which can be obtained by calling `node.get_operator().attr.power`. `attr` is a dictionary of all the attributes available on your node.
```python
@operator_register("Normalize")
class Normalize_ARMCortexM(ExportNode):
    def __init__(self, node, board, library):
        self.producers = []
        for i in range(1, len(node.inputs())):
            producer = node.input(i)[0]
            self.producers.append(Producer_ARMCortexM(producer))
        super().__init__(node)
        self.board = board
        self.library = library
        self.power = node.get_operator().attr.power  # get the attribute power
        if len(self.inputs_dims[0]) == 4:
            # if dims == [batch, nb_channels, height, width]
            # transform to [nb_channels, height, width]
            self.inputs_dims[0] = self.inputs_dims[0][1:]
        if len(self.outputs_dims[0]) == 4:
            self.outputs_dims[0] = self.outputs_dims[0][1:]
```
Note that we also have producers, which are used for operators with parameters, like Conv.
### Step 3
Do you remember our previous templates? That's where we use them now!
We export our layer configuration and our kernel. To copy the kernel into the export folder, we use this line:
```python
copyfile(str(ROOT / "customs" / "kernels" / "aidge_normalize_hwc_fp32.c"),
         str(Path(export_folder) / "src" / "kernels" / "aidge_normalize_hwc_fp32.c"))
```

Note that `copyfile` expects a complete file path as destination, not just a directory.
To create our configuration layer `.h`, we use `generate_file` and specify which template we want to use; in our case it's "aidge_normalize_hwc_fp32.jinja". If you remember, three variables are needed in our `.h`:
- pow
- input_dims
- name

It's in `generate_file` that we specify them.
```python
generate_file(
    str(export_folder / "layers" / f"{self.name}.h"),
    str(ROOT / "customs" / "templates" / "configuration" /
        "aidge_normalize_hwc_fp32.jinja"),
    name=self.name,
    input_dims=self.inputs_dims[0],
    pow=self.power)
```
We generate the forward call in the same way, with the three variables `input_name`, `output_name` and `name`:
```python
def forward(self, list_actions: list):
    if not self.is_last:
        list_actions.append(set_up_output(self.name, self.datatype))
    if self.library == "aidge":
        list_actions.append(generate_str(
            str(ROOT / "customs" / "templates" / "kernel" / "aidge_normalize_hwc_fp32.jinja"),
            name=self.name,
            input_name=self.inputs[0].name(),
            output_name=self.name,
        ))
    return list_actions
```
The final result will be:
```python
@operator_register("Normalize")
class Normalize_ARMCortexM(ExportNode):
    def __init__(self, node, board, library):
        self.producers = []
        for i in range(1, len(node.inputs())):
            producer = node.input(i)[0]
            self.producers.append(Producer_ARMCortexM(producer))
        super().__init__(node)
        self.board = board
        self.library = library
        self.power = node.get_operator().attr.power  # get the attribute power
        if len(self.inputs_dims[0]) == 4:
            # if dims == [batch, nb_channels, height, width]
            # transform to [nb_channels, height, width]
            self.inputs_dims[0] = self.inputs_dims[0][1:]
        if len(self.outputs_dims[0]) == 4:
            self.outputs_dims[0] = self.outputs_dims[0][1:]

    def export(self, export_folder: Path, list_configs: list):
        list_configs.append(f"layers/{self.name}.h")
        if len(self.producers) > 0:
            self.producers[0].export(export_folder / "parameters" / f"{self.producers[0].name}.h")
            list_configs.append(f"parameters/{self.producers[0].name}.h")
        if self.library == "aidge":
            copyfile(str(ROOT / "customs" / "kernels" / "aidge_normalize_hwc_fp32.c"),
                     str(Path(export_folder) / "src" / "kernels" / "aidge_normalize_hwc_fp32.c"))
            copyfile(str(ROOT / "_Aidge_Arm" / "kernels" / "SupportFunctions" / "aidge_supportfunctions.h"),
                     str(Path(export_folder) / "include" / "aidge_supportfunctions.h"))
        generate_file(
            str(export_folder / "layers" / f"{self.name}.h"),
            str(ROOT / "customs" / "templates" / "configuration" / "aidge_normalize_hwc_fp32.jinja"),
            name=self.name,
            input_dims=self.inputs_dims[0],
            pow=self.power)

    def forward(self, list_actions: list):
        if not self.is_last:
            list_actions.append(set_up_output(self.name, self.datatype))
        if self.library == "aidge":
            list_actions.append(generate_str(
                str(ROOT / "customs" / "templates" / "kernel" / "aidge_normalize_hwc_fp32.jinja"),
                name=self.name,
                dataformat="float",
                input_name=self.inputs[0].name(),
                output_name=self.name,
            ))
        return list_actions
```