Commit e3655e61 authored by Maxence Naud

Merge branch 'dev' into 'main'

Update tutorials

See merge request eclipse/aidge/aidge!55
parents 149f025d dbd80d88
Showing with 183 additions and 232 deletions
@@ -7,5 +7,5 @@ test:ubuntu_python_tutorials_load_and_run:
  script:
    - source venv/bin/activate
    - python -m pip install matplotlib nbconvert ipykernel
    - cd examples/tutorials/101_first_step/
    - ipython load_and_run.ipynb
@@ -80,7 +80,36 @@ Then to run a container, run
```
docker run --name mycontainer -it aidge:myenv
```
#### Build on Windows
On your Windows session:
1) Install Visual Studio Community. Get it from https://visualstudio.microsoft.com/fr/vs/community/
2) Install MSVC
3) Install Git (source code control program). Get it from https://git-scm.com/download
4) Install CMake (solution for managing the software build process). Get it from https://cmake.org/download/
5) Install Python (programming language). Get it from https://www.python.org/download/
6) Install pip
7) Install VS Code. Get it from https://code.visualstudio.com/download, then add the following extensions (Python, CMake)
8) Create and activate a virtual environment (venv)
```
python -m venv myenv
```
```
.\myenv\Scripts\activate
```
9) Move into the Aidge repository and execute
```
python.exe .\setup.py install
```
You can test your installation by running
```bash
python -c "import aidge_core; import aidge_backend_cpu; print(aidge_core.Tensor.get_available_backends())"
```
You should see the following in your terminal:
```
{'cpu'}
```
## Contributing
If you would like to contribute to the Aidge project, we're happy to have your help!
...
%% Cell type:markdown id: tags:
# Aidge demonstration
Aidge is a collaborative open-source deep learning library optimized for export and processing on embedded devices. With Aidge, you can create or import a computational graph from common frameworks, edit its structure, train it and export its architecture to many embedded devices. Aidge provides optimized functions for inference as well as training, and many custom functionalities for the target device.
This notebook presents the toolchain used to import a deep neural network from an ONNX model and run its inference in Aidge. The toolchain demonstrated is:
![pipeline(1)](./static/pipeline_1.png)
To demonstrate this toolchain, the MNIST digit recognition task is used.
![MNIST](./static/MnistExamples.png)
%% Cell type:markdown id: tags:
## Setting up the notebook
%% Cell type:markdown id: tags:
### (if needed) Download the model
If you don't have git-lfs, you can download the model and data using this piece of code:
%% Cell type:code id: tags:
``` python
import os
import requests

def download_material(path: str) -> None:
    if not os.path.isfile(path):
        response = requests.get("https://gitlab.eclipse.org/eclipse/aidge/aidge/-/raw/dev/examples/tutorials/101_first_step/"+path+"?ref_type=heads")
        if response.status_code == 200:
            with open(path, 'wb') as f:
                f.write(response.content)
            print("File downloaded successfully.")
        else:
            print("Failed to download file. Status code:", response.status_code)

# Download onnx model file
download_material("MLP_MNIST.onnx")
# Download input data
download_material("input_digit.npy")
# Download output data for later comparison
download_material("output_digit.npy")
```
%% Cell type:markdown id: tags:
### Define mermaid visualizer function ### Define mermaid visualizer function
Aidge save graph using the mermaid format, in order to visualize the graph live in the notebook, we will setup the following function: Aidge save graph using the mermaid format, in order to visualize the graph live in the notebook, we will setup the following function:
%% Cell type:code id: tags: %% Cell type:code id: tags:
``` python ``` python
import base64 import base64
from IPython.display import Image, display from IPython.display import Image, display
import matplotlib.pyplot as plt import matplotlib.pyplot as plt
def visualize_mmd(path_to_mmd): def visualize_mmd(path_to_mmd):
with open(path_to_mmd, "r") as file_mmd: with open(path_to_mmd, "r") as file_mmd:
graph_mmd = file_mmd.read() graph_mmd = file_mmd.read()
graphbytes = graph_mmd.encode("utf-8") graphbytes = graph_mmd.encode("utf-8")
base64_bytes = base64.b64encode(graphbytes) base64_bytes = base64.b64encode(graphbytes)
base64_string = base64_bytes.decode("utf-8") base64_string = base64_bytes.decode("utf-8")
display(Image(url=f"https://mermaid.ink/img/{base64_string}")) display(Image(url=f"https://mermaid.ink/img/{base64_string}"))
``` ```
%% Cell type:markdown id: tags:
## Import Aidge
In order to provide a collaborative environment on the platform, Aidge is built on a core library that interfaces with multiple modules bound to Python libraries:
- ``aidge_core`` is the core library and offers all the basic functionalities to create and manipulate the internal graph representation
- ``aidge_backend_cpu`` is a C++ module providing a generic C++ implementation for each component of the graph
- ``aidge_onnx`` is a module allowing to import ONNX models into the Aidge framework
- ``aidge_export_cpp`` is a module dedicated to the generation of optimized C++ code
This way, ``aidge_core`` is free of any dependency and users can install only the modules they need for their use case.
%% Cell type:code id: tags:
``` python
import aidge_core
# Conv2D Operator is available but no implementation has been loaded
print(f"Available backends:\n{aidge_core.get_keys_ConvOp2D()}")
# note: Tensor is a special case as 'cpu' backend is provided in the core
# module to guarantee basic functionalities such as data access
print(f"Available backends for Tensor:\n{aidge_core.Tensor.get_available_backends()}")
```
%% Cell type:markdown id: tags:
As you can see, no backend is available for the class ``Conv2D``.
We need to import the ``aidge_backend_cpu`` module, which will register itself automatically with ``aidge_core``.
%% Cell type:code id: tags:
``` python
import aidge_backend_cpu
print(f"Available backends:\n{aidge_core.get_keys_ConvOp2D()}")
```
%% Cell type:markdown id: tags:
For this tutorial, we will need to import ``aidge_onnx`` in order to load ONNX files, numpy in order to load data and matplotlib to display images.
%% Cell type:code id: tags:
``` python
import aidge_onnx
import numpy as np
import matplotlib.pyplot as plt
```
%% Cell type:markdown id: tags:
## ONNX Import
Import an ONNX model into Aidge's internal graph representation.
![pipeline(2)](./static/pipeline_2.png)
%% Cell type:code id: tags:
``` python
model = aidge_onnx.load_onnx("MLP_MNIST.onnx")
```
%% Cell type:markdown id: tags:
As you can see in the logs, Aidge imported a node as a ``GenericOperator``:
```
- /Flatten_output_0 (Flatten | GenericOperator)
```
This is a fallback mechanism which allows Aidge to load an ONNX graph without failing, even when it encounters a node that is not available.
The ``GenericOperator`` acts as a stub retrieving the node type and attributes from ONNX. This makes it possible to provide an implementation in a user script or, as we will see, to remove/replace such nodes using Aidge recipes.
You can visualize the graph using the ``save`` method and the mermaid visualizer we have set up.
%% Cell type:code id: tags:
``` python
model.save("myModel")
visualize_mmd("myModel.mmd")
```
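%% Cell type:markdown id: tags:
As a quick check, you can also list every node of the imported graph with its type to spot the ``GenericOperator`` stub. This is a minimal sketch using only graph-traversal methods that appear elsewhere in this notebook:
%% Cell type:code id: tags:
``` python
# Print each node's name and operator type; the Flatten stub shows up here.
for node in model.get_nodes():
    print(node.name(), "|", node.type())
```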
%% Cell type:markdown id: tags:
## Graph transformation
![pipeline(3)](./static/pipeline_3.png)
In order to run the graph for inference, we need to support all of its operators.
The imported model contains a ```Flatten``` node before the ```Gemm``` operator. The ```aidge.FC``` operator already supports the flatten operation, so a graph transformation is required to support the graph for inference, i.e. to remove the ```Flatten``` operator.
The Aidge graph transformation toolchain is the following process:
**1. Describe the graph pattern**
In order to find specific patterns inside a graph, we first need to describe those patterns. Aidge introduces an innovative way to describe graph patterns, **Graph Regular Expressions**, inspired by regular expressions from formal language theory.
In this example the graph regular expression used is simple:
```
"Flatten->FC;"
```
%% Cell type:code id: tags:
``` python
graph_regex = aidge_core.GraphRegex()
graph_regex.set_node_key("Flatten", "getType($) =='Flatten'")
graph_regex.set_node_key("FC", "getType($) =='FC'")
graph_regex.add_query("Flatten -> FC")
```
%% Cell type:markdown id: tags:
**2. Match the described pattern**
Once the graph pattern is described with a graph regular expression, we apply an innovative graph matching algorithm to find the patterns corresponding to the description.
This algorithm returns all the matched patterns in a [match](https://eclipse-aidge.readthedocs.io/en/latest/source/API/Core/graphMatching.html#match) class. One matched pattern is the combination of the graph pattern start nodes and all the nodes in the matched pattern (including the start nodes).
%% Cell type:code id: tags:
``` python
all_match = graph_regex.match(model)
print('Number of match : ', len(all_match))
```
%% Cell type:markdown id: tags:
In this case, we have one match:
- a list of one list containing the start node: [[Flatten node]]
- a list of one set containing all the matched nodes: [{Flatten node, FC node}]
Let's visualize the match:
%% Cell type:code id: tags:
``` python
print('The start node : ')
for match in all_match:
    print('\t', match.get_start_node()[0].type())
    print('All the matched nodes for', match.get_query() , ':')
    for n in match.get_all():
        print('\t', n.type())
```
%% Cell type:markdown id: tags:
**3. Apply graph transformations on the matched patterns**
Now that we have matched the desired patterns, we can apply graph transformations on them. The main graph transformation functions (currently under development) are:
- Replace the current GraphView with a set of given Nodes if possible: [replace](https://eclipse-aidge.readthedocs.io/en/latest/source/API/Core/graph.html#aidge_core.GraphView.replace)
- Insert a node (newParentNode) as a parent of the passed node (childNode): [insert_parent](https://eclipse-aidge.readthedocs.io/en/latest/source/API/Core/graph.html#_CPPv4N5Aidge9GraphView12insertParentE7NodePtr7NodePtr9IOIndex_t9IOIndex_t9IOIndex_t)
- Remove a node: remove()
In this example we remove the ```Flatten``` operator from the graph using replace.
%% Cell type:code id: tags:
``` python
g = aidge_core.GraphView()
g.add(next(iter(all_match)).get_start_node()[0])
aidge_core.GraphView.replace(g.get_nodes(), set())
```
%% Cell type:markdown id: tags:
The flatten is removed; let's visualize the model:
%% Cell type:code id: tags:
``` python
model.save("mySupportedModel")
visualize_mmd("mySupportedModel.mmd")
```
%% Cell type:markdown id: tags:
All of these steps are embedded inside ``recipes`` functions. These recipes are available in ``aidge_core``; some of them are:
- *fuse_batchnorm*: fuse BatchNorm inside a Conv or FC operator;
- *fuse_mul_add*: fuse MatMul and Add operators into an FC operator;
- *remove_flatten*: remove Flatten if it is before an FC operator.
Let's do it again with the *remove_flatten* recipe:
%% Cell type:code id: tags:
``` python
# Import model again
model = aidge_onnx.load_onnx("MLP_MNIST.onnx")
# Use remove_flatten recipe
aidge_core.remove_flatten(model)
```
%% Cell type:markdown id: tags:
This time the flatten is removed with the recipe; let's visualize the model:
%% Cell type:code id: tags:
``` python
model.save("mySupportedModel")
visualize_mmd("mySupportedModel.mmd")
```
%% Cell type:markdown id: tags:
## Inference
We now have a graph fully supported by Aidge; we are ready to do some inference!
![pipeline(4)](./static/pipeline_4.png)
### Create an input tensor & its node in the graph
In order to perform an inference, we will load an image from the MNIST dataset using Numpy.
%% Cell type:code id: tags:
``` python
## Load input data & its output from the MNIST_model
digit = np.load("input_digit.npy")
plt.imshow(digit[0][0], cmap='gray')
```
%% Cell type:markdown id: tags:
And in order to validate the result our model will provide, we will also load the output the provided PyTorch model computed for this image.
%% Cell type:code id: tags:
``` python
output_model = np.load("output_digit.npy")
print(output_model)
```
%% Cell type:markdown id: tags:
Thanks to the Numpy interoperability, we can create an Aidge ``Tensor`` directly from the numpy array storing the image.
%% Cell type:code id: tags:
``` python
input_tensor = aidge_core.Tensor(digit)
print(f"Aidge Input Tensor dimensions: \n{input_tensor.dims()}")
```
%% Cell type:markdown id: tags:
To add an input to the graph, we can create a ``Producer`` node, insert it in the ``GraphView`` and set its output with the ``Tensor`` we have just created; alternatively, data can simply be fed to the ``GraphView`` via the ``scheduler`` ```forward()``` call, as done below. For illustration, here is a minimal sketch of the ``Producer`` alternative (it reuses the same calls as the export section later in this notebook):
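%% Cell type:code id: tags:
``` python
# Sketch: feed the graph through an explicit Producer node instead of
# passing the tensor to scheduler.forward(data=[...]).
input_node = aidge_core.Producer(input_tensor, "input")
input_node.add_child(model)
model.add(input_node)
```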
%% Cell type:markdown id: tags:
### Configure the model for inference
%% Cell type:markdown id: tags:
At the moment the model has no implementation; it is only a data structure. To set an implementation, we will set a data type and a backend.
%% Cell type:code id: tags:
``` python
# Configure the model
model.compile("cpu", aidge_core.DataType.Float32, dims=[[1,1,28,28]])
# equivalent to set_datatype(), set_backend() and forward_dims()
```
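%% Cell type:markdown id: tags:
For reference, a sketch of the equivalent explicit calls named in the comment above. ``set_datatype()`` and ``set_backend()`` are used as-is later in this document; the ``forward_dims`` keyword argument is an assumption:
%% Cell type:code id: tags:
``` python
# Same configuration as model.compile(), spelled out step by step.
model.set_datatype(aidge_core.DataType.Float32)  # choose the data type
model.set_backend("cpu")                         # pick the registered backend
model.forward_dims(dims=[[1, 1, 28, 28]])        # propagate tensor dimensions
```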
%% Cell type:markdown id: tags:
### Create a scheduler and run inference
The graph is ready to run! We just need to schedule the execution. To do this we will create a ``Scheduler`` object, which takes the graph and generates an optimized scheduling using a consumer-producer heuristic.
%% Cell type:code id: tags:
``` python
# Create SCHEDULER
scheduler = aidge_core.SequentialScheduler(model)
# Run inference !
scheduler.forward(data=[input_tensor])
```
%% Cell type:code id: tags:
``` python
# Assert results
for outNode in model.get_output_nodes():
    output_aidge = np.array(outNode.get_operator().get_output(0))
    print(output_aidge)
    print('Aidge prediction = ', np.argmax(output_aidge[0]))
    assert(np.allclose(output_aidge, output_model,rtol=1e-04))
```
%% Cell type:markdown id: tags:
It is possible to save the scheduling in a mermaid format using:
%% Cell type:code id: tags:
``` python
scheduler.save_scheduling_diagram("schedulingSequential")
visualize_mmd("schedulingSequential.mmd")
```
%% Cell type:markdown id: tags:
## Export
Now that we have tested the imported graph, we can look at one of the main features of Aidge: exporting a computational graph to a hardware target using code generation.
![pipeline(5)](./static/pipeline_5.png)
### Generate an export in C++
In this example we will generate a generic C++ export.
This export is not based on the `cpu` backend we have set before.
In this example we will create a standalone export which is abstracted from the Aidge platform.
%% Cell type:code id: tags:
``` python
! rm -r myexport
```
%% Cell type:code id: tags:
``` python
!ls myexport
```
%% Cell type:markdown id: tags:
Generating a ``cpu`` export requires the ``aidge_export_cpp`` module.
Once the module is imported, you just need one line to generate an export of the graph.
%% Cell type:code id: tags:
``` python
import aidge_export_cpp
# Freeze the model by setting constant to parameters producers
for node in model.get_nodes():
    if node.type() == "Producer":
        node.get_operator().set_attr("Constant", True)
# Create Producer Node for the Graph
input_node = aidge_core.Producer([1, 1, 28, 28], "input")
input_node.add_child(model)
model.add(input_node)
# Configuration for the model + forward dimensions
model.compile("cpu", aidge_core.DataType.Float32)
# Export the model in C++ standalone
aidge_export_cpp.export("myexport", model, scheduler)
```
%% Cell type:markdown id: tags:
The export function will generate:
- **dnn/layers** layers configuration;
- **dnn/parameters** folder with parameters;
- **dnn/include/dnn.h** API to use the export;
- **dnn/include/network_functions.h** header file for kernels;
- **dnn/memory** memory management information;
- **dnn/src** kernel source code + forward function;
- **main.cpp** an example program, exported from the scheduler, that runs an inference with the generated network;
- **Makefile** to compile the main.cpp.
%% Cell type:code id: tags:
``` python
!tree myexport
```
%% Cell type:markdown id: tags:
### Generate an input file for tests
To test the export we need to provide data; to do so, we will export the numpy array using:
%% Cell type:code id: tags:
``` python
aidge_export_cpp.generate_input_file(array_name="inputs", array=digit.reshape(-1), export_folder="myexport")
```
%% Cell type:markdown id: tags:
### Compile the export
%% Cell type:code id: tags:
``` python
!cd myexport && make
```
%% Cell type:markdown id: tags:
### Run the export
%% Cell type:code id: tags:
``` python
!./myexport/bin/run_export
```
...
%% Cell type:markdown id: tags:
# Backend CUDA Inference
This tutorial demonstrates the inference of a LeNet model using the Aidge CUDA backend. <br>
For the sake of simplicity, we will not train the model. Feel free to replace the onnx model with one already trained. <br>
- we start by creating a LeNet model:
%% Cell type:code id: tags:
``` python
import torch
import torch.nn as nn
import torch.onnx

class LeNet(nn.Module):
    def __init__(self, num_classes):
        super(LeNet, self).__init__()
        self.conv1 = nn.Conv2d(1, 6, kernel_size=5)
        self.conv2 = nn.Conv2d(6, 16, kernel_size=5)
        self.fc1 = nn.Linear(16 * 4 * 4, 120)
        self.fc2 = nn.Linear(120, 84)
        self.fc3 = nn.Linear(84, num_classes)

    def forward(self, x):
        x = torch.relu(self.conv1(x))
        x = torch.max_pool2d(x, kernel_size=2, stride=2)
        x = torch.relu(self.conv2(x))
        x = torch.max_pool2d(x, kernel_size=2, stride=2)
        x = x.view(-1, 16 * 4 * 4)
        x = torch.relu(self.fc1(x))
        x = torch.relu(self.fc2(x))
        x = self.fc3(x)
        return x

# Instantiate the model
num_classes = 10  # Assuming you're working with MNIST dataset
model = LeNet(num_classes)
# Set the model to evaluation mode
model.eval()
# Example input shape (batch_size, channels, height, width)
dummy_input = torch.randn(1, 1, 28, 28)
# Export the model to ONNX
torch.onnx.export(model, dummy_input, "lenet.onnx", verbose=True)
```
%% Cell type:markdown id: tags:
- import the needed libraries
%% Cell type:code id: tags:
``` python
import aidge_core
import aidge_backend_cuda
import aidge_onnx
import numpy as np
```
%% Cell type:markdown id: tags:
- load the onnx model into Aidge
%% Cell type:code id: tags:
``` python
model = aidge_onnx.load_onnx("lenet.onnx")
aidge_core.remove_flatten(model)
# Configure the model
model.set_datatype(aidge_core.DataType.Float32)
model.set_backend("cuda")
```
%% Cell type:markdown id: tags:
- add input
%% Cell type:code id: tags:
``` python
# Create an input node
input = np.random.randn(1, 1, 28, 28).astype(np.float32)
input_tensor = aidge_core.Tensor(input)
input_node = aidge_core.Producer(input_tensor, "input")
input_node.get_operator().set_datatype(aidge_core.DataType.Float32)
input_node.get_operator().set_backend("cuda")
# Link node to model
input_node.add_child(model)
model.add(input_node)
```
%% Cell type:markdown id: tags:
- create a scheduler and run inference
%% Cell type:code id: tags:
``` python
# Define the scheduler
scheduler = aidge_core.SequentialScheduler(model)
# Run inference !
scheduler.forward()
```
%% Cell type:markdown id: tags:
- get the output: <br>
Before getting the output, we need to move it to the cpu backend.
%% Cell type:code id: tags:
``` python
for outNode in model.get_output_nodes():
    outNode.get_operator().get_output(0).set_backend('cpu')
    output_aidge = np.array(outNode.get_operator().get_output(0))
    print("Aidge output: {}".format(output_aidge))
    # Make sure to set the output back to "cuda" otherwise the model will not be usable
    outNode.get_operator().get_output(0).set_backend('cuda')
```
...
%% Cell type:markdown id: tags:
# Database MNIST
This tutorial demonstrates the usage of databases and data providers to perform an evaluation of a model using Aidge.
%% Cell type:markdown id: tags:
## Installation and Requirements
- Python packages: aidge_core, aidge_backend_cpu, aidge_backend_opencv
- Download the MNIST database
- Download the MLP onnx model from git-lfs
- Define the model visualization function and the top-1 accuracy metric
%% Cell type:code id: tags:
``` python
!pip install git-lfs
!git lfs pull
```
%% Cell type:code id: tags:
``` python
import os
import urllib.request
import gzip
import shutil

mnist_dir = 'MNIST_test'
os.makedirs(mnist_dir, exist_ok=True)
base_url = 'https://ossci-datasets.s3.amazonaws.com/mnist/'
files = ['t10k-images-idx3-ubyte.gz', 't10k-labels-idx1-ubyte.gz']
for file in files:
    url = base_url + file
    file_path = os.path.join(mnist_dir, file)
    decompressed_file_path = os.path.splitext(file_path)[0]
    if not os.path.exists(decompressed_file_path):
        print("Downloading", file)
        urllib.request.urlretrieve(url, file_path)
        print("Download complete")
        print("Decompressing", file)
        raw = gzip.open(file_path, 'rb').read()
        open(decompressed_file_path, 'wb').write(raw)
        print("Decompression complete")
    else:
        print(f"{file} already exists. Skipping download and decompression.")
```
%% Cell type:code id: tags:
``` python
import base64
from IPython.display import Image, display
import matplotlib.pyplot as plt

def visualize_mmd(path_to_mmd):
    with open(path_to_mmd, "r") as file_mmd:
        graph_mmd = file_mmd.read()
    graphbytes = graph_mmd.encode("ascii")
    base64_bytes = base64.b64encode(graphbytes)
    base64_string = base64_bytes.decode("ascii")
    display(Image(url=f"https://mermaid.ink/img/{base64_string}"))
```
%% Cell type:code id: tags:
``` python
def top1_accuracy(predictions, labels):
    total = len(predictions)
    predicted_class = predictions.argmax(axis=1)
    correct_pred = (predicted_class == labels).sum()
    accuracy = correct_pred / total
    return accuracy
```
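%% Cell type:markdown id: tags:
A quick sanity check of the metric on toy data (illustrative values only):
%% Cell type:code id: tags:
``` python
import numpy as np
# Two samples: the first is predicted class 1 (correct),
# the second class 0 (also correct), so accuracy is 1.0.
preds = np.array([[0.1, 0.9], [0.8, 0.2]])
labels = np.array([1, 0])
print(top1_accuracy(preds, labels))  # expected: 1.0
```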
%% Cell type:markdown id: tags:
## Perform an evaluation of the LeNet-like model in Aidge
- Import the Aidge libraries
- Import the ONNX model
- Configure the model for inference
- Create the Database and DataProvider
- Perform the evaluation
%% Cell type:code id: tags:
``` python
import aidge_core
import aidge_backend_opencv
import aidge_backend_cpu
import aidge_onnx
import numpy as np
```
%% Cell type:code id: tags:
``` python
model = aidge_onnx.load_onnx("../101_first_step/MLP_MNIST.onnx")
aidge_core.remove_flatten(model)
model.save("mySupportedModel")
visualize_mmd("mySupportedModel.mmd")
```
%% Cell type:code id: tags:
``` python
# Configure the model
model.set_datatype(aidge_core.DataType.Float32)
model.set_backend("cpu")
# Define the scheduler
scheduler = aidge_core.SequentialScheduler(model)
```
%% Cell type:code id: tags:
``` python
val_mnist = aidge_backend_opencv.MNIST(dataPath="./MNIST_test",
                                       train=False,
                                       load_data_in_memory=False)
val_dataprovider = aidge_core.DataProvider(val_mnist,
                                           batch_size=200,
                                           shuffle=True,
                                           drop_last=False)
```
%% Cell type:code id: tags:
``` python
val_acc = 0
for i, (data_batch, lbl_batch) in enumerate(val_dataprovider):
    data_batch.set_datatype(aidge_core.DataType.Float32)
    lbl_batch.set_datatype(aidge_core.DataType.Float32)
    # Run inference !
    scheduler.forward(data=[data_batch])
    # Get output and label in a numpy array
    output_aidge = np.array(list(model.get_output_nodes())[0].get_operator().get_output(0))
    lbl = np.array(lbl_batch)
    # Compute the top-1 accuracy
    val_acc += top1_accuracy(output_aidge, lbl.flatten())

val_acc = val_acc / len(val_dataprovider)
print(val_acc)
```
...
# MNIST model
This repository is a tool for the Aidge demonstration (Import ONNX -> Graph transformation -> Inference).
```generate_model.sh``` exports an ONNX model of a multi-layer perceptron trained on MNIST, with the following steps:
- Create a virtual environment with PyTorch CPU & ONNX
- Call torch_MLP.py, which downloads the MNIST data, trains an MLP on MNIST digit recognition & exports the model as ONNX
- Remove the virtual environment & the MNIST data
#!/bin/bash
script_directory="$( cd "$( dirname "${BASH_SOURCE[0]}" )" >/dev/null 2>&1 && pwd )"
echo "Script directory: $script_directory"
ONNX="$script_directory/MLP_MNIST.onnx"
DIGIT="$script_directory/digit.npy"
OUTPUT="$script_directory/output_digit.npy"
if [ ! -e "$ONNX" ] || [ ! -e "$DIGIT" ] || [ ! -e "$OUTPUT" ]
then
    virtualenv -p python3 "$script_directory/py3"
    source "$script_directory/py3/bin/activate"
    pip3 install --quiet -U torch torchvision --index-url https://download.pytorch.org/whl/cpu
    pip install --quiet -U onnx
    python ./MNIST_model/torch_MLP.py --epochs=2
    deactivate
    rm -r "$script_directory/py3"
    rm -r "$script_directory/data"
else
    echo "$ONNX $DIGIT $OUTPUT exist."
fi
import torch
import torch.nn as nn
import torch.nn.functional as F
from torchvision import datasets, transforms
import numpy as np
import onnx
import argparse
import os

## Model
class M(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self.fc1 = torch.nn.Linear(28*28, 50)
        self.fc1_drop = nn.Dropout(0.2)
        self.fc2 = torch.nn.Linear(50, 50)
        self.fc2_drop = nn.Dropout(0.2)
        self.fc3 = torch.nn.Linear(50, 10)

    def forward(self, x):
        # Flatten input
        x = torch.flatten(x, 1)
        # FC1 - ReLU - Dropout
        x = self.fc1(x)
        x = F.relu(x)
        x = self.fc1_drop(x)
        # FC2 - ReLU - Dropout
        x = self.fc2(x)
        x = F.relu(x)
        x = self.fc2_drop(x)
        # FC3
        x = self.fc3(x)
        return x

def train(model, train_loader, epoch, optimizer, criterion, log_interval=200):
    # Set model to training mode
    model.train()
    # Loop over each batch from the training set
    for batch_idx, (data, target) in enumerate(train_loader):
        # Zero gradient buffers
        optimizer.zero_grad()
        # Pass data through the network
        output = model(data)
        # Calculate loss
        loss = criterion(output, target)
        # Backpropagate
        loss.backward()
        # Update weights
        optimizer.step()
        if batch_idx % log_interval == 0:
            print('Train Epoch: {} [{}/{} ({:.0f}%)]\tLoss: {:.6f}'.format(
                epoch, batch_idx * len(data), len(train_loader.dataset),
                100. * batch_idx / len(train_loader), loss.data.item()))

def validate(model, validation_loader, criterion):
    model.eval()
    val_loss, correct = 0, 0
    for data, target in validation_loader:
        output = model(data)
        val_loss += criterion(output, target).data.item()
        pred = output.data.max(1)[1]  # get the index of the max log-probability
        correct += pred.eq(target.data).cpu().sum()
    val_loss /= len(validation_loader)
    accuracy = 100. * correct.to(torch.float32) / len(validation_loader.dataset)
    print('\nValidation set: Average loss: {:.4f}, Accuracy: {}/{} ({:.2f}%)\n'.format(
        val_loss, correct, len(validation_loader.dataset), accuracy))
    return accuracy

def main():
    parser = argparse.ArgumentParser(description='PyTorch MNIST Example')
    parser.add_argument('--batch-size', type=int, default=32, metavar='N',
                        help='input batch size for training (default: 32)')
    parser.add_argument('--epochs', type=int, default=15, metavar='N',
                        help='number of epochs to train (default: 15)')
    parser.add_argument('--test', action='store_true', default=False,
                        help='test the model Best_mnist_MLP.pt if it exists')
    args = parser.parse_args()
    folder_path = os.path.dirname(os.path.abspath(__file__))
    data_path = os.path.join(folder_path, "data")
    model_path = os.path.join(folder_path, "Best_mnist_MLP.pt")
    onnx_path = os.path.join(folder_path, "MLP_MNIST.onnx")
    digit_path = os.path.join(folder_path, "digit")
    output_path = os.path.join(folder_path, "output_digit")
    trf = transforms.Compose([
        transforms.ToTensor(),
        transforms.Normalize((0.1307,), (0.3081,))
    ])
    train_dataset = datasets.MNIST(data_path,
                                   train=True,
                                   download=True,
                                   transform=trf)
    validation_dataset = datasets.MNIST(data_path,
                                        train=False,
                                        transform=trf)
    train_loader = torch.utils.data.DataLoader(dataset=train_dataset,
                                               batch_size=args.batch_size,
                                               shuffle=True)
    validation_loader = torch.utils.data.DataLoader(dataset=validation_dataset,
                                                    batch_size=args.batch_size,
                                                    shuffle=False)
    ####### TRAIN ########
    model = M().cpu()
    optimizer = torch.optim.SGD(model.parameters(), lr=0.01, momentum=0.5)
    criterion = nn.CrossEntropyLoss()
    if args.test:
        assert os.path.isfile(model_path), "Best_mnist_MLP.pt not found"
        model.load_state_dict(torch.load(model_path))
        validate(model, validation_loader, criterion)
    else:
        best_acc = 0
        for epoch in range(1, args.epochs + 1):
            train(model, train_loader, epoch, optimizer, criterion)
            acc = validate(model, validation_loader, criterion)
            if acc > best_acc:
                best_acc = acc
                print('New best accuracy : ', best_acc)
                torch.save(model.state_dict(), model_path)
                print('-------------- Best model saved --------------\n')
    # Find one digit correctly predicted
    not_found = True
    i = 0
    while not_found:
        if i > train_dataset.__len__():
            raise RuntimeError('No correctly predicted digits')
        x, t = train_dataset.__getitem__(i)
        out = model(x)
        pred = out.data.max(1)[1]  # get the index of the max
        not_found = not pred.eq(t).item()  # stop once the prediction is correct
        i += 1
    # Save digit & the model output (i was incremented past the found index)
    x, _ = train_dataset.__getitem__(i - 1)
    model.load_state_dict(torch.load(model_path))
    output = model(x)
    np.save(digit_path, np.expand_dims(x.numpy(), 0))
    np.save(output_path, output.detach().numpy())
    ####### EXPORT ONNX ########
    torch.onnx.export(model, x, onnx_path, verbose=True, input_names=["actual_input"], output_names=["output"])

if __name__ == '__main__':
    main()
@@ -22,17 +22,19 @@
#include "aidge/backend/cpu/operator/HardmaxImpl_forward_kernels.hpp"

void Aidge::HardmaxImpl_cpu::forward() {
    const Hardmax_Op& op_ = dynamic_cast<const Hardmax_Op&>(mOp);
    // Check if input is provided
    assert(op_.getInput(0) && "missing input");

    // Create the forward kernel with the wanted types
    auto kernelFunc = Registrar<HardmaxImplForward_cpu>::create({
        op_.getInput(0)->dataType(),
        op_.getOutput(0)->dataType()});

    // Call kernel
    kernelFunc(op_.getStaticAttributes(),
               op_.getInput(0)->dims(),
               op_.getInput(0)->getImpl()->rawPtr(),
               op_.getOutput(0)->getImpl()->rawPtr());
}
@@ -26,11 +26,21 @@ namespace Aidge {
// We template the kernel on the input and output types
// Change the arguments according to the inputs and outputs of the operator
class HardmaxImplForward_cpu
    : public Registrable<HardmaxImplForward_cpu,
                         std::tuple<DataType, DataType>,
                         void(const typename Hardmax_Op::Attrs&,
                              const std::vector<DimSize_t>&,
                              const void*,
                              void*)> {};
class HardmaxImplBackward_cpu
    : public Registrable<HardmaxImplBackward_cpu,
                         std::tuple<DataType, DataType>,
                         void(const typename Hardmax_Op::Attrs&,
                              const std::vector<DimSize_t>&,
                              const void*,
                              void*)> {};

// Then we declare the Impl class for the operator
class HardmaxImpl_cpu : public OperatorImpl {
public:
...
@@ -9,8 +9,39 @@
 *
 ********************************************************************************/

#include "aidge/operator/Hardmax.hpp"

#include <string>

#include "aidge/data/Tensor.hpp"
#include "aidge/operator/Producer.hpp"
#include "aidge/utils/ErrorHandling.hpp"
#include "aidge/utils/Registrar.hpp"
#include "aidge/utils/Types.h"

const std::string Aidge::Hardmax_Op::Type = "Hardmax";

Aidge::Hardmax_Op::Hardmax_Op(const Hardmax_Op& op)
    : OperatorTensor(op),
      Attributes_(op)
{
    // mImpl is the implementation of the operator. It contains its data, memory size and backend.
    // Since each tensorOperator has a unique id in the registrar we cannot simply copy op.mImpl.
    // Hence here we retrieve the backend of the output tensor of op to create a new implementation object.
    // The output tensor is chosen by convention to get the backend since not all operators have inputs but all have at least one output.
    if (op.mImpl) {
        SET_IMPL_MACRO(Hardmax_Op, *this, op.backend());
    } else {
        mImpl = nullptr;
    }
}

std::shared_ptr<Aidge::Operator> Aidge::Hardmax_Op::clone() const {
    return std::make_shared<Hardmax_Op>(*this);
}

void Aidge::Hardmax_Op::setBackend(const std::string &name, Aidge::DeviceIdx_t device) {
    SET_IMPL_MACRO(Hardmax_Op, *this, name);
    mOutputs[0]->setBackend(name, device);
}