feat/release_pip
Context
See issue: aidge#67 (closed)
Activity
assigned to @gregkub
added 2 commits
- Resolved by Cyril Moineau
added 1 commit
- 8c682e30 - fix : missing stage & upd ref for gitlab yml files
added CI ⚡ Compilation ⚙ labels
added 2 commits
added 1 commit
- 9c4f7342 - fix : added missing dependency (because onnx simplifier cannot do it by himself)
added 16 commits
- 4a6b7683...9d8f2a7f - 14 commits from branch dev
- 0046fe5e - Merge branch 'dev' into feat/release_pip
- a77ab8b0 - feat : override before_script to retrieve test dependency
@pineapple Do you have any idea about the origin of this error?
https://gitlab.eclipse.org/eclipse/aidge/aidge_interop_torch/-/jobs/406724#L968
Also, locally I get an error on this line saying that the function `aidge_core.compile_gradient` doesn't exist. This seems right, because I haven't been able to find it anywhere. If you happen to know where it is, hit me up \o/
requested review from @pineapple
```python
@staticmethod
def backward(ctx, grad_output):
    if not self.grad_compiled:
        aidge_core.compile_gradient(self._graph_view)
```
Running the tests locally to try to solve these issues returns an error on this line: `aidge_core` doesn't seem to have any `compile_gradient` function. I looked for it in the whole codebase but cannot find it anywhere.
Since aidge_core@ecc96977 grad is lazily initialized, so `compile_gradient` and `init_grad` have been removed.
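For context, a minimal sketch of what the updated `backward` would look like once the call is gone (the wrapper class name here is hypothetical; `grad_compiled` and `self._graph_view` are the attributes from the diff above, and the actual backward body is elided):

```python
import torch

class AidgeBackward(torch.autograd.Function):  # hypothetical wrapper name
    @staticmethod
    def backward(ctx, grad_output):
        # Old code (pre aidge_core@ecc96977):
        #     if not self.grad_compiled:
        #         aidge_core.compile_gradient(self._graph_view)
        # Gradients are now lazily initialized by aidge_core itself, so the
        # guard and the compile_gradient() call are dropped entirely.
        ...  # the actual backward computation, unchanged
```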
Ok, now the real issue :/
https://gitlab.eclipse.org/eclipse/aidge/aidge_interop_torch/-/jobs/406724#L968
It seems that the imported model contains 3 input nodes instead of 1:

```
$ p aidge_model.get_input_nodes()
{Node(name='input_24', optype='Conv', parents: [1, 1, 0], children: 1),
 Node(name='input_12', optype='Conv', parents: [1, 1, 0], children: 1),
 Node(name='input', optype='Conv', parents: [0, 1, 0], children: 1)}
```

Here is the model after being exported to ONNX & re-imported to Aidge. We can clearly see that the export/import phase added 3 extra inputs & extra outputs, which is not normal:
```mermaid
%%{init: {'flowchart': { 'curve': 'monotoneY'}, 'fontFamily': 'Verdana' } }%%
flowchart TB
FC_1("data_37\n<sub><em>(FC#1)</em></sub>")
Producer_10("layer_15_bias\n<sub><em>(Producer#10)</em></sub>"):::producerCls
ReLU_0("Relu_1\n<sub><em>(ReLU#0)</em></sub>")
Producer_8("layer_13_bias\n<sub><em>(Producer#8)</em></sub>"):::producerCls
Producer_9("layer_15_weight\n<sub><em>(Producer#9)</em></sub>"):::producerCls
Producer_6("layer_10_weight\n<sub><em>(Producer#6)</em></sub>"):::producerCls
Conv_2("input_24\n<sub><em>(Conv#2)</em></sub>")
Producer_2("layer_2_bias\n<sub><em>(Producer#2)</em></sub>"):::producerCls
Producer_7("layer_13_weight\n<sub><em>(Producer#7)</em></sub>"):::producerCls
Producer_3("layer_5_weight\n<sub><em>(Producer#3)</em></sub>"):::producerCls
Producer_0("layer_0_weight\n<sub><em>(Producer#0)</em></sub>"):::producerCls_rootCls
MaxPooling_0("input_8\n<sub><em>(MaxPooling#0)</em></sub>")
Conv_1("input_12\n<sub><em>(Conv#1)</em></sub>")
ReLU_2("Relu_6\n<sub><em>(ReLU#2)</em></sub>")
BatchNormalization_1("BatchNormalization_7\n<sub><em>(BatchNormalization#1)</em></sub>"):::genericCls
ReLU_3("Relu_8\n<sub><em>(ReLU#3)</em></sub>")
MaxPooling_1("input_20\n<sub><em>(MaxPooling#1)</em></sub>")
ReLU_4("Relu_11\n<sub><em>(ReLU#4)</em></sub>")
FC_0("input_28\n<sub><em>(FC#0)</em></sub>")
ReLU_5("Relu_14\n<sub><em>(ReLU#5)</em></sub>")
Producer_5("layer_7_bias\n<sub><em>(Producer#5)</em></sub>"):::producerCls
Producer_4("layer_7_weight\n<sub><em>(Producer#4)</em></sub>"):::producerCls
Conv_0("input\n<sub><em>(Conv#0)</em></sub>")
BatchNormalization_0("BatchNormalization_2\n<sub><em>(BatchNormalization#0)</em></sub>"):::genericCls
ReLU_1("Relu_3\n<sub><em>(ReLU#1)</em></sub>")
Producer_1("layer_2_weight\n<sub><em>(Producer#1)</em></sub>"):::producerCls
Producer_10-->|"0 [10]→2"|FC_1
ReLU_0-->|"0→0"|BatchNormalization_0
Producer_8-->|"0 [84]→2"|FC_0
Producer_9-->|"0 [10, 84]→1"|FC_1
Producer_6-->|"0 [120, 16, 5, 5]→1"|Conv_2
Conv_2-->|"0→0"|ReLU_4
Producer_2-->|"0 [6]→2"|BatchNormalization_0
Producer_7-->|"0 [84, 120]→1"|FC_0
Producer_3-->|"0 [16, 6, 5, 5]→1"|Conv_1
Producer_0-->|"0 [6, 1, 5, 5]→1"|Conv_0
MaxPooling_0-->|"0→0"|Conv_1
Conv_1-->|"0→0"|ReLU_2
ReLU_2-->|"0→0"|BatchNormalization_1
BatchNormalization_1-->|"0→0"|ReLU_3
ReLU_3-->|"0→0"|MaxPooling_1
MaxPooling_1-->|"0→0"|Conv_2
ReLU_4-->|"0→0"|FC_0
FC_0-->|"0→0"|ReLU_5
ReLU_5-->|"0→0"|FC_1
Producer_5-->|"0 [16]→2"|BatchNormalization_1
Producer_4-->|"0 [16]→1"|BatchNormalization_1
Conv_0-->|"0→0"|ReLU_0
BatchNormalization_0-->|"0→0"|ReLU_1
ReLU_1-->|"0→0"|MaxPooling_0
Producer_1-->|"0 [6]→1"|BatchNormalization_0
input0((in#0)):::inputCls--->|"→0"|Conv_0
input1((in#1)):::inputCls--->|"→2"|Conv_0
input2((in#2)):::inputCls--->|"→2"|Conv_1
input3((in#3)):::inputCls--->|"→2"|Conv_2
FC_1--->|"0→"|output0((out#0)):::outputCls
BatchNormalization_1--->|"1→"|output1((out#1)):::outputCls
BatchNormalization_1--->|"2→"|output2((out#2)):::outputCls
BatchNormalization_0--->|"1→"|output3((out#3)):::outputCls
BatchNormalization_0--->|"2→"|output4((out#4)):::outputCls
classDef inputCls fill:#afa
classDef outputCls fill:#ffa
classDef externalCls fill:#ccc
classDef producerCls fill:#ccf
classDef genericCls fill:#f9f9ff,stroke-width:1px,stroke-dasharray: 5 5
classDef metaCls stroke-width:5px
classDef rootCls stroke:#f00
classDef producerCls_rootCls stroke:#f00,fill:#ccf
classDef genericCls_rootCls stroke:#f00,fill:#f9f9ff,stroke-width:1px,stroke-dasharray: 5 5
classDef metaCls_rootCls stroke:#f00,stroke-width:5px
```

I don't know enough about PyTorch & aidge_interop_torch as of now to understand where the issue is; I'll sort this out tomorrow.
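For anyone wanting to reproduce locally, here is a minimal sketch of the round trip (the stand-in model is hypothetical, and `aidge_onnx.load_onnx` as the import entry point is an assumption; `get_input_nodes` is the call from the session above). Note that in the diagram the extra `in#1`..`in#3` edges all land on port 2 of the Conv nodes, i.e. they look like dangling bias inputs:

```python
import torch
import torch.nn as nn
import aidge_onnx  # load_onnx() is assumed here; check the actual aidge_onnx API

# Tiny stand-in for the LeNet-like model in the diagram above (hypothetical).
model = nn.Sequential(
    nn.Conv2d(1, 6, 5, bias=False),  # no bias, like the Convs in the diagram
    nn.BatchNorm2d(6),
    nn.ReLU(),
)

# Round trip: torch -> ONNX -> Aidge.
torch.onnx.export(model, torch.randn(1, 1, 28, 28), "model.onnx")
aidge_model = aidge_onnx.load_onnx("model.onnx")

# Expected: a single data input node. The job linked above instead reports
# the three Conv nodes, presumably via their unconnected bias ports.
print(aidge_model.get_input_nodes())
```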
Clearly this module is not a priority. I would be in favor of merging this into dev with the broken CI and marking this module as broken. The reasoning is that this MR doesn't break the CI, it just highlights issues, and these issues are not on the roadmap.

@pierregaillard do you agree with this prioritization ?
changed this line in version 8 of the diff
Totally agree @cmoineau!