Fixed issues for LSTM
Fixes issues related to eclipse/aidge/aidge_core#167.
- Added `maxElements` input to `Stack`: @jeromeh I made some changes to allow specifying `maxElements` also through an input. This is actually a necessity to enable static forward dims of a graph with the `Stack` operator, as in the example below for `LSTM`:
```mermaid
%%{init: {'flowchart': { 'curve': 'monotoneY'}, 'fontFamily': 'Verdana' } }%%
flowchart TB
LSTM_0("lstm_output<br/><sub><em>(LSTM#0)</em></sub>"):::metaCls_rootCls
Pop_0(<em>Pop#0</em>)
Stack_0(<em>Stack#0</em>)
Identity_0(<em>Identity#0</em>)
Shape_0(<em>Shape#0</em>)
LSTM_0-->|"0→0"|Stack_0
LSTM_0-->|"0→0"|output1((out#1)):::outputCls
LSTM_0-->|"1→0"|output2((out#2)):::outputCls
Pop_0-->|"0→0"|LSTM_0
Identity_0-->|"0→0"|Pop_0
Identity_0-->|"0→0"|Shape_0
Shape_0-->|"0→1"|Stack_0
input0((in#0)):::inputCls--->|"→0"|Identity_0
Stack_0--->|"0→"|output0((out#0)):::outputCls
classDef inputCls fill:#afa
classDef outputCls fill:#ffa
classDef externalCls fill:#ccc
classDef producerCls fill:#ccf
classDef genericCls fill:#f9f9ff,stroke-width:1px,stroke-dasharray: 5 5
classDef metaCls stroke-width:5px
classDef rootCls stroke:#f00
classDef producerCls_rootCls stroke:#f00,fill:#ccf
classDef genericCls_rootCls stroke:#f00,fill:#f9f9ff,stroke-width:1px,stroke-dasharray: 5 5
classDef metaCls_rootCls stroke:#f00,stroke-width:5px
```
- Made `Shape` work with `forwardDims()`.
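As a rough illustration of the wiring in the diagram above (hypothetical names and a plain-Python sketch, not the actual Aidge API), the idea is that `Shape`'s output can drive the `Stack` capacity through an input, so that static shape inference can resolve the stacked output dimensions:

```python
# Minimal sketch (NOT the actual Aidge API): a Stack whose capacity
# (maxElements) can come either from an attribute or from a second
# input, which is what makes static forward dims possible when the
# element count is produced by a Shape node at graph-build time.

class Shape:
    """Returns the dimensions of its input tensor as a 1-D list."""
    def forward(self, tensor_dims):
        return list(tensor_dims)

class Stack:
    """Accumulates up to max_elements tensors along a new first axis."""
    def __init__(self, max_elements=None):
        self.max_elements = max_elements  # attribute form

    def forward_dims(self, item_dims, max_elements_input=None):
        # The input form takes precedence over the attribute form.
        n = max_elements_input if max_elements_input is not None else self.max_elements
        if n is None:
            raise ValueError("maxElements unresolved: cannot infer output dims")
        return [n] + list(item_dims)

# Mirror of the diagram: Shape#0 feeds input 1 of Stack#0.
seq_dims = [16, 8]                        # e.g. [sequence_length, features]
n_steps = Shape().forward(seq_dims)[0]    # sequence_length = 16
out_dims = Stack().forward_dims([8], max_elements_input=n_steps)
print(out_dims)  # [16, 8]
```

The design point is that a pure attribute would have to be known when the graph is written, whereas an input can be computed by another node (`Shape` here) during dimension propagation.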
Merge request reports
Activity
assigned to @olivierbichler
added Fix 🔥🔥 label
added 4 commits
- 4d0c9f38...269145ad - 2 commits from branch `dev`
- 3d9f7779 - Make Shape working with forwardDims()
- b6ef490e - Added maxElements input to Stack
enabled an automatic merge when all merge checks for b6ef490e pass
mentioned in commit 5d0baf0b
@olivierbichler this commit is annoying.
Indeed, this forces `Shape` to have an implementation in order to run `forward_dims`; otherwise `mOutputs[0]->getImpl()` is `0x0` (a null pointer).
Is it possible to revert it? If so, I would like to do it in !291 (merged)
@cmoineau But `Shape` is supposed to have a default implementation; in what case is this an issue?
When we import an ONNX model, run forward dims, then export back to ONNX. In that case we don't have access to any backend for the intermediate tensors.
Edited by Cyril Moineau
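The distinction under discussion can be sketched as follows (hypothetical code, not the actual Aidge implementation): the output dimensions of a `Shape` node are fully determined by the input's rank, so dimension propagation alone should not need a backend implementation; only producing the actual shape *values* during forward does:

```python
# Sketch of the distinction discussed above (not actual Aidge code):
# propagating dims through a Shape node needs only the input's rank,
# while computing its output values needs a backend implementation.

def shape_forward_dims(input_dims):
    """Shape's output is a 1-D tensor with one entry per input dim."""
    return [len(input_dims)]

def shape_forward(input_dims, backend_impl):
    """Producing the actual values requires an implementation."""
    if backend_impl is None:
        # Mirrors the reported failure: mOutputs[0]->getImpl() == 0x0
        raise RuntimeError("no backend implementation available")
    return backend_impl(input_dims)

print(shape_forward_dims([1, 16, 8]))  # [3] -- works with no backend
```

This is why requiring an implementation for `forward_dims` breaks the ONNX import → forward dims → ONNX export workflow, where no backend is ever attached to intermediate tensors.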