Matmul rework

Merged: Houssem ROUIS requested to merge hrouis/aidge_core:matmul_rework into dev

[ADD] Support for multi-dimensional matrix multiplication, following the semantics of numpy's MatMul operation.

In the context of multi-dimensional MatMul, an N-D Tensor is treated as a stack of 2-D matrices whose shape is given by the last two dimensions of the Tensor.

From the numpy documentation (a short illustration follows the list):

The behavior depends on the arguments in the following way.

  • If both arguments are 2-D they are multiplied like conventional matrices.
  • If either argument is N-D, N > 2, it is treated as a stack of matrices residing in the last two indexes and broadcast accordingly.
  • If the first argument is 1-D, it is promoted to a matrix by prepending a 1 to its dimensions. After matrix multiplication the prepended 1 is removed.
  • If the second argument is 1-D, it is promoted to a matrix by appending a 1 to its dimensions. After matrix multiplication the appended 1 is removed.
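
A short numpy illustration of these four rules (the shapes below are arbitrary examples chosen for this sketch, not taken from the MR):

    import numpy as np

    A = np.ones((3, 4))
    B = np.ones((4, 5))
    np.matmul(A, B).shape                # (3, 5): conventional 2-D matrix product

    S = np.ones((2, 3, 4))               # stack of two 3x4 matrices
    np.matmul(S, B).shape                # (2, 3, 5): B is broadcast across the stack

    v = np.ones(4)                       # 1-D first argument: promoted to (1, 4)
    np.matmul(v, B).shape                # (5,): the prepended 1 is removed

    w = np.ones(5)                       # 1-D second argument: promoted to (5, 1)
    np.matmul(np.matmul(A, B), w).shape  # (3,): the appended 1 is removed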
Edited by Maxence Naud


Activity

  • Maxence Naud approved this merge request

    41  41        AIDGE_ASSERT(matmulNode->getParent(1), "No weight detected to produce the fuseMulAdd recipe.");
    42  42
    43  43        std::shared_ptr<Node> weight = matmulNode->getParent(1)->cloneSharedOperators();
    44      -     const DimSize_t outSize = std::dynamic_pointer_cast<MatMul_Op>(matmulNode->getOperator())->getAttr<DimSize_t>("OutChannels");
        44  +    // TODO: find another way to get OutChannels for FC operator.
        45  +    // This poor fix supposes that one of Add inputs is a const and has the same outChannels as the output
        46  +    DimSize_t outSize = 0;
        47  +    const auto& op = std::dynamic_pointer_cast<OperatorTensor>(addNode->getOperator());
        48  +    for (size_t i = 0; i < op->nbInputs(); i++)
        49  +    {
        50  +        const auto& inTensor = op->getInput(i);
        51  +        if (inTensor->nbDims() > 0) {
        52  +            outSize = inTensor->dims()[inTensor->nbDims()-1];
        53  +            break;
        54  +        }
        55  +    }
    • Comment on lines +48 to +55

      This fix might change with the broadcasting MR !65 (merged)

    • Houssem ROUIS (Author)

      The broadcasting MR doesn't change the inputs of Add, so I don't think it will have an influence on this fix. However, I am not really convinced by the fix; I just found no other way, since the outputs of both Add and MatMul are dynamic, so we can't retrieve the outChannels from them.

    • Houssem ROUIS (Author)

      @cmoineau I don't know if you discussed this with @pineapple, but this is a bad fix and I can't find a proper way to get outChannels, because both Add and MatMul outputs are dynamic so I can't access the shape.

      Edited by Houssem ROUIS
    • We discussed it. This fix is good for now but could be changed later if needed. It does not prevent a merge.
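
      For reference, a minimal numpy sketch of the assumption behind this fix (all names and shapes here are hypothetical): the Add's constant input, the bias, carries outChannels as its last dimension, so it can be read even while the MatMul and Add output shapes are still dynamic.

          import numpy as np

          x = np.ones((8, 32))    # hypothetical input: batch x inChannels
          W = np.ones((32, 16))   # weights: inChannels x outChannels
          b = np.ones(16)         # constant bias: last dim == outChannels

          # outChannels is recovered from an Add input rather than from the
          # (dynamic) output shape, mirroring the loop in the diff above.
          out_channels = b.shape[-1]                          # 16
          assert out_channels == (np.matmul(x, W) + b).shape[-1]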

  • Houssem ROUIS added 3 commits

  • Maxence Naud added 41 commits

  • Maxence Naud enabled an automatic merge when the pipeline for 887249ac succeeds

  • Maxence Naud aborted the automatic merge because the target branch was updated

  • Houssem ROUIS added 2 commits
