
Reshape forward dims

Merged Cyril Moineau requested to merge ReshapeForwardDims into dev
3 files  +40  −24
@@ -22,7 +22,13 @@
namespace Aidge {
void constantFolding(std::shared_ptr<GraphView> graph);
/**
* @brief Retrieve the parts of the graph that can be pre-computed and replace them with a Producer.
*
* @param graph Graph in which to fold the constants
* @param constantShape If true, Shape operators are considered to be constant
*/
void constantFolding(std::shared_ptr<GraphView> graph, bool constantShape=false);
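As a minimal usage sketch (not part of this diff), assuming a GraphView built or imported elsewhere and the header paths noted in the comments:

#include <memory>
#include "aidge/graph/GraphView.hpp"   // header paths are assumptions of this sketch
#include "aidge/recipes/Recipes.hpp"

void foldConstants(std::shared_ptr<Aidge::GraphView> graph) {
    // Replace every pre-computable sub-graph by a Producer; with the second
    // argument set to true, Shape operators are treated as constant as well.
    Aidge::constantFolding(graph, /*constantShape=*/true);
}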
// FUSE MATMUL + ADD -> FC
@@ -139,16 +145,16 @@ void expandMetaOps(std::shared_ptr<GraphView> graph, bool recursive = false);
/**
* @brief Tile any :cpp:function:`Aidge::MatMul` operator to several fixed size matrix multiplications.
* For instance, for a MatMul of size 80x80 and a tiling of 16x16, this will tile
* the MatMul operator to 25 (5 by 5) MatMul operators of size 16x16, with Slice
* operators inserted at the inputs and Concat operators inserted at the outputs.
*
* This is especially useful when matrix multiplication must be mapped to fixed
* maximum size hardware TPU (Tensor Processing Unit) or MMA (Matrix Multiplication
* Accelerator). This recipe can be combined with the :cpp:function:`Aidge::convToMatMul` recipe in
* order to convert convolutions to matrix multiplication beforehand, and
* :cpp:function:`Aidge::constantFolding` recipe to fold sliced constant tensors.
*
* @param matMul MatMul operator to be tiled.
* @param maxDims Maximum output dimensions of the tiled MatMul operators.
*/
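For illustration (not part of this diff), tiling the 80x80 example from the comment above could look like the sketch below; the matMulTiling entry point and the {16, 16} maxDims form are assumptions, since the declaration itself is outside this hunk:

#include <memory>
#include <vector>
#include "aidge/graph/Node.hpp"        // header paths are assumptions of this sketch
#include "aidge/recipes/Recipes.hpp"

void tileMatMul(std::shared_ptr<Aidge::Node> matMulNode) {
    // For an 80x80 MatMul, 16x16 tiles yield a 5x5 grid of MatMul operators,
    // with Slice nodes inserted at the inputs and Concat nodes at the outputs.
    Aidge::matMulTiling(matMulNode, {16, 16});
}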
@@ -181,7 +187,7 @@ size_t convToMatMul(std::shared_ptr<GraphView> graph);
/**
* @brief Adapt a graph to the available kernels of a backend.
*
* @param graph Graph to manipulate
*/
void adaptToBackend(std::shared_ptr<GraphView> graph);
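A hedged sketch of a typical call site (not part of this diff); selecting the backend on the GraphView beforehand is an assumption of this example:

#include <memory>
#include "aidge/graph/GraphView.hpp"   // header paths are assumptions of this sketch
#include "aidge/recipes/Recipes.hpp"

void adaptGraph(std::shared_ptr<Aidge::GraphView> graph) {
    graph->setBackend("cpu");      // choose the target backend first (assumed workflow)
    Aidge::adaptToBackend(graph);  // rewrite nodes so each one maps to an available kernel
}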
@@ -189,18 +195,18 @@ void adaptToBackend(std::shared_ptr<GraphView> graph);
/**
* @brief Create a GenericOp from an Operator and replace it
*
* @param node Node whose Operator will be changed into a generic Operator
*/
void toGenericOp(std::shared_ptr<Node> node);
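A minimal sketch (not part of this diff), e.g. to keep an unsupported layer as an opaque node during export; the header paths are assumptions:

#include <memory>
#include "aidge/graph/Node.hpp"        // header paths are assumptions of this sketch
#include "aidge/recipes/Recipes.hpp"

void makeGeneric(std::shared_ptr<Aidge::Node> node) {
    // The node keeps its place in the graph but now wraps a generic Operator.
    Aidge::toGenericOp(node);
}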
/**
* @brief The node passed contains an operator whose input of index 1 is expected to be weights of type Int4, Int3, Int2 or binary.
* This recipe only performs memory transformations on the weight tensor.
* First, it permutes the dimensions to match the NHWC data format.
* Second, it compacts the last dimension of the weights (the channel dimension) into 8 bits.
*
* @param node Node
*/
void applyWeightInterleaving(std::shared_ptr<Node> node);
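A hedged sketch (not part of this diff), assuming `node` is a quantized layer whose input of index 1 already holds Int4/Int3/Int2/binary weights:

#include <memory>
#include "aidge/graph/Node.hpp"        // header paths are assumptions of this sketch
#include "aidge/recipes/Recipes.hpp"

void interleaveWeights(std::shared_ptr<Aidge::Node> node) {
    // Memory-layout change only: the weight tensor is permuted to NHWC and its
    // channel (last) dimension is packed into 8-bit words.
    Aidge::applyWeightInterleaving(node);
}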