
Refactor OperatorImpl for backend/export

Olivier BICHLER requested to merge backend_export into dev

Refactor OperatorImpl for backend/export.

  • Changed the Registrable class template to be more general (using Func instead of std::function<Func>).
  • Fixed a bug in Log::log() that did not correctly handle formatted strings for the file log.
  • Moved the producer-consumer model into a separate ProdConso class, which is returned by the prodConso() method in OperatorImpl.
  • Added a new ImplSpec struct, which allows specifying an implementation (I/O type, format, dims...); a minimal sketch follows this list.
  • Added a new Impl struct, which contains the implementation details (producer-consumer model, forward and backward kernels).
  • Added getRequiredSpec(), getBestMatch(), getAdaptation() and getBestAdaptation() to OperatorImpl.
  • Added an adaptToBackend() recipe.
  • Proposed a fix for #153.
  • Added Operator::getAvailableBackend().
  • Fixed a bug with the Identity implementation and the Scheduler: to work correctly, the output tensor cannot be the input, as this would imply that the values are identical even before the next forward() call.
  • Allowed setting a list of backends in order of preference.
  • Adapted the operator implementation registrars for the CPU backend.
  • Adapted the operator implementation registrars for the CUDA backend.
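
Below is a minimal Python sketch of how an ImplSpec can be built with these new constructs, reusing only the constructors that appear in the export example further down; the list form of set_backend() shown in the comment is an assumption derived from the "order of preference" item above, not a confirmed binding.

import aidge_core

# Describe an implementation whose inputs and outputs are int16 tensors
# (same ImplSpec/IOSpec constructors as in the export example below).
spec = aidge_core.ImplSpec(aidge_core.IOSpec(aidge_core.dtype.int16))

# Assumed API: set several backends on an operator by order of preference,
# the first one providing a matching implementation being selected.
# op.set_backend(["export_arm_cortexm", "cpu"])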

TODO

  • Double check with @cmoineau that it fits nicely with export.

TODO maybe later

  • Add an Operator registry that lists all available Operators. This has some issues: it needs an additional registrar for Operators, which conflicts with the operator backend registrars; it is somewhat redundant with the registries in ONNX import/export; and it requires a value (with the current map-based registry system), which is unclear...
  • Add a base OperatorWithImpl class with CRTP that factors out the backend methods? This may be a nice way to factorize code that is currently repeated for each operator, without requiring any API change. Out of the scope of this MR.

Usage as export backend

# Export class for "arm_cortexm". The same class needs to be registered for
# all operators supported by the export.
class OperatorExport_arm_cortexm(aidge_core.OperatorImpl):
    # Registry for all "arm_cortexm" operator exports
    registry = dict()

    def __init__(self, operator):
        super().__init__(operator, "export_arm_cortexm")

    # Override the virtual OperatorImpl method in order to provide the
    # available implementation specifications
    def get_available_impl_specs(self):
        if self.get_operator().type() in self.registry:
            return list(self.registry[self.get_operator().type()].keys())
        else:
            return []

    # Decorator to register kernels for this export
    @staticmethod
    def register(type, spec):
        def decorator(operator):
            def wrapper(*args, **kwargs):
                return operator(*args, **kwargs)

            if type not in OperatorExport_arm_cortexm.registry:
                OperatorExport_arm_cortexm.registry[type] = dict()
            OperatorExport_arm_cortexm.registry[type][spec] = operator
            return wrapper
        return decorator


# Register kernels: for a given type and a given implementation specification
# TODO: the FCOpKernel could inherit from a generic base class, either for this export or for all export types...
@OperatorExport_arm_cortexm.register(aidge_core.FCOp.Type, aidge_core.ImplSpec(aidge_core.IOSpec(aidge_core.dtype.int16)))
class FCOpKernel:
    def generate(self):
        print("Gen code for FCOp")

# TODO: register other kernels...

# Register all operator types to the same export class
aidge_core.register_FCOp("export_arm_cortexm", OperatorExport_arm_cortexm)
aidge_core.register_ReLUOp("export_arm_cortexm", OperatorExport_arm_cortexm)
# TODO: register other operators... 

# Use the CPU backend for Tensor in the "export_arm_cortexm" backend
aidge_core.register_Tensor(["export_arm_cortexm", aidge_core.dtype.float32],
                           aidge_core.get_key_value_Tensor(["cpu", aidge_core.dtype.float32]))
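
As a rough usage sketch (not part of this MR), the registry populated above can then be walked to generate code for every registered (operator type, spec) pair; only the names defined in the example are used.

# Iterate over the export registry and invoke each registered kernel.
for op_type, kernels in OperatorExport_arm_cortexm.registry.items():
    for spec, kernel_cls in kernels.items():
        kernel_cls().generate()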
