[Scheduler] Add Scheduler.getNodeScheduling method and fix sequential function.
Feature
- The scheduler now provides a list of nodes sorted by the scheduling order
Fix
- The `parallel`, `sequential` and `residual` functions now work in Python; there was an issue with the binding and the `py::explicitly_convertible` function.
Refactor:
- Separate scheduling & forward in the `Scheduler` object (linked with aidge_backend_cpu!5 (merged))
changed milestone to %v0.1.0
requested review from @pineapple
assigned to @cmoineau
@vtemplier I think you can use it as is to develop the export plugin :)
@pineapple I will add unit tests on the Python API based on this code sample later on:
```python
import aidge_core
import aidge_backend_cpu
import numpy as np

input_data = np.array([-1.0, 0.0, 1.0, -2.0]).astype(np.float32)
input_tensor = aidge_core.Tensor(input_data)
input_node = aidge_core.Producer(input_tensor, "X")
relu_node = aidge_core.ReLU()
graph_view = aidge_core.sequential([relu_node])
input_node.add_child(graph_view)
input_node.get_operator().set_datatype(aidge_core.DataType.Float32)
input_node.get_operator().set_backend("cpu")
graph_view.set_datatype(aidge_core.DataType.Float32)
graph_view.set_backend("cpu")

scheduler = aidge_core.SequentialScheduler(graph_view)
scheduler.forward()

print("Result :")
print(relu_node.get_operator().output(0))
print("Node Scheduling :")
print(scheduler.get_node_scheduling())  # return [relu_node]
```
After reflection, I will add this test to aidge_backend_cpu!5 (merged) as it requires a backend to run!
added 1 commit
- 36994a1c - [Scheduling] Refactor to separate scheduling & forward phase.
Linked with aidge_backend_cpu!5 (merged)
requested review from @vtemplier
@vtemplier Can you review and merge this so that I can run the unit tests on the aidge_backend_cpu MR?
Thanks in advance !
```cpp
38 38  Aidge::NbElts_t Aidge::Operator::getNbProducedData(Aidge::IOIndex_t outputIdx) const {
39 39      return mImpl->getNbProducedData(outputIdx);
40 40  }
   41  void Aidge::Operator::updateConsummerProducer(){
   42      mImpl->updateConsummerProducer();
```
Yes it is mandatory to do so, however we may be able to give a default implementation:
```cpp
    // each input is consumed by the minimum amount for a forward pass
    for (IOIndex_t inputIdx = 0; static_cast<NbElts_t>(inputIdx) < mNbConsumedData.size(); ++inputIdx)
        mNbConsumedData[inputIdx] += getNbRequiredData(inputIdx);
    mNbProducedData[0] += getRequiredMemory(0, {});
}
```
That is, every input is consumed and every output is produced. This can then be overridden depending on the backend.
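To make the bookkeeping of that default concrete, here is the same idea modeled in plain Python (an illustrative sketch only; the class and attribute names below are hypothetical, not Aidge API):

```python
# Illustrative model of the proposed default consumer/producer update:
# each call consumes the minimum required amount on every input and
# produces one forward pass worth of output. Names are hypothetical.
class OperatorStats:
    def __init__(self, required_per_input, output_memory):
        self.required = required_per_input        # data needed per input per pass
        self.output_memory = output_memory        # data produced per pass
        self.nb_consumed = [0] * len(required_per_input)
        self.nb_produced = 0

    def update_consumer_producer(self):
        # every input is consumed by the minimum amount for a forward pass
        for i in range(len(self.nb_consumed)):
            self.nb_consumed[i] += self.required[i]
        # every output is produced
        self.nb_produced += self.output_memory

op = OperatorStats(required_per_input=[4, 4], output_memory=4)
op.update_consumer_producer()
print(op.nb_consumed, op.nb_produced)  # → [4, 4] 4
```

A backend with a different consumption pattern (e.g. streaming operators) would override this method instead of using the default.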
mentioned in commit a3f646d2