Multiple fixes to enable multi-GPU forward execution
Some of these fixes enable multi-GPU forward execution of cloned graphs:

- `GraphView::save()`
- `const` version of `refCastFrom()`
- `GraphView::add()` from another `GraphView` preserves ordered inputs/outputs

```python
aidge_model0 = aidge_onnx.load_onnx("mymodel.onnx")
aidge_model1 = aidge_model0.clone()
aidge_model0.set_backend("cuda", 0)
aidge_model1.set_backend("cuda", 1)
aidge_models = aidge_core.GraphView()
aidge_models.add(aidge_model0)
aidge_models.add(aidge_model1)
scheduler = aidge_core.ParallelScheduler(aidge_models)
scheduler.forward(True, [in_model0, in_model1])
```
The models `aidge_model0` and `aidge_model1` will run in parallel on 2 GPUs, in a synchronized fashion (the same layers execute at the same time). This should generalize to any number of device/backend combinations!
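The synchronized behavior can be sketched in plain Python (a toy illustration with stand-in layer names, not the Aidge implementation): two worker threads step through the same layer list and rendezvous at a barrier after each layer, so neither device starts the next layer before the other finishes the current one.

```python
import threading

layers = ["conv1", "relu1", "conv2", "fc"]  # stand-in layer names
barrier = threading.Barrier(2)              # keeps the two "devices" in lockstep
lock = threading.Lock()
log = []

def run_model(device):
    for layer in layers:
        with lock:
            log.append((device, layer))  # record which device ran which layer
        barrier.wait()  # both devices finish a layer before the next starts

threads = [threading.Thread(target=run_model, args=(d,)) for d in (0, 1)]
for t in threads:
    t.start()
for t in threads:
    t.join()

# Each barrier round contains the same layer from both devices.
for i, layer in enumerate(layers):
    round_entries = log[2 * i : 2 * i + 2]
    assert {d for d, _ in round_entries} == {0, 1}
    assert all(l == layer for _, l in round_entries)
```

This mirrors what the `ParallelScheduler` does for the combined `GraphView`: independent execution streams, but with a common schedule across devices.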
```python
import concurrent.futures

aidge_model0 = aidge_onnx.load_onnx("mymodel.onnx")
aidge_model1 = aidge_model0.clone()
aidge_model0.set_backend("cuda", 0)
aidge_model1.set_backend("cuda", 1)
scheduler0 = aidge_core.SequentialScheduler(aidge_model0)
scheduler1 = aidge_core.SequentialScheduler(aidge_model1)

with concurrent.futures.ThreadPoolExecutor(max_workers=2) as e:
    e.submit(scheduler0.forward, True, [in_model0])
    e.submit(scheduler1.forward, True, [in_model1])
```
Here, models `aidge_model0` and `aidge_model1` run in parallel, fully asynchronously.
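The asynchronous pattern can be sketched with stand-in workloads (a hypothetical `forward` stub in place of the Aidge schedulers): each submitted task runs on its own pool thread with no synchronization between them.

```python
import concurrent.futures
import time

def forward(device, data):
    # Stand-in for scheduler.forward(): each "model" runs independently,
    # with no synchronization between devices.
    time.sleep(0.01 * (device + 1))
    return (device, [x * 2 for x in data])

with concurrent.futures.ThreadPoolExecutor(max_workers=2) as e:
    f0 = e.submit(forward, 0, [1, 2, 3])
    f1 = e.submit(forward, 1, [4, 5, 6])

results = dict([f0.result(), f1.result()])
# results == {0: [2, 4, 6], 1: [8, 10, 12]}
```

Unlike the fire-and-forget `submit` calls above, keeping the futures and calling `result()` also surfaces any exception raised inside the worker, which is useful when debugging per-device failures.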