Rewrite memory manager log_info in Python.
## Context
This MR fixes !276 (merged).
It began by looking at the memory management of the dinov2 model.
Realizing that it was not readable, I rewrote it in Python in order to easily tweak it.
Here is the new memory info after rewriting it:
Added a `display_names` parameter to `aidge_core.generate_optimized_memory_info`, which allows displaying the names or not:
## Benefits
This MR removes the dependency on gnuplot.
The log function is now easy to use!
## Miscellaneous
After using MetaOperators to simplify the graph, we get the following memory placement strategy, which surprised me as it produces a lower memory peak!
This makes sense: by fusing operations, the data loaded in RAM is consumed directly, so we do not need to save the intermediate outputs.
A little sad not to have found this out before the webinar.
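The effect can be illustrated with a toy liveness simulation (the buffer sizes below are made-up numbers, not the dinov2 figures): fusing two operators removes the intermediate buffer from the schedule, which lowers the peak.

```python
def peak_memory(schedule):
    """Each step is (allocated_bytes, freed_bytes); return the peak resident size."""
    current = peak = 0
    for alloc, free in schedule:
        current += alloc          # the step's output is materialized
        peak = max(peak, current)
        current -= free           # inputs no longer needed are released
    return peak

# Unfused A -> B: A's 100-byte intermediate must stay in RAM until B consumes it.
unfused = [
    (100, 0),    # A writes its intermediate output
    (80, 100),   # B writes its output, then A's buffer is freed
]

# Fused A+B as one meta-operator: only the final 80-byte output is materialized.
fused = [
    (80, 0),
]

print(peak_memory(unfused))  # 180
print(peak_memory(fused))    # 80
```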
For reproducibility, here is the code used:
```python
import aidge_core
import aidge_backend_cpu
import aidge_onnx

dinov2_model = aidge_onnx.load_onnx("hf_dinov2_sim.onnx")
aidge_core.fuse_to_metaops(dinov2_model, "MatMul-*>Add", "Linear")
aidge_core.fuse_to_metaops(dinov2_model,
    "ReduceMean-*>Sub#1~>(Pow#1->ReduceMean-*>Add#1->Sqrt)-*>Div#1-*>Mul#1-*>Add#2;"
    "Sub#1~*>Div#1;"
    "Pow#1<1~Producer;"
    "Add#1<*~Producer;"
    "Mul#1<*~Producer;"
    "Add#2<*~Producer;"
    "Sub#1~>$", "LayerNorm")
aidge_core.fuse_to_metaops(dinov2_model,
    "MatMul->Div#1->Softmax-*>MatMul;"
    "Div#1<1~Producer", "ScaledDotProductAttention")
aidge_core.fuse_to_metaops(dinov2_model,
    "ScaledDotProductAttention#1->Transpose->Reshape#1->Linear;"
    "Reshape#1<1~Producer;"
    "ScaledDotProductAttention#1<0-(Transpose<-Reshape#2<-Add#1);"
    "ScaledDotProductAttention#1<1-(Transpose<-Reshape#3<-Add#2);"
    "ScaledDotProductAttention#1<2-(Transpose<-Reshape#4<-Add#3);"
    "Reshape#2<1~Producer;"
    "Add#1<*-0-Split#1;"
    "Add#2<*-1-Split#1;"
    "Add#3<*-2-Split#1;"
    "Split#1<-MatMul;"
    "Split#1<1~Producer", "MultiHeadAttention")
aidge_core.fuse_to_metaops(dinov2_model,
    "Div#1->Erf->Add#1-*>Mul->Mul#2;"
    "Div#1<1~Producer;"
    "Add#1<*~Producer;"
    "Mul#2<*~Producer", "GeLU")
dinov2_model.set_ordered_outputs([
    dinov2_model.get_ordered_outputs()[0][0].inputs()[0],
    dinov2_model.get_ordered_outputs()[0]
])
dinov2_model.set_backend("cpu")
dinov2_model.set_datatype(aidge_core.dtype.float32)
dinov2_model.forward_dims([[1, 3, 224, 224]], True)
s = aidge_core.SequentialScheduler(dinov2_model)
s.generate_scheduling()
aidge_core.generate_optimized_memory_info(s, "stats_dyno", wrapping=False, display_names=False)
```
## Major modifications
- enhance structure: rewrite `generate_optimized_memory_info`
- add a `display_names` parameter to `generate_optimized_memory_info`, which allows displaying the names or not
- add the matplotlib dependency for the Python package in `pyproject.toml`
- remove the dependency on gnuplot
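The corresponding `pyproject.toml` entry presumably looks something like this (the exact section layout and any version pin are assumptions, not copied from the MR diff):

```toml
[project]
dependencies = [
    # hypothetical: matplotlib is now used by the Python memory-info plotting
    "matplotlib",
]
```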
## Activity
requested review from @olivierbichler
assigned to @cmoineau
@idealbuq This MR may interest you since you work on memory mapping
@olivierbichler I did not fully reproduce the log method, as I don't handle memory wrapping.
However, to be honest, I don't understand what that code represents...
I commented out the previous code, but I think I will completely remove the log function from the C++ side if we merge this.
Is it this code that you did not reproduce?
```cpp
fmt::print(gnuplot.get(), "set arrow from {},{} to {},{} nohead\n",
           startX, (contiguousOffset / 1024.0),
           (startX + 0.1), (contiguousOffset / 1024.0));
fmt::print(gnuplot.get(), "set arrow from {},{} to {},{} nohead\n",
           (startX + 0.05), ((contiguousOffset + contiguousSize) / 1024.0),
           (startX + 0.05), (wrappedOffset / 1024.0));
```
It is not critical. It actually shows where the wrapping is, by displaying an arrow from the end of the first section to the beginning of the wrapped section. It was not that clear anyway; there is probably a cleverer way to materialize that in the graph...
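For reference, a minimal matplotlib equivalent of those headless gnuplot arrows might look like the sketch below. The offsets and sizes are made-up numbers, and this is not the MR's actual plotting code, just an illustration of the same two line segments using `Axes.annotate` with a plain `"-"` arrow style:

```python
import matplotlib
matplotlib.use("Agg")  # headless backend, suitable for scripts/CI
import matplotlib.pyplot as plt

# Hypothetical wrapped buffer: a contiguous chunk near the top of the
# arena, with the remainder wrapped back to a lower offset (KiB).
start_x = 2.0
contiguous_offset, contiguous_size, wrapped_offset = 96.0, 32.0, 0.0

fig, ax = plt.subplots()
# Short horizontal tick at the start of the contiguous chunk
# (gnuplot: "set arrow ... nohead").
ax.annotate("", xy=(start_x + 0.1, contiguous_offset),
            xytext=(start_x, contiguous_offset),
            arrowprops=dict(arrowstyle="-"))
# Vertical segment from the end of the chunk down to where it wraps.
ax.annotate("", xy=(start_x + 0.05, wrapped_offset),
            xytext=(start_x + 0.05, contiguous_offset + contiguous_size),
            arrowprops=dict(arrowstyle="-"))
ax.set_xlabel("time")
ax.set_ylabel("memory (KiB)")
fig.savefig("wrapping_sketch.png")
```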
Edited by Olivier BICHLER
added 1 commit
- 55ce8814 - [Min] MemManager plot, remove axis and center more the fig.
- Edited by Cyril Moineau
added PriorityLow, Refactoring 🎨, StatusWork in Progress labels
mentioned in issue #211 (closed)
requested review from @idealbuq
@idealbuq @olivierbichler I will remove the gnuplot dependencies and push this once @pineapple has merged the 0.4
mentioned in merge request aidge!86 (merged)
added 2 commits
added 93 commits
- 1be99fe8...a983179d - 88 commits from branch `dev`
- 59bffdf1 - Rewrite memory manager log_info in Python.
- 80ea57b4 - [Min] MemManager plot, remove axis and center more the fig.
- 4f0ceb4d - Add matplotlib as dependance.
- 28470824 - Fix error with meminfo save fig.
- 61e9bf2a - Remove MemoryManager.log() method.
enabled an automatic merge when all merge checks for 61e9bf2a pass
aborted the automatic merge because the source branch was updated.