Commit 2f0ae471 authored by Boris Baldassari's avatar Boris Baldassari

Fix aura article.

Also, by being able to run the process everywhere, we could execute it on several configurations:
* A high-range server (label: SDIA), HDD disks and 2 x Xeon (48 threads).
* With a single container for data preparation vs. multiple containers executed in parallel (label: Mono / Multi).
The following plot shows the evolution of performance in various situations:
{{< grid/div isMarkdown="false" >}}
<img src="/images/articles/aice_aura_demonstrator/benchmark_perf.png" alt="Execution time benchmark" class="img-responsive">
<br />
{{</ grid/div >}}
We could identify different behaviours regarding performance. The data preparation step relies heavily on I/O, and improving the disk throughput (e.g. SSD + NVMe instead of a classic HDD) yields a 30% gain. The ML training, on the other hand, is very CPU- and memory-intensive, and running it on a node with a large number of threads (e.g. 48 in our case) brings a stunning 10x performance improvement compared to a laptop equipped with an Intel i7.
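As an aside, the Mono/Multi comparison can be illustrated with a minimal sketch in plain Python. This is not the article's actual pipeline: `prepare_chunk` is a hypothetical stand-in for one container's data-preparation workload, and the thread pool plays the role of several containers running in parallel on an I/O-bound task.

```python
# Illustrative sketch (hypothetical workload, not the article's code):
# sequential ("Mono") vs parallel ("Multi") execution of independent
# data-preparation tasks, analogous to one container vs several in parallel.
import time
from concurrent.futures import ThreadPoolExecutor

def prepare_chunk(chunk_id: int) -> int:
    """Stand-in for one data-preparation task (e.g. one container's workload)."""
    time.sleep(0.1)  # simulate I/O-bound work (disk reads/writes)
    return chunk_id

chunks = list(range(8))

# Mono: one worker processes all chunks one after another.
start = time.perf_counter()
mono = [prepare_chunk(c) for c in chunks]
mono_time = time.perf_counter() - start

# Multi: eight workers process the chunks concurrently.
start = time.perf_counter()
with ThreadPoolExecutor(max_workers=8) as pool:
    multi = list(pool.map(prepare_chunk, chunks))
multi_time = time.perf_counter() - start

print(f"Mono: {mono_time:.2f}s, Multi: {multi_time:.2f}s")
```

Because the simulated work is I/O-bound (the worker mostly waits), the parallel run finishes in roughly the time of a single chunk, mirroring why splitting the data preparation across containers pays off.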
### Visualisation process