Commit 6f00d3f6 authored by Florent Zara

Merge branch 'cherry-pick-a543666a' into 'dev-onepager'

Update aice aura demonstrator following fza review.

See merge request !10
parents e70444f2 46f7e55a
Pipeline #3139 passed
@@ -137,10 +137,10 @@ Also by being able to run the process everywhere, we could execute it on several
![Workflow benchmarking](/images/articles/aice_aura_demonstrator/benchmark_perf.png)
On three different machines:
* A middle-range laptop (label: Laptop), HDD disks and i7 CPU.
* A high-range station (label: Station), SSD disks and (a better) i7 CPU.
* A high-range server (label: SDIA), HDD disks and 2 x Xeon (48 threads).
* With a single container for data preparation vs. multiple containers executed in parallel (label: Mono / Multi).
We could identify different behaviours regarding performance. The data preparation step relies heavily on I/O, and improving the disk throughput (e.g. SSD + NVMe instead of a classic HDD) yields a 30% gain. The ML training, on the other hand, is very CPU- and memory-intensive, and running it on a node with a large number of threads (48 in our case) brings a stunning 10x performance improvement compared to a laptop equipped with an Intel i7.
@@ -148,7 +148,7 @@ We could identify different behaviours regarding performance. The data preparati...
AURA uses Grafana to display the ECG signals and the associated annotations, both for the creation of annotated data sets and for their exploitation. In order to build this workflow we need to import the rr-intervals files and their associated annotations in a PostgreSQL database, and configure Grafana to read and display the corresponding time series.
An example of rr-interval plot with the associated annotations (blue/red bottom line) is shown below:
![ECG and annotations](/images/articles/aice_aura_demonstrator/ecg_annotations.png)
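The import step described above could be sketched as follows. This is a minimal illustration, not AURA's actual loader: the CSV column names (`timestamp`, `rr_interval`, `annotation`) and the table name `rr_intervals` are hypothetical, and the sketch only prepares parameterized INSERT statements that a PostgreSQL driver such as psycopg2 would execute.

```python
import csv
import io

def rr_rows_to_inserts(csv_text, table="rr_intervals"):
    """Turn an rr-intervals CSV into parameterized INSERT statements.

    Column and table names are illustrative assumptions; adapt them to
    the actual schema before feeding the statements to a PostgreSQL
    driver (e.g. psycopg2's cursor.execute(sql, params)).
    """
    reader = csv.DictReader(io.StringIO(csv_text))
    stmts = []
    for row in reader:
        stmts.append((
            f"INSERT INTO {table} (ts, rr_ms, annotation) VALUES (%s, %s, %s)",
            (row["timestamp"], float(row["rr_interval"]), row["annotation"]),
        ))
    return stmts

# Hypothetical sample: one rr-interval sample with its annotation.
sample = "timestamp,rr_interval,annotation\n2020-01-01T00:00:00,812.5,noisy\n"
stmts = rr_rows_to_inserts(sample)
print(len(stmts))  # 1
```

Grafana is then pointed at the same PostgreSQL database as a time-series data source, querying the timestamp and rr-interval columns.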