Commit 70796793 authored by Boris Baldassari's avatar Boris Baldassari

Merge branch 'dev-onepager' into 'main'

onepager

See merge request !8
parents 47bd595b e20c745b
@@ -4,11 +4,12 @@ title = "Eclipse AI, Cloud & Edge (AICE) Working Group"
theme = "eclipsefdn-hugo-solstice-theme"
metaDataFormat = "yaml"
#googleAnalytics = "UA-910670-30"
themesDir = "node_modules/"
enableRobotsTXT = true
pluralizeListTitles = false
disableKinds = []
[Params]
google_tag_manager = "GTM-5WLCZXC"
description = "AICE Working Group: providing resources to promote the advancement, implementation, and verification of open source software for AI, Cloud, and Edge computing."
@@ -51,19 +52,45 @@ pluralizeListTitles = false
plainIDAnchors = true
hrefTargetBlank = true
[[menu.main]]
name = "Home"
url = "/"
weight = 5
identifier = "home"
[[menu.main]]
name = "News & Events"
url = "/news"
weight = 10
[[menu.main]]
name = "Participate"
url = "/#participating"
weight = 20
identifier = "participate"
[[menu.main]]
name = "Resources"
url = "/resources"
weight = 30
identifier = "resources"
[[menu.main]]
name = "Articles"
url = "/articles"
weight = 1
parent = "resources"
\ No newline at end of file
name = "Presentations"
url = "/resources/presentations"
weight = 4000
parent = "resources"
[[menu.main]]
name = "Use cases"
url = "/use_cases/"
weight = 4010
parent = "resources"
#[[menu.main]]
# name = "White Paper"
# url = "https://newsroom.eclipse.org/news/community-news/why-industry-needs-open-ecosystem-ai-cloud-and-edge-technologies"
# weight = 4020
# parent = "resources"
---
title: "Eclipse AI, Cloud & Edge (AICE) Working Group"
date: 2022-03-09T10:00:00+01:00
layout: "single"
footer_class: "footer-darker"
---
The Eclipse AI, Cloud & Edge (AICE) Working Group, currently being formed, promotes the advancement, development and experimentation of open source software for AI, Cloud & Edge technologies. It also manages and operates an open lab (the “AICE OpenLab”) that provides a set of resources to achieve these goals.
## Activities
The AICE WG and the associated AICE OpenLab achieve this by:
- Fostering open and neutral collaboration amongst members for the adoption of open source technologies.
- Defining, publishing and promoting reference architectures, blueprints and distributions of open source software that have been verified for industry AI, Cloud, and Edge standards, requirements, and use cases.
- Developing and providing open source verification test suites, test tools, calibrated datasets and hosted test infrastructure for industry AI, Cloud, and Edge standards, requirements and use cases.
- Ensuring that key requirements regarding privacy, security and ethics are integrated into all the OpenLab activities.
- Partnering with industry organizations to assemble and verify open source software for their standards, requirements, and use cases.
- Promoting the AICE OpenLab in the marketplace and engaging with the larger open source community.
- Managing and funding the lab infrastructure resources to support this work.
----
## Participating
### How to join?
Are you interested in joining AICE? It's easy: the working group is open to everyone.
The first step is to subscribe to the AICE mailing list, which can be found on the [Eclipse mailing lists page](https://accounts.eclipse.org/mailing-list/aice-wg). Archives can be browsed [here](http://www.eclipse.org/lists/aice-wg).
If you would like to engage more actively with the initiative, please read:
- [The Working Group Charter](https://www.eclipse.org/org/workinggroups/aice-charter.php)
- [The Participation Agreement](https://www.eclipse.org/org/workinggroups/wgpa/aice-working-group-participation-agreement.pdf)
Then contact [Florent Zara](mailto:florent.zara-at-eclipse-foundation.org) or [Gaël Blondelle](mailto:gael.blondelle-at-eclipse-foundation.org).
### Monthly meetings
We also hold monthly meetings to discuss actions, organise events and monitor progress of our various tasks.
Meetings are announced on the mailing list.
----
## Supporters
[![AURA logo](images/partner-logos/aura-logo.png)](https://en.aura.healthcare/)
----
## Testimonial
> We worked with AICE to audit our current setup and improve the performance of our MLOps pipeline for epileptic seizure detection. We started from a working, although non-optimal, prototype and turned it into a fully optimised, reproducible and scalable workflow that can be seamlessly deployed almost anywhere.
>
> We now have a clean and well-structured repository that we can reuse for future developments, and a set of established best practices to help us build and deliver better software solutions.
- Alexis Comte, data scientist at AURA.
----
---
title: "AICE: The AURA Demonstrator"
date: 2022-01-20T10:00:00-04:00
layout: "single"
title: "The AURA use case"
date: 2022-01-27T10:00:00+01:00
footer_class: "footer-darker"
aliases:
- /articles/aice_aura_demonstrator/index.html
---
## Introduction
This document describes the work done on the AICE Working Group demonstrator with the AURA use case. This project constitutes the first iteration of the AICE OpenLab and intends to demonstrate the benefits of a shared, common platform to collaboratively work on AI workflows.
### About the AICE working group
@@ -15,7 +17,7 @@ The Eclipse AI, Cloud & Edge (AICE) Working Group is a special interest working
The AICE OpenLab has been initiated to provide a common shared platform to test, evaluate and demonstrate AI workflows developed by partners. This enables an open collaboration and discussion on AI solutions, and fosters portability and standardisation. The AICE OpenLab is currently working on two use cases: AURA, as described in this document, and Eclipse Graphene, a general-purpose scheduler for AI workflows.
More information:
* AICE Working Group wiki: https://wiki.eclipse.org/AICE_WG/
* Eclipse Graphene / AI4EU Experiments: https://ai4europe.eu.
@@ -23,7 +25,7 @@ More information about AICE:
AURA is a non-profit French organisation that designs and develops a patch to detect epileptic seizures before they happen and warn patients ahead of time for safety purposes. To this end, AURA is creating a multidisciplinary community integrating open source and open hardware philosophies with the health and research worlds. The various partners of the initiative (patients, neurologists, data scientists, designers) each bring their experience and expertise to build an open, science-backed workflow that can actually help the end users. In the end, this device could be a life-changer for the 10 million people with drug-resistant epilepsy worldwide.
More information:
* AURA Healthcare official website: https://en.aura.healthcare
* GitHub: https://github.com/Aura-healthcare/
@@ -32,21 +34,45 @@ More information about Aura:
The epileptic seizure is the cornerstone of the management of the disease by health professionals. A precise mapping of seizures in daily life is a way to better qualify the effectiveness of treatments and care. This is a first step towards the forecasting of epileptic seizures, which would allow people to have better control over their epilepsy and regain autonomy in their daily lives.
There are a myriad of different forms and origins of epilepsy and epileptic seizures. The symptoms and physical signs broadly differ according to each patient: research has been conducted on electroencephalograms (EEGs), electrocardiograms (ECGs), movement detection, electrodermal activity, and even using dogs. As a result it is impossible — as of today at least — to draw a generic-purpose diagnostic or prediction method.
Machine Learning (ML) algorithms have been used extensively in recent years to tackle this variability across patients and achieve viable seizure detection and forecasting. Generally speaking, ML methods rely on a (large) set of examples, in our case datasets of ECGs with their associated seizure annotations, to predict specific outputs (epileptic seizures) on new input data (e.g. live ECGs). Neurophysiologists use a visualisation tool like Grafana to enter the annotations and define time ranges as either normal activity (noise) or epileptic activity (seizures), and store them in dedicated (`.tse_bi`) files.
These annotations are used as a reference dataset for training various ML models. Available datasets are usually split, with one part used for training and another to verify the trained model. A typical workflow is then to try to predict epileptic seizures from an ECG signal and check whether the human annotations confirm the seizure.
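As a toy illustration of that verification step, the sketch below checks predicted seizure intervals against the human annotations using a simple overlap criterion. This is a minimal sketch under assumptions of our own: the `(start, end)` interval representation, the one-second overlap threshold and the counting logic are all illustrative, not AURA's actual scoring code.

```python
def interval_overlap(a, b):
    """Overlap in seconds between two (start, end) intervals."""
    return max(0.0, min(a[1], b[1]) - max(a[0], b[0]))

def evaluate(predicted, annotated, min_overlap=1.0):
    """Count predicted seizure intervals confirmed by human annotations.

    A prediction counts as a true positive when it overlaps an annotated
    seizure by at least `min_overlap` seconds (illustrative criterion).
    """
    tp = sum(1 for p in predicted
             if any(interval_overlap(p, a) >= min_overlap for a in annotated))
    fp = len(predicted) - tp
    missed = sum(1 for a in annotated
                 if all(interval_overlap(a, p) < min_overlap for p in predicted))
    return {"true_positives": tp, "false_positives": fp, "missed": missed}

# Toy data: two annotated seizures, three predictions (one spurious).
annotated = [(120.0, 180.0), (600.0, 640.0)]
predicted = [(118.0, 175.0), (300.0, 310.0), (598.0, 642.0)]
print(evaluate(predicted, annotated))
# → {'true_positives': 2, 'false_positives': 1, 'missed': 0}
```

A real evaluation would of course use the established sensitivity/false-alarm metrics of the seizure detection literature; the point here is only the predicted-versus-annotated comparison loop.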
More information:
* Methods for seizure detection: <https://en.aura.healthcare/analyse-des-données>
* Seizure dogs: https://www.epilepsy.com/living-epilepsy/seizure-first-aid-and-safety/seizure-dogs
### Existing workflow
We started from the workflow already developed by the AURA data scientists. As usual, the first step is to prepare the data before using it (cleaning, selection and extraction of features, and formatting). The resulting dataset is subsequently fed to a ML model to predict future seizures.
The data inputs of the workflow come from two different file types:
* The raw ECG signal data stored as European Data Format (EDF).
* The annotations, which describe if the signal pattern is actually an epileptic seizure or normal activity, are stored in `.tse_bi` files with a 1-to-1 association with the EDF signal files.
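For illustration, a minimal parser for such annotation files might look like the sketch below. It assumes the common whitespace-separated `start stop label confidence` line layout used by TUH-style `.tse_bi` files; it is a sketch, not AURA's actual loader.

```python
def parse_tse_bi(text):
    """Parse tse_bi-style annotation lines into (start, stop, label) tuples.

    Assumes whitespace-separated `start stop label confidence` lines;
    blank lines, comments and version headers are skipped.
    """
    events = []
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith(("#", "version")):
            continue
        start, stop, label = line.split()[:3]
        events.append((float(start), float(stop), label))
    return events

sample = """version = tse_v1.0.0
0.0000 120.5000 bckg 1.0000
120.5000 180.2500 seiz 1.0000
"""
print(parse_tse_bi(sample))
# → [(0.0, 120.5, 'bckg'), (120.5, 180.25, 'seiz')]
```

The 1-to-1 association with the EDF files means a loader like this can simply derive the annotation path from each signal file's name.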
The data preparation step is achieved through a series of Python scripts developed by the AURA scientists, which extract the rr-intervals (i.e. the time between two heart beats), cardiac features and annotations, then build a simplified dataset that can be used to train a Random-Forest algorithm:
{{< grid/div isMarkdown="false" >}}
<img src="/images/articles/aice_aura_demonstrator/ecg_workflow.png" alt="The AURA AI process - before" class="img-responsive">
{{</ grid/div >}}
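The rr-interval extraction itself is conceptually simple, as the sketch below shows: differences between consecutive R-peak timestamps, then per-window statistics. The toy mean/min/max features stand in for the much richer cardiac feature set computed by the real scripts, and the timestamps are made up.

```python
def rr_intervals(r_peaks_s):
    """RR intervals in seconds from a sorted list of R-peak timestamps."""
    return [b - a for a, b in zip(r_peaks_s, r_peaks_s[1:])]

def sliding_features(rr, window=9):
    """Simple statistics over a sliding window of RR intervals.

    Toy features only; the real pipeline derives a full cardiac
    feature set for the Random-Forest training.
    """
    feats = []
    for i in range(len(rr) - window + 1):
        w = rr[i:i + window]
        feats.append({"mean": sum(w) / len(w), "min": min(w), "max": max(w)})
    return feats

# Synthetic R-peak timestamps (seconds), roughly 75 bpm.
peaks = [0.0, 0.8, 1.6, 2.5, 3.3, 4.0, 4.9, 5.7, 6.6, 7.4]
rr = rr_intervals(peaks)
print(len(rr), len(sliding_features(rr, window=3)))
# → 9 7
```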
There are many parameters involved in the process, with some of them having a huge impact on performance, like the time window for the rr-interval. In order to fine-tune these parameters one needs to try and run various combinations, which is not practical and can be prohibitive with long executions.
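A naive way to explore those combinations is a plain grid search, which makes the cost explicit: every additional parameter value multiplies the number of full workflow executions. In this sketch `run_workflow` is a dummy stand-in for a real training run (which can take hours per combination); the parameter names and scoring are illustrative.

```python
from itertools import product

def run_workflow(window_s, step_s):
    """Dummy stand-in: a real run would prepare the data, train and
    score the model for this parameter combination."""
    return 1.0 / (window_s + step_s)  # fake score for illustration

grid = {"window_s": [1, 3, 9], "step_s": [1, 2]}
scores = {combo: run_workflow(**dict(zip(grid, combo)))
          for combo in product(*grid.values())}

print(len(scores), max(scores, key=scores.get))
# → 6 (1, 1)
```

Six runs for a 3x2 grid is already significant when one execution takes hours, which is why the performance work described below mattered so much for tuning.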
More information:
* European Data Format (EDF): https://www.edfplus.info
* AURA GitHub repository for seizure detection: https://github.com/Aura-healthcare/seizure_detection_pipeline
## Objectives of the project
In this context, our first practical goal was to train the model on a large dataset of ECGs from the Temple University Hospital (TUH). The TUH dataset is composed of EDF files recording the electrocardiogram signal, along with their annotation files that classify the time ranges as either noise or as an epileptic seizure. The full dataset has 5600+ EDF files and as many annotations, representing 692 patients, 1074 hours of recordings and 3500+ seizures. Its size on disk is 67GB.
AI-related research and development activities, even if they rely on smaller datasets in the early stages of the setup, require a more complete dataset when it comes to the fine-tuning and exploitation of the model. The TUH database was not used often with the previous AURA workflow, as its full execution would take more than 20 hours on the developers' computers. Executions often failed because of wrong input data, and switching to more powerful computers was difficult because of the complex setup.
In this context, the established objectives of the project were to:
* Propose a proper, industrial-like process to open up and improve collaboration on the work done in the lab:
* improve sustainability for better collaborative work -- both in and outside of the lab,
* improve reliability regarding missing/incomplete data,
@@ -57,41 +83,23 @@ In this context, the established objectives of the project were to:
More information:
* Temple university dataset homepage: https://isip.piconepress.com/projects/tuh_eeg/html/downloads.shtml#c_tusz
* Temple University dataset reference: Obeid I., Picone J. (2016). The Temple University Hospital EEG data corpus. Front. Neurosci. 10:196. doi:10.3389/fnins.2016.00196.
## Areas of improvement
Considering the above situation and objectives, we identified four areas of improvement:
* **Portability**: make the workflow executable on any machine.
* **Performance**: optimise execution time and resources.
* **Visualisation**: help with the massive import of ECG files in database.
* **Industrialisation**: make sure that the next development can seamlessly reuse our work.
### Portability: Building the AURA Containers
One key aspect of the work achieved was to make the AI workflow easy to run anywhere, from the researchers' computers to our Kubernetes cluster. This implies having a set of scripts and resources to automatically build a set of Docker images for each identified step of the process. On top of drastically improving portability, it also means that the very same workflow can be reproduced identically on different datasets.
We developed three Docker images to easily execute the full workflow or specific steps:
* Simple direct Python execution, using either the command line or an orchestration tool like Airflow.
* Simple Docker images, executed independently with: \
`docker run -v $(pwd)/data/:/data bbaldassari/aura_dataprep bash /aura/scripts/run_bash_pipeline.sh`
@@ -120,19 +128,19 @@ The images have been imported into our instance of AI4EU Experiments for further
Another step was to refactor the scripts to identify and remove performance bottlenecks. Things that work well on a small dataset can become unusable on a larger scale. By running it on larger datasets, up to thousands of files (i.e. the TUH dataset) we encountered unexpected cases and fixed them along the way. We now have a set of scripts that 1. can run on the entire TUH dataset (67GB) without major issue, and 2. is compatible with the two data formats most used by the AURA researchers: TUH and La Teppe.
The performance gain enabled us to run more precise and resource-consuming operations in order to refine the training. For example we modified the length of the sliding window when computing the rr-intervals from 9 seconds to 1 second, which generates a substantial amount of computations while seriously improving predictions from the ML training.
We identified atomic steps that could be executed independently and built them as parallel execution jobs. As an example, the cleaning and preparation of data files can be executed simultaneously on different directories to accelerate the overall step. By partitioning the dataset in subsets of roughly 10GB and running concurrently 6 data preparation containers we went down from almost 17h to 4h on the same reference host.
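The partitioning itself can be as simple as greedily packing files into size-bounded subsets, one subset per data-preparation container. The sketch below is an assumption of ours: the ~10GB target comes from the text, but the greedy packing strategy and file names are illustrative.

```python
def partition_by_size(files, target_gb=10.0):
    """Greedily pack (name, size_gb) pairs into subsets of roughly
    `target_gb`, largest files first; one subset per container."""
    subsets, current, current_size = [], [], 0.0
    for name, size in sorted(files, key=lambda f: -f[1]):
        if current and current_size + size > target_gb:
            subsets.append(current)  # close the full subset
            current, current_size = [], 0.0
        current.append(name)
        current_size += size
    if current:
        subsets.append(current)
    return subsets

# 900 hypothetical EDF files of ~12MB each (~11GB total).
files = [(f"edf_{i:04d}", 0.012) for i in range(900)]
subsets = partition_by_size(files, target_gb=4.0)
print(len(subsets))
# → 3
```

Each resulting subset can then be mounted into its own container, which is what allowed six data-preparation containers to run concurrently on the same host.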
Also, by being able to run the process everywhere, we could execute it on several hardware configurations with different capabilities. This allowed us to check (and fix) portability while getting a better understanding of the resource requirements of each step. The following plot shows the evolution of performance in various situations:
![Workflow benchmarking](/images/articles/aice_aura_demonstrator/benchmark_perf.png)
On three different machines:
* A middle-range laptop (label: Laptop), HDD disks and i7 CPU.
* A high-range station (label: Station), SSD disks and (a better) i7 CPU.
* A high-range server (label: SDIA), HDD disks and 2 x Xeon (48 threads).
* With a single container for data preparation vs. multiple containers executed in parallel (label: Mono / Multi).
We could identify different behaviours regarding performance. The data preparation step relies heavily on IOs, and improving the disk throughput (e.g. SSD + NVMe instead of a classic HDD) shows a 30% gain. The ML training on the other hand is very CPU- and memory-intensive, and running it on a node with a large number of threads (e.g. 48 in our case) brings a stunning 10x performance improvement compared to a laptop equipped with an Intel i7.
@@ -140,24 +148,43 @@ We could identify different behaviours regarding performance. The data preparati
AURA uses Grafana to display the ECG signals and the associated annotations, both for the creation of annotated data sets and for their exploitation. In order to build this workflow we need to import the rr-intervals files and their associated annotations in a PostgreSQL database, and configure Grafana to read and display the corresponding time series.
An example of rr-interval plot with the associated annotations (blue/red bottom line) is shown below:
![ECG and annotations](/images/articles/aice_aura_demonstrator/ecg_annotations.png)
The process of importing the rr-intervals and annotations is time- and resource-consuming, so we decided to apply the same guidelines as for the training workflow and built a dedicated container for the mass import of ECG signals with their annotations. By partitioning the dataset and setting up multiple containers we are able to run several import threads in parallel, thus massively improving the overall performance of the import. It enabled us to:
* execute the import on a powerful machine thanks to the container's portability, and
* drastically reduce the import time thanks to the parallel runs.
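Schematically, the parallel import boils down to mapping a one-partition import function over the dataset partitions. In this sketch the function body is a stub; the real container connects to PostgreSQL and bulk-inserts the time series, and the file names are made up.

```python
from concurrent.futures import ThreadPoolExecutor

def import_partition(files):
    """Stub for one import container: the real implementation reads each
    rr-interval file and bulk-inserts rows into the shared PostgreSQL
    database, so partitions can be imported concurrently."""
    return len(files)  # pretend every file in the partition was imported

partitions = [["00.edf", "01.edf"], ["02.edf"], ["03.edf", "04.edf", "05.edf"]]
with ThreadPoolExecutor(max_workers=len(partitions)) as pool:
    imported = sum(pool.map(import_partition, partitions))

print(imported)
# → 6
```

In practice each "worker" is a separate container rather than a thread, but the map-over-partitions structure is the same.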
It is also very important to visually interpret and discuss the outcomes of the AI-based seizure detector with the healthcare professionals, in order to build trust and assess the limitations of the algorithm. Having an easy way to import ECGs to easily visualise and annotate them is a major benefit in this context, especially in healthcare centers where teams do not always have the resources and knowledge to set up a complex software stack. We are now working on a database dump that will enable end users to import specific datasets into their own Postgres / Grafana instance in a few clicks, thus fostering the usage of, and research on, open datasets.
### Industrialisation: Cleaning/Refactoring of the repository
The work done by the AURA researchers and data scientists on ECGs had been organised in a bunch of GitHub repositories, with different people using different tools and structures. The first step was to identify the parts required to run the complete workflow, and extract them from the various repositories and branches to build a unified structure. The requirements of this repository structure are:
* Re-use common scripts into each process automatically (no redundancy).
* Provide up-to-date documentation and passing tests.
* Set up a process to automatically build the Docker images to allow multiple execution methods: Airflow/pure python, Docker/Compose, Kubernetes or Eclipse Graphene.
The new repository has a sound and clean structure, with passing tests, complete documentation to exploit and run the various steps, and everything needed for further developments. All scripts are stored under the `src/` directory and are copied to the Docker images during the build, thus always relying on a single source of tested truth.
Building upon the current resources in use at AURA for AI workflows, the following directory structure was adopted:
```
├── data => Data samples for tests
├── graphene => All Docker images
│ ├── aura_dataprep => - Data processing Docker image
│ ├── aura_ml_trainer => - ML training Docker image
│ └── ...
├── resources => Documentation, images..
├── scripts => Repo-related scripts for builds, integration..
├── src => AI-related scripts and source code
│ ├── domain
│ ├── infrastructure
│ └── usecase
└── tests => Tests for scripts and source code
```
We defined and enforced a contributing guide, making tests and documentation mandatory in the repository. We also set up a Travis job to execute the full python test suite at every commit, and made it visible through a badge in the repository's README. Regarding Git we used a simple Git workflow to maintain a clean branching structure. The newly agreed development process definitely helped clean up the repository. Each time a set of scripts was added, we knew exactly where they should go and how to reuse them in the overall workflow.
Furthermore, the automatic building of containers for multiple execution targets (Airflow, Docker, Kubernetes) can easily be reproduced. As a result the new, improved structure will be reused and is set to become the reference implementation for the next developments.
## Benefits
### Portability and deployment
@@ -165,21 +192,22 @@ Once the new Docker images are built and pushed to a Docker registry, they can b
We also installed a fresh instance of AI4EU Experiments on our dedicated hardware for the onboarding of the models, and plan to make stable, verified images available on the marketplace in the upcoming months.
### Better performances
The major performance gain was achieved by setting up dedicated containers to run atomic tasks (e.g. data preparation, visualisation imports) in parallel. Most computers, both in the lab and for high-end execution platforms, have multiple threads and enough memory to manage several containers simultaneously, and we need to take advantage of the full computing power we have. Another major gain was obviously to run the process on a more powerful system, with enough memory, CPUs and disk throughput.
All considered we were able to scale down the full execution time on the TUH dataset from 20 hours on the lab's laptop to roughly 4 hours in our cluster.
## Conclusion
It has been a fantastic collaborative effort, building upon the expertise of the AURA data scientists and AICE MLOps practitioners to deliver exciting and pragmatic outcomes. The result is a set of optimised, reliable processes, with new perspectives and possibilities, and better confidence in the developed pipeline. All actors learned a lot, and the outcomes of this work will be replicated in forthcoming projects in both teams.
Besides the team benefits, the project itself hugely benefited from the various improvements and optimisation. It is now very easy to run the full stack on different datasets for development, and the new container deployment method will be extended to partners and healthcare centers (L'Institut La Teppe).
We identified a few areas of improvement, though. One aspect that we lacked in this experience was a precise benchmarking process and framework for the various steps, at each optimisation round. We are currently working on a monitoring solution based on Prometheus, Node exporter and Grafana to solve this, and we will soon publish a more detailed report on the performance gains.
---
title: "News & Events"
seo_title: "News & Events - AICE"
date: 2022-03-08T10:00:00+01:00
keywords: ["AICE"]
---
---
title: "Presentations"
seo_title: "Presentations - AICE"
keywords: ["AICE"]
---
This section lists some material that was produced during our meetings, along with links and external references.
## AICE April 2022 Monthly Meeting (2022-04-07)
We were glad to welcome the [AURA healthcare NPO](https://en.aura.healthcare/), who presented the results of the [use case implemented using the AICE OpenLab](/articles/aice_aura_demonstrator/). We also provided the latest news on AICE.
Direct access to the replay:
* [Introduction to the AICE Working Group](https://www.youtube.com/watch?v=zXM0sHN0nKU&t=0s)
* [News of the AICE Working Group](https://www.youtube.com/watch?v=zXM0sHN0nKU&t=467s)
* [First AICE Use Case presentation with AURA Healthcare](https://www.youtube.com/watch?v=zXM0sHN0nKU&t=1275s)
* [Coming next on the AICE Working Group](https://www.youtube.com/watch?v=zXM0sHN0nKU&t=2650s)
* [Q&A session](https://www.youtube.com/watch?v=zXM0sHN0nKU&t=2807s)
You may also [download the slides](https://drive.google.com/file/d/1arYqtRiHmA5WNA-51mMcGPkD2rYksyGD/view?usp=sharing).
## AICE Meetup (2021-11-24)
This AICE OpenLab meetup was the first opportunity we had to gather in person since COVID to build the community around pilots and testbeds for AI, Cloud and Edge computing. As the topic was also of interest to people who could not yet travel, we offered the option to join online.
The meeting was held in Brussels, Belgium, at the Huawei office, 180 Chaussée d'Etterbeek, 1040 Etterbeek.
* Welcome speech and agenda -- Gaël Blondelle, Vice President, Ecosystem Development (Eclipse Foundation) \
[Download slides](https://docs.google.com/presentation/d/13bY9lRCBVrBq8uuSKTjfkmH1SqbXy3Wi/edit?usp=sharing&ouid=101921549416138425666&rtpof=true&sd=true)
* EC OSS Study and AI Policy -- Paula Grzegorzewska, Senior Policy Advisor (OpenForum Europe) \
[Download slides](https://drive.google.com/file/d/1ekBZOK_bjbVZA4NCo19kbEtffvNqTEWJ/view?usp=sharing) - [See video](https://youtu.be/tgSR7qZwifM)
* Gaia-X and AI -- Pierre Gronlier, Chief Technology Officer (Gaia-X) \
[See video](https://youtu.be/raFQ5mW28Nw)
* The AICE OpenLab + Pilot -- Gaël Blondelle, Vice President, Ecosystem Development & Boris Baldassari (Eclipse Foundation) \
[Download slides](https://drive.google.com/file/d/1bHlRx59LY4Y3UF0IO90Kacb0_jqLSwp1/view?usp=sharing)
* Beyond AIOps - How to "open source" operations and create free data -- Marcel Hild, Manager, AIOps, AI CoE CTO Office (Red Hat) \
[Download slides](https://drive.google.com/file/d/1bHlRx59LY4Y3UF0IO90Kacb0_jqLSwp1/view?usp=sharing) - [See video](https://youtu.be/DhA-halyroU)
* AI4EU Experiments -- Martin Welss, Senior Architect of AI4EU (Fraunhofer IAIS) \
[Download slides](https://drive.google.com/file/d/1MLjTXHPgkUGcGtfgKG4SGPMmwEzssCkd/view?usp=sharing) - [See video](https://youtu.be/hmQld8Rep3A)
* AIPlan4EU -- Andrea Micheli, Post-Doctoral Researcher (Fondazione Bruno Kessler) \
[Download slides](https://drive.google.com/file/d/1Llv4PSyirjr6KdRID0znV3tcrVZ9_6H3/view?usp=sharing) - [See video](https://youtu.be/dvPQcWuNZU0)
* MindSpore -- Jean-Baptiste Onofre, Open Source Specialist (Huawei) \
[Download slides](https://drive.google.com/file/d/1uZYeI7nsvMJcnAG4x4oo_zi0FRL0aKzF/view?usp=sharing) - [See video](https://youtu.be/88yJceeklfg)
* Solid - secure decentralised data storage -- Philip Leroux, IOF Innovation Officer AI (IMEC) \
[Download slides](https://drive.google.com/file/d/1PHLHLRck3y98OcaM5a9_fWwcY6kFlpXj/view?usp=sharing)
* Open Services Cloud -- Bryan Che, Chief Strategy Officer (Huawei) \
[Download slides](https://drive.google.com/file/d/1oZOIi6YNaxIF9jQwl_AHOiBP-RsbnuE-/view?usp=sharing) - [See video](https://youtu.be/hrsmnPEBTL8)
* Open Euler -- Mauro Carvalho Chehab, Operating Systems Senior Engineer, and Roberto Sassu, Senior Security Engineer Trusted Computing (Huawei)
* The brain needs a nervous system - Supporting Cloud to Thing AI -- Luca Cominardi, PhD, Senior Technologist (AdLink) \
[Download slides](https://drive.google.com/file/d/17WQza9ryVjEm3tmcwjN0JcmhyBEEru8s/view?usp=sharing) - [See video](https://youtu.be/CkoC_KfdGqM)
* Conclusion & Wrap-up - Join the Working Group! -- Gaël Blondelle, Vice President, Ecosystem Development (Eclipse Foundation) \
[See video](https://drive.google.com/file/d/1k-EcAfmBgAioF3SmreKwBFmN2d9jLxLh/view?usp=sharing)
## Eclipse Open Source AI Workshop S1E2 (2020-09-30)
* Video: [Welcome Message | Gaël Blondelle | Open Source AI Workshop S1E2](https://www.youtube.com/watch?v=cGQJ9BLichk&list=PLy7t4z5SYNaS5YMnhtOhZyLdMDxsQFNZz&index=7).
* Video: [Trustworthy AI & Open Source | Eclipse Open Source AI Workshop S1E2](https://www.youtube.com/watch?v=QU-0NvkVxHc&list=PLy7t4z5SYNaS5YMnhtOhZyLdMDxsQFNZz&index=8).
* Video: [Introduction to Pixano: an Open Source Tool to Assist Annotation of Image Databases | Open Source AI](https://www.youtube.com/watch?v=8W1l9TOHeu8&list=PLy7t4z5SYNaS5YMnhtOhZyLdMDxsQFNZz&index=9).
## Eclipse Open Source AI Workshop #1 (2020-06-11)
* Video: [Towards an open source AI initiative at the Eclipse Foundation](https://www.youtube.com/watch?v=7h2DT2Xn0No&list=PLy7t4z5SYNaS5YMnhtOhZyLdMDxsQFNZz).
* Video: [Political challenges and opportunities in making open source AI mainstream](https://www.youtube.com/watch?v=Isr3diSB30c&list=PLy7t4z5SYNaS5YMnhtOhZyLdMDxsQFNZz&index=3).
* Video: [Eclipse Deeplearning4j: How to run AI workloads on Jakarta EE compliant servers](https://www.youtube.com/watch?v=B3a7ceU3API&list=PLy7t4z5SYNaS5YMnhtOhZyLdMDxsQFNZz&index=4).
* Video: [Meet MindSpore, the new open source AI framework!](https://www.youtube.com/watch?v=pqgQaVOcVV0&list=PLy7t4z5SYNaS5YMnhtOhZyLdMDxsQFNZz&index=5)
* Video: [Questions and Answers](https://www.youtube.com/watch?v=ajhubu-LVBk&list=PLy7t4z5SYNaS5YMnhtOhZyLdMDxsQFNZz&index=6)
---
title: "AICE Use Cases"
seo_title: "Use Cases - AICE"
keywords: ["AICE", "Use cases"]
---
## The AICE OpenLab
Status: Running.
The AICE OpenLab is a platform where partners can discuss and share AI-related resources, experiences and benchmarks.
It relies on [AI4EU Experiments](https://ai4europe.eu/) to provide a marketplace and visual editor to execute complex AI workflows, and also provides a Kubernetes cluster for the execution, demonstration and benchmarking of these workflows.
The first project to use the OpenLab platform is the AURA use case.
## The AURA Healthcare use case
Status: Complete.
> Check out the article we wrote at [The AURA demonstrator](/articles/aice_aura_demonstrator/)
We worked with the [AURA healthcare](https://aura.healthcare) association on an ML workflow that detects epileptic seizures before they happen, based on ECG data. The goals of the collaboration were to:
* Work with the team on the portability and industrialisation of the prototype.
* Bring the AURA ML workflow to the AICE OpenLab marketplace to foster exchange and collaboration on their solution.
* Provide a Kubernetes cluster to execute resource-intensive tasks and demonstrate the workflow.
> See the [GitHub repository](https://github.com/borisbaldassari/aice-aura).
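As an illustration of how a resource-intensive pipeline step can be run on such a cluster, a Kubernetes Job manifest might be sketched as follows. The image name and resource figures are hypothetical, not the actual AURA deployment:

```yaml
# job.yaml -- hypothetical sketch of one resource-intensive pipeline step.
apiVersion: batch/v1
kind: Job
metadata:
  name: aura-ecg-preprocessing
spec:
  backoffLimit: 2                # retry failed pods up to twice
  template:
    spec:
      restartPolicy: Never       # Jobs require Never or OnFailure
      containers:
        - name: preprocess
          image: example.org/aura/ecg-preprocess:latest   # hypothetical image
          resources:
            requests:            # scheduling guarantees for the batch step
              cpu: "2"
              memory: 4Gi
            limits:              # hard caps to protect other workloads
              cpu: "4"
              memory: 8Gi
```

Such a Job would be submitted with `kubectl apply -f job.yaml`, and the cluster schedules it wherever the requested resources are available.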
## Eclipse Graphene
Status: Running.
Eclipse Graphene provides an extensible marketplace of reusable AI and machine-learning solutions, sourced from a variety of AI toolkits and languages, that developers who are not machine-learning experts or data scientists can easily use to create their own applications.
> [Eclipse Graphene](https://projects.eclipse.org/projects/technology.graphene)
<div class="container section-home-launched text-center" id="members">
<div class="members">
<h2 id="members">Supporters</h2>
<ul class="text-center list-inline">
<li>
<a href="https://www.atb-bremen.de/">
<img src="{{ "images/partner-logos/ATB-Logo-300x55.png" | absURL}}" alt="ATB logo" class="members-img" /></a>
</li>
<li>
<a href="https://castalia.solutions/">
<img src="{{ "images/partner-logos/castalia_solutions_logo-transparent.png" | absURL}}" alt="Castalia Solutions logo" class="members-img" /></a>
</li>
<li>
<a href="https://www.huawei.com/">
<img src="{{ "images/partner-logos/huawei-logo.png" | absURL}}" alt="Huawei logo" class="members-img" /></a>
</li>
<li>
<a href="https://www.continental.com">
<img src="{{ "images/partner-logos/continental-vector-logo.png" | absURL}}" alt="Continental logo" class="members-img" /></a>
</li>
<li>
<a href="https://www.eng.it/en/">
<img src="{{ "images/partner-logos/Engineering.png" | absURL}}" alt="Engineering logo" class="members-img" /></a>
</li>
<li>
<a href="https://www.eurotech.com/">
<img src="{{ "images/partner-logos/eurotech-logo.png" | absURL}}" alt="Eurotech logo" class="members-img" /></a>
</li>
</ul>
</div>
......@@ -14,6 +14,22 @@
@import '~eclipsefdn-solstice-assets/less/quicksilver/styles.less';
@import '_variables.less';
.ospo-zone-members .members-img {
max-height: 95px;
max-width: 170px;
padding: 10px;
}
.ospo-zone-members ul li {
margin-bottom: 20px;
}
@media (min-width: @screen-sm-min) {
#header-wrapper {
padding-top:20px;
}
}
.header-wrapper-coming-soon,
.header-wrapper {
background: #02154f top center;
......
/articles/aice_aura_demonstrator/ /use_cases/aura/ 301
......@@ -90,8 +90,6 @@
*
* SPDX-License-Identifier: EPL-2.0
*/.website-coming-soon h1{color:#fff;font-weight:700}.website-coming-soon__container{background:#000;color:#fff;margin-bottom:8rem;margin-top:2rem;padding:2.2rem 2rem}.website-coming-soon__content a{color:#f7941e}.discover-search{background:#efefef}.discover-search h2{color:#545454;margin-bottom:.1em;margin-top:1.3em;padding-bottom:0}.discover-search .form-search-projects{margin-bottom:1.4em}.discover-search>.container{min-height:267px}@media (min-width:992px){.discover-search>.container{background:url(../images/vendor/eclipsefdn-solstice-components/discover-search/discover-search-bg.jpg?4ea2caca91f7bff636a3caf8412871c5) 100% no-repeat}}.drag_installbutton{clear:both;display:inline;position:relative}.drag_installbutton .tooltip{background:url(../images/vendor/eclipsefdn-solstice-components/drag-drop/mpcdrag.png?777ad5db4a5fd4291dd35234a1a057ce) no-repeat scroll 110% 60% #a285c5;border:1px solid #ae00ce;color:#000;display:none;left:64px;opacity:1;padding:5px 50px 5px 5px;position:absolute;text-align:left;top:0;width:325px;z-index:99}.drag_installbutton .tooltip h3{color:#000;margin-top:0}.drag_installbutton .tooltip.show-right{left:-335px}.drag_installbutton a.drag:hover .tooltip{display:block}.drag_installbutton.drag_installbutton_v2 .btn:hover{cursor:move}.drag_installbutton.drag_installbutton_v2 .tooltip{background-color:#eee;border:1px solid #777;left:100px;margin-top:-6px}.drag_installbutton.drag_installbutton_v2 .tooltip.tooltip-below-right{left:auto;right:0;top:40px}.drag_installbutton.drag_installbutton_v2 .tooltip 
h3{font-size:18px}.eclipsefdn-video{background-color:#000;display:block;position:relative;width:100%}.eclipsefdn-video:before{background-image:url(//www.eclipse.org/eclipse.org-common/themes/solstice/public/images/vendor/eclipsefdn-solstice-components/youtube/yt_icon_red.png);background-position:50%;background-repeat:no-repeat;background-size:20%;content:"";display:block;padding-top:50%;width:100%}.eclipsefdn-video-with-js:before{position:absolute}footer#solstice-footer{background:#fff;border-top:none;color:#404040;font-family:Roboto,Libre Franklin,Helvetica Neue,Helvetica,Arial,sans-serif;font-size:14px;padding-bottom:26px;padding-top:60px}footer#solstice-footer h2{color:#000;font-size:18px;font-weight:400;margin-top:0;max-width:auto}footer#solstice-footer a:active,footer#solstice-footer a:focus,footer#solstice-footer a:link,footer#solstice-footer a:visited{color:#404040;font-weight:400}footer#solstice-footer a:hover{color:#000}footer#solstice-footer .logo-eclipse-white{margin-bottom:15px;max-width:161px}footer#solstice-footer .nav{margin-bottom:25px;margin-left:-15px}footer#solstice-footer .nav a{padding:6px 15px}footer#solstice-footer .nav a:hover{background:none;color:#000}@media (max-width:767px){footer#solstice-footer{text-align:center}footer#solstice-footer .nav{margin-left:0}}footer#solstice-footer li{padding-bottom:0}@media (max-width:450px){footer#solstice-footer section.col-xs-11,footer#solstice-footer section.col-xs-14{float:left;min-height:1px;padding-left:15px;padding-right:15px;position:relative;width:95.83333333%}}@media (min-width:451px) and (max-width:767px){footer#solstice-footer #footer-useful-links{clear:left}footer#solstice-footer #copyright{clear:both}}#copyright{padding-top:15px}#copyright img{clear:both;float:left;margin-right:15px;margin-top:10px}@media (max-width:991px){#copyright-text{margin-bottom:20px}}@media 
(min-width:992px){.social-media{text-align:right}}#footer-eclipse-foundation,#footer-legal,#footer-other,#footer-useful-links{z-index:99}.footer-other-working-groups{font-size:11px;font-weight:300}.footer-other-working-groups .logo-eclipse-default,.footer-other-working-groups .social-media{margin-bottom:20px;margin-top:0}.footer-other-working-groups .img-responsive{max-width:175px}.footer-other-working-groups .footer-working-group-col{padding:0}@media (min-width:1200px){.footer-other-working-groups{background:url(../images/vendor/eclipsefdn-solstice-template/footer-working-group-separator.png?e9b9ff4c965177e7a88f4dc0c77538cb) 50% repeat-y}.footer-other-working-groups .img-responsive{max-width:200px}}.footer-min{background:#ececec;border-top:1px solid #acacac;bottom:0;padding:10px 0;width:100%}.footer-min a{font-size:.8em;font-weight:400}.footer-min p,.footer-min ul{font-size:.8em;margin-bottom:0}.footer-min ul{text-align:right}.footer-min ul li{padding-bottom:0}@media screen and (max-width:767px){.footer-min p,.footer-min ul{text-align:center}}body.solstice-footer-min{display:flex;flex-direction:column;min-height:100vh;position:static}body.solstice-footer-min main{flex:1 0 auto}footer#solstice-footer.footer-darker{background:#000;color:#fff}footer#solstice-footer.footer-darker h2{color:#fff}footer#solstice-footer.footer-darker a:active,footer#solstice-footer.footer-darker a:focus,footer#solstice-footer.footer-darker a:link,footer#solstice-footer.footer-darker a:visited{color:#fff;font-weight:400}footer#solstice-footer.footer-darker a:hover{color:hsla(0,0%,100%,.788)}footer#solstice-footer.footer-darker .nav a:hover{background:none;color:hsla(0,0%,100%,.788)}@media (max-width:767px){#main-menu-wrapper{margin:0;padding:0}#main-menu{background:transparent;margin-bottom:0}#main-menu .navbar-header{padding-bottom:15px;padding-top:15px}#main-menu .navbar-brand{height:auto;padding:0 0 0 15px}#main-menu 
#navbar-main-menu{float:none}#main-menu.navbar{border:0;border-bottom:1px solid #ccc}#main-menu .navbar-toggle{margin:0;padding:10px 15px 10px 0}#main-menu .navbar-toggle .icon-bar{background:#f7941e;height:3px}#main-menu .nav{background:#666;margin:0;padding:0}#main-menu .nav>li.open .dropdown-toggle,#main-menu .nav>li.open a.dropdown-toggle{background:#787878;color:#fff}#main-menu .nav>li>a{border-bottom:1px solid #525252;color:#fff;padding:18px 15px;text-transform:none}#main-menu .nav>li .dropdown-menu{background:#525252;border-bottom:none;border-radius:0;padding:0}#main-menu .nav>li .dropdown-menu>li.active a:link,#main-menu .nav>li .dropdown-menu>li.active a:visited{background:#f7941e;color:#fff}#main-menu .nav>li .dropdown-menu>li.active a:focus,#main-menu .nav>li .dropdown-menu>li.active a:hover{background:#f5f5f5;color:#fff}#main-menu .nav>li .dropdown-menu>li>a{color:#afafaf;padding:18px 15px}#main-menu .nav>li .dropdown-menu>li>a:focus,#main-menu .nav>li .dropdown-menu>li>a:hover{background:#f5f5f5;color:#7c7c7c}#main-menu .nav>li.main-menu-search .dropdown-toggle{display:none}#main-menu .nav>li.main-menu-search .dropdown-menu{background-color:transparent;border:0;box-shadow:none;display:block;float:none;margin-top:0;position:static;width:auto}#main-menu .nav>li.main-menu-search .dropdown-menu p{color:#fff}#main-menu .nav>li.main-menu-search .dropdown-menu .yamm-content{padding:15px}#main-menu .nav>li.main-menu-search .dropdown-menu .gsc-input{background-color:#fff}#main-menu .nav>li.main-menu-search .dropdown-menu .gsc-input-box{border:none}}@media (max-width:1199px){#breadcrumb .container,#header-wrapper .container,.region-breadcrumb .container,.toolbar-container-wrapper .container,main .container{width:auto}}@media (min-width:768px){#main-menu{font-size:14px;margin-bottom:5px}#main-menu .dropdown li,#main-menu ul li{text-transform:none}#main-menu li a{color:#fff;margin-right:0}#main-menu li a:active,#main-menu li a:hover{color:#ccc}#main-menu 
li.dropdown .dropdown-menu{left:auto;right:auto}#main-menu li.dropdown.eclipse-more .dropdown-menu{left:0;right:auto;width:600px}#main-menu .navbar-right li.dropdown:last-child .dropdown-menu{left:auto;right:0}#main-menu .navbar-right li.dropdown.eclipse-more .dropdown-menu{width:600px}#main-menu .dropdown-menu a{color:#6b655f}#main-menu .dropdown-menu a:active,#main-menu .dropdown-menu a:hover{color:#f7941e}#main-menu .dropdown-menu .yamm-content a{margin:0}}@media (min-width:992px){#main-menu{font-size:17px}#main-menu .dropdown-menu{max-width:630px}#main-menu .dropdown-menu li{padding-bottom:2px}}#main-menu{margin-bottom:0}#main-menu li{padding-bottom:0}#main-menu a{font-weight:400}#main-menu a:active,#main-menu a:focus{color:#ccc}#main-menu .nav .open a,#main-menu .nav .open a:focus,#main-menu .nav .open a:hover,#main-menu .nav>li>a:focus,#main-menu .nav>li>a:hover{background-color:transparent}.dropdown-toggle:hover{cursor:pointer}.ul-left-nav{margin-left:0;padding-left:0}.ul-left-nav>li{list-style:none;margin-bottom:.45em}.ul-left-nav>li.active a{font-weight:600}.ul-left-nav>li.about,.ul-left-nav>li.separator{font-weight:700;padding-left:0}.ul-left-nav>li.about img,.ul-left-nav>li.separator img{position:absolute;top:6px}.ul-left-nav>li.separator{border-top:1px solid #d4d4dd;padding-top:8px}.ul-left-nav>li.separator a{font-weight:700}.ul-left-nav>li.separator:first-child{border-top:none}.ul-left-nav>li>a{color:#545454;font-weight:400}.ul-left-nav>li>a:hover{color:#35322f}.logo-eclipse-default-mobile{max-width:130px}@media (min-width:768px){.alternate-layout #main-menu{font-size:14px}.alternate-layout #main-menu ul li{text-transform:none}.alternate-layout #main-menu li a{color:#6b655f}.alternate-layout #main-menu li a:active,.alternate-layout #main-menu li a:hover{color:#35322f}}@media (min-width:992px){.alternate-layout #main-menu{font-size:17px}}@media (max-width:767px){.alternate-layout #main-menu{background:#404040 50% no-repeat}}main 
#bigbuttons{left:auto;min-height:1px;padding:1.65em 15px 2.2em;position:relative;text-align:center;top:auto}@media (min-width:768px){main #bigbuttons{float:left;margin-left:58.33333333%;width:41.66666667%}}@media (min-width:992px){main #bigbuttons{float:left;margin-left:37.5%;width:62.5%}}@media (min-width:1200px){main #bigbuttons{float:left;margin-left:25%;width:66.66666667%}}main #bigbuttons h3{display:none}main #bigbuttons:after,main #bigbuttons:before{content:" ";display:table}main #bigbuttons:after{clear:both}main #bigbuttons ul{list-style:none;margin-left:-5px;padding-left:0}main #bigbuttons ul>li{display:inline-block;padding-left:5px;padding-right:5px}main #bigbuttons ul li{background:none}@media (min-width:768px){main #bigbuttons ul li{float:right}}main #bigbuttons a{left:auto;margin:0;position:relative;top:auto}main #bigbuttons a:hover{text-decoration:none}div#novaContent{background-position:0 0;padding-top:0}@media (max-width:767px){div#novaContent{background-image:none}}@media (min-width:1200px){div#novaContent{background-position:top}}.legacy-page #midcolumn{min-height:1px;padding-left:15px;padding-right:15px;position:relative}@media (min-width:992px){.legacy-page #midcolumn{float:left;width:70.83333333%}}.legacy-page #midcolumn #maincontent,.legacy-page #midcolumn #midcolumn{width:100%}.legacy-page #midcolumn.no-right-sidebar{min-height:1px;padding-left:15px;padding-right:15px;position:relative}@media (min-width:992px){.legacy-page #midcolumn.no-right-sidebar{float:left;width:100%}}.legacy-page #rightcolumn{min-height:1px;padding-left:15px;padding-right:15px;position:relative}@media (min-width:992px){.legacy-page #rightcolumn{float:left;width:29.16666667%}}.logo-eclipse-default{margin:0}.header_nav{padding-bottom:35px}.header_nav img{margin:20px auto}.header_nav ul{background:#f4f4f4;color:#7b778e;font-size:16px;margin:0;padding:0;text-transform:uppercase}.header_nav ul li{clear:right;list-style:none;padding-bottom:0}.header_nav ul 
li:nth-child(odd){clear:left}.header_nav ul a{display:block;font-weight:600;padding:20px}.header_nav ul a:active,.header_nav ul a:link,.header_nav ul a:visited{color:#7b778e}.header_nav ul a:hover{color:#f7941e}.header_nav ul a i{font-size:30px;font-weight:700;padding:4px 0 0;text-align:center}.header_nav ul span{padding:0 0 0 5px}.header_nav ul span p{font-size:11px;font-weight:400;margin:0;text-transform:none}.icon-sidebar-menu h3{font-size:16px;margin-bottom:5px;margin-top:0}.icon-sidebar-menu p{font-size:13px}.icon-sidebar-menu .circle-icon{display:block;height:80px;width:80px}.icon-sidebar-menu .circle-icon i{font-size:37px;margin-top:20px}.step-by-step .intro{text-align:center}.step-by-step .intro h2{margin-top:1.5em}.step-by-step .step-by-step-timeline{margin-top:1.5em;text-align:center}.step-by-step .step-by-step-timeline .step-icon,.step-by-step .step-by-step-timeline .step-icon:hover,.step-by-step .step-by-step-timeline .step-icon:visited{color:#4c4d4e}.step-by-step .step-by-step-timeline .feather{height:50px;margin-bottom:15px;width:50px