CODECO Project

Cognitive, cross-layer and highly adaptive Edge-Cloud management framework

This README provides a from-scratch guide that:

  1. Installs any software prerequisites
  2. Installs the CODECO framework in a Kubernetes cluster (User Story 1)
  3. Deploys a simple Codeco Application (User Story 2)
  4. Provides some tests to verify the correct installation

Additionally, it contains:

  • instructions to uninstall CODECO
  • a way to test CODECO inside a Docker container.

Minimum System Requirements of Kubernetes Technologies

| Technology | Node Type | Operating System (OS) | CPU/vCPU | RAM (GB) | Disk (GB) | Node Size |
|---|---|---|---|---|---|---|
| OpenShift/OCM/ACM | Master | Red Hat Enterprise Linux (RHEL) 7.5, or RHEL Atomic Host 7.4.5 or later | 4 CPU; 4 vCPU | 16 | 42 | Medium |
| | Worker | - | 1 vCPU | 8 | 32 | Medium |
| | External etcd | - | - | - | 20 | Medium |
| | Ansible Controller | - | - | 0.075 (75 MB) | - | Small |
| MicroShift | - | RHEL Extended Update Support (EUS) 9.2 or later | x86_64 or aarch64, 2 CPU cores | 2 | 2 | Small |
| K3s | - | RHEL9, Ubuntu, openSUSE Leap, Oracle Linux, Rocky Linux, SLES, Red Hat/CentOS Enterprise Linux, Raspberry Pi OS (with additional setup) | x86_64, armhf, arm64/aarch64, or s390x, 1 CPU core | 0.512 (512 MB) | - | Small |
| KinD | - | Ubuntu 20.04 LTS, Ubuntu 22.04 LTS, Ubuntu 24.04 LTS | 4 CPU cores | 8 (3 nodes on one machine) | - | Medium |
| MicroK8s | - | Ubuntu | 1 CPU core | 1 (recommended 4) | recommended 20 | Small |
| Bare-metal K8s | - | Ubuntu 22.04 (tested) | recommended 4 CPU cores | recommended 4 | recommended 40 | Small |

Recommended minimum for this guide: 4 CPU, 4 GB RAM, 40 GB disk, and Ubuntu 22.04.

Step 0: Installation of software prerequisites

(skip if already installed)
Install basic apt dependencies:
sudo apt update 
sudo apt install -y git nano curl wget make sudo rsync jq
Install yq:
sudo wget https://github.com/mikefarah/yq/releases/download/v4.34.2/yq_linux_amd64 -O /usr/bin/yq && sudo chmod +x /usr/bin/yq
Install Docker:

curl -fsSL https://get.docker.com | sh

Docker post-installation steps (run Docker as a non-root user):
sudo groupadd docker
sudo usermod -aG docker $USER
newgrp docker
docker run hello-world
Install kind:
[ $(uname -m) = x86_64 ] && curl -Lo ./kind https://kind.sigs.k8s.io/dl/v0.23.0/kind-linux-amd64
sudo chmod +x ./kind 
sudo mv ./kind /usr/local/bin/kind
Install helm:
curl -fsSL -o get_helm.sh https://raw.githubusercontent.com/helm/helm/main/scripts/get-helm-3
chmod 700 get_helm.sh 
./get_helm.sh
Install golang:
wget https://go.dev/dl/go1.21.10.linux-amd64.tar.gz
sudo rm -rf /usr/local/go
sudo tar -C /usr/local -xzf go1.21.10.linux-amd64.tar.gz
echo 'export PATH="$PATH:/usr/local/go/bin"' >> ~/.bashrc
source ~/.bashrc
go version   #verify installation
Install kubectl:
curl -LO "https://dl.k8s.io/release/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/linux/amd64/kubectl"
sudo install -o root -g root -m 0755 kubectl /usr/local/bin/kubectl
kubectl version --client   #verify installation
Activate br_filter:

The Ubuntu kernel MUST have the br_netfilter module active (a flannel requirement), which is not the case by default in some Ubuntu versions. The following commands (with root privileges) will activate br_netfilter:

sudo tee /etc/modules-load.d/containerd.conf <<EOF
br_netfilter
EOF
sudo modprobe br_netfilter
lsmod | grep br_netfilter   #verify the module is loaded

Step 1: Cluster creation

CODECO is designed to work with any Kubernetes distribution (kind, MicroK8s, kubeadm, K3s, etc.). However, the installation of CODECO has been tested on a 3-node kind cluster on an Ubuntu 20.04 machine.

Clone the ACM repository:
git clone https://gitlab.eclipse.org/eclipse-research-labs/codeco-project/acm.git
Create a kind cluster based on the appropriate kind config file
kind delete cluster    #this will delete any existing kind cluster named 'kind'
kind create cluster --config ./acm/config/cluster/kind-config.yaml
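As a quick sanity check that the cluster came up (a sketch: the tested setup is a 3-node kind cluster, so three nodes should be listed; the count_nodes helper is illustrative):

```shell
# List the cluster nodes; the tested setup is a 3-node kind cluster.
count_nodes() {
  # kubectl prints one line per node with --no-headers
  kubectl get nodes --no-headers 2>/dev/null | wc -l
}
command -v kubectl >/dev/null && kubectl get nodes || true
```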

Step 2: Install the CODECO Framework

Export Dockerhub credentials
DOCKERHUB_USER=<user>
DOCKERHUB_PASS=<pass>
Deploy ACM (and all CODECO Components)
  • This will deploy the ACM component
  • Deploying ACM will automatically deploy all codeco components according to the acm/scripts/post_deploy.sh script
cd acm
echo $DOCKERHUB_PASS | docker login -u $DOCKERHUB_USER --password-stdin
make docker-build docker-push IMG=$DOCKERHUB_USER/codecoapp-operator:2.0.0
make deploy IMG=$DOCKERHUB_USER/codecoapp-operator:2.0.0
cd ..
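To confirm that the deployment kicked off, the pods can be listed in the he-codeco-acm namespace (the namespace used by the Step 4 tests; the ACM_NS variable is illustrative):

```shell
# Namespace where the CODECO application pods run (see Step 4).
ACM_NS=he-codeco-acm
command -v kubectl >/dev/null && kubectl get pods -n "$ACM_NS" || true
```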

Step 3: Deploy a Codeco App

This deploys a simple codeco app consisting of a frontend and a backend, as described in the file acm/config/samples/codeco_v1alpha1_codecoapp_ver3.yaml (exists in the ACM repo)

kubectl apply -f ./acm/config/samples/codeco_v1alpha1_codecoapp_ver3.yaml

The user can replace the path in this command with the path to a custom CODECO Application Model YAML file.
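For example, assuming a custom model saved locally as my-codeco-app.yaml (a hypothetical file name):

```shell
# Apply a custom CODECO Application Model (path is illustrative).
APP_MODEL=./my-codeco-app.yaml
[ -f "$APP_MODEL" ] && kubectl apply -f "$APP_MODEL" || true
```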

Create a custom CodecoApp

Follow the guide to the CodecoApp attributes to create your custom CODECO Application Model.
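With the CRDs installed, the available attributes can also be explored from the cluster itself; note that the resource name codecoapps below is an assumption inferred from the codeco_v1alpha1_codecoapp sample file names:

```shell
# Browse the CodecoApp schema straight from the installed CRD.
# 'codecoapps' is an assumption based on the sample file names.
CODECO_CRD=codecoapps
command -v kubectl >/dev/null && kubectl get crds | grep -i codeco || true
command -v kubectl >/dev/null && kubectl explain "$CODECO_CRD".spec || true
```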


Step 4: Verify correct deployment

Test that CodecoApp pods are Running (the following commands are used as integration tests)

If the CODECO Application pods are not Running within a timeout of 20 minutes, the test fails and the script returns an error. If the pods reach the Running state in time, the script terminates with a success signal.

if kubectl wait --for=condition=Ready pod --all -n he-codeco-acm --timeout=20m; then 
  kubectl get pods -A; 
  exit 0; # success
else
  kubectl get pods -A; 
  exit 1; # error
fi
Test that the NetMA component correctly outputs network metrics

The user should see a CR whose underlay-topology and overlay-topology sections have been successfully populated.

  • underlay-topology metrics: metrics regarding the underlay cluster topology
  • overlay-topology metrics: metrics regarding the running application (pod to pod links)
kubectl get netma-topology netma-sample -o yaml -n he-codeco-netma
Test that PDLC correctly provides nodeRecommendations to SWM

kubectl get applications.qos-scheduler.siemens.com -n he-codeco-acm acm-swm-app -o yaml

The output CR (in yaml format) should contain a nodeRecommendations section.
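A minimal scripted check, assuming the section name appears verbatim in the YAML output (the has_node_recommendations helper is illustrative):

```shell
# Succeeds only if the piped CR yaml mentions nodeRecommendations.
has_node_recommendations() { grep -q 'nodeRecommendations'; }

command -v kubectl >/dev/null && \
  kubectl get applications.qos-scheduler.siemens.com -n he-codeco-acm acm-swm-app -o yaml \
  | has_node_recommendations && echo "nodeRecommendations present" || true
```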


Uninstall CODECO

For the kind cluster, one can simply delete the cluster as a whole:

kind delete cluster

Uninstalling CODECO components without deleting the cluster will be available soon.


How a CODECO application can add metrics to Prometheus

All the user has to do is annotate the deployment template (in the CodecoApp) so that Prometheus discovers the metrics endpoint and pulls from it:

...
spec:
  replicas: 1
  selector:
    matchLabels:
      app: <your service app>
  template:
    metadata:
      labels:
        app: <your service app>
      annotations:
        prometheus.io/port: <port>
        prometheus.io/scrape: "true"
        prometheus.io/path: <metric path>
   ...
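Before deploying, one can sanity-check that the scrape annotation is present in a locally saved manifest (a sketch; the MANIFEST path and the scrape_enabled helper are hypothetical):

```shell
# Returns success if the manifest enables Prometheus scraping.
scrape_enabled() { grep -q 'prometheus.io/scrape: "true"' "$1"; }

MANIFEST=./my-app-deployment.yaml   # hypothetical local manifest path
[ -f "$MANIFEST" ] && scrape_enabled "$MANIFEST" && echo "scrape annotation set" || true
```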

Deploy the entire CODECO framework inside a Docker container

If a user wishes to install CODECO in a protected and reversible environment, all the previous steps can be executed inside a Docker container, assuming Docker is already installed.

In the gitlab-profile repo of CODECO, one can find the Dockerfile-integration image definition and the integration-script.sh entrypoint used below.

Please note:

  • The user must provide valid Docker credentials.
  • If a cluster already exists, run the integration-script.sh script without the --create-cluster flag.

To deploy a custom CODECO Application Model, pass the path to your file with the --codeco-app argument; without this argument, a demo CodecoApp by RHT will be deployed:

integration-script.sh --create-cluster --codeco-app <path to CodecoApp yaml>


export DOCKERHUB_USER=<some-dockerhub-user>
export DOCKERHUB_PASS=<password>

git clone https://gitlab.eclipse.org/eclipse-research-labs/codeco-project/gitlab-profile.git
cd gitlab-profile/
docker build -t integration:test -f Dockerfile-integration .
docker rm integ --force
docker run -t -v /var/run/docker.sock:/var/run/docker.sock -e DOCKERHUB_USER=$DOCKERHUB_USER -e DOCKERHUB_PASS=$DOCKERHUB_PASS --network host --name integ --rm integration:test bash -c -i "/integration-script.sh --create-cluster"

Known Issues

Multus pods error with KinD deployment: too many open files

The following is a known issue when deploying CODECO with KinD (observed on Ubuntu 24.04, kernel 6.8-053-generic): Multus pods crash when deploying NetMA/L2SM, likely because of the "too many open files" limit.

Use the following commands as a workaround:

sudo sysctl fs.inotify.max_user_watches=524288
sudo sysctl fs.inotify.max_user_instances=512
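These sysctl values reset on reboot. To make them persistent, they can be placed in a drop-in file such as /etc/sysctl.d/99-inotify.conf (file name is illustrative) and applied with sudo sysctl --system:

```
fs.inotify.max_user_watches=524288
fs.inotify.max_user_instances=512
```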