CODECO Project

Cognitive, cross-layer and highly adaptive Edge-Cloud management framework

This README provides a from-scratch guide that:

  1. Installs any software prerequisites
  2. Installs the CODECO framework in a Kubernetes cluster (User Story 1)
  3. Deploys a simple Codeco Application (User Story 2)
  4. Provides some tests to verify the correct installation

Additionally, it contains:

  • instructions to uninstall CODECO
  • a way to test CODECO inside a Docker container.

Minimum System Requirements of Kubernetes Technologies

| Technology | Node Type | Operating System (OS) | CPU/vCPU | RAM (GB) | Disk (GB) | Node Size |
| --- | --- | --- | --- | --- | --- | --- |
| OpenShift/OCM/ACM | Master | Red Hat Enterprise Linux (RHEL) 7.5, or RHEL Atomic Host 7.4.5 or later | 4 CPU; 4 vCPU | 16 | 42 | Medium |
| OpenShift/OCM/ACM | Worker | - | 1 vCPU | 8 | 32 | Medium |
| OpenShift/OCM/ACM | External etcd | - | - | - | 20 | Medium |
| Ansible | Controller | - | - | 0.075 (75 MB) | - | Small |
| MicroShift | - | RHEL Extended Update Support (EUS) 9.2 or later | x86_64 or aarch64 CPU architecture; 2 CPU cores | 2 | 2 | Small |
| K3s | - | RHEL9, Ubuntu, openSUSE Leap, Oracle Linux, Rocky Linux, SLES, Red Hat/CentOS Enterprise Linux, Raspberry Pi OS (with additional setup) | x86_64, armhf, arm64/aarch64 or s390x CPU architecture; 1 CPU core | 0.512 (512 MB) | - | Small |
| KinD | - | Ubuntu 20.04 LTS, Ubuntu 22.04 LTS, Ubuntu 24.04 LTS | 4 CPU cores | 8 (3 nodes in one machine) | - | Medium |
| MicroK8s | - | Ubuntu | 1 CPU core | 1 (recommended 4) | Recommended 20 | Small |
| Bare metal K8s | - | Ubuntu 22.04 (tested) | Recommended 4 CPU cores | Recommended 4 | Recommended 40 | Small |

Step 0: Installation of software prerequisites

(skip if already installed)
Install basic apt dependencies:
sudo apt update 
sudo apt install -y git nano curl wget make sudo rsync jq
Install yq

Note: yq version >= 4.18.1 is required.

sudo wget https://github.com/mikefarah/yq/releases/download/v4.34.2/yq_linux_amd64 -O /usr/bin/yq && sudo chmod +x /usr/bin/yq
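A quick check that the installed yq meets the minimum version (the `sort -V` comparison below is a generic shell trick, not CODECO-specific):

```shell
# sort -V sorts version strings numerically; if the required minimum
# sorts first, the installed version is new enough.
required="4.18.1"
installed="$(yq --version 2>/dev/null | grep -oE '[0-9]+\.[0-9]+\.[0-9]+' | head -n1)"
if [ -n "$installed" ] && [ "$(printf '%s\n%s\n' "$required" "$installed" | sort -V | head -n1)" = "$required" ]; then
  echo "yq $installed OK"
else
  echo "yq '$installed' missing or older than $required" >&2
fi
```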
Install docker
curl -fsSL https://get.docker.com | sh
Docker post-installation steps
sudo groupadd docker
sudo usermod -aG docker $USER
newgrp docker
docker run hello-world
Install kind (only if we plan to use a KinD cluster):
[ $(uname -m) = x86_64 ] && curl -Lo ./kind https://kind.sigs.k8s.io/dl/v0.23.0/kind-linux-amd64
sudo chmod +x ./kind 
sudo mv ./kind /usr/local/bin/kind
Install helm:
curl -fsSL -o get_helm.sh https://raw.githubusercontent.com/helm/helm/main/scripts/get-helm-3
chmod 700 get_helm.sh 
./get_helm.sh
Install golang:
wget https://go.dev/dl/go1.21.10.linux-amd64.tar.gz 
sudo rm -rf /usr/local/go 
sudo tar -C /usr/local -xzf go1.21.10.linux-amd64.tar.gz
echo 'export PATH="$PATH:/usr/local/go/bin"' >> ~/.bashrc
source ~/.bashrc
Install kubectl:
curl -LO "https://dl.k8s.io/release/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/linux/amd64/kubectl"
sudo install -o root -g root -m 0755 kubectl /usr/local/bin/kubectl
kubectl version --client   #verify installation

Install cmctl:

OS=$(go env GOOS); ARCH=$(go env GOARCH); curl -fsSL -o cmctl https://github.com/cert-manager/cmctl/releases/latest/download/cmctl_${OS}_${ARCH}
chmod +x cmctl
sudo mv cmctl /usr/local/bin
Activate br_netfilter:

The Ubuntu kernel MUST have the br_netfilter module active (a Flannel requirement), which is not loaded by default in some Ubuntu versions. The following commands activate it:

sudo tee /etc/modules-load.d/containerd.conf <<EOF 
br_netfilter
EOF
sudo modprobe br_netfilter

Step 1: Cluster creation (only for KinD)

CODECO is designed to work with any Kubernetes flavour (KinD, MicroK8s, kubeadm, K3s, etc.). However, the installation of CODECO has been tested on a 3-node KinD cluster on an Ubuntu 20.04 machine.

Clone the ACM repository:
git clone https://gitlab.eclipse.org/eclipse-research-labs/codeco-project/acm.git
Create a kind cluster based on the appropriate kind config file
kind delete cluster    #this will delete any existing kind cluster named 'kind'
kind create cluster --config ./acm/config/cluster/kind-config.yaml

If this is your first installation of CODECO on this machine, include the following commands:

# Download binaries for CNI Plugins
mkdir -p plugins/bin
wget https://github.com/containernetworking/plugins/releases/download/v1.6.0/cni-plugins-linux-amd64-v1.6.0.tgz
tar -xf cni-plugins-linux-amd64-v1.6.0.tgz -C ./plugins/bin
# copy necessary plugins into all nodes
docker cp ./plugins/bin/. kind-control-plane:/opt/cni/bin
docker cp ./plugins/bin/. kind-worker:/opt/cni/bin
docker cp ./plugins/bin/. kind-worker2:/opt/cni/bin


# load br_netfilter inside each node (fix by Alex, UC3M)
docker exec -it kind-control-plane modprobe br_netfilter
docker exec -it kind-worker modprobe br_netfilter
docker exec -it kind-worker2 modprobe br_netfilter
# reapply sysctl settings inside each node
docker exec -it kind-control-plane sysctl -p /etc/sysctl.conf
docker exec -it kind-worker sysctl -p /etc/sysctl.conf
docker exec -it kind-worker2 sysctl -p /etc/sysctl.conf
# File limit workaround
docker exec -it kind-control-plane bash -c "sysctl -w fs.inotify.max_user_watches=2099999999; sysctl -w fs.inotify.max_user_instances=2099999999; sysctl -w fs.inotify.max_queued_events=2099999999"
docker exec -it kind-worker bash -c "sysctl -w fs.inotify.max_user_watches=2099999999; sysctl -w fs.inotify.max_user_instances=2099999999; sysctl -w fs.inotify.max_queued_events=2099999999"
docker exec -it kind-worker2 bash -c "sysctl -w fs.inotify.max_user_watches=2099999999; sysctl -w fs.inotify.max_user_instances=2099999999; sysctl -w fs.inotify.max_queued_events=2099999999"
sysctl -w fs.inotify.max_user_watches=2099999999
sysctl -w fs.inotify.max_user_instances=2099999999
sysctl -w fs.inotify.max_queued_events=2099999999
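The node-by-node commands above can be collapsed into a single loop; `kind get nodes` lists the Docker containers backing the current kind cluster. A sketch applying the same settings as above:

```shell
# Apply br_netfilter, the sysctl reload and the inotify limits to every kind node.
for node in $(kind get nodes); do
  docker exec "$node" modprobe br_netfilter
  docker exec "$node" sysctl -p /etc/sysctl.conf
  docker exec "$node" sysctl -w \
    fs.inotify.max_user_watches=2099999999 \
    fs.inotify.max_user_instances=2099999999 \
    fs.inotify.max_queued_events=2099999999
done
```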

Step 2: Install the CODECO Framework

Export Dockerhub credentials
DOCKERHUB_USER=<user>
DOCKERHUB_PASS=<pass>
Deploy ACM (and all CODECO Components)
  • This will deploy the ACM component
  • Deploying ACM automatically deploys all CODECO components via the acm/scripts/post_deploy.sh script
# clone ACM repo if not present
[ -d acm ] || git clone https://gitlab.eclipse.org/eclipse-research-labs/codeco-project/acm.git

# build and deploy the ACM component
cd acm
echo $DOCKERHUB_PASS | docker login -u $DOCKERHUB_USER --password-stdin
make docker-buildx IMG=$DOCKERHUB_USER/codecoapp-operator:2.0.0 PLATFORMS="linux/amd64,linux/arm64,linux/ppc64le,linux/s390x"
make deploy IMG=$DOCKERHUB_USER/codecoapp-operator:2.0.0
cd ..

# If you want to deploy the pre-built image just run
make deploy

Step 3: Deploy a Codeco App

This deploys a simple Codeco app consisting of a frontend and a backend, as described in the file acm/config/samples/codeco_v1alpha1_codecoapp_ver3.yaml (present in the ACM repo):

kubectl apply -f ./acm/config/samples/codeco_v1alpha1_codecoapp_ver3.yaml

You can define minBandwidth and maxDelay.

serviceChannels:
    - advancedChannelSettings:
        minBandwidth: "5M"
        maxDelay: "1s"

If you want to specify sendInterval and frameSize, do not combine them with minBandwidth. For example, a sendInterval of 30 ms for a camera with a frameSize of 64 Ki translates to a bandwidth of 2.112M:

serviceChannels:
    - advancedChannelSettings:
        frameSize: "64"
        sendInterval: "30s"
        maxDelay: "1s"
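The relation behind such examples is bandwidth ≈ frameSize × 8 / sendInterval. A small shell sketch (the byte/second units here are an assumption for illustration; CODECO's own unit handling may differ):

```shell
# One 8000-byte frame every 0.1 s needs 8000 * 8 / 0.1 = 640000 bit/s.
frame_size_bytes=8000
send_interval_s=0.1
awk -v f="$frame_size_bytes" -v i="$send_interval_s" \
  'BEGIN { printf "%.0f bit/s\n", f * 8 / i }'
```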

The user can replace the path in the command with the path to a custom Codeco Application Model YAML file.

Create a custom CodecoApp

Follow the guide to the CodecoApp attributes to create your custom Codeco Application Model.


Step 4: Verify correct deployment

Test that the CodecoApp pods are Running (the following commands are used as integration tests).

If the Codeco Application pods are not Running within a timeout of 20 minutes, the test fails and the script returns an error. If the pods reach the Running state in time, the script terminates with a success signal.

if kubectl wait --for=condition=Ready pod --all -n he-codeco-acm --timeout=20m; then 
  kubectl get pods -A; 
  exit 0; # success
else
  kubectl get pods -A; 
  exit 1; # error
fi
Test that NetMA component correctly outputs network metrics

The user should see a CR whose underlay-topology and overlay-topology sections have been successfully populated.

  • underlay-topology metrics: metrics regarding the underlay cluster topology
  • overlay-topology metrics: metrics regarding the running application (pod to pod links)
kubectl get netma-topology netma-sample -o yaml -n he-codeco-netma
Test that PDLC correctly provides nodeRecommendations to SWM

kubectl get applications.qos-scheduler.siemens.com -n he-codeco-acm acm-swm-app -o yaml

The output CR (in yaml format) should contain a nodeRecommendations section.


Uninstall CODECO

For the kind cluster, one can simply delete the cluster as a whole:

kind delete cluster

Uninstalling CODECO components without deleting the cluster will be available soon.

Remove PDLC:

If you're running on a standard Kubernetes cluster (not KinD or Minikube) and want to completely remove PDLC, you will need to delete the /data directory from the specific node where PDLC was deployed. You can identify that node by checking any of the PDLC deployment YAML files.
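A sketch for locating that node (the namespace is taken from the verification commands earlier in this guide; adapt it to your deployment YAML):

```shell
# The NODE column shows where each PDLC pod is scheduled.
kubectl get pods -n he-codeco-pdlc -o wide
# Then, on that node (destructive -- removes PDLC's persisted state):
# sudo rm -rf /data
```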

For non-kind deployments you can delete all the CODECO resources using the remove_codeco.sh script by running:

./remove_codeco.sh


How a CODECO application can add metrics to Prometheus

All the user has to do is annotate the deployment (CodecoApp) so that Prometheus can discover the metrics endpoint and pull from it:

...
spec:
  replicas: 1
  selector:
    matchLabels:
      app: <your service app>
  template:
    metadata:
      labels:
        app: <your service app>
      annotations:
        prometheus.io/port: <port>
        prometheus.io/scrape: "true"
        prometheus.io/path: <metric path>
   ...
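The annotations only tell Prometheus where to scrape; the application itself must serve the Prometheus text exposition format at the configured metric path. A hypothetical sample response (the metric name is illustrative, not part of CODECO):

```
# HELP demo_requests_total Total requests served by the application.
# TYPE demo_requests_total counter
demo_requests_total 42
```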

Deploy the entire CODECO framework inside a Docker container

If a user wishes to install CODECO in a protected and reversible environment, all of the previous steps can be performed inside a Docker container, assuming Docker is already installed.

In the CODECO-integration repo of CODECO, one can find the integration script and Dockerfile used below.

Please note:

  • The user must add valid Docker credentials.
  • If a cluster already exists, run the integration-script.sh script without the --create-cluster flag.

Deploy a custom Codeco Application Model by using the --codeco-app argument to provide the path to your file. Without this argument, a demo CodecoApp by RHT will be deployed: integration-script.sh --create-cluster --codeco-app <path to CodecoApp yaml>


export DOCKERHUB_USER=<some-dockerhub-user>
export DOCKERHUB_PASS=<password>

git clone https://gitlab.eclipse.org/eclipse-research-labs/codeco-project/gitlab-profile.git
cd gitlab-profile/
docker build -t integration:test -f Dockerfile-integration .
docker rm integ --force
docker run -t -v /var/run/docker.sock:/var/run/docker.sock -e DOCKERHUB_USER=$DOCKERHUB_USER -e DOCKERHUB_PASS=$DOCKERHUB_PASS --network host --name integ --rm integration:test bash -c -i "/integration-script.sh --create-cluster"

Known Issues & Workarounds

Multus pods error with KinD deployment: too many open files

The following is a known issue when deploying CODECO with KinD (observed on Ubuntu 24.04, kernel 6.8-053-generic): Multus pods crash when deploying NetMA/L2SM, possibly because of too many open files.

Use the following command as a workaround:

sudo sysctl fs.inotify.max_user_watches=524288
sudo sysctl fs.inotify.max_user_instances=512
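To make these limits survive a reboot, they can be written to a sysctl drop-in file (the file name below is a convention, not CODECO-specific):

```shell
sudo tee /etc/sysctl.d/99-inotify.conf <<EOF
fs.inotify.max_user_watches=524288
fs.inotify.max_user_instances=512
EOF
sudo sysctl --system   # reload all sysctl configuration files
```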

Secure-connectivity packets dropped when CNI sets iptables FORWARD policy to DROP

When CODECO runs on Kubernetes flavours whose CNI plugin installs an iptables rule that defaults the FORWARD chain to DROP (e.g. K3s with Flannel or Calico on bare-metal/Kind), packets emitted inside the Secure Connectivity component are silently discarded. Symptoms include:

  • pods in he-codeco-netma stuck in CrashLoopBackOff or Init
  • iptables -L FORWARD shows a default DROP policy
  • kubectl logs workload pods report time-outs or “connection refused”

Work-around

Reset the FORWARD policy to ACCEPT on every cluster node (master and workers) before or immediately after deploying CODECO:

sudo iptables -P FORWARD ACCEPT        # allow forwarding

Persisting the change: on most distributions you can save the rule so it survives reboots with: sudo iptables-save | sudo tee /etc/iptables/rules.v4

After applying the command, re-deploy or restart the affected Secure Connectivity pods to restore normal traffic flow.

PDLC not making Recommendations when Manually Removing a CodecoApp Pod

In certain cases, you may need to manually remove a CodecoApp pod and reapply it. Note that PDLC will not automatically generate a new recommendation for this pod after removal.

To trigger a new recommendation and allow SWM to collect it for redeployment, you must also remove the pdlc-rl pod. This ensures the system fully refreshes the deployment state.

Work-around

Run the following command to delete the pdlc-rl pod in the he-codeco-pdlc namespace:

kubectl delete pod <pdlc-rl-pod> -n he-codeco-pdlc

After deletion, PDLC will recreate the pdlc-rl pod and issue a new recommendation, allowing SWM to proceed with redeployment.