Usage of another CNI Plugin in the testing cluster
Issue: A bug involving Kindnet and Multus requires using a different CNI plugin in the testing cluster
Background:
The current testing cluster setup fails when deploying Secure Connectivity because of a bug involving the Kindnet CNI plugin and Multus. To resolve this, Kindnet must be replaced with another CNI plugin, such as Flannel, which is compatible with Multus and avoids the conflict.
Required Changes:
Before deploying the cluster, we need to make the following adjustments:
- Download and build the CNI plugins: clone the upstream containernetworking/plugins repository and compile the binaries so the kind nodes can mount them.
- Modify the kind configuration: disable the default CNI (Kindnet) and prepare the cluster for an external CNI, Flannel in this case.
Implementation Steps:
The following steps implement the required changes.
#!/bin/bash
## Prerequisites: git and go
## Clone and build the CNI plugins into /tmp/plugins/bin so the kind nodes can mount them later
set -e
git clone https://github.com/containernetworking/plugins /tmp/plugins
/tmp/plugins/build_linux.sh
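If the build succeeds, the compiled plugin binaries are placed in /tmp/plugins/bin, the directory mounted into the kind nodes below. A quick sanity check (the exact list of plugins depends on the plugins release):

ls /tmp/plugins/bin   # expect bridge, host-local, portmap, loopback, among others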
The kind-config.yaml used to create the cluster is updated as follows:
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
containerdConfigPatches:
  - |-
    [plugins."io.containerd.grpc.v1.cri".registry.mirrors."localhost:5001"]
      endpoint = ["http://kind-registry:5000"]
networking:
  disableDefaultCNI: true # Disable Kindnet; Flannel will be the primary CNI plugin
  podSubnet: "10.244.0.0/16" # Flannel expects this pod CIDR by default
nodes:
  - role: control-plane
    image: kindest/node:v1.26.6
    labels:
      siemens.com.qosscheduler.master: "true"
      dedicated: control-plane # No post_script.sh modification needed for control-plane
    extraMounts:
      - hostPath: /tmp/plugins/bin
        containerPath: /opt/cni/bin # Mount the compiled CNI plugins inside the node container
  - role: worker
    image: kindest/node:v1.26.6
    extraMounts:
      - hostPath: /tmp/plugins/bin
        containerPath: /opt/cni/bin
      - hostPath: /tmp/nwapidb
        containerPath: /nwapidb
    kubeadmConfigPatches:
      - |
        kind: JoinConfiguration
        nodeRegistration:
          name: "C1"
          kubeletExtraArgs:
            node-labels: "mac-address=5e0d.6660.a485,siemens.com.qosscheduler.c1=true"
  - role: worker
    image: kindest/node:v1.26.6
    extraMounts:
      - hostPath: /tmp/plugins/bin
        containerPath: /opt/cni/bin
    kubeadmConfigPatches:
      - |
        kind: JoinConfiguration
        nodeRegistration:
          name: "C2"
          kubeletExtraArgs:
            node-labels: "mac-address=da69.022b.c8fc,siemens.com.qosscheduler.c2=true"
After the cluster is deployed, the following steps are required:
- Install Flannel: apply the upstream manifest to deploy Flannel as the primary CNI plugin.
- Deploy the Secure Connectivity components: install the components Secure Connectivity depends on, such as Multus and cert-manager.
# Install Flannel as the primary CNI plugin
kubectl apply -f https://github.com/flannel-io/flannel/releases/latest/download/kube-flannel.yml
echo "........................................Installing NetMA..............................................."
cd secure-connectivity
kubectl apply -f https://github.com/cert-manager/cert-manager/releases/download/v1.15.3/cert-manager.yaml
kubectl apply -f https://raw.githubusercontent.com/k8snetworkplumbingwg/multus-cni/master/deployments/multus-daemonset-thick.yml
# Allow workloads to be scheduled on the control-plane node
kubectl taint nodes kind-control-plane node-role.kubernetes.io/control-plane:NoSchedule-
# kubectl taint nodes --all node-role.kubernetes.io/control-plane- node-role.kubernetes.io/master-
kubectl create namespace he-codeco-netma
kubectl get nodes
# Give Flannel, Multus, and cert-manager time to become ready before the NetMA deployment is applied
sleep 60
kubectl create -f ./deployments/l2sm-deployment.yaml -n he-codeco-netma
cd ..
echo "........................................Finished installing NetMA..............................................."