OCM deployment in KinD and K3S - scripts

Tested by different fortiss developers:

OCM in KinD example:

OCM Setup with k3s (Hub & Spokes)

This guide sets up a Hub cluster and multiple Spoke clusters using k3s (lightweight Kubernetes) and joins them with Open Cluster Management (OCM). You will finish by deploying a simple BusyBox workload from the hub to a spoke cluster.

Prerequisites

  1. k3s installed on the hub control-plane node and on each spoke cluster:

We can set up a hub K3s cluster consisting of only one control-plane node, and a spoke cluster that consists of one control-plane node and additional worker nodes.

Throughout this tutorial, we will focus only on the installation of the OCM CLI in the control planes of these two clusters. The worker nodes are not relevant for this process.

  2. clusteradm CLI installed on the control-plane nodes of both the hub cluster and the spoke clusters:

    • Install clusteradm:
     curl -sSL https://raw.githubusercontent.com/open-cluster-management-io/clusteradm/main/install.sh | bash
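Before continuing, it can help to confirm both CLIs are actually on the PATH. A minimal sketch (the function name is ours; extend the binary list as needed):

```shell
# Check that the required CLIs are installed; prints one line per binary.
check_clis() {
  for bin in "$@"; do
    if command -v "$bin" >/dev/null 2>&1; then
      echo "$bin: found"
    else
      echo "$bin: not found"
    fi
  done
}

check_clis kubectl clusteradm
```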

1. Prepare Hub Cluster (Run on the Hub k3s control-plane node)

  1. Set kubeconfig for hub cluster:
export KUBECONFIG=~/multi-cluster-ocm/hub.kubeconfig
  2. Get the Hub cluster’s internal IP:
export HUB_IP=$(kubectl get nodes -o jsonpath='{.items[0].status.addresses[?(@.type=="InternalIP")].address}')
echo "Hub IP is $HUB_IP"
  3. Install OCM hub control plane:
clusteradm init \
  --singleton=true \
  --set route.enabled=false \
  --set nodeport.enabled=true \
  --set nodeport.port=30443 \
  --set apiserver.externalHostname=$HUB_IP \
  --set apiserver.externalPort=30443 \
  --singleton-name hub-controlplane
  4. Extract the hub control plane kubeconfig for clusteradm:
kubectl -n hub-controlplane get secret multicluster-controlplane-kubeconfig -o jsonpath='{.data.kubeconfig}' | base64 -d > ~/multi-cluster-ocm/hub-controlplane.kubeconfig

2. Prepare Spoke Clusters (Run on the control-plane node of each spoke cluster)

  1. Ensure k3s is installed and running.

  2. (Optional) Copy the spoke kubeconfig to the hub machine (example from the spoke1 control-plane node):

scp /etc/rancher/k3s/k3s.yaml user@hub:~/multi-cluster-ocm/spoke1.kubeconfig

Repeat for each spoke (spoke2, spoke3, etc.), adjusting the file names accordingly.
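Instead of copying by hand, the transfers can also be driven from the hub. A dry-run sketch (the host names spoke1 and spoke2 are hypothetical; remove the `echo` to actually copy):

```shell
# Print the copy command for each spoke's kubeconfig (dry run).
# Note: k3s writes its API server address as 127.0.0.1 in k3s.yaml, so after
# copying you typically need to rewrite it to the spoke's reachable IP, e.g.:
#   sed -i "s/127.0.0.1/<spoke-ip>/" ~/multi-cluster-ocm/spoke1.kubeconfig
print_copy_cmds() {
  for name in "$@"; do
    echo "scp user@${name}:/etc/rancher/k3s/k3s.yaml ~/multi-cluster-ocm/${name}.kubeconfig"
  done
}

print_copy_cmds spoke1 spoke2
```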


3. Join Spoke Clusters to Hub


Step A: On Hub, get the join token

clusteradm --kubeconfig ~/multi-cluster-ocm/hub-controlplane.kubeconfig get token --use-bootstrap-token

Copy the token printed out (e.g., abc123...).
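Rather than copying the token manually, it can be captured into a variable. This sketch assumes the `clusteradm get token` output contains a line of the form `token=<value>` (the sample string below stands in for real output):

```shell
# Extract the value from a "token=..." line of clusteradm output.
parse_token() {
  sed -n 's/^token=//p'
}

# Sample output shape assumed; in practice pipe the real command instead:
#   HUB_TOKEN=$(clusteradm --kubeconfig ~/multi-cluster-ocm/hub-controlplane.kubeconfig \
#     get token --use-bootstrap-token | parse_token)
sample='token=abc123
please log on spoke and run:
clusteradm join --hub-token abc123 --hub-apiserver https://10.0.0.1:30443'
HUB_TOKEN=$(printf '%s\n' "$sample" | parse_token)
echo "$HUB_TOKEN"
```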


Step B: On each Spoke cluster control-plane node, run the join command

Use the token and hub IP from above. On spoke1 node, run:

clusteradm join \
  --singleton=true \
  --hub-token abc123... \
  --hub-apiserver https://$HUB_IP:30443 \
  --cluster-name spoke1 \
  --kubeconfig /etc/rancher/k3s/k3s.yaml

(Change spoke1 and kubeconfig path per cluster as needed.)


Step C: Back on Hub, accept join request

clusteradm --kubeconfig ~/multi-cluster-ocm/hub-controlplane.kubeconfig accept --clusters spoke1

Repeat for all spoke clusters.
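With several spokes, the accept step can be looped. A dry-run sketch (cluster names are examples; remove the `echo` to execute):

```shell
# Print the accept command for each spoke cluster (dry run).
accept_cmds() {
  for name in "$@"; do
    echo "clusteradm --kubeconfig ~/multi-cluster-ocm/hub-controlplane.kubeconfig accept --clusters ${name}"
  done
}

accept_cmds spoke1 spoke2 spoke3
```

`--clusters` should also take a comma-separated list (e.g. `--clusters spoke1,spoke2`), so a single invocation may suffice; check your clusteradm version.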


4. Deploy BusyBox Example Manifests (Run on Hub)


Create file: busybox-deployment.yaml

apiVersion: work.open-cluster-management.io/v1
kind: ManifestWork
metadata:
  name: busybox-deployment
  namespace: spoke1 # must match the name of the managed cluster that should run the workload
spec:
  workload:
    manifests:
      - apiVersion: apps/v1
        kind: Deployment
        metadata:
          name: busybox
          namespace: busybox-demo
        spec:
          replicas: 1
          selector:
            matchLabels:
              app: busybox
          template:
            metadata:
              labels:
                app: busybox
            spec:
              containers:
                - name: busybox
                  image: busybox
                  command: ["/bin/sh", "-c", "while true; do date; sleep 5; done"]

Apply:

kubectl apply -f busybox-deployment.yaml --kubeconfig ~/multi-cluster-ocm/hub-controlplane.kubeconfig
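To target the same workload at a different spoke, only the ManifestWork namespace (which on the hub equals the managed cluster's name) needs to change. A minimal sketch, assuming the YAML file from above:

```shell
# Rewrite the ManifestWork namespace so it targets another managed cluster.
# $1 = source yaml, $2 = target cluster name
retarget_manifestwork() {
  sed "s/namespace: spoke1/namespace: $2/" "$1" > "busybox-deployment-$2.yaml"
}

# Usage: retarget_manifestwork busybox-deployment.yaml spoke2
```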

Verify the ManifestWork was created:

kubectl get ManifestWork -A --kubeconfig ~/multi-cluster-ocm/hub-controlplane.kubeconfig


5. Verify Results


On Spoke node, verify pods are running:

kubectl --kubeconfig /etc/rancher/k3s/k3s.yaml -n busybox-demo get pods

On Hub, verify spoke clusters joined:

kubectl --kubeconfig ~/multi-cluster-ocm/hub-controlplane.kubeconfig get managedclusters
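To script this check, the tabular output can be filtered for accepted clusters. A sketch over a sample of the assumed column layout (in practice, pipe the real `kubectl get managedclusters` output into the function):

```shell
# List the names of clusters whose HUB ACCEPTED column is "true".
accepted_clusters() {
  awk 'NR>1 && $2=="true" {print $1}'
}

# Sample output shape assumed from a typical run:
sample='NAME     HUB ACCEPTED   MANAGED CLUSTER URLS   JOINED   AVAILABLE   AGE
spoke1   true                                          True     True        5m
spoke2   false                                         True     False       2m'
printf '%s\n' "$sample" | accepted_clusters
```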

6. (Optional) Merge kubeconfigs locally (workstation or hub)

export KUBECONFIG=~/multi-cluster-ocm/hub-controlplane.kubeconfig:~/multi-cluster-ocm/spoke1.kubeconfig
kubectl config view --merge --flatten > ~/.kube/config
kubectl config use-context default

Summary:

  • OCM init + hub control plane on the hub node and copy the hub token.
  • Copy spoke kubeconfigs from each spoke node to the hub machine. (optional for merging the contexts)
  • Run clusteradm join on the spoke nodes to join them to the hub.
  • Run clusteradm accept on the hub to approve the spokes.
  • Example: Deploy workloads on the hub via ManifestWork.
  • Verify workloads on spoke clusters via their kubeconfigs.

You have a multi-cluster OCM environment running on k3s.

...........................................................................

OCM in k3s (tested, but requires additional validation)

TeamAspects

This repository is used for adding documentation that is relevant to the team in the HE-CODECO project.

  • Any member of the CODECO team can add issues, so that other members get an automatic e-mail about issues in the lab, the testbed, the code, etc.
  • Any member of the CODECO team should add specific procedures to the "Documentation" folder.

CODING Aspects

An initial overview of the rules for coding in the team is provided in the internal wiki and is added here to assist the CODECO team:

Managing git

Creating code

  • Your code should be added on a specific branch of the proposed repository.
  • We use the branch naming convention repo_yourinitials_vx.x, e.g., pdlc_RS_v0.1.
  • When a revision of a branch is ready and you want to merge it, you need to open a merge request.
  • The person responsible for approving merges is Rute; CI/CD automatically notifies her when you request a merge.
  • Your branch needs to have an adequate README:

README.md

  • Explain the purpose of the code.
  • Explain how to install and run it.
  • All files developed by you need a header explaining the code and the license.

  • The license is discussed with Rute and depends on specific rules for OSS coding. Usual licenses in the team and at fortiss are Apache 2.0, MIT, GPLv3, or GPLv2.

  • Any file developed by you needs to start with a header. A first option is to use the following:

……………………………………………………………………………………………………

Header of each file

/**
 * Copyright (C) xxxx fortiss GmbH, license type, date
 * @author name – e-mail
 * @author name – e-mail
 * @version 0.xxx
 * Explanation of the content of the file
 */

……………………………………………………………………………………………………

A second option is to use SPDX tags:

# SPDX-FileCopyrightText: 2024 fortiss GmbH

# SPDX-License-Identifier: Apache-2.0

# SPDX-FileContributor: author name, fortiss
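As an illustration, a shell script following the SPDX convention could start like this (file name, contributor, and body are placeholders, not a team-mandated template):

```shell
#!/bin/sh
# SPDX-FileCopyrightText: 2024 fortiss GmbH
# SPDX-License-Identifier: Apache-2.0
# SPDX-FileContributor: author name, fortiss
#
# example.sh - one-line explanation of what this file contains.

echo "example"
```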

Commenting your code

Comments should be added for every defined class and variable, and for the file itself. For instance, for each file/class:

/**
 * @file Contains xxxxx. This class is the Main Activity class for the Android application.
 */

Add comments for parameters, etc.:

/**
 * This is a comment
 */

…………………………………………………………………………..

Code Tags

/* TODO: improve the detection of the variable */

/* BUG: detected bug needs to be corrected, for now xxxx */

/* HACK: to solve detected bug, xxx */

@pkaram