Commit cf8de868 authored by Harald Mueller's avatar Harald Mueller

Fix docu, examples, base image

parent 289e5d61
Merge request !3: Import version 1.2.0
@@ -14,30 +14,37 @@ Steps for getting a demo to run on a local Linux Helm environment and a local KinD cluster
- Helm
- KinD
## Ensure access to container registry
- Create an appropriate access token in the private container registry, with scopes
  - api
  - read_registry
- In local shell (fill in your username and access token below):
```bash
DOCKERUSER=<your_docker_username>
ACCESSTOKEN=<your_access_token>
REGISTRYHOST=colab-repo.intracom-telecom.com
REGISTRYNAME=${REGISTRYHOST}:5050
REGISTRYURL=https://${REGISTRYNAME}
REPOHOST=colab-repo.intracom-telecom.com
REPOGROUP=/colab-projects/he-codeco/swm/
docker login ${REGISTRYURL} -u ${DOCKERUSER} -p ${ACCESSTOKEN}
```
- Test registry access (optional)
```bash
docker pull ${REGISTRYNAME}${REPOGROUP}qos-scheduler/custom-scheduler:main
```
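As a sanity check that needs no registry access, the variables above compose into the full image reference like this (values copied from the login step):

```shell
# Recompute the image reference from the variables defined in the login step
REGISTRYHOST=colab-repo.intracom-telecom.com
REGISTRYNAME=${REGISTRYHOST}:5050
REPOGROUP=/colab-projects/he-codeco/swm/
IMAGE=${REGISTRYNAME}${REPOGROUP}qos-scheduler/custom-scheduler:main
echo "${IMAGE}"
# -> colab-repo.intracom-telecom.com:5050/colab-projects/he-codeco/swm/qos-scheduler/custom-scheduler:main
```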
## Copy required scripts to local environment
- One way is to clone the private repository
```bash
cd <qos-scheduler-directory>
git clone https://${REPOHOST}${REPOGROUP}qos-scheduler.git
```
- **Attention:** If this is done on Windows (using Visual Studio Code, VSC), then depending on the VSC settings, text files may end up with CRLF (Carriage Return Line Feed) line endings (the "Windows way"). Scripts with CRLF endings will not execute or will throw errors. The solution to this problem:
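One common fix, sketched here with GNU sed on a throwaway file (dos2unix, or setting `git config core.autocrlf input` before cloning, are alternatives; the file name is illustrative):

```shell
printf '#!/bin/bash\r\necho ok\r\n' > demo.sh  # simulate a script saved with CRLF endings
sed -i 's/\r$//' demo.sh                       # strip the trailing CR from every line (GNU sed)
bash demo.sh                                   # now runs cleanly and prints: ok
```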
@@ -57,7 +64,7 @@ Steps for getting a demo to run on a local Linux Helm environment and a local KinD cluster
- This will install a KinD cluster with one master and two worker nodes
- The K8s config to access the cluster will be appended to ~/.kube/config, and if there are multiple clusters in the config file, the kubectl context will be switched to the new KinD cluster
- Check whether the cluster is working and whether your K8s config is pointing to the right cluster
```bash
kubectl get nodes
```
@@ -74,13 +81,13 @@ Steps for getting a demo to run on a local Linux Helm environment and a local KinD cluster
# Install QoS scheduler
## Install QoS Scheduler and Solver
- In local shell:
```bash
make chart
helm install qostest --namespace=controllers-system --create-namespace --set global.image.credentials.username=${DOCKERUSER} --set global.image.credentials.password=${ACCESSTOKEN} --set global.image.credentials.email=${DOCKERUSER} --set global.image.credentials.registry=${REGISTRYURL} tmp/helm
```
- Show network topology / network links (Custom Resources)
@@ -89,14 +96,14 @@ Steps for getting a demo to run on a local Linux Helm environment and a local KinD cluster
kubectl get networklinks -A
```
This should show you links in the network-k8s-namespace namespace.
- Show network paths (Custom Resources)
```bash
kubectl get networkpaths -A
```
This should show you paths in the above namespace.
# Deploy sample ApplicationGroup and Application
@@ -121,14 +128,14 @@ Steps for getting a demo to run on a local Linux Helm environment and a local KinD cluster
```bash
kubectl apply -f config/demo/applicationgroup.yaml
kubectl apply -f config/demo/app-besteffort.yaml
```
- This will create an *ApplicationGroup* with a minimum of 1 Application, and the Application app-besteffort, which consists of:
  - Two *Workloads* w1 and w2
  - Each Workload has a container wbitt/network-multitool (which provides a couple of networking tools to test and demonstrate communication)
  - *Channel* "ernie" from w1 to w2 (5Mbit/s, 150µs)
  - *Channel* with a generated name (app-besteffort-w2-to-app-besteffort-w1) from w2 to w1 (2Mbit/s, 200µs)
*Channel* "ernie" requests the *BESTEFFORT* network service class, which the default Helm chart
maps to the *k8s* network (which gets created automatically).
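For orientation, the "ernie" bullet above corresponds to a channel fragment like the following in config/demo/app-besteffort.yaml (a sketch assembled from the sample files changed later in this commit, not the complete spec):

```yaml
# Fragment of an Application spec; surrounding workload definitions omitted
- basename: ernie
  port: 3333
  serviceClass: BESTEFFORT
  bandwidth: "5M"      # 5 Mbit/s
  maxDelay: "150E-6"   # 150 µs, expressed in seconds
```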
...
@@ -46,9 +46,8 @@ spec:
port: 3333
basename: ernie
serviceClass: ASSURED
bandwidth: "5M"
maxDelay: "150E-6"
- basename: w2
nodeRecommendations:
c1: 0.2
@@ -76,7 +75,6 @@ spec:
applicationName: app1-assured
port: 4444
serviceClass: ASSURED
bandwidth: "2M"
maxDelay: "200E-6"
@@ -50,9 +50,8 @@ spec:
port: 3333
basename: ernie
serviceClass: BESTEFFORT
bandwidth: "5M"
maxDelay: "150E-6"
- basename: w2
template:
metadata:
@@ -84,6 +83,5 @@ spec:
applicationName: app-besteffort
port: 4444
serviceClass: BESTEFFORT
bandwidth: "2M"
maxDelay: "200E-6"
@@ -26,7 +26,7 @@ nodes:
nodeRegistration:
name: "C1"
kubeletExtraArgs:
node-labels: "mac-address=5e0d.6660.a485,siemens.com.qosscheduler.c1=true"
- role: worker
image: kindest/node:v1.23.1
kubeadmConfigPatches:
@@ -35,5 +35,5 @@ nodes:
nodeRegistration:
name: "C2"
kubeletExtraArgs:
node-labels: "mac-address=da69.022b.c8fc,siemens.com.qosscheduler.c2=true"
@@ -22,8 +22,8 @@ spec:
- source: ipc1
target: ipc2
capabilities:
bandWidthBits: "100M" # unit: bit/s, default: 1Gbit/s
latencyNanos: "100e-6" # unit: seconds, default: 100µs
- source: ipc2
target: ipc1
capabilities:
@@ -48,4 +48,13 @@ spec:
target: ipc2
capabilities:
bandWidthBits: "10M"
latencyNanos: "20e-3"
# Loopback links (need to be explicitly specified)
- source: ipc1
target: ipc1
- source: ipc2
target: ipc2
- source: cloud
target: cloud
- source: control-plane
target: control-plane
@@ -24,7 +24,7 @@ spec:
target: c2
capabilities:
bandWidthBits: "100M" # unit: bit/s, default: 1Gbit/s
latencyNanos: "150e-6" # unit: seconds, default: 100µs
- source: c2
target: c1
- source: c1
...
# SPDX-FileCopyrightText: 2023 Siemens AG
# SPDX-License-Identifier: Apache-2.0
FROM alpine:3.13.5
ARG TARGETARCH
WORKDIR /bin
COPY manager_${TARGETARCH} /bin/manager
...
# SPDX-FileCopyrightText: 2023 Siemens AG
# SPDX-License-Identifier: Apache-2.0
FROM alpine:3.13.5
ARG TARGETARCH
WORKDIR /bin
COPY k8s-nw-operator_$TARGETARCH /bin/k8s-nw-operator
...
@@ -228,3 +228,4 @@ rules:
verbs:
- get
- list
- watch