Unverified commit 205efcfc authored by Alex, committed by GitHub

Merge pull request #4 from Networks-it-uc3m/quick-installation

Quick installation
parents 9c213409 63ea59c7
Showing 2389 additions and 262 deletions
@@ -3,17 +3,17 @@ Welcome to the official repository of L2S-M, a **Kubernetes operator** that enab
Link-Layer Secure connectivity for Microservice platforms (L2S-M) is a K8s networking solution that complements the CNI plugin approach of K8s to create and manage virtual networks in K8s clusters. These virtual networks allow workloads (pods) to have isolated link-layer connectivity with other pods in a K8s cluster, regardless of the K8s node where they are actually deployed. L2S-M enables the creation/deletion of virtual networks on demand, as well as attaching/detaching pods to/from those networks. The solution is seamlessly integrated within the K8s environment through a K8s operator:
![alt text](./assets/v1_architecture.png?raw=true)
L2S-M provides its intended functionalities using a programmable data-plane based on Software Defined Networking (SDN), which in turn provides a high degree of flexibility to dynamically incorporate new application and/or network configurations into K8s clusters. Moreover, L2S-M has been designed to flexibly accommodate various deployment options, ranging from small K8s clusters to those with a high number of distributed nodes.
The main K8s interface of pods remains intact (provided by a CNI plugin), retaining the compatibility with all the standard K8s elements (e.g., services, connectivity through the main interface, etc.). Moreover, the solution has the potential to be used for inter-cluster communications to support scenarios where network functions are spread through multiple distributed infrastructures (this is still a work in progress).
The figure outlines the design of L2S-M. See [how L2S-M works](./additional-info/) to read further details on the L2S-M solution.
If you want to learn how to install L2S-M in your cluster, see the [installation guide](./deployments) of this repository to start its installation.
Did you already install the operator and you cannot wait to start building your own virtual networks in your K8s cluster? Check out our [ping-pong](./examples/ping-pong) example!
If you want more information about the original idea of L2S-M and its initial design, you can check our latest publication in the [IEEE Network journal](https://ieeexplore.ieee.org/document/9740640):
@@ -34,7 +34,7 @@ The solution can work jointly with L2S-M or be used standalone through the [Mult
The solution enables the creation and deletion of virtual link-layer networks to connect application workloads running in different virtualization domains. This way, it supports inter-domain link-layer communications among remote workloads.
### Additional information about L2S-M
In the [following section](./additional-info) of the repository, you can find a series of documents and slides that provide additional information about L2S-M, including presentations where our solution has been showcased to the public in various events.
L2S-M has been presented in the following events:
@@ -45,10 +45,13 @@ L2S-M has been presented in the following events:
### How to reach us
Do you have any doubts about L2S-M or its installation? Do you want to provide feedback about the solution? Please do not hesitate to reach out to us through e-mail!
- Luis F. Gonzalez: luisfgon@it.uc3m.es (Universidad Carlos III de Madrid)
- Ivan Vidal: ividal@it.uc3m.es (Universidad Carlos III de Madrid)
- Francisco Valera: fvalera@it.uc3m.es (Universidad Carlos III de Madrid)
- Diego R. Lopez: diego.r.lopez@telefonica.com (Telefónica I+D)
- Alex T. de Cock Buning: 100383348@alumnos.uc3m.es (Universidad Carlos III de Madrid)
### Acknowledgment
The work in this open-source project has been partially supported by the European H2020 FISHY project (grant agreement 952644) and by the H2020 Labyrinth project (grant agreement 861696).
File moved
# Build Directory
This directory contains Dockerfiles and scripts for building and pushing Docker images for different components of the project.
The files and scripts are meant to be run directly from the /L2S-M directory, as the COPY instructions refer to the /L2S-M/src directory.
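For instance, building a single image by hand follows the same pattern the script below uses (a minimal sketch; the `dev` tag is an arbitrary example):
```bash
# Run from the repository root (/L2S-M) so the build context includes ./src:
docker build -t l2sm-switch:dev -f ./build/switch/Dockerfile .
```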
## Directory Structure:
- `./build/switch`: Dockerfile and related files for building the l2sm-switch Docker image.
- `./build/controller`: Dockerfile and related files for building the l2sm-controller Docker image.
- `./build/operator`: Dockerfile and related files for building the l2sm-operator Docker image.
- `./build/build_and_push_images.sh`: Bash script for automating the build and push process of Docker images.
## Script Usage:
### 1. Build Images:
```bash
./build/build_and_push_images.sh build
```
This command will build Docker images for l2sm-switch, l2sm-controller, and l2sm-operator.
### 2. Push Images:
```bash
./build/build_and_push_images.sh push
```
This command will push previously built Docker images to the specified DockerHub repository.
### 3. Build and Push Images:
```bash
./build/build_and_push_images.sh build_push
```
This command will both build and push Docker images.
Note: Make sure to set the appropriate environment variables at the top of the script before running it (the repository name and the version tag).
For any additional details or customization, refer to the respective Dockerfiles and the build script.
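A typical end-to-end session might look like this (assuming the VERSION and DOCKERHUB_REPO variables at the top of the script have been set, and that you have push rights to that repository):
```bash
# From the repository root (/L2S-M):
docker login                                  # authenticate against DockerHub once per session
./build/build_and_push_images.sh build_push   # build and push the three images
```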
#!/bin/bash
set -e
# Set environment variables
export VERSION="2.2"
export DOCKERHUB_REPO="alexdecb"
# Function to build image
build_image() {
    local image_name="$1"
    local folder_name="$2"

    echo "Building ${image_name}..."
    docker build -t "${DOCKERHUB_REPO}/${image_name}:${VERSION}" -f "./build/${folder_name}/Dockerfile" .
}

# Function to push image
push_image() {
    local image_name="$1"

    echo "Pushing ${image_name}..."
    docker push "${DOCKERHUB_REPO}/${image_name}:${VERSION}"
}

# Option 1: Build images
if [ "$1" == "build" ]; then
    build_image "l2sm-switch" "switch"
    build_image "l2sm-controller" "controller"
    build_image "l2sm-operator" "operator"
    echo "Images have been built successfully."

# Option 2: Push images
elif [ "$1" == "push" ]; then
    push_image "l2sm-switch"
    push_image "l2sm-controller"
    push_image "l2sm-operator"
    echo "Images have been pushed successfully."

# Option 3: Build and push images
elif [ "$1" == "build_push" ]; then
    build_image "l2sm-switch" "switch"
    push_image "l2sm-switch"
    build_image "l2sm-controller" "controller"
    push_image "l2sm-controller"
    build_image "l2sm-operator" "operator"
    push_image "l2sm-operator"
    echo "Images have been built and pushed successfully."

# Invalid option
else
    echo "Invalid option. Please use 'build', 'push', or 'build_push'."
    exit 1
fi
FROM onosproject/onos:2.7-latest
COPY ./src/controller ./
RUN apt-get update && \
apt-get install -y wget && \
......
FROM python:3.11.6
WORKDIR /usr/src/app
COPY ./src/operator/requirements.txt ./
RUN pip install --no-cache-dir -r requirements.txt
COPY ./src/operator/l2sm-operator.py .
CMD kopf run --liveness=http://0.0.0.0:8080/healthz --standalone --all-namespaces ./l2sm-operator.py
FROM golang:1.20 AS build
WORKDIR /usr/src/l2sm-switch
COPY ./src/switch/ ./build/switch/build-go.sh ./
RUN chmod +x ./build-go.sh && ./build-go.sh
FROM ubuntu:latest
WORKDIR /usr/local/bin
COPY ./src/switch/vswitch.ovsschema /tmp/
COPY --from=build /usr/local/bin/ .
RUN apt-get update && \
apt-get install -y net-tools iproute2 netcat-openbsd dnsutils curl iputils-ping iptables nmap tcpdump openvswitch-switch && \
mkdir /var/run/openvswitch && mkdir -p /etc/openvswitch && ovsdb-tool create /etc/openvswitch/conf.db /tmp/vswitch.ovsschema
COPY ./src/switch/setup_switch.sh .
RUN chmod +x ./setup_switch.sh && \
mkdir /etc/l2sm/
CMD [ "./setup_switch.sh" ]
#!/usr/bin/env bash
set -e
DEST_DIR="/usr/local/bin"
if [ ! -d "${DEST_DIR}" ]; then
    mkdir "${DEST_DIR}"
fi
go build -v -o "${DEST_DIR}"/l2sm-init ./cmd/l2sm-init
go build -v -o "${DEST_DIR}"/l2sm-vxlans ./cmd/l2sm-vxlans
[
    {
        "name": "test-l2sm-uc3m-polito-1",
        "nodeIP": "10.244.0.37",
        "neighborNodes": ["test-l2sm-uc3m-polito-2", "test-l2sm-uc3m-polito-3"]
    },
    {
        "name": "test-l2sm-uc3m-polito-2",
        "nodeIP": "10.244.1.64",
        "neighborNodes": ["test-l2sm-uc3m-polito-1", "test-l2sm-uc3m-polito-3"]
    },
    {
        "name": "test-l2sm-uc3m-polito-3",
        "nodeIP": "10.244.2.33",
        "neighborNodes": ["test-l2sm-uc3m-polito-1", "test-l2sm-uc3m-polito-2"]
    }
]
@@ -18,127 +18,33 @@ kubectl taint nodes --all node-role.kubernetes.io/control-plane- node-role.kuber
## Install L2S-M
Installing L2S-M can be done by using a single command:
```bash
kubectl create -f ./deployments/l2sm-deployment.yaml
```
The installation will take around a minute to finish. To check that everything is running properly, you may run the following command:
```bash
kubectl get pods -o wide
```
Which should give you an output like this:
```bash
NAME                               READY   STATUS    RESTARTS   AGE    IP           NODE    NOMINATED NODE   READINESS GATES
l2sm-controller-56b45487b7-nglns   1/1     Running   0          129m   10.1.72.72   l2sm2   <none>           <none>
l2sm-operator-7794c5f66d-b9nsf     2/2     Running   0          119m   10.1.14.45   l2sm1   <none>           <none>
l2sm-switch-49qpq                  1/1     Running   0          129m   10.1.14.63   l2sm1   <none>           <none>
l2sm-switch-2g696                  1/1     Running   0          129m   10.1.72.73   l2sm2   <none>           <none>
```
With the components: _l2sm-controller_, _l2sm-operator_ and one _l2sm-switch_ for **each** node in the cluster.
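If you prefer to block until everything is up, you can also wait on the pods (a convenience sketch; the timeout is arbitrary, and the command targets the namespace where L2S-M was installed):
```bash
kubectl wait --for=condition=Ready pods --all --timeout=180s
```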
## Configuring VxLANs
Each node enables the creation of custom L2S-M networks, as can be seen in the [examples section](../../examples/). But to communicate pods that are in different nodes of the cluster, additional configuration is needed: setting up the VxLAN tunnels between them.
You can proceed to configure the VxLANs by following the steps outlined in [the VxLAN configuration guide](../deployment/vxlans.md).
You are all set! If you want to learn how to create virtual networks and use them in your applications, check the [following section of the repository](https://github.com/Networks-it-uc3m/L2S-M/tree/release-2.0/examples/).
# L2S-M Installation Guide (Custom Installation)
This guide provides detailed steps for installing the L2S-M Kubernetes operator, enabling you to create and manage virtual networks within your Kubernetes cluster. This custom installation is intended for debugging or understanding the L2S-M components and their functionality.
## Introduction
The L2S-M custom installation is designed for debugging purposes and gaining a deeper understanding of the L2S-M components. Follow the steps below to install the L2S-M Kubernetes operator and configure virtual networks.
## Prerequisites
Before proceeding, ensure that you meet the prerequisites outlined in the [deployment README](./deployment/README.md), which provides detailed instructions on meeting these requirements.
## Custom Installation Steps
Follow the steps below to perform the custom installation of L2S-M:
1. Create the virtual interface definitions using the following command:
```bash
kubectl create -f ./deployments/custom-installation/interfaces_definitions
```
2. Create the Kubernetes Service Account and apply its configuration with the following command:
```bash
kubectl create -f ./deployments/config/
```
3. Create the Kubernetes Persistent Volume by using the following kubectl command:
```bash
kubectl create -f ./deployments/custom-installation/mysql/
```
4. Before deploying the L2S-M operator, it is necessary to label your master node as the "master" of the cluster. To do so, get the names of your Kubernetes nodes, select the master, and apply the "master" label with the following command:
```bash
kubectl get nodes
kubectl label nodes [your-master-node] dedicated=master
```
5. Deploy the L2S-M Controller by using the following command:
```bash
kubectl create -f ./deployments/custom-installation/deployController.yaml
```
You can check that the deployment was successful if the pod enters the "running" state using the *kubectl get pods* command.
6. After the previous preparation (make sure the controller is running), you can deploy the operator in your cluster using the YAML deployment file:
```bash
kubectl create -f ./deployments/custom-installation/deployOperator.yaml
```
Once these two pods are in the running state, you can finally deploy the virtual switches.
7. Deploy the virtual OVS DaemonSet:
```bash
kubectl create -f ./deployments/custom-installation/deploySwitch.yaml
```
Then check that there is an l2sm-switch pod running on each node, with ```kubectl get pods -o wide```.
## Configuring VxLANs
Each node enables the creation of custom L2S-M networks, as can be seen in the [examples section](../../examples/). But to communicate pods that are in different nodes of the cluster, additional configuration is needed: setting up the VxLAN tunnels between them.
You can proceed to configure the VxLANs by following the steps outlined in [the VxLAN configuration guide](../deployment/vxlans.md).
@@ -14,11 +14,13 @@ spec:
    spec:
      containers:
      - name: l2sm-controller
        image: alexdecb/l2sm-controller:2.2
        readinessProbe:
          httpGet:
            path: /onos/ui
            port: 8181
          initialDelaySeconds: 30
          periodSeconds: 10
        ports:
        - containerPort: 6633
        - containerPort: 8181
......
@@ -15,13 +15,25 @@ spec:
        l2sm-component: l2sm-opt
    spec:
      serviceAccountName: l2sm-operator
      initContainers:
      - name: wait-for-l2sm-controller
        image: curlimages/curl
        args:
        - /bin/sh
        - -c
        - >
          set -x;
          while [ $(curl -sw '%{http_code}' "http://l2sm-controller-service:8181/onos/ui" -o /dev/null) -ne 302 ]; do
            sleep 15;
          done;
          sleep 5;
      containers:
      - image: alexdecb/l2sm-operator:2.2
        name: l2sm-opt-pod
        env:
        - name: CONTROLLER_IP
          value: l2sm-controller-service
        #command: ["sleep","infinity"]
        #imagePullPolicy: Always
      - image: mysql/mysql-server:5.7
        name: mysql
        env:
@@ -50,3 +62,18 @@ spec:
        operator: Equal
        value: master
        effect: NoSchedule
---
apiVersion: v1
kind: Service
metadata:
  name: l2sm-operator-service
spec:
  ports:
  - protocol: TCP
    port: 8080
    targetPort: 8080
  selector:
    l2sm-component: l2sm-opt
@@ -22,10 +22,21 @@ spec:
      - key: node-role.kubernetes.io/master
        operator: Exists
        effect: NoSchedule
      initContainers:
      - name: wait-for-l2sm-operator
        image: curlimages/curl
        args:
        - /bin/sh
        - -c
        - >
          set -x;
          while [ $(curl -sw '%{http_code}' "http://l2sm-operator-service:8080/healthz" -o /dev/null) -ne 200 ]; do
            sleep 15;
          done;
          sleep 5;
      containers:
      - name: l2sm-switch
        image: alexdecb/l2sm-switch:2.2
        #args: ["setup_switch.sh && sleep infinity"]
        env:
        - name: NODENAME
......
@@ -191,11 +191,13 @@ spec:
    spec:
      containers:
      - name: l2sm-controller
        image: alexdecb/l2sm-controller:2.2
        readinessProbe:
          httpGet:
            path: /onos/ui
            port: 8181
          initialDelaySeconds: 30
          periodSeconds: 10
        ports:
        - containerPort: 6633
        - containerPort: 8181
@@ -235,13 +237,25 @@ spec:
        l2sm-component: l2sm-opt
    spec:
      serviceAccountName: l2sm-operator
      initContainers:
      - name: wait-for-l2sm-controller
        image: curlimages/curl
        args:
        - /bin/sh
        - -c
        - >
          set -x;
          while [ $(curl -sw '%{http_code}' "http://l2sm-controller-service:8181/onos/ui" -o /dev/null) -ne 302 ]; do
            sleep 15;
          done;
          sleep 5;
      containers:
      - image: alexdecb/l2sm-operator:2.2
        name: l2sm-opt-pod
        env:
        - name: CONTROLLER_IP
          value: l2sm-controller-service
        #command: ["sleep","infinity"]
        #imagePullPolicy: Always
      - image: mysql/mysql-server:5.7
        name: mysql
        env:
@@ -271,6 +285,18 @@ spec:
        value: master
        effect: NoSchedule
---
apiVersion: v1
kind: Service
metadata:
  name: l2sm-operator-service
spec:
  ports:
  - protocol: TCP
    port: 8080
    targetPort: 8080
  selector:
    l2sm-component: l2sm-opt
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
@@ -295,11 +321,22 @@ spec:
      - key: node-role.kubernetes.io/master
        operator: Exists
        effect: NoSchedule
      initContainers:
      - name: wait-for-l2sm-operator
        image: curlimages/curl
        args:
        - /bin/sh
        - -c
        - >
          set -x;
          while [ $(curl -sw '%{http_code}' "http://l2sm-operator-service:8080/healthz" -o /dev/null) -ne 200 ]; do
            sleep 15;
          done;
          sleep 5;
      containers:
      - name: l2sm-switch
        image: alexdecb/l2sm-switch:2.2
        #args: ["setup_switch.sh && sleep infinity"]
        env:
        - name: NODENAME
          valueFrom:
......
# L2S-M VxLAN configuration guide
In order to connect the switches to each other, an additional configuration step is needed: write a configuration file specifying which nodes you want to connect and which IP addresses their switches have, and then run a script in each **l2sm-switch** pod using this configuration file.
a. Create a file anywhere, or use the reference file in ./configs/sampleFile.json. In this guide, that file will be used as a reference.
b. In this file, following the template shown in the reference file, specify the names of the nodes in the cluster and the IP addresses of **the switches** running on them. For example:
```bash
$ kubectl get pods -o wide
NAME                              READY   STATUS    RESTARTS   AGE     IP            NODE    NOMINATED NODE   READINESS GATES
l2sm-controller-d647b7fb5-lpp2h   1/1     Running   0          30m     10.1.14.55    l2sm1   <none>           <none>
l2sm-operator-7d487d8468-lhgkx    2/2     Running   0          2m11s   10.1.14.56    l2sm1   <none>           <none>
l2sm-switch-8p5td                 1/1     Running   0          71s     10.1.14.58    l2sm1   <none>           <none>
l2sm-switch-xdkvz                 1/1     Running   0          71s     10.1.72.111   l2sm2   <none>           <none>
```
In this example, we have two nodes, l2sm1 and l2sm2, whose switches have IP addresses 10.1.14.58 and 10.1.72.111, respectively.
We want to connect them directly, so we modify the reference file, ./configs/sampleFile.json:
```json
[
    {
        "name": "<NODE_SWITCH_1>",
        "nodeIP": "<IP_SWITCH_1>",
        "neighborNodes": ["<NODE_SWITCH_2>"]
    },
    {
        "name": "<NODE_SWITCH_2>",
        "nodeIP": "<IP_SWITCH_2>",
        "neighborNodes": ["<NODE_SWITCH_1>"]
    }
]
```
Note: The parameters to be changed are shown in the NODE and IP columns of the table above.
Example of how it looks:
```json
[
    {
        "name": "l2sm1",
        "nodeIP": "10.1.14.58",
        "neighborNodes": ["l2sm2"]
    },
    {
        "name": "l2sm2",
        "nodeIP": "10.1.72.111",
        "neighborNodes": ["l2sm1"]
    }
]
```
Note: Any number of nodes can be configured, as long as each has an entry in this file. The desired connections are listed in the neighborNodes field as an array; for example, to add a neighbor to l2sm2 we would write: ["l2sm1","l2sm3"].
Once this file is created, we inject it into each switch pod using the kubectl cp command:
```bash
kubectl cp ./configs/sampleFile.json <pod-name>:/etc/l2sm/switchConfig.json
```
And then execute the script in the switch pod:
```bash
kubectl exec -it <switch-pod-name> -- /bin/bash -c 'l2sm-vxlans --node_name=$NODENAME /etc/l2sm/switchConfig.json'
```
This must be done in each switch pod. In the provided example, using two nodes, l2sm1 and l2sm2, we have to do it twice, in l2sm-switch-8p5td and l2sm-switch-xdkvz.
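If the cluster has many nodes, a small loop can apply the configuration to every switch pod in one go (a convenience sketch; it selects the pods by their `l2sm-switch-` name prefix, so adjust the filter if your deployment names them differently):
```bash
# Copy the config into every l2sm-switch pod and create its VxLANs.
for pod in $(kubectl get pods -o name | grep '^pod/l2sm-switch-' | cut -d/ -f2); do
  kubectl cp ./configs/sampleFile.json "${pod}:/etc/l2sm/switchConfig.json"
  kubectl exec "$pod" -- /bin/bash -c 'l2sm-vxlans --node_name=$NODENAME /etc/l2sm/switchConfig.json'
done
```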
When the exec command is done, we should see an output like this:
```bash
kubectl exec -it l2sm-switch-8p5td -- /bin/bash -c 'l2sm-vxlans --node_name=$NODENAME /etc/l2sm/switchConfig.json'
Defaulted container "l2sm-switch" out of: l2sm-switch, wait-for-l2sm-controller (init)
Created vxlan between node l2sm1 and node l2sm2.
```
You are all set! If you want to learn how to create virtual networks and use them in your applications, check the [following section of the repository](https://github.com/Networks-it-uc3m/L2S-M/tree/release-2.0/examples/).
# L2S-M examples
This section of the L2S-M documentation provides examples that you can use to learn how to create virtual networks and attach pods to them. This guide assumes that all commands are executed within the L2S-M directory. Feel free to use L2S-M in any scenario where it fits; right now, two examples are shown, and both are summarized after the ping-pong walkthrough below.
# L2S-M Ping Pong example
In this example, we deploy a simple ping-pong application: two pods are attached to a virtual network and their connectivity is tested. All the necessary descriptors can be found in the *'./examples/ping-pong/'* directory of this repository.
### Creating our first virtual network
First of all, let's see the details of an L2S-M virtual network. This is the descriptor corresponding to the virtual network that will be used in this example: ping-network
```yaml
apiVersion: "k8s.cni.cncf.io/v1"
kind: NetworkAttachmentDefinition
metadata:
  name: ping-network
spec:
  config: '{
    "cniVersion": "0.3.0",
    "type": "dummy",
    "device": "l2sm-vNet"
  }'
```
As you can see, L2S-M virtual networks are a [NetworkAttachmentDefinition](https://github.com/k8snetworkplumbingwg/multus-cni/blob/master/docs/quickstart.md) from MULTUS. In order to build a new network, just changing its name in the "metadata" field will define a new network.
**Warning**: Do not change the config section from the descriptor; the *l2sm-vNet* is an abstract interface used by the L2S-M operator to manage the virtual networks in the K8s cluster.
To create the virtual network in your cluster, use the appropriate *kubectl* command as if you were building any other K8s resource:
```bash
kubectl create -f ./examples/ping-pong/network.yaml
```
Et voilà! You have successfully created your first virtual network in your K8s cluster.
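Should you need another network, only the name changes (an illustrative sketch; `my-second-network` is an arbitrary name):
```bash
# Define an additional L2S-M virtual network by changing only metadata.name:
kubectl create -f - <<'EOF'
apiVersion: "k8s.cni.cncf.io/v1"
kind: NetworkAttachmentDefinition
metadata:
  name: my-second-network
spec:
  config: '{
    "cniVersion": "0.3.0",
    "type": "dummy",
    "device": "l2sm-vNet"
  }'
EOF
```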
### Deploying our application in the cluster
After creating our first virtual network, it is time to attach some pods to it. This is as simple as adding an annotation to your deployment/pod file, just like you would do when attaching to a Multus NetworkAttachmentDefinition.
For example, to add one deployment to ping-network, enter the following annotation in your descriptor in its metadata:
```yaml
annotations:
  k8s.v1.cni.cncf.io/networks: ping-network
```
If you want to add your own Multus annotations, you are free to do so! L2S-M will not interfere with the standard Multus behavior, so feel free to add your additional annotations if you need them.
To assist you with the deployment of your first application with L2S-M, you can use the pod definitions available in this repository. To deploy both "ping-pong" pods (which are simple Alpine Linux containers), use the following commands:
```bash
kubectl create -f ./examples/ping-pong/ping.yaml
kubectl create -f ./examples/ping-pong/pong.yaml
```
After a bit of time, check that both pods were successfully instantiated in your cluster.
### Testing the connectivity
Once we have deployed the pods, let's add some IP addresses and make sure that they can reach each other over the overlay. To do so, use the following commands to enter the "ping" pod and check its interfaces:
```bash
kubectl exec -it [POD_PING_NAME] -- /bin/sh
ip a s
```
From the output of the last command, you should see something similar to this:
```bash
7: net1@if6: <BROADCAST,MULTICAST,M-DOWN> mtu 1450 qdisc noop state DOWN qlen 1000
    link/ether 16:79:4c:0c:d2:e8 brd ff:ff:ff:ff:ff:ff
```
This is the interface that we are going to use to connect to the virtual network. Therefore, we should first bring up that interface and assign an IP address to it (for example, 192.168.12.1/30):
```bash
ip link set net1 up
ip addr add 192.168.12.1/30 dev net1
```
**WARNING:** You must have the "NET_ADMIN" capability enabled for your pods to allow the modification of interface status and/or IP addresses. If you do not, add the following code to the *securityContext* of your pod in the descriptor:
```yaml
securityContext:
  capabilities:
    add: ["NET_ADMIN"]
```
Do the same action for your "pong" pod (with a different IP address, 192.168.12.2/30):
```bash
kubectl exec -it [POD_PONG_NAME] -- /bin/sh
ip link set net1 up
ip addr add 192.168.12.2/30 dev net1
```
See if they can ping each other using the ping command (e.g., from the "pong" pod):
```bash
ping 192.168.12.1
```
If they can ping each other, congratulations! You are now able to deploy your applications attached to the virtual network "ping-network" in your K8s cluster. You will notice that the *ttl* of these packets is 64: this is because the pods see each other as if they were in the same broadcast domain (i.e., in the same LAN). You can further verify this by installing and using the *traceroute* command:
```bash
apk update
apk add traceroute
traceroute 192.168.12.1
```
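With both pods in the same broadcast domain, the trace should show a single hop (an illustrative output; addresses and timings will differ in your cluster):
```bash
traceroute to 192.168.12.1 (192.168.12.1), 30 hops max, 46 byte packets
 1  192.168.12.1 (192.168.12.1)  0.628 ms  0.491 ms  0.475 ms
```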
One last test you can perform to see that it is using the L2S-M overlay is to try the same ping through the main interface of the pod (eth0), which will not be able to reach the other pod:
```bash
ping 192.168.12.1 -I eth0
```
If you are tired of experimenting with the app, you can proceed to delete both pods from the cluster:
```bash
kubectl delete -f ./examples/ping-pong/ping.yaml
kubectl delete -f ./examples/ping-pong/pong.yaml
```
Firstly, there's [the ping-pong example](./ping-pong/). This is the most basic example: a virtual network connects two pods through L2S-M, and their connectivity is checked using the ping command.
Secondly, there's the [cdn example](./cdn). In this example, two networks isolate a content-server, which stores a video, from the rest of the cluster. It is only accessible through a cdn-server, via a router pod placed between these two pods. This way, if the cluster or the cdn-server is exposed to any security risk, or if we want to apply our own firewall restrictions through a pod, there is more control over access to the content server. Additionally, this section has an L2S-M live demo showcasing the scenario.
# Example: Isolating an NGINX server from a CDN with Custom L2SM networks
## Overview
This example demonstrates the isolation of traffic between pods using custom L2S-M networks. In this scenario, two networks, v-network-1 and v-network-2, are created, and three pods (cdn-server, router, and content-server) are connected to them. The objective is to showcase how traffic can be isolated through a router pod connecting the two networks.
## Topology
The example video shows a Cluster scenario with three nodes, where a pod will be deployed in each Node, as shown in the following figure.
<p align="center">
<img src="../../assets/video-server-example.svg" width="400">
</p>
The following example doesn't strictly require a three-node scenario; it can also be run on a single-node cluster. Through the example guide, we will create the following resources:
### Networks
- [v-network-1](./v-network-1.yaml)
- [v-network-2](./v-network-2.yaml)
Two virtual L2S-M networks, without any additional configuration.
### Pods
Note: The configurations specified can be seen in each Pod YAML specification.
- **[cdn-server](./cdn-server.yaml) (CDN Server)**
This pod will act as a CDN server. It's just an Alpine image with the following pre-configuration:
- IP: 10.0.1.2
- Network: v-network-1
- **[router](./router.yaml) (Router)**
This pod will act as a router, where we could apply firewall rules if we wanted. It will have the following pre-configuration (sketched after this list):
- Networks: v-network-1, v-network-2
- IP: 10.0.1.1 (net1) and 10.0.2.1 (net2)
- Forwarding enabled
- **[content-server](./content-server.yaml) (Content Server)**
This pod will act as a content server. The image can be found at the [./video-server directory](./video-server/). It's an NGINX image with a video file that will be served. It has the following pre-configuration:
- IP: 10.0.2.2
- Network: v-network-2
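As an illustration, the router's pre-configuration boils down to commands along these lines (a hedged sketch; the authoritative commands are in [router.yaml](./router.yaml)):
```bash
# Sketch of the router pod's startup configuration (see router.yaml for the actual commands):
ip addr add 10.0.1.1/24 dev net1    # address on v-network-1
ip addr add 10.0.2.1/24 dev net2    # address on v-network-2
sysctl -w net.ipv4.ip_forward=1     # enable IPv4 forwarding between the two networks
```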
## Procedure
Follow the steps below to demonstrate the isolation of traffic between pods using custom networks with L2S-M:
### 1. Create Virtual Networks
- Create two virtual L2S-M networks: [v-network-1](./v-network-1.yaml) and [v-network-2](./v-network-2.yaml).
```bash
kubectl create -f ./examples/cdn/v-network-1.yaml
```
```bash
kubectl create -f ./examples/cdn/v-network-2.yaml
```
### 2. Verify Network Creation
- This step is optional, but it will help you understand how L2S-M works internally, if you already know a bit about SDN and network overlays.
- Check the logs in the `l2sm-controller` and `l2sm-operator` to ensure that the virtual networks have been successfully created.
```bash
kubectl get net-attach-def
```
```bash
kubectl logs l2sm-operator-667fc88c57-p7krv
```
```bash
kubectl logs l2sm-controller-d647b7fb5-kb2f7
```
### 3. Deploy Pods
- Deploy the following three pods, each attached to specific networks:
- [cdn-server](./cdn-server.yaml) (CDN Server) attached to `v-network-1`
- [router](./router.yaml) (Router) connected to both `v-network-1` and `v-network-2`
- [content-server](./content-server.yaml) (Content Server) attached to `v-network-2`
```bash
kubectl create -f ./examples/cdn/cdn-server.yaml
```
```bash
kubectl create -f ./examples/cdn/content-server.yaml
```
```bash
kubectl create -f ./examples/cdn/router.yaml
```
### 4. Verify Intent Creation
- Examine the logs in the `l2sm-controller` to confirm that the intents for connecting the pods to their respective networks have been successfully created.
```bash
kubectl logs l2sm-controller-d647b7fb5-kb2f7
```
```bash
kubectl get pods
```
### 5. Inspect Content Server
- Enter the `content-server` pod and check its IP configuration.
- Start the server to serve the video content.
```bash
kubectl exec -it content-server -- /bin/bash
```
In the Content-Server pod, execute the following commands:
```bash
ip a s # Show IP addresses
```
```bash
ip r s # Display routing table
```
```bash
nginx # Start the server
```
### 6. Inspect CDN Server
- Enter the `cdn-server` pod and install the `curl` command to initiate communication with the content server.
- Check the IPs to ensure connectivity.
To test the connectivity from the cdn server:
```bash
kubectl exec -it cdn-server -- /bin/bash # Enter the CDN-Server pod
```
In the CDN pod, execute the following commands:
```bash
apk add curl # Install the curl cli
```
```bash
ip a s # Show IP addresses
```
```bash
ip r s # Display routing table
```
### 7. Perform Traceroute
- Execute a traceroute to observe the intermediate hops between the CDN server and the content server. The router should appear as a hop between them.
```bash
traceroute 10.0.2.2 # Trace route to content-server
```
### 8. Test Communication
- Perform a `curl` from the CDN server to the content server to initiate video retrieval.
```bash
curl http://10.0.2.2/big_buck_bunny.avi --output video.avi --limit-rate 2M # Download video
```
Note: leave this download running while performing the next steps.
### 9. Introduce Interruption
- Delete the pod for the router and observe that the video communication stops.
While the video is downloading, delete the router pod:
```bash
kubectl delete pod router
```
### 10. Restore Connection
- Restart the router pod and verify the reconnection of the `content-server` and `cdn-server`.
```bash
kubectl create -f ./examples/cdn/router.yaml
```
apiVersion: v1
kind: Pod
metadata:
  name: cdn-server
  labels:
    app: test4
  annotations:
    k8s.v1.cni.cncf.io/networks: v-network-1
spec:
  containers:
  - name: server
    command: ["/bin/ash", "-c", "ip a add 10.0.1.2/24 dev net1 && ip route add 10.0.2.0/24 via 10.0.1.1 dev net1 && trap : TERM INT; sleep infinity & wait"]
    image: alpine:latest
    securityContext:
      capabilities:
        add: ["NET_ADMIN"]
  #nodeName: test-l2sm-uc3m-polito-1