Commit 2d48117c authored by Alex ubuntu vm

Examples: added one more example

Added a working example of the L2S-M usage, showcasing the benefits of using L2S-M in isolating Pods
# L2S-M examples

This section of the L2S-M documentation provides examples that you can use to learn how to create virtual networks and attach pods to them. Right now, two examples are shown. Firstly, there is [the ping-pong example](./ping-pong/): the most basic example, in which two pods are connected through an L2S-M virtual network and their connectivity is checked using the ping command.

# L2S-M Ping-Pong example

This example shows how to create virtual networks and attach pods to them. To do so, we are going to deploy a simple ping-pong application: two pods attached to a virtual network, whose connectivity we will then test. All the necessary descriptors can be found in the *'./examples/ping-pong/'* directory of this repository. This guide assumes that all commands are executed within the L2S-M directory. Feel free to use this tool in any scenario where it may help.

### Creating our first virtual network

First of all, let's see the details of an L2S-M virtual network. This is the descriptor corresponding to the virtual network that will be used in this example, ping-network:
```yaml
apiVersion: "k8s.cni.cncf.io/v1"
kind: NetworkAttachmentDefinition
metadata:
  name: ping-network
spec:
  config: '{
    "cniVersion": "0.3.0",
    "type": "dummy",
    "device": "l2sm-vNet"
  }'
```
As you can see, L2S-M virtual networks are [NetworkAttachmentDefinitions](https://github.com/k8snetworkplumbingwg/multus-cni/blob/master/docs/quickstart.md) from Multus. To define a new network, just change its name in the "metadata" field.
**Warning**: Do not change the config section from the descriptor; the *l2sm-vNet* is an abstract interface used by the L2S-M operator to manage the virtual networks in the K8s cluster.
To create the virtual network in your cluster, use the appropriate *kubectl* command as if you were building any other K8s resource:
```bash
kubectl create -f ./examples/ping-pong/network.yaml
```
Et voilà! You have successfully created your first virtual network in your K8s cluster.
### Deploying our application in the cluster
After creating our first virtual network, it is time to attach some pods to it. This is as simple as adding an annotation to your deployment/pod file, just like you would do when attaching to a Multus NetworkAttachmentDefinition. For example, to attach a deployment to ping-network, add the following annotation to the metadata of its descriptor:
```yaml
annotations:
  k8s.v1.cni.cncf.io/networks: ping-network
```
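For instance, a minimal pod descriptor carrying this annotation could look like the following sketch (the pod name and image here are illustrative; the actual descriptors used in this example live in *./examples/ping-pong/*):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: ping
  annotations:
    # Attach this pod to the L2S-M virtual network created above
    k8s.v1.cni.cncf.io/networks: ping-network
spec:
  containers:
  - name: ping
    image: alpine:latest
    command: ["sleep", "infinity"]
```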
If you want to add your own Multus annotations, you are free to do so! L2S-M will not interfere with the standard Multus behavior, so feel free to add your additional annotations if you need them.
To assist you with the deployment of your first application with L2S-M, you can use the pod definitions available in this repository. To deploy both "ping-pong" pods (simple Alpine containers), use the following commands:
```bash
kubectl create -f ./examples/ping-pong/ping.yaml
kubectl create -f ./examples/ping-pong/pong.yaml
```
After a bit of time, check that both pods were successfully instantiated in your cluster.
### Testing the connectivity
Once we have deployed the pods, let's add some IP addresses and make sure that we can connect with one another using the overlay. To do so, use the following commands to enter into the "ping" pod and check its interfaces:
```bash
kubectl exec -it [POD_PING_NAME] -- /bin/sh
ip a s
```
From the output of the last command, you should see something similar to this:
```bash
7: net1@if6: <BROADCAST,MULTICAST,M-DOWN> mtu 1450 qdisc noop state DOWN qlen 1000
    link/ether 16:79:4c:0c:d2:e8 brd ff:ff:ff:ff:ff:ff
```
This is the interface that we are going to use to connect to the virtual network. Therefore, we should first bring that interface up and assign an IP address to it (for example, 192.168.12.1/30):
```bash
ip link set net1 up
ip addr add 192.168.12.1/30 dev net1
```
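As a quick sanity check of why a /30 prefix is enough here, the arithmetic can be sketched in plain shell:

```shell
# A /30 subnet has 2^(32-30) = 4 addresses; the network and broadcast
# addresses are reserved, leaving 2 usable hosts -- exactly enough for
# the ping and pong pods (192.168.12.1 and 192.168.12.2).
prefix=30
total=$(( 1 << (32 - prefix) ))   # 4 addresses in total
usable=$(( total - 2 ))           # 2 usable host addresses
echo "usable hosts in a /$prefix: $usable"
```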
**WARNING:** Your pods must have the "[NET_ADMIN]" capability enabled to allow modifying interface status and/or IP addresses. If they do not, enable it by adding the following to the *securityContext* of your pod descriptor:
```yaml
securityContext:
  capabilities:
    add: ["NET_ADMIN"]
```
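Note that this snippet belongs inside the container entry of the pod spec. A sketch of the placement (the container name and image are illustrative):

```yaml
spec:
  containers:
  - name: ping
    image: alpine:latest
    securityContext:
      capabilities:
        add: ["NET_ADMIN"]   # allows ip link / ip addr changes inside the pod
```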
Do the same for your "pong" pod, with a different IP address (192.168.12.2/30):
```bash
kubectl exec -it [POD_PONG_NAME] -- /bin/sh
ip link set net1 up
ip addr add 192.168.12.2/30 dev net1
```
Check that they can ping each other using the ping command (e.g., from the "pong" pod):
```bash
ping 192.168.12.1
```
If the ping succeeds, congratulations! You are now able to deploy your applications attached to the virtual network "ping-network" in your K8s cluster. You will notice that the *TTL* of these packets is 64: this is because the pods see each other as if they were in the same broadcast domain (i.e., in the same LAN). You can further verify this by installing and using the *traceroute* command:
```bash
apk update
apk add traceroute
traceroute 192.168.12.1
```
One last test you can perform to see that the traffic is using the L2S-M overlay is to try the same ping through the main interface of the pod (eth0), which will not be able to reach the other pod:
```bash
ping 192.168.12.1 -I eth0
```
If you are done experimenting with the app, you can delete both pods from the cluster:
```bash
kubectl delete -f ./examples/ping-pong/ping.yaml
kubectl delete -f ./examples/ping-pong/pong.yaml
```
Secondly, there is the [cdn example](./cdn). In this example, two networks isolate a content-server, which stores a video, from the rest of the cluster. It is only accessible through a cdn-server, via a router pod placed between these two pods. This way, if the cluster or the cdn-server is exposed to any security risk, or if we want to apply our own firewall restrictions through a pod, we have more control over access to the content-server. Additionally, this section includes an L2S-M live demo showcasing this scenario.
# Example: Isolating an NGINX server from a CDN with Custom L2SM networks
## Overview
This example demonstrates the isolation of traffic between pods using custom networks with L2S-M. In this scenario, two networks, v-network-1 and v-network-2, are created, and three pods (cdn-server, router, and content-server) are connected to them. The objective is to show how traffic can be isolated through a router connecting the two networks.
## Topology
### Networks
- v-network-1
- v-network-2
### Pods
- **podA (CDN Server)**
- IP: 10.0.1.2
- Network: v-network-1
- **podB (Router)**
- Networks: v-network-1, v-network-2
- IP: 10.0.1.1 (net1) and 10.0.2.1 (net2)
- **podC (Content Server)**
- IP: 10.0.2.2
- Network: v-network-2
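Attaching the router to both networks is what makes this topology work; with Multus, this is expressed as a comma-separated list in the pod's annotation, as the router descriptor in this example does:

```yaml
metadata:
  name: router
  annotations:
    # One entry per network the pod should join
    k8s.v1.cni.cncf.io/networks: v-network-1, v-network-2
```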
## Procedure
1. **Show Nodes**
```bash
kubectl get nodes
```
2. **Show Pods**
```bash
kubectl get pods -o wide
```
3. **Show Networks**
```bash
kubectl get net-attach-def
```
4. **Operator Logs**
```bash
kubectl logs l2sm-operator-667fc88c57-p7krv
```
These logs show the creation of the networks and the pod attachments.
5. **Controller Logs**
```bash
kubectl logs l2sm-controller-d647b7fb5-kb2f7
```
These logs demonstrate the creation of the networks and the connections between pods.
6. **Enter CDN and Content-Server Pods**
To set up the server, enter it using the ``kubectl exec`` command:
```bash
kubectl exec -it content-server -- /bin/bash # Enter the content-server pod
```
In the Content-Server pod, execute the following commands:
```bash
ip a s # Show IP addresses
```
```bash
ip r s # Display routing table
```
```bash
nginx # Start the server
```
To test the connectivity from the cdn server:
```bash
kubectl exec -it cdn-server -- /bin/bash # Enter the cdn-server pod
```
In the CDN pod, execute the following commands:
```bash
ip a s # Show IP addresses
```
```bash
ip r s # Display routing table
```
```bash
traceroute 10.0.2.2 # Trace route to content-server
```
```bash
curl http://10.0.2.2/big_buck_bunny.avi --output video.avi --limit-rate 2M # Download video
```
While the video downloads, delete the router pod:
```bash
kubectl delete pod router
```
And watch how the traffic stops. You may resume the download by recreating the router:
```bash
kubectl create -f router.yaml
```
The new router pod will attach to the two desired networks, and traffic will start flowing again.
apiVersion: v1
kind: Pod
metadata:
  name: cdn-server
  labels:
    app: test4
  annotations:
    k8s.v1.cni.cncf.io/networks: v-network-1
spec:
  containers:
  - name: server
    command: ["/bin/ash", "-c", "ip a add 10.0.1.2/24 dev net1 && ip route add 10.0.2.0/24 via 10.0.1.1 dev net1 && trap : TERM INT; sleep infinity & wait"]
    image: alpine:latest
    securityContext:
      capabilities:
        add: ["NET_ADMIN"]
  #nodeName: test-l2sm-uc3m-polito-1
apiVersion: apps/v1
kind: Deployment
metadata:
  name: content-server
spec:
  selector:
    matchLabels:
      app: test4
  replicas: 1
  template:
    metadata:
      labels:
        app: test4
      annotations:
        k8s.v1.cni.cncf.io/networks: v-network-2
    spec:
      containers:
      - name: content-server
        image: alexdecb/video-server-test:1
        command: ["/bin/sh", "-c", "ip a add 10.0.2.2/24 dev net1 && ip route add 10.0.1.0/24 via 10.0.2.1 dev net1 && trap : TERM INT; sleep infinity & wait"]
        imagePullPolicy: Always
        securityContext:
          capabilities:
            add: ["NET_ADMIN"]
      #nodeName: test-l2sm-uc3m-polito-3
apiVersion: v1
kind: Pod
metadata:
  name: router
  labels:
    app: test4
  annotations:
    k8s.v1.cni.cncf.io/networks: v-network-1, v-network-2
spec:
  # securityContext:
  #   sysctls:
  #   - name: net.ipv4.ip_forward
  #     value: "1"
  containers:
  - name: router
    command: ["/bin/ash", "-c"]
    args: ["echo 'net.ipv4.ip_forward = 1' >> /etc/sysctl.conf && sysctl -p && ip addr add 10.0.1.1/24 dev net1 && ip addr add 10.0.2.1/24 dev net2 && trap : TERM INT; sleep infinity & wait"]
    image: alpine:latest
    securityContext:
      privileged: true
      capabilities:
        add: ["NET_ADMIN"]
  #nodeName: test-l2sm-uc3m-polito-2
apiVersion: "k8s.cni.cncf.io/v1"
kind: NetworkAttachmentDefinition
metadata:
  name: v-network-1
spec:
  config: '{
    "cniVersion": "0.3.0",
    "type": "dummy",
    "device": "l2sm-vNet",
    "custom-things": ["path-to","another-node"]
  }'
apiVersion: "k8s.cni.cncf.io/v1"
kind: NetworkAttachmentDefinition
metadata:
  name: v-network-2
spec:
  config: '{
    "cniVersion": "0.3.0",
    "type": "dummy",
    "device": "l2sm-vNet"
  }'
# Use the official Nginx image as the base image
FROM nginx:latest
# Set the working directory to /usr/share/nginx/html
WORKDIR /usr/share/nginx/html
# Copy the video file into the container
COPY big_buck_bunny.avi .
# Create an Nginx configuration file to serve the video
RUN echo "server {" > /etc/nginx/conf.d/default.conf \
&& echo " listen 10.0.2.2:80;" >> /etc/nginx/conf.d/default.conf \
&& echo " location / {" >> /etc/nginx/conf.d/default.conf \
&& echo " root /usr/share/nginx/html;" >> /etc/nginx/conf.d/default.conf \
&& echo " index big_buck_bunny.avi;" >> /etc/nginx/conf.d/default.conf \
&& echo " autoindex on;" >> /etc/nginx/conf.d/default.conf \
&& echo " types {" >> /etc/nginx/conf.d/default.conf \
&& echo " video/avi avi;" >> /etc/nginx/conf.d/default.conf \
&& echo " }" >> /etc/nginx/conf.d/default.conf \
&& echo " }" >> /etc/nginx/conf.d/default.conf \
&& echo "}" >> /etc/nginx/conf.d/default.conf
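
# For readability, the RUN chain above generates the following
# /etc/nginx/conf.d/default.conf:
#
#   server {
#       listen 10.0.2.2:80;
#       location / {
#           root /usr/share/nginx/html;
#           index big_buck_bunny.avi;
#           autoindex on;
#           types {
#               video/avi avi;
#           }
#       }
#   }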
RUN apt update && apt install -y iproute2
# Sleep indefinitely to keep the container running
CMD ["sleep", "infinity"]