Unwanted pod-scheduling behaviour on virtual node
When I add a node to the (kind) cluster, I see incoming pod-scheduling requests (e.g. for `docker.io/bitnami/multus-cni:4.0.2` or `hecodeco/swm-node-daemon:2.0.1`).

The new node has the following taints, which should require a fairly specific toleration:
```
Taints: kubernetes.io/arch=Armv7:NoExecute
        kubernetes.io/os=Bluenet:NoExecute
        node.kubernetes.io/unreachable:NoExecute
        kubernetes.io/arch=Armv7:NoSchedule
        kubernetes.io/os=Bluenet:NoSchedule
        node.kubernetes.io/unreachable:NoSchedule
        virtual-kubelet.io/provider=sphere:NoSchedule
```
The `qostest-multus-cni` pod has the following tolerations in its description:
```
Tolerations: :NoSchedule op=Exists
             :NoExecute op=Exists
             node.kubernetes.io/disk-pressure:NoSchedule op=Exists
             node.kubernetes.io/memory-pressure:NoSchedule op=Exists
             node.kubernetes.io/network-unavailable:NoSchedule op=Exists
             node.kubernetes.io/not-ready:NoExecute op=Exists
             node.kubernetes.io/pid-pressure:NoSchedule op=Exists
             node.kubernetes.io/unreachable:NoExecute op=Exists
             node.kubernetes.io/unschedulable:NoSchedule op=Exists
```
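As far as I can tell, the first two entries are wildcard tolerations: an empty key with `operator: Exists` tolerates every taint with the given effect, so these pods tolerate all of my node's `NoSchedule` and `NoExecute` taints regardless of key or value. I assume the DaemonSet spec contains something like this (a sketch, not copied from the actual manifest):

```yaml
# Sketch of a wildcard toleration block as it might appear in the
# DaemonSet's pod spec (assumed, not the actual chart contents).
# An empty key with operator Exists matches every taint with the
# stated effect, including the arch/os/provider taints on my node.
tolerations:
  - operator: Exists
    effect: NoSchedule
  - operator: Exists
    effect: NoExecute
```

Since taints can only repel pods that do not tolerate them, these wildcard tolerations make the taints on my node ineffective for such DaemonSets.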
Is there another way I can prevent pods from being scheduled on my (virtual-kubelet) node?
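For context, the workaround I am considering is patching the affected DaemonSets with a node anti-affinity rule; a minimal sketch, assuming the node carries the conventional `type=virtual-kubelet` label (which I have not yet verified on my provider):

```yaml
# Sketch of a nodeAffinity patch that would keep a DaemonSet off the
# virtual node. Assumes the node is labelled type=virtual-kubelet,
# which many virtual-kubelet providers set by default.
affinity:
  nodeAffinity:
    requiredDuringSchedulingIgnoredDuringExecution:
      nodeSelectorTerms:
        - matchExpressions:
            - key: type
              operator: NotIn
              values:
                - virtual-kubelet
```

This works regardless of tolerations, since affinity is evaluated separately from taints, but it requires editing every workload, which is why I am asking whether there is a node-side option instead.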
Edited by tymon jonge