Details
- Type: Bug
- Resolution: Unresolved
- Priority: High
- Labels: None
- Environment: Kubernetes v1.21.0
Description
ovn4nfv is deployed as a secondary CNI along with Multus. The steps used and the issue observed are as follows:
- Deploy flannel as the primary CNI in the k8s cluster.
- Deploy the Multus CNI. After this, the CNI files installed on the host are as below:
$ ls /etc/cni/net.d/
00-multus.conf  10-flannel.conflist  multus.d
- Deploy the ovn4nfv CNI. Now the folder "/etc/cni/net.d/" has the following files/folders:
00-multus.conf  00-network.conf  10-flannel.conflist  multus.d  ovn4nfv-k8s.d
- $ sudo cat /etc/cni/net.d/00-multus.conf
{
  "cniVersion": "0.3.1",
  "name": "multus-cni-network",
  "type": "multus",
  "kubeconfig": "/etc/cni/net.d/multus.d/multus.kubeconfig",
  "delegates": [
    {
      "name": "ovn4nfv-k8s-plugin",
      "type": "ovn4nfvk8s-cni",
      "cniVersion": "0.3.1"
    }
  ]
}
- $ sudo cat /etc/cni/net.d/00-network.conf
{
"name": "ovn4nfv-k8s-plugin",
"type": "ovn4nfvk8s-cni",
"cniVersion": "0.3.1"
}
The ovn4nfv network conf (00-network.conf) sorts alphabetically before the flannel network configuration (10-flannel.conflist), so Multus picks up the ovn4nfv configuration as its default delegate instead of flannel (see the sketch below).
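For comparison, this is a minimal sketch of what 00-multus.conf would be expected to contain if Multus had taken flannel (the primary CNI) as its default delegate. The delegate body is assumed from a typical 10-flannel.conflist and is not taken from this cluster; it is only illustrative:
{
  "cniVersion": "0.3.1",
  "name": "multus-cni-network",
  "type": "multus",
  "kubeconfig": "/etc/cni/net.d/multus.d/multus.kubeconfig",
  "delegates": [
    {
      "name": "cbr0",
      "cniVersion": "0.3.1",
      "plugins": [
        { "type": "flannel", "delegate": { "hairpinMode": true, "isDefaultGateway": true } },
        { "type": "portmap", "capabilities": { "portMappings": true } }
      ]
    }
  ]
}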
Due to this, new pods are assigned IP addresses from the ovn4nfv subnet instead of the flannel pod subnet, as shown below:
$ kubectl get pods -A -o wide
NAMESPACE NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
default busy-deployment-5bc54b854c-h8jns 1/1 Running 0 4s 10.158.142.6 kube-three <none> <none>
kube-system coredns-558bd4d5db-4ltb8 1/1 Running 0 8m29s 10.244.0.3 kube-two <none> <none>
kube-system coredns-558bd4d5db-c4tsd 1/1 Running 0 8m29s 10.244.0.2 kube-two <none> <none>
kube-system etcd-kube-two 1/1 Running 0 8m45s 192.168.20.52 kube-two <none> <none>
kube-system kube-apiserver-kube-two 1/1 Running 0 8m45s 192.168.20.52 kube-two <none> <none>
kube-system kube-controller-manager-kube-two 1/1 Running 0 8m45s 192.168.20.52 kube-two <none> <none>
kube-system kube-flannel-ds-l7gfn 1/1 Running 0 5m26s 192.168.20.53 kube-three <none> <none>
kube-system kube-flannel-ds-s9nsm 1/1 Running 0 8m29s 192.168.20.52 kube-two <none> <none>
kube-system kube-multus-ds-5gqk7 1/1 Running 0 8m29s 192.168.20.52 kube-two <none> <none>
kube-system kube-multus-ds-894r5 1/1 Running 0 5m25s 192.168.20.53 kube-three <none> <none>
kube-system kube-proxy-bdvtb 1/1 Running 0 5m26s 192.168.20.53 kube-three <none> <none>
kube-system kube-proxy-cvp8b 1/1 Running 0 8m29s 192.168.20.52 kube-two <none> <none>
kube-system kube-scheduler-kube-two 1/1 Running 0 8m45s 192.168.20.52 kube-two <none> <none>
kube-system nfn-agent-9grcw 1/1 Running 0 7m48s 192.168.20.52 kube-two <none> <none>
kube-system nfn-agent-kwmd5 1/1 Running 0 5m25s 192.168.20.53 kube-three <none> <none>
kube-system nfn-operator-6cf6bf57c6-6cks4 1/1 Running 0 7m49s 192.168.20.52 kube-two <none> <none>
kube-system ovn-control-plane-846b975c8-z4gpw 1/1 Running 0 8m9s 192.168.20.52 kube-two <none> <none>
kube-system ovn-controller-h8c96 1/1 Running 0 5m25s 192.168.20.53 kube-three <none> <none>
kube-system ovn-controller-md2hh 1/1 Running 0 8m9s 192.168.20.52 kube-two <none> <none>
kube-system ovn4nfv-cni-fxptw 1/1 Running 0 7m48s 192.168.20.52 kube-two <none> <none>
kube-system ovn4nfv-cni-x7qcl 1/1 Running 0 5m25s 192.168.20.53 kube-three <none> <none>
The issue seems to be in the "entrypoint" file, where the filename for the network conf is hard-coded to 00-network.conf:
OVN4NFV_NET_CONF_FILE="/tmp/ovn4nfv-cni/00-network.conf"
CNI_CONF_DIR="/host/etc/cni/net.d"
cp -f $OVN4NFV_NET_CONF_FILE $CNI_CONF_DIR
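A possible workaround (a sketch only, not the project's actual fix) would be to copy the conf under a filename that sorts after the primary CNI's configuration, so that kubelet and Multus keep flannel as the default network. The 90- prefix below is an assumption chosen purely for illustration:
# Sketch: install the ovn4nfv conf so it sorts after 10-flannel.conflist.
# The target filename 90-ovn4nfv-network.conf is illustrative, not the real fix.
OVN4NFV_NET_CONF_FILE="/tmp/ovn4nfv-cni/00-network.conf"
CNI_CONF_DIR="/host/etc/cni/net.d"
cp -f "$OVN4NFV_NET_CONF_FILE" "$CNI_CONF_DIR/90-ovn4nfv-network.conf"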