To communicate between pods residing on different member nodes, the master node and the member nodes must be able to communicate with each other. Even when all the nodes can ping each other, communication between the master and the members can still be stuck, because Kubernetes sets up its own network for communication between nodes. Pod communication succeeds only if the nodes can communicate with each other over this network. To connect pods, Kubernetes uses network plugins. These plugins integrate with the kubelet to enable pod communication and are sometimes also called add-ons. There are two flavors of network plugins available:
- CNI plugins: adhere to the appc/CNI specification, designed for interoperability.
- Kubenet plugin: implements a basic cbr0 bridge using the bridge and host-local CNI plugins
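To make the kubenet flavor concrete, a node-level CNI configuration is just a JSON file dropped into /etc/cni/net.d (for example /etc/cni/net.d/10-mynet.conf). The following is only a minimal sketch of what such a file could look like; the network name, the bridge name cbr0 and the subnet are example values, not something Kubernetes creates for you:

{
  "cniVersion": "0.3.1",
  "name": "mynet",
  "type": "bridge",
  "bridge": "cbr0",
  "isGateway": true,
  "ipMasq": true,
  "ipam": {
    "type": "host-local",
    "subnet": "10.244.1.0/24",
    "routes": [ { "dst": "0.0.0.0/0" } ]
  }
}

Here the bridge plugin attaches each pod to the cbr0 bridge, and the host-local plugin hands out addresses from the node's subnet.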
Note: Please make sure that you execute all the commands as the root user.
Before beginning, ensure that no member nodes are connected to your master node. Only after the network plugin has been set up should you connect the member nodes to the master node. If you have already joined member nodes to the master node, first reset kubeadm on the master node and on all the member nodes. To reset kubeadm, use:
kubeadm reset
For better and richer compatibility, CNI plugins should be used. Before installing a CNI plugin, we need a package called kubernetes-cni. This package should be installed on the master node and on all the member nodes. To install it, use:
apt-get install kubernetes-cni
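For example, on Ubuntu or Debian nodes that already have the Kubernetes apt repository configured, the package is typically installed like this on the master node and on every member node:

apt-get update
apt-get install -y kubernetes-cni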
Now you need to install a CNI plugin that will provide the pod networking. You can either apply a YAML file already saved on your computer or fetch one from the internet. Fetching it from the internet is better, because the plugin you use should be compatible with your kubeadm, kubectl and kubelet packages. The CNI plugin is to be installed only on the master node. To apply a plugin manifest from the internet, use:
kubectl apply -f <plugin-manifest-url>
If you want to use a YAML file available on your computer, use:
kubectl apply -f <path-to-plugin.yaml>
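As a concrete illustration, at the time of writing the Flannel plugin could be applied with the command below. The manifest URL belongs to the Flannel project and may change between releases, so check your chosen plugin's documentation for the current one:

kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml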
Your plugin will now be installed on the master node. Once this installation is done, you can join your member nodes to the master node. Remember that you reset kubeadm at the beginning, so your master node is not ready yet. To make your master node ready, run the following command on the master node:
kubeadm init
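Some plugins expect a specific pod network CIDR to be passed at initialization time. For instance, Flannel's default manifest assumes the cluster-wide pod network 10.244.0.0/16, so with Flannel the init step would look like the sketch below; other plugins document their own requirements:

kubeadm init --pod-network-cidr=10.244.0.0/16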
Now your master node is ready and you can connect member nodes to it. To join a member node, run the following on that node:
kubeadm join --token <token> <master-ip>:<master-port>
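When kubeadm init finishes on the master node, it prints the exact join command to use, including a freshly generated token and the master's address. A filled-in join command has roughly the following shape; every value here is a placeholder that must be replaced with the output of your own kubeadm init, and depending on your kubeadm version the CA certificate hash may also be required:

kubeadm join <master-ip>:6443 --token <token> --discovery-token-ca-cert-hash sha256:<hash>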
Your member nodes have now joined the master node and a cluster is formed. This cluster is a group of a single master node and multiple member nodes. To view the machines participating in your cluster, use:
kubectl get nodes
The above command also shows the status of each node. Initially the status of the nodes will be NotReady. After a few minutes their status will change to Ready. When the status changes to Ready, your pods can be scheduled.
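If you want to watch the nodes switch from NotReady to Ready without re-running the command, kubectl can keep the listing open and print changes as they happen:

kubectl get nodes --watch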
It is the responsibility of the master node to provide the network plugin to all the connected member nodes. The master node automatically configures the plugin on all the members, and the same plugin is used by all the nodes of a cluster.
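Most CNI plugins implement this by running as a DaemonSet, which schedules one plugin pod on every node. A quick way to confirm that the plugin has reached each member is to list the system pods along with the node they are running on:

kubectl get pods -n kube-system -o wide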
Some of the available networking options are:
Cilium: Cilium is open source software for providing and transparently securing network connectivity between application containers. Cilium is L7/HTTP aware and can enforce network policies on L3-L7 using an identity-based security model that is decoupled from network addressing.
Contiv: Contiv provides configurable networking (native L3 using BGP, overlay using VXLAN, classic L2, or Cisco-SDN/ACI) for various use cases. Contiv is completely open sourced.
Contrail: Contrail, based on OpenContrail, is a truly open, multi-cloud network virtualization and policy management platform. Contrail/OpenContrail is integrated with various orchestration systems such as Kubernetes, OpenShift, OpenStack and Mesos, and provides different isolation modes for virtual machines, containers/pods and bare metal workloads.
Flannel: Flannel provides a layer 3 IPv4 network between the nodes of a cluster, establishing the network between the master and member nodes.
Kube-router: Kube-router provides layer 4 load balancing using IPVS/LVS technology. Each Kubernetes Service type, such as ClusterIP, NodePort and LoadBalancer, is configured as an IPVS virtual service.
Nuage Networks VCS (Virtualized Cloud Services): VCS is a framework for datacenter and cloud networking. It allows you to automate the management, configuration and optimization of virtual networks. It also provides security services that allow tenant isolation and access control for individual workloads and applications. Nuage Networks VCS is designed for enterprises and service providers, and to make IT services more responsive it provides policy-based automation.
OVN (Open Virtual Networking): OVN supports virtual network abstraction with virtual L2 and L3 overlays. It does not require any agents, which simplifies deployment and debugging. It is designed to scale and can also fit into the Neutron model.
Project Calico: Project Calico is an open source networking and network policy engine. Calico is a highly scalable networking solution that connects Kubernetes pods based on the same IP networking principles as the internet. It can be deployed without overlays or encapsulation to provide high-scale, high-performance data center networking. Calico also provides intent-based network security policy for Kubernetes pods through its distributed firewall (a sample NetworkPolicy manifest is shown after this list). Calico can also run in policy enforcement mode alongside other networking solutions such as Flannel or native GCE networking.
Romana: Romana allows the management and control of network traffic by applying network policies. It supports both Kubernetes and OpenStack. Romana uses a RESTful API to receive a policy, which is then sent to the Romana agent running on every host in the cluster.
Weave Net from Weaveworks: Weave Net is a network for Kubernetes and its hosted applications. It can run stand-alone as well as a CNI plug-in, and it does not need any extra code or configuration to run. In both cases, the network provides an IP address to each pod, as is standard for Kubernetes.
CNI-Genie from Huawei: CNI-Genie lets Kubernetes use multiple network plugins such as Flannel, Calico, Romana and Weave Net. CNI-Genie also allows a single pod to have multiple IP addresses by supporting different networking plugins simultaneously.
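Several of the options above (Cilium, Calico, Romana and Weave Net, among others) can enforce the standard Kubernetes NetworkPolicy API. The manifest below is only a minimal sketch with made-up labels (app=backend and app=frontend): it restricts ingress to backend pods so that only frontend pods in the same namespace may reach them, and it takes effect only when a policy-capable plugin is installed:

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: backend-allow-frontend
  namespace: default
spec:
  podSelector:
    matchLabels:
      app: backend
  ingress:
  - from:
    - podSelector:
        matchLabels:
          app: frontend

As with any other manifest, it is applied with kubectl apply -f followed by the file name.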