Kubernetes (also known as k8s or “Kube”) is an open-source container orchestration platform that automates many of the manual processes involved in deploying, managing, and scaling containerized applications.
Why is it named “Kubernetes”?
Let’s look at where the name came from.
The name Kubernetes originates from Greek, meaning helmsman or pilot. K8s as an abbreviation results from counting the eight letters between the “K” and the “s”. Now, let’s understand the logic behind the Kubernetes logo.
Kubernetes and Docker – The “logo” connection
If we look at the Docker logo, we see a ship carrying containers.
And in the Kubernetes logo, we see a seven-spoked steering wheel, or helm.
Initially, the proposed name for Kubernetes was ‘Project 7’, but it was later renamed Kubernetes. Hence, the Kubernetes logo has seven spokes in its wheel. A wheel is used to steer a ship, and likewise the Kubernetes wheel steers the containers carried on the Docker ship.
Why do we need Kubernetes?
In earlier days, software was developed in a monolithic architecture. More recently, monolithic architecture is being replaced with microservices. In a microservices architecture, an application is split into services, and each service is deployed on its own server. However, giving every microservice a dedicated server leads to poor resource utilization. To overcome this, containers come into the picture: we can run multiple containers on a single server, with each container running one microservice, achieving much better resource utilization.
However, when deploying microservices in plain containers, the following challenges arise:
- Containers on different hosts cannot easily communicate with each other.
- Containers can be started and stopped by hand, but there is no built-in autoscaling or automatic load balancing to manage resources.
- It is difficult to manage the containers (and their resource usage) of every deployed application.
Here, Kubernetes comes into the picture: it controls the activities needed to create, scale, and manage multiple containers. Because it overcomes the challenges of working with containers directly, it is called a container orchestration (management) platform/tool. It was originally developed by Google, and its source code is written in Go. Orchestration here means coordinating any number of containers, possibly running across different machines and networks, as one cluster.
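To give a first taste of what this orchestration looks like in practice, below is a minimal sketch of a Deployment manifest that asks Kubernetes to keep three identical copies of a containerized service running. The name, labels, and image are placeholders chosen for illustration, not taken from the original text:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-app            # hypothetical name, for illustration only
spec:
  replicas: 3              # Kubernetes keeps three Pods running at all times
  selector:
    matchLabels:
      app: web-app
  template:
    metadata:
      labels:
        app: web-app
    spec:
      containers:
      - name: web
        image: nginx:1.25  # placeholder image; each replica runs one container
```

Applied with `kubectl apply -f deployment.yaml`, this tells Kubernetes the desired state; if a container crashes, Kubernetes starts a replacement on its own.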
Key Features of Kubernetes Architecture
Kubernetes helps to schedule, run, and manage isolated containers running on Physical, Virtual, or Cloud machines. Below are some features of Kubernetes:
- Autoscaling (Vertical and Horizontal)
- Load balancing
- Fault tolerance for node and Pod failure
- Platform independent (it supports physical, virtual, and cloud machines)
- Automated rollouts and rollbacks
- Batch execution (One time, Sequential, Parallel)
- Container Health monitoring
- Self-healing (if a container fails, Kubernetes automatically replaces the affected container to restore the desired state)
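The health-monitoring and self-healing features above are typically driven by probes. As a hedged sketch (the name, image, path, and port are illustrative assumptions), the Pod below declares a liveness probe; when the probe fails repeatedly, the node agent restarts the container:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: probe-demo          # illustrative name
spec:
  containers:
  - name: app
    image: nginx:1.25       # placeholder image
    livenessProbe:
      httpGet:
        path: /             # assumed health-check endpoint
        port: 80
      initialDelaySeconds: 5
      periodSeconds: 10     # checked every 10s; failures trigger a container restart
```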
Cloud-based K8s services:
- Google’s managed K8s service is Google Kubernetes Engine (GKE)
- Azure’s managed K8s service is Azure Kubernetes Service (AKS)
- Amazon’s managed K8s service is Amazon Elastic Kubernetes Service (EKS)
Important Facts to know about Kubernetes
- Kubernetes is fully open source.
- Kubernetes manages everything through code.
- Kubernetes enables cloud-native development.
- Kubernetes can be used on a single machine for development purposes.
- Kubernetes can set up Persistent Volumes for stateful applications.
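To illustrate the last point, here is a minimal PersistentVolumeClaim sketch (the claim name, size, and access mode are assumptions for illustration). A stateful application mounts such a claim so its data survives Pod restarts:

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: data-claim          # hypothetical claim name
spec:
  accessModes:
  - ReadWriteOnce           # volume is mounted read-write by a single node
  resources:
    requests:
      storage: 1Gi          # assumed size, for illustration
```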
A high-level overview of Kubernetes Architecture
Kubernetes follows a master-slave (server-client) architecture. It has a master (the Control Plane), which acts as the server, and one or more nodes, which act as clients (worker nodes). Each node can run one or more Pods, and containers are created inside Pods. We can create multiple containers in one Pod; however, the recommendation is one container per Pod. The application is deployed in the container. The Pod is the smallest unit Kubernetes works with, which means the master node always communicates with the Pod directly, never with the container.
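The Pod-wraps-container relationship described above can be sketched as the smallest possible manifest; the Pod name and image are placeholders:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: single-container-pod   # Pods are the unit the master manages
spec:
  containers:                  # one container per Pod is the common recommendation
  - name: app
    image: nginx:1.25          # placeholder image; the application runs here
```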
Refer below high-level pictorial representation for the same:
The components of the Kubernetes architecture are divided into the Control plane and the Worker node.
Control Plane components:
- API-Server: The API server is the front-end component of the Kubernetes control plane and exposes the Kubernetes API. It receives requests and forwards them to the appropriate component to accomplish the required action. The user applies a ‘.yaml’ or ‘.json’ manifest to the API server. The API server scales automatically with load to handle all requests.
- etcd: A key-value database that stores cluster metadata: the status of the cluster, the Pods, and their containers. etcd is a consistent and highly available store, and all access to it goes through the API server. etcd is a separate open-source project rather than code written as part of Kubernetes, but a Kubernetes cluster cannot function without it.
- Controller Manager: It reconciles the actual state of the cluster with the desired state, for example the containers inside each Pod. The Controller Manager communicates with the API server, and the API server in turn reads the current state from etcd.
- Kube Scheduler: This component decides where Pods run. It watches for newly created Pods (requested through the API server, for example by the Controller Manager) and assigns each one to a suitable worker node.
Worker Node components:
- Pod: This is the smallest atomic unit of Kubernetes. Kubernetes always works with Pods, not directly with containers, because containers can be created by various tools such as Docker, rkt (CoreOS Rocket), Podman, Hyper-V containers, and Windows containers. Once a Pod fails, it is never repaired; a new Pod must be created instead.
- Kube-Proxy: This component handles networking on each node. It maintains the network rules that let traffic reach Pods, for example routing a Service’s virtual IP to the Pods behind it.
- Kubelet: This is the agent running on each node, listening to the Kubernetes master. It keeps track of the containers in the node’s Pods and reports their state to the API server, which updates etcd; the Controller Manager then verifies that state and, where needed, performs the necessary action with the help of the Kube Scheduler.
- Container Engine: The container runtime is not part of Kubernetes itself; it is provided by platform products that use OS-level virtualization to deliver software in packages called containers. Whichever runtime is chosen must be installed on every node, including the master.
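Tying the networking pieces together: kube-proxy is what makes Service traffic reach Pods. The sketch below (the Service name, label, and ports are assumptions) defines a Service whose cluster virtual IP kube-proxy routes to any Pod labeled `app: web-app`:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: web-service         # hypothetical Service name
spec:
  selector:
    app: web-app            # assumed Pod label; traffic is balanced across matching Pods
  ports:
  - port: 80                # port exposed on the Service's cluster virtual IP
    targetPort: 80          # container port kube-proxy forwards traffic to
```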
So far, we have seen why we need Kubernetes, along with a high-level overview of its architecture. K8s is well worth learning: it removes much of the effort of deployment and helps with autoscaling and intelligent utilization of available resources. Since Kubernetes is a vast technology, it is impossible to cover everything in just one blog. To keep things simple, I will be publishing more blogs soon on k8s Deployments, Labels, Selectors, Networking, Replica Sets, Persistent Volumes, Namespaces, and Kubectl, among others.
Let’s stay connected and keep learning more on K8s.