Powering Edge With Kubernetes: A Primer

This article was originally published on ContainerJournal; we are republishing it here.

Kubernetes adoption in data centers and the cloud has been remarkable since its release in 2014. As an orchestrator of lightweight application containers, Kubernetes has emerged to handle and schedule diverse IT workloads, from virtualized network functions to AI/ML jobs running on GPU hardware.

Kubernetes’ core capabilities have improved steadily as IT workloads have grown more diverse and new technologies have been introduced. Kubernetes is now being adopted at the edge of the infrastructure, where nodes have limited capacity and rely on a connection to the central cloud to process the data generated by IoT devices.

Kubernetes has become a de facto standard for enterprises scaling up their IT infrastructure to achieve cloud-native capabilities.


Download our ebook, A Deep-Dive On Kubernetes For Edge, which focuses on current adoption scenarios of Kubernetes for edge use cases, the latest Kubernetes-plus-edge case studies, deployment approaches, commercial solutions and efforts by open source communities.


However, there are challenges around resource and workload management in edge-based infrastructure, as there can be thousands of edge and far-edge nodes to manage. Greater centralized control from the cloud, consistent security policies and low latency are basic expectations for edge infrastructure deployed by enterprises and telecom service providers alike. Let us look at why and how Kubernetes can help overcome these challenges.

Why Kubernetes for Edge?

Edge nodes add another layer to the IT infrastructure that enterprises and service providers run alongside their on-premises and cloud data center architecture. It is therefore imperative for admins to manage workloads at the edge in the same dynamic, automated way as on-premises or in the cloud.

Figure 1: Kubernetes for Edge

Additionally, the whole architecture spans different types of computing hardware and software applications. Kubernetes is a good fit because it is infrastructure-agnostic and can seamlessly manage a diverse set of workloads across different compute resources.

In such edge-based environments, Kubernetes can orchestrate and schedule workloads all the way from the cloud to edge data centers. It can also help manage and deploy configurations to edge devices alongside cloud configurations.

In a typical edge and IoT architecture, analytics and control-plane services reside in the cloud. Because operations and data flow from the cloud to edge devices and back, a common operational paradigm is needed for the automated processing and execution of instructions. Kubernetes provides that common paradigm across all network deployments, so policies and rulesets can be applied to the overall infrastructure; policies can also be narrowed down to specific channels or edge nodes with particular configuration requirements. Kubernetes provides horizontal scaling for infrastructure and application deployment; enables high availability and a common platform for rapid innovation from cloud to edge; and, more importantly, readies edge nodes for low-latency application access from IoT devices.
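As an illustration of narrowing policies down to specific edge nodes, a deployment could be scoped with node labels and a nodeSelector. This is only a sketch; the node name, label key and container image below are illustrative, not from the article:

```yaml
# First, label an edge node (illustrative name):
#   kubectl label node edge-node-01 node-role/edge=true

apiVersion: apps/v1
kind: Deployment
metadata:
  name: iot-ingest                 # hypothetical workload name
spec:
  replicas: 2
  selector:
    matchLabels:
      app: iot-ingest
  template:
    metadata:
      labels:
        app: iot-ingest
    spec:
      nodeSelector:
        node-role/edge: "true"     # schedule only onto labeled edge nodes
      containers:
      - name: ingest
        image: example.com/iot-ingest:1.0   # placeholder image
```

The same labeling scheme lets one ruleset apply cluster-wide while individual workloads are pinned to particular edge sites.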

Another critical requirement for edge environments is high availability of the services deployed there, which is a key consideration when deciding to use Kubernetes for edge orchestration. Kubernetes ships with monitoring and health tracking via its APIs and maintains interconnection among all cluster nodes. Moreover, failed containers can be restarted or replaced quickly, making services highly resilient.

How Can Kubernetes Be Used in Edge Architecture?

In March 2019, a Kubernetes IoT Edge Working Group was formed, made up of engineers from Red Hat, VMware, Futurewei, Google, Edgeworx and others. The group was formed as a community to discuss, design and document efforts to use Kubernetes for edge and IoT use cases and to solve the challenges around them.

According to the group’s presentation at KubeCon Europe 2019, there are three approaches for using Kubernetes in edge-based architectures to handle workload and resource deployments.

Let’s discuss all three approaches.

The basic Kubernetes architecture is something like this:

Figure 2: Kubernetes Architecture

A Kubernetes cluster consists of a master and nodes. The master is responsible for exposing the API to developers and for scheduling deployments across the cluster’s nodes. Each node runs a container runtime environment (such as Docker or rkt); an agent called the kubelet, which communicates with the master; and pods, which are collections of one or more containers. A node can be a virtual machine in the cloud or a physical machine.
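As a minimal illustration of these building blocks, a pod grouping two containers might look like the following sketch; the names and images are placeholders:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: sensor-gateway              # hypothetical pod name
spec:
  containers:
  - name: app                       # main application container
    image: example.com/sensor-app:1.0
  - name: log-forwarder             # sidecar sharing the pod's network and volumes
    image: example.com/log-fwd:1.0
```

Both containers are scheduled together onto one node and share the pod’s network namespace, which is what makes a pod the basic unit of deployment.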

For edge-based scenarios, the three approaches are as follows.

  1. The first approach is whole clusters at the edge. In this approach, an entire Kubernetes cluster is deployed within the edge nodes. This option is ideal when the edge site has limited capacity, perhaps only a single server machine, and cannot afford to spend extra resources on the control plane and nodes. K3s is the reference architecture suited to this type of solution. K3s is wrapped in a simple package that reduces the dependencies and steps needed to run a production Kubernetes cluster, making the clusters lightweight and better able to run on edge nodes.

    Figure 3: Rancher K3s as a reference
  2. The second approach to using Kubernetes for edge, exemplified by KubeEdge, is based on Huawei’s IoT edge platform, Intelligent Edge Fabric (IEF). In this approach, the control plane resides in the cloud (either a public cloud or a private data center) and manages edge nodes that run the containers and resources. This architecture supports different hardware resources at the edge and optimizes edge resource utilization, which can significantly reduce setup and operational costs for edge cloud deployments.

    Figure 4: KubeEdge as a reference
  3. The third option is hierarchical cloud plus edge, which uses the Virtual Kubelet as its reference architecture. A virtual kubelet resides in the cloud and holds an abstraction of the nodes and pods deployed at the edge, giving the cloud supervisory control over edge nodes and their containers. Using virtual kubelets allows flexible resource consumption in edge-based architectures.

    Figure 5: Virtual Kubelet as a reference
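As a concrete sketch of the first approach, K3s can be slimmed down further through its server configuration file. This assumes K3s’s config file format, and the exact set of disableable components varies by K3s version:

```yaml
# /etc/rancher/k3s/config.yaml — read by the k3s server at startup
disable:
  - traefik            # skip the bundled ingress controller
  - metrics-server     # skip metrics collection on very constrained nodes
node-label:
  - "node-role/edge=true"   # tag this node as an edge node at registration
```

Trimming optional components like this is how a full cluster stays small enough to fit on a single edge machine.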

There are infrastructure-, control plane- and data plane-related challenges in using Kubernetes for edge and IoT use cases. These include how to manage resources and workloads at the edge, and how edge sites communicate with the cloud and with one another.

Summary

Kubernetes adds many enhancements and features for edge-based network infrastructure.

  • Streamlines workload and resource management using policy-based scheduling.
  • Adds security and networking features.
  • Enables auto-scaling and traffic shaping for better resource utilization and workload prioritization.
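
The auto-scaling point above can be sketched with a HorizontalPodAutoscaler; the target deployment name is illustrative, and the replica bounds would be tuned to the edge site’s capacity:

```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: iot-ingest-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: iot-ingest          # hypothetical deployment to scale
  minReplicas: 1
  maxReplicas: 5              # cap growth on resource-constrained edge nodes
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 70   # scale out above 70% average CPU
```

Capping maxReplicas is the main design choice at the edge: it lets workloads absorb bursts of IoT traffic without starving other services on the same small node pool.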

Apart from the Kubernetes IoT Edge Working Group community, many companies have key developments in progress to integrate and harness the power of Kubernetes for edge and IoT. I will cover more details about Kubernetes for edge in upcoming articles.

 