Virtual Machines or Containers. Which is Better in NFV Infrastructure?

“An efficient telecommunications network is the foundation upon which an information society is built.” Aptly put by Talal Abu-Ghazaleh, this quote captures what telecommunications means to the world. In today’s fast-paced world, seamless connectivity is no longer a luxury but a necessity. It underpins not just everyday communication, but the global exchange of information that helps businesses thrive in the digital era.

As the world demands more connectivity, telecommunications continues to grow rapidly. Network Function Virtualization (NFV) has emerged as a crucial enabler of more flexible and scalable services.

However, the million-dollar question businesses struggle with when designing NFV infrastructure is choosing the right virtualization technology, and the choice is between Virtual Machines (VMs) and containers. Each technology offers distinct advantages and trade-offs that can significantly impact performance, cost, and operational efficiency.

Virtual machines have been a staple of virtualization, offering strong isolation and security and serving as a trusted choice for many network functions. However, they come with the overhead of a full operating system, which can consume valuable resources. On the other hand, containers provide lightweight virtualization with faster boot times and better resource utilization, making them an attractive option for environments where efficiency and scalability are crucial.

But which is better suited for NFV infrastructure? Should you stick with the tried-and-tested VMs, or is it time to adopt containers for your network needs? This blog delves into the strengths and weaknesses of both technologies, helping you make an informed decision for your NFV strategy. Whether you are an architect, engineer, or decision-maker, understanding these nuances will guide you in optimizing your network functions for the future.

Decoding Virtual Machines

The image below gives a brief idea about the basic architecture of Virtual Machines.

Figure 1: Virtual Machine Architecture Overview

The image illustrates a standard Virtual Machine (VM) architecture. At the bottom is the hardware layer, which includes physical resources such as the CPU and RAM. Above it is the host operating system, followed by a hypervisor that manages multiple VMs. Each VM runs its own guest operating system and applications independently.

What are Virtual Machines or VMs?

Virtual Machines are instances of an Operating System (OS) that run side by side on a physical machine via a hypervisor. Each VM has its own OS, memory, and related resources, isolated from the other VMs on the same physical machine. This setup lets several operating systems run simultaneously on the same physical hardware without interfering with one another. VMs are created and managed by hypervisor software: the hypervisor manages the physical computer’s resources and allocates them to the VMs.
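
To make the hypervisor’s role concrete, here is a minimal sketch using the libvirt Python bindings to connect to a local KVM/QEMU hypervisor and boot a VM from an XML definition. This is an illustrative assumption, not part of the original article: the connection URI, the file name `vnf-vm.xml`, and the VM’s contents are hypothetical placeholders.

```python
# Minimal sketch with the libvirt Python bindings (pip install libvirt-python).
# Assumes a local KVM/QEMU hypervisor; "vnf-vm.xml" is a hypothetical placeholder
# containing the guest definition (OS disk, vCPUs, memory, NICs).
import libvirt

def start_vm(xml_path: str) -> None:
    conn = libvirt.open("qemu:///system")    # connect to the local hypervisor
    try:
        with open(xml_path) as f:
            xml_desc = f.read()              # guest OS, vCPUs, memory, disks, NICs
        dom = conn.defineXML(xml_desc)       # register the VM with the hypervisor
        dom.create()                         # boot the guest operating system
        print(f"Started VM '{dom.name()}' (id {dom.ID()})")
    finally:
        conn.close()

if __name__ == "__main__":
    start_vm("vnf-vm.xml")
```

The key point the sketch illustrates is that every VM carries its own guest OS, which the hypervisor must schedule onto the physical CPU and memory.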

Decoding Containers

The image below gives a brief overview of the basic architecture of Containers.

Figure 2: Container Architecture in Microservices: Layers and Services

The diagram depicts a containerization architecture. At the bottom, there’s the physical hardware layer, followed by the host operating system. Above that, a container engine manages multiple containers, each running different services and their dependencies. This setup enables multiple applications to run in isolated environments on the same machine while sharing the host OS.

What are Containers?

Containers are lightweight, portable, self-contained executable units that package a software application together with its dependencies. They are used to deploy and run applications consistently across environments such as development, staging, and production. Deploying containers from an image through an orchestration platform like Kubernetes provides a scalable way to manage and run them.

Compared to traditional virtualization methods, containers offer several advantages. Being more lightweight and portable than virtual machines (VMs), containers facilitate the decomposition of a monolith into microservices. Additionally, they are faster to manage and deploy, which can result in time and cost savings during application deployment.
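
As a hedged illustration of “deploying containers from an image with Kubernetes,” the sketch below uses the official Kubernetes Python client to create a small Deployment. It assumes a reachable cluster and a local kubeconfig; the deployment name, image, replica count, and namespace are hypothetical examples.

```python
# Minimal sketch with the official Kubernetes Python client (pip install kubernetes).
# The deployment name, image, and namespace below are hypothetical examples.
from kubernetes import client, config

def deploy_container(name: str, image: str, namespace: str = "default") -> None:
    config.load_kube_config()                      # use the local kubeconfig
    deployment = client.V1Deployment(
        metadata=client.V1ObjectMeta(name=name),
        spec=client.V1DeploymentSpec(
            replicas=2,                            # two identical container instances
            selector=client.V1LabelSelector(match_labels={"app": name}),
            template=client.V1PodTemplateSpec(
                metadata=client.V1ObjectMeta(labels={"app": name}),
                spec=client.V1PodSpec(
                    containers=[client.V1Container(name=name, image=image)]
                ),
            ),
        ),
    )
    client.AppsV1Api().create_namespaced_deployment(namespace=namespace, body=deployment)

deploy_container("demo-app", "nginx:1.25")
```

Note that the same image runs unchanged in development, staging, and production clusters, which is what makes containers so portable across environments.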

Key Differences Between Containers and Virtual Machines

| Feature | Container | Virtual Machine |
|---|---|---|
| Operating System | Utilizes the host operating system’s kernel | Includes a full operating system with its own kernel |
| Portability | Highly portable across different environments | Less portable due to reliance on full OS virtualization |
| Speed | Quick to launch and shut down | Takes longer to start and stop due to the full OS boot process |
| Resource Usage | Lightweight and efficient, using fewer resources | More resource-intensive, requiring dedicated hardware resources |
| Isolation | Provides process-level isolation, ideal for microservices | Offers strong isolation, suitable for running multiple OS instances |
| Security | Lower level of isolation; more dependent on host security | Higher security with complete isolation from other systems |
| Scalability | Great for rapidly scaling applications in dynamic environments | Suited for scenarios requiring stable, consistent environments |
| Maintenance | Easier updates and management due to shared OS layers | Requires individual management of each VM and its OS |

The influence of cloud-native applications, primarily orchestrated with Kubernetes, was evident in VMware’s announcements at VMworld 2019, which demonstrated to the IT community that the emphasis is now moving from virtualization to containerization. Adopting cloud-native technologies and migrating workloads to Kubernetes clusters has become a major industry trend. To know more, check out our blog on How is Kubernetes Leading the Game in Enabling NFV for Cloud-Native.

The Evolution of NFV: From Virtual Machines to Containers in 5G Networks

Since Network Function Virtualization (NFV) was officially proposed by the European Telecommunications Standards Institute (ETSI), the NFV ecosystem has developed rapidly through the contributions of many large vendors and open communities. For a long time, however, NFV remained relatively immature due to the lack of common information and deployment models and of clearly specified guidelines. More recently, telecom network providers, together with leading network solution vendors, have started testing 5G networks that are largely based on NFV/SDN (Network Function Virtualization / Software-Defined Networking) technologies. Early NFV deployments comprised the MANO (Management and Orchestration) layer, the NFV infrastructure (NFVI), and the core part of NFV, the VNFs (Virtual Network Functions), and they relied on virtual machines to host those VNFs. Since NFV is a backbone technology for 5G networks, it demands large-scale deployment with agility, portability, scalability, a high degree of automation, and low overheads, along with minimal CAPEX and OPEX for telecom service providers.

Given these demands for efficiency and scalability, the limitations of traditional virtual machine-based approaches became apparent. This is where container technology came to the rescue and brought many benefits to NFV infrastructure. Containers have long existed in Linux in the form of LXC, but with the emergence of Docker the technology took off, and with the Kubernetes orchestration engine containers can now be easily integrated and managed across small to very large deployments in most use cases.

Virtualization is a mature, proven technology that drove huge innovation at the data center level, enabling businesses to introduce a variety of digital products and services. But as telecom service providers started planning NFV-based 5G networks, virtual machine deployments were no longer enough, given the scale involved, the demand for agility in service launches, and the low-latency requirements of high-end technologies such as machine learning, real-time analytics, autonomous cars, and virtual and augmented reality products.

For this reason, the software virtualization approach for the telecom use case must differ from earlier enterprise scenarios, where such agility and performance mattered far less.

Let’s compare the features offered by both virtual machines and containers for the NFV use case.

  • Resource overheads: Virtual machines provide hardware- or system-level virtualization, whereas containers provide OS-level virtualization. Running a virtual machine requires a guest OS and a hypervisor on top of the server OS, so only a limited number of isolated VNFs can be hosted on a single server, and extra resources (CPU, memory, etc.) are consumed by the guest OSes. On top of that, a single network service typically chains multiple VNFs together, multiplying this overhead. Meanwhile, powering 5G networks with NFV already involves huge infrastructure rebuilding at the core and edge of telecom networks.
  • Because of this, telecom network providers expect to generate high throughput from fewer resources by running the maximum number of VNFs per server. Containers help here: they are lightweight packages bundled with all the dependencies needed to run a single native application, and as image files they can be easily ported across operating system environments. Containers are deployed on a host OS, with a container orchestration engine installed on top of it to manage the lifecycle of the individual, isolated VNFs. Because the setup is OS-level, far fewer computing resources are needed to run the VNFs.
  • Faster Deployment & Portability: Containers include only the binaries and dependencies required to run the application, along with the information needed to interconnect with other containers and the host system. The resulting package is much smaller than a virtual machine image, which encapsulates an entire OS along with the application environment.

    Because of this lightweight nature, containers can be moved and deployed easily in NFV infrastructure. This helps telecom network providers launch new services much faster and stay ahead of the competition.

  • Security: VNFs hosted on virtual machines have proven to be more secure because they are isolated by the hypervisor at the system level. Containers have lagged in security because they share the same OS kernel: a malfunction in the host OS threatens all container-based VNFs sharing that kernel. However, thanks to recent developments in the IT community, kernel security mechanisms such as SELinux and AppArmor can help secure container-based VNFs.
  • Scalability: The ability to autoscale is a key requirement for telecom network providers so they can absorb sudden consumer demand for resources and services, and the NFV infrastructure must keep up to provide consistent service. Because containers are lightweight, the resources needed to run container-based VNFs can easily be scaled on demand at the MANO layer, where the Kubernetes container orchestration engine is deployed as the VIM component (a minimal autoscaling sketch follows this list).

    Kubernetes provides automation and dynamic orchestration for containers at the MANO layer of the NFV infrastructure, whereas provisioning resources to virtual machines takes longer than provisioning containers.

  • Resiliency: When a VNF package fails, the whole service formed from multiple chained VNFs may be affected. Again, thanks to their lightweight nature and Kubernetes-based management, container-based VNFs can be quickly debugged and replaced. Kubernetes provides self-healing features such as auto-placement, restart, and replacement by using service discovery and continuous monitoring of the NFV infrastructure. A virtual machine, in contrast, takes longer to reboot or to spin up as a replacement and return to a working state.
  • Agility: Multiple VNFs are chained together to form a new service, and in some cases telecom network providers need to launch new services or upgrade existing ones dynamically at runtime. Because containers run directly on the host OS, it is easy to push instructions to deploy new container-based VNFs. With virtual machines, dynamic service enablement takes longer because instructions must pass through the hypervisor and each VM’s guest OS to reach the hosted VNF.
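
As referenced in the Scalability point above, the sketch below shows how a Kubernetes HorizontalPodAutoscaler could let a container-based VNF scale with demand. It uses the official Python client; the deployment name, namespace, and thresholds are hypothetical, and integration with the wider MANO stack is out of scope here.

```python
# Hedged sketch: attach a HorizontalPodAutoscaler to an existing VNF Deployment
# so replicas grow and shrink with CPU load. Names and thresholds are hypothetical.
from kubernetes import client, config

def autoscale_vnf(deployment: str, namespace: str = "vnf") -> None:
    config.load_kube_config()
    hpa = client.V1HorizontalPodAutoscaler(
        metadata=client.V1ObjectMeta(name=f"{deployment}-hpa"),
        spec=client.V1HorizontalPodAutoscalerSpec(
            scale_target_ref=client.V1CrossVersionObjectReference(
                api_version="apps/v1", kind="Deployment", name=deployment
            ),
            min_replicas=2,                        # keep a baseline for resiliency
            max_replicas=20,                       # cap growth during traffic spikes
            target_cpu_utilization_percentage=70,  # scale out above 70% CPU
        ),
    )
    client.AutoscalingV1Api().create_namespaced_horizontal_pod_autoscaler(
        namespace=namespace, body=hpa
    )

autoscale_vnf("upf-vnf")
```

The autoscaler also reinforces the resiliency point: failed pods are replaced automatically, while replica counts track demand without manual intervention.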

Cloud Native Approach in NFV with Container-based VNFs

Cloud-native application development is a trend the IT ecosystem has embraced to enable faster service launches for enterprises that want to stay ahead of the competition. A cloud-native approach promotes a DevOps methodology by integrating continuous integration (CI) and continuous deployment (CD) into the application release cycle. As telecom networks adopt NFV, it becomes imperative for them to switch to a cloud-native approach to shorten time to market when launching or upgrading VNF-based services. Telecom network providers can take advantage of this approach because VNFs can be deployed and scaled within containers, which gives more agility in service launches and upgrades.

To enable a cloud-native approach, VNFs can be decomposed into microservices hosted in different containers, automatically scaled, and intercommunicating through well-defined APIs. Kubernetes can handle the orchestration and control operations needed to manage this range of microservices. Upgrading a service then takes less time because only the affected microservices need to be updated, and CI/CD practices can be applied since the VNFs are decomposed into smaller chunks.
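
To illustrate the “update only the affected microservice” idea, the hedged sketch below patches the container image of one microservice Deployment, letting Kubernetes perform a rolling update while the rest of the decomposed VNF keeps running. The deployment name, container name, namespace, and image tag are hypothetical.

```python
# Hedged sketch: roll out a new image for one microservice of a decomposed VNF.
# Kubernetes replaces its pods gradually; other microservices are untouched.
# The deployment name, container name, namespace, and image tag are hypothetical.
from kubernetes import client, config

def upgrade_microservice(deployment: str, container: str, image: str,
                         namespace: str = "vnf") -> None:
    config.load_kube_config()
    patch = {
        "spec": {
            "template": {
                "spec": {
                    "containers": [{"name": container, "image": image}]
                }
            }
        }
    }
    client.AppsV1Api().patch_namespaced_deployment(
        name=deployment, namespace=namespace, body=patch
    )

upgrade_microservice("session-mgmt", "session-mgmt", "registry.local/session-mgmt:1.4.2")
```

In a CI/CD pipeline this kind of patch would typically be triggered automatically once the new image passes its tests.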

Deployment Models of Using VMs and Containers

Telecom network providers can deploy VNFs in three ways: virtual machines only, containers only, or a mix of both in the NFV infrastructure. A further variation is to deploy some VNF components in containers and others in virtual machines, but this creates a more complex architecture that few will adopt unless necessity demands it.

Some challenges still limit container-only NFV infrastructure. Containers remain a maturing technology compared to virtualization, so it is not yet advisable to deploy VNFs in containers alone. Support for container-based VNFs is available only on selected operating systems, such as Linux, Windows, and Solaris, at the service provider end, and many NFV infrastructures today may not support all of these OSes. A hybrid approach of deploying VNFs in both virtual machines and containers can therefore be employed. Making this work may require a specialized controller to handle VMs and containers, together with a message-passing mechanism between the two. To manage and orchestrate VMs and containers, a combination of two different virtual infrastructure managers (VIMs) can be used at the MANO layer, for example OpenStack for VMs and Kubernetes for containers. VNF vendors will also need to supply and support both types of VNF deployment in the target NFV infrastructure.
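
As a rough illustration of the hybrid model, the sketch below provisions a VM-hosted VNF through OpenStack and a container-based VNF through Kubernetes from one script. The cloud name, image/flavor/network IDs, and workload names are hypothetical placeholders, and a real deployment would drive both VIMs through the MANO stack (NFVO/VNFM) rather than a standalone script.

```python
# Hedged sketch of a hybrid deployment: one VNF on an OpenStack VM, one in a
# Kubernetes pod. Requires the openstacksdk and kubernetes packages, plus a valid
# clouds.yaml and kubeconfig. All names and IDs below are hypothetical placeholders.
import openstack
from kubernetes import client, config

def deploy_vm_vnf() -> None:
    conn = openstack.connect(cloud="telco-cloud")          # VIM #1: OpenStack
    server = conn.compute.create_server(
        name="firewall-vnf",
        image_id="IMAGE_UUID",                             # placeholder image ID
        flavor_id="FLAVOR_UUID",                           # placeholder flavor ID
        networks=[{"uuid": "NETWORK_UUID"}],               # placeholder network ID
    )
    conn.compute.wait_for_server(server)                   # block until ACTIVE

def deploy_container_vnf() -> None:
    config.load_kube_config()                              # VIM #2: Kubernetes
    pod = client.V1Pod(
        metadata=client.V1ObjectMeta(name="dns-vnf"),
        spec=client.V1PodSpec(
            containers=[client.V1Container(name="dns", image="coredns/coredns:1.11.1")]
        ),
    )
    client.CoreV1Api().create_namespaced_pod(namespace="vnf", body=pod)

deploy_vm_vnf()
deploy_container_vnf()
```

The point of the sketch is the operational reality of the hybrid model: two VIMs, two credential sets, and two lifecycle APIs that the orchestration layer must reconcile.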

What to use and when?

This has long been one of the questions that hounds IT teams, and choosing between VMs and containers can be a daunting task. Each technology comes with unique benefits and ideal use cases, so IT teams need to be mindful about mapping which technology aligns with business needs. Understanding the differences between the two is the crucial first step in determining which solution best fits specific infrastructure needs and organizational goals.

So, when should you use VMs and when should you use containers? Let’s demystify some common pointers, using a few different scenarios, to make the decision easier for IT teams.

Environment Configuration

VMs give developers more control over the application’s environment. Developers can manually install system software, snapshot configuration states, and restore them to a former state if necessary. This is extremely useful for ideation and for testing different environments, and it can help improve application performance.
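
As a concrete, hypothetical example of the snapshot-and-restore workflow described above, the sketch below uses the libvirt Python bindings to capture a VM’s state and later revert to it. The domain name and snapshot name are placeholders, and external or disk-only snapshots would need additional options.

```python
# Hedged sketch: snapshot a VM's state and revert to it later with libvirt.
# The domain name and snapshot name are hypothetical placeholders.
import libvirt

SNAPSHOT_XML = """
<domainsnapshot>
  <name>clean-baseline</name>
  <description>Known-good environment before experiments</description>
</domainsnapshot>
"""

def snapshot_and_revert(domain_name: str) -> None:
    conn = libvirt.open("qemu:///system")
    try:
        dom = conn.lookupByName(domain_name)
        dom.snapshotCreateXML(SNAPSHOT_XML, 0)               # capture current state
        # ... install packages, change configs, run experiments ...
        snap = dom.snapshotLookupByName("clean-baseline", 0)
        dom.revertToSnapshot(snap, 0)                        # restore the baseline
    finally:
        conn.close()

snapshot_and_revert("dev-test-vm")
```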

Containers, by contrast, provide a constant, stable configuration: once the team settles on the best configuration, it is frozen into the image and reproduced identically everywhere.

Software Development Pace

VMs are full-stack systems that can be laborious to build and regenerate. Validating modifications can be extremely time-consuming because the whole environment must be regenerated.

In this scenario, containers are the superior choice for building, testing, and releasing new features frequently. Because they contain only high-level software, they can be quickly modified and iterated on.

Scalability

VMs require more storage space and demand more hardware in on-premises data centers. In situations like these, switching to cloud environments can curb costs, but migrating the entire environment comes with its own set of challenges.

Containers, however, need less space and can be scaled with little hassle. Most importantly, containers offer granular control over application scalability by enabling microservices, so you can scale individual microservices as needed.
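
To show what “scale individual microservices as needed” can look like in practice, here is a minimal hedged sketch that bumps the replica count of a single Deployment with the Kubernetes Python client. The deployment name, namespace, and replica count are hypothetical.

```python
# Hedged sketch: scale one microservice independently of the rest of the application.
# The names and the replica count are hypothetical placeholders.
from kubernetes import client, config

def scale_microservice(deployment: str, replicas: int, namespace: str = "default") -> None:
    config.load_kube_config()
    client.AppsV1Api().patch_namespaced_deployment_scale(
        name=deployment,
        namespace=namespace,
        body={"spec": {"replicas": replicas}},      # only this service is resized
    )

scale_microservice("checkout-service", replicas=8)
```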

Summary

We have seen that container-based VNFs offer a significant number of advantages over hypervisor-driven VNFs. Looking at the current state of NFV, progress has not been as fast as anticipated when ETSI introduced the concept in 2012, largely due to the lack of a common guidelines model. Now, however, 5G networks are being deployed and tested in some cities by service providers with the help of leading vendors, and some of those vendors have run PoCs using containers in NFV environments. Development will increasingly target the more innovative features 5G brings, such as network slicing, Mobile Edge Computing (MEC), and Cloud Radio Access Networks (C-RAN). These new 5G features will require the dynamism and benefits containers offer for highly automated deployment of services out to every edge of the 5G network, and we can expect service providers and network solution vendors to put those container benefits to use to gain high throughput.

At Calsoft, we believe in the power of connectivity and have helped customers across the globe with a range of NFV solutions backed by cutting-edge technologies and the best talent in the industry. If you are seeking support for your NFV needs, you are at the right place. Explore our range of network transformation solutions.

 