8 March 2016: If you plan to kick off 2016 by hyper-converging your infrastructure, storage expert George Crump outlines the benefits of such a project, as well as some areas of concern.
In 2015, hyper-converged architectures certainly lived up to the hype part of their name. So is 2016 the year to hyper-converge your architecture? The answer, as is often the case, depends on your organization and data center. But unlike many other IT decisions, there are a greater number of variables to consider when deciding whether or not to hyper-converge.
How virtualized are you?
All hyper-converged architectures count on some form of virtualization, so understanding your organization's stance toward virtualization is essential. If you are well down the virtualization path, converging the architecture to simplify it may make sense. The opposite situation — no virtualization — also makes a compelling case to hyper-converge the architecture, as it may be the only architecture your data center needs.
Virtual machines and hosts
The number of virtual machines (VMs) and hosts an organization needs to run its business is another key variable. While a few hyper-converged architectures can start at two nodes, most require a quorum of three nodes to start optimally. Most systems can comfortably support 10 to 15 VMs per node, so if your organization can run its virtual infrastructure on 45 or fewer VMs, a single hyper-converged architecture would provide ultimate simplification. However, as the node count grows, so does the importance of design, especially concerning storage aggregation and network quality of service.
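The back-of-the-envelope sizing above can be sketched in a few lines. This is a hypothetical illustration using the article's rough figures (a three-node starting quorum, 10 to 15 VMs per node); the function names and the uniform per-node density are assumptions for the sketch, not vendor sizing guidance.

```python
import math

# Rough-cut sizing sketch based on the article's figures:
# most systems start at a three-node quorum and comfortably
# support 10-15 VMs per node. These are assumptions, not
# guarantees from any particular vendor.

def cluster_capacity(nodes: int, vms_per_node: int = 15) -> int:
    """Upper-bound VM count for a cluster, assuming a uniform
    per-node VM density."""
    return nodes * vms_per_node

def nodes_needed(total_vms: int, vms_per_node: int = 15,
                 min_nodes: int = 3) -> int:
    """Smallest node count that covers total_vms, never dropping
    below the typical three-node starting quorum."""
    return max(min_nodes, math.ceil(total_vms / vms_per_node))

# A three-node starter cluster at 15 VMs per node tops out at 45 VMs:
print(cluster_capacity(3))   # 45
print(nodes_needed(45))      # 3
print(nodes_needed(60))      # 4
```

The point of the minimum-node floor is that even a tiny environment pays for three nodes, which is why the 45-VM figure marks the sweet spot for a starter cluster.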
Greenfield projects make hyper-convergence easier
If the organization has a greenfield project in 2016, it may be the ideal opportunity to use a hyper-converged architecture. Project-driven hyper-convergence is especially applicable to projects such as virtual desktop infrastructure (VDI) or providing compute for a disaster recovery site, because these workloads do not require strict performance guarantees. One exception is the virtualization of mission-critical environments such as Microsoft SQL Server or Exchange. While hyper-converged architectures can indeed host these applications, guaranteeing specific performance to them is more difficult than with a more traditional architecture.
Expanding the current virtual environment
Most data centers are 50% to 60% virtualized and have two remaining objectives:
- The virtualization of mission-critical servers, a workload that may not be an ideal fit for a hyper-converged architecture.
- Spinning up new applications. The result of a virtualize-first policy is that most data centers will virtualize any new application brought into production. A hyper-converged architecture makes an ideal foundation for those new applications and provides a place to move legacy, less-critical applications.
Most hypervisors have migration built in, so the transition from a traditional architecture to a hyper-converged architecture should be straightforward. A weakness is that many hyper-converged architecture offerings do not support existing legacy storage, which creates a silo of storage internal to their nodes. Many also do not externalize the storage resources of the hyper-converged system. Virtual machines and physical hosts that are not part of the hyper-converged architecture cannot use its resources, further siloing capacity.
Most data centers have justification to embark on some form of a hyper-converged architecture project in 2016, but it will depend on where they are in their storage and server refresh cycles. Hyper-converged architecture is mature enough for consideration in almost any IT endeavor, and there are some projects, such as VDI, where it may be the best technology for the job.