Defining the software-defined storage market

16 March 2016: Nothing in the storage world elicits more divergent opinions than the term “software-defined storage” (SDS). With no universally accepted definition, SDS is vendor-specific: vendors shape the definition to match their storage offerings. The result is that every storage vendor appears to offer SDS.


The closest the software-defined storage market has come to an SDS consensus is more marketecture than architecture: software-defined storage separates the data storage hardware from the software that manages it, and the storage software is itself hardware-independent. The storage control plane is usually, but not always, separated from the data plane.


That broad definition covers just about every variation of storage currently available, so it’s up to the storage consumer to determine which offerings work best for them.


Driving forces behind the SDS trend

All storage systems have always been software-defined. What’s changed is that the software has become portable. Storage system software was historically tied to the hardware it managed; when the hardware ran out of capacity or performance, it had to be replaced, and the software licensing repurchased along with it.


What made matters significantly worse was that storage system architectures created isolated silos. Each unique infrastructure made everything from storage provisioning, data protection, disaster recovery, tech refresh and data migration to power and cooling more and more untenable. Compound that with the ongoing trend of rapid data growth and the need to store ever-increasing amounts of data, and the available architectures made storage system management too complicated, too expensive and, ultimately, unmaintainable.


Several technological factors contributed to the software-defined storage market phenomenon as well. The first is the direct result of continuous x86 compute performance improvements. Those gains, along with the availability of cores that can be dedicated to specific storage functions, have led to x86 architectural standardization for storage systems.


An additional technological factor aiding SDS is the general acceptance of x86 virtualization of servers, desktops, applications and networking (software-defined networking, or SDN). That has helped condition IT into accepting the separation of the data image from the hardware on which it resides.

The popularity of cloud technologies has also had a major effect on driving the software-defined storage market. Cloud data centers needed a new and much lower-cost storage architecture based on industry standards and commodity hardware. Other technological factors driving SDS include server-side flash storage and the software that allows memory and server storage to be transparently shared with other physical server hosts.


All of these technology changes eroded the differentiation between server and storage hardware while expediting storage software portability and flexibility and, not inconsequentially, radically reducing storage costs.


SDS categories: pros and cons

With no working standard SDS definition, a variety of technologies have emerged in the software-defined storage market. For our purposes, the four categories of SDS include:


  • Hypervisor-based SDS
  • Hyper-converged infrastructure (HCI) SDS
  • Storage virtualization SDS
  • Scale-out object and/or file SDS


There are both significant differences and equally significant similarities between these categories, and several products may actually fit into multiple categories. Some products, such as PernixData or Saratoga Speed, are unique enough to be in a category of their own.

Since SDS is focused on flexibility, simplicity, scalability and performance, and total cost of ownership (TCO), we’ll use those criteria to evaluate the pros and cons of each SDS approach.


Hypervisor-based SDS

VMware invented this category with VMware vSphere Virtual SAN, and it is the only category defined by a specific product. Virtual SAN is architected as a part of vSphere, operates as a feature of vSphere and works with all vSphere virtual machines and virtual desktops. It runs in the ESXi layer, which means it’s not a virtual storage appliance and doesn’t require a VM to execute.


Hypervisor-based SDS pros:

Flexibility. Virtual SAN works with both hard disk drives (HDDs) and solid-state drives (SSDs), including DIMM-based flash drives, PCIe, SAS, SATA and even NVMe devices. It supports HDDs and SSDs together in hybrid mode, or SSDs exclusively in all-flash mode.


Scalability and performance. Virtual SAN is highly scalable while delivering high levels of performance. It scales out through vSphere clustering and can support up to 64 vSphere hosts per cluster. Each vSphere host supports approximately 140 TB of raw storage capacity, which works out to well north of 8 PB of raw capacity per cluster. On the performance side, each Virtual SAN host can supply up to 90,000 IOPS, yielding more than 5 million IOPS per cluster.
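
The per-cluster figures follow directly from the per-host numbers. A quick sanity check, using only the figures quoted above:

```python
# Back-of-the-envelope Virtual SAN cluster limits, derived from the
# per-host figures quoted above (~140 TB raw and ~90,000 IOPS per host,
# 64 hosts per cluster).
HOSTS_PER_CLUSTER = 64
RAW_TB_PER_HOST = 140
IOPS_PER_HOST = 90_000

raw_pb = HOSTS_PER_CLUSTER * RAW_TB_PER_HOST / 1000   # ~8.96 PB per cluster
iops = HOSTS_PER_CLUSTER * IOPS_PER_HOST              # 5,760,000 IOPS

print(f"Raw capacity per cluster: {raw_pb:.2f} PB")   # -> 8.96 PB
print(f"Aggregate IOPS per cluster: {iops:,}")        # -> 5,760,000
```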


Simplicity. Virtual SAN is simple because it’s natively integrated as part of the VMware stack. It feels and acts like all other vSphere features, so it’s intuitive for a vSphere administrator. Virtual SAN automates storage tasks on a per-VM basis, such as provisioning, snapshots/data protection, high availability, stretched clusters, disaster recovery and business continuity. Even data migration to a Virtual SAN can be accomplished relatively simply via vSphere Storage vMotion.
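
To illustrate how readily that migration can be scripted, here is a minimal sketch using the open source pyVmomi SDK to trigger a Storage vMotion onto a Virtual SAN datastore. The vCenter address, credentials, VM name and datastore name are placeholder assumptions, not values from the article:

```python
# Minimal Storage vMotion sketch using pyVmomi (the open source vSphere SDK).
# All hostnames, credentials and object names below are placeholders.
import ssl
import time
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

def find_by_name(content, vim_type, name):
    """Walk the vCenter inventory and return the first object with this name."""
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim_type], True)
    try:
        return next(obj for obj in view.view if obj.name == name)
    finally:
        view.DestroyView()

ctx = ssl._create_unverified_context()  # lab use only; verify certs in production
si = SmartConnect(host="vcenter.example.com",
                  user="administrator@vsphere.local",
                  pwd="password", sslContext=ctx)
try:
    content = si.RetrieveContent()
    vm = find_by_name(content, vim.VirtualMachine, "app-vm-01")
    vsan_ds = find_by_name(content, vim.Datastore, "vsanDatastore")

    # A RelocateSpec with only the datastore set performs a Storage vMotion:
    # the VM keeps running while its disks move to the Virtual SAN datastore.
    task = vm.RelocateVM_Task(vim.vm.RelocateSpec(datastore=vsan_ds))
    while task.info.state not in (vim.TaskInfo.State.success,
                                  vim.TaskInfo.State.error):
        time.sleep(1)  # poll; a real script would also time out
    print("Migration state:", task.info.state)
finally:
    Disconnect(si)
```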


Total cost of ownership (TCO). Compared to legacy storage architectures, Virtual SAN’s TCO should be lower. The savings come from the difference in the price of drives (HDDs and SSDs) in a storage system compared to the same drives in a server; those drives are typically three times more expensive in the storage system. Other Virtual SAN cost advantages come from predictable pay-as-you-go scaling; unified storage management, data protection, disaster recovery and business continuity; and consolidated storage networking.
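
To make the drive pricing argument concrete, here is a small illustrative calculation. The dollar figures are hypothetical placeholders; only the roughly 3x markup comes from the article:

```python
# Illustrative drive cost comparison. Only the ~3x array markup is from
# the article; the per-drive price and drive count are made-up examples.
SERVER_DRIVE_PRICE = 400   # hypothetical $ per drive bought for a server
ARRAY_MARKUP = 3.0         # the article's "typically three times more expensive"
DRIVES = 100

server_cost = DRIVES * SERVER_DRIVE_PRICE
array_cost = DRIVES * SERVER_DRIVE_PRICE * ARRAY_MARKUP

print(f"Drives in servers:       ${server_cost:>9,.0f}")   # ->  $40,000
print(f"Same drives in an array: ${array_cost:>9,.0f}")    # -> $120,000
```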


Hypervisor-based SDS cons:

Flexibility issues. Virtual SAN is a closed-loop SDS in that it only works with VMware vSphere 5.5 or later. Older ESXi implementations, other hypervisors and physical machines don’t work with Virtual SAN, and it can’t be used by virtual or physical machines that are not part of the vSphere cluster. There is also an element of do-it-yourself (DIY) to Virtual SAN. For example, running on inexpensive commodity hardware is limited to what appears on VMware’s hardware compatibility list (HCL); if hardware isn’t on the list, it’s not supported.


Scalability and performance issues. Virtual SAN clusters cannot exceed 8.8 PB, so if more capacity is required, Virtual SAN is not a good fit. If a VM requires more IOPS than the 90,000 available in its vSphere host, it can get them from other nodes in the cluster, but at a considerable latency penalty. Storage performance between clustered hosts is another issue. Most Virtual SAN clusters use 10 Gbps to 40 Gbps Ethernet and TCP/IP to interconnect the hosts. This architecture essentially replaces a deterministic system bus with a non-deterministic TCP/IP network, so latencies between hosts become highly variable. Unless the cluster uses more sophisticated, faster interconnects, storage performance from one clustered host to another will be inconsistent.
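
A rough model shows why that variability matters. Assuming a local flash service time of about 100 microseconds and a network round trip that swings between 50 and 500 microseconds under load (both illustrative assumptions, not measured figures), per-I/O latency for remote access can vary several-fold:

```python
# Illustrative only: how a variable network RTT inflates remote I/O latency.
# The 100 us local service time and the RTT range are assumptions.
LOCAL_SERVICE_US = 100  # assumed local flash service time per I/O

for rtt_us in (50, 200, 500):  # assumed Ethernet/TCP round trips under load
    total_us = LOCAL_SERVICE_US + rtt_us
    iops = 1_000_000 / total_us  # per-stream rate at queue depth 1
    print(f"RTT {rtt_us:>3} us -> {total_us:>3} us per I/O (~{iops:,.0f} IOPS)")
```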


Some things are not so simple. Converting from a siloed storage environment to a pure Virtual SAN environment requires first converting non-VM images to VMs, a time-consuming process for non-vSphere environments.


TCO issues. Until the most recent release (version 6.2), Virtual SAN lacked deduplication and compression capabilities, which raises the cost per usable TB considerably versus SDS products that include data reduction. In addition, making sure the data and VMDKs on a specific clustered vSphere host remain available to the rest of the cluster should that host fail currently requires multi-copy mirroring. Best practices call for at least two copies of the original data, and many administrators opt for three. This practice eliminates the drive price advantages. And because Virtual SAN is a vSphere-exclusive option, it has its own license costs, which can be substantial.
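
A simple worked example shows how mirroring cancels out the cheaper drives. The $/TB figures are hypothetical placeholders; only the ~3x price ratio and the two-to-three-copy practice come from the article:

```python
# Illustrative: multi-copy mirroring vs. the ~3x drive price advantage.
# $/TB figures are made-up; the 3x ratio and copy counts are from the article.
USABLE_TB = 100
ARRAY_COST_PER_TB = 300    # hypothetical $/TB for drives in a legacy array
SERVER_COST_PER_TB = 100   # hypothetical ~1/3 price for the same drives in servers

array_cost = USABLE_TB * ARRAY_COST_PER_TB
for copies in (2, 3):
    vsan_cost = USABLE_TB * copies * SERVER_COST_PER_TB
    print(f"{copies} copies: ${vsan_cost:,} in server drives "
          f"vs ${array_cost:,} in array drives")
# With three copies the raw-drive spend matches the array, so the
# drive price advantage is gone.
```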
