Data Reduction in Primary Storage – Boon or a Burden?

In recent years many new storage players have entered the market, and the competition is fierce. Storage vendors do not want to fall behind in offering rich features to attract customers. In addition, disruptive technologies have created FUD around managing storage in traditional ways, and that trend will only grow in the coming years.

This blog focuses on the data reduction features offered by primary storage vendors in general, and on their benefits and downsides. Data reduction in storage is the attempt to store less data on disk than the amount of user data generated. This improves storage efficiency in terms of capacity and hence reduces cost. Let's look at these techniques and their implications for customers.


Data reduction in storage can be achieved using several techniques. De-duplication and compression are the two most prominent; they have been used in backup storage for years and have gained significance in primary storage over the last few years. Many storage vendors offer data reduction features in their primary storage products, including EMC, NetApp, Microsoft, Oracle Sun, IBM Storwize, Dell (Ocarina Networks), Pure Storage, Nexenta, Tegile Systems and a host of others.

It's essential to understand what primary storage compression and de-duplication mean, which data patterns are really suited to these technologies, which de-duplication methods to follow, and their pros and cons.

Compression is a data reduction technique that uses algorithms to reduce the amount of physical disk space consumed. It can be applied at the file system level or at the storage array level. Compression algorithms can be lossless or lossy. Lossless compression reduces bits by identifying and eliminating statistical redundancy; no information is lost. The LZ family is the most popular group of lossless methods: LZ1 (LZ77) is the basis of GZIP, PKZIP, WINZIP, ALDC, LZS and PNG, among others, while LZ2 (LZ78) is the basis for LZW and DCLZ. Lossy compression, in contrast, reduces bits by identifying non-essential information and removing it, so some loss of information is accepted. JPEG and MP3/MP4 are popular lossy methods, and lossy scenarios are mostly found in video streaming and photographs. Lossy algorithms are more efficient than lossless ones, but the original content cannot be recovered on decompression.

Compression ratio quantifies the reduction in data produced by a compression algorithm. It is typically the ratio of the uncompressed data size to the compressed data size. The compression ratio also reflects the complexity of a data stream and is often used to approximate algorithmic complexity.
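To make the compression ratio concrete, here is a minimal sketch, assuming Python and its standard zlib module (DEFLATE, an LZ77-derived lossless method). The sample data is hypothetical and deliberately redundant so it compresses well:

import zlib

# Hypothetical, highly redundant sample data (e.g. repeated log lines).
data = b"2024-01-01 INFO storage array healthy\n" * 1000

compressed = zlib.compress(data, 6)          # lossless compression
ratio = len(data) / len(compressed)          # uncompressed size / compressed size

print("uncompressed:", len(data), "bytes")
print("compressed:  ", len(compressed), "bytes")
print("compression ratio: %.1f:1" % ratio)

# Lossless round trip: the original bytes are recovered exactly.
assert zlib.decompress(compressed) == data

Less redundant data (already-compressed media, encrypted data) would yield a ratio close to 1:1, which is one reason the data stream type matters so much for reduction efficiency.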

In contrast, de-duplication reduces the size of data by detecting repeating patterns, storing a single instance of each pattern and replacing the other occurrences with pointers to that instance. De-duplication logic can be applied at the file or block level, and comes in three forms: source-side, inline and post-process. Source-side de-duplication removes repeating patterns at the data source, before the data is transmitted to storage; its key benefits are reduced network bandwidth and a smaller data footprint on the storage. Inline de-duplication removes repeating patterns on the fly, as data is written to the storage device; because it operates in the storage I/O path it is CPU-intensive and can degrade overall storage performance, but it ensures capacity is used efficiently the moment data hits the disk. Post-process de-duplication removes repeating patterns only after data has been written to disk, either through a scheduled task that typically runs during off-peak hours or automatically once data has grown by a certain amount. Its key benefit is performance, since it does not intercept the storage I/O path; the downside is that enough capacity must be available to hold the data before it is reduced.

Like the compression ratio, the de-duplication ratio quantifies the reduction achieved by de-duplication. It is typically expressed as the ratio of protected (logical) capacity to the physical capacity actually stored on disk. The higher the de-dupe ratio, the better the data reduction efficiency.
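As an illustration of block-level de-duplication and the de-dupe ratio, here is a minimal Python sketch. The block size, sample data and function name are hypothetical, and real arrays use far more sophisticated chunking, metadata handling and collision safeguards:

import hashlib

BLOCK_SIZE = 4096  # hypothetical fixed block size

def dedupe(data):
    """Split data into fixed-size blocks and store each unique block once."""
    store = {}      # content hash -> unique block
    pointers = []   # logical layout: one hash (pointer) per block
    for offset in range(0, len(data), BLOCK_SIZE):
        block = data[offset:offset + BLOCK_SIZE]
        digest = hashlib.sha256(block).hexdigest()
        if digest not in store:     # first occurrence of this pattern
            store[digest] = block
        pointers.append(digest)     # duplicates become pointers only
    return store, pointers

# Hypothetical, highly redundant sample: 10 blocks, only 2 unique patterns.
data = b"A" * BLOCK_SIZE * 8 + b"B" * BLOCK_SIZE * 2
store, pointers = dedupe(data)

logical = len(data)                                 # protected capacity
physical = sum(len(b) for b in store.values())      # capacity actually stored
print("de-dupe ratio: %.1f:1" % (logical / physical))

Here the repeated blocks collapse to two unique instances, giving a 5:1 ratio; data with little repetition would see almost no benefit.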

Different storage vendors use different compression and de-dupe logic; some use inline processing, others post-process. The reduction achieved also depends on other important factors such as the type of data stream, the data retention policy and the rate of data change. Implemented correctly, compression and de-dupe significantly reduce the data footprint on tier-one storage. In recent times, with flash-based storage, data reduction technologies have turned out to be a boon for efficiently utilizing premium-priced flash capacity. But that does not come free; it carries a performance impact. So, before adopting a specific vendor's storage product, it is important to evaluate whether performance or feature richness is the higher priority. There are many feature-rich products on the market that have not kept their promises when performance matters.

Some storage arrays offer de-dupe and compression along with encryption. So, it is important to run a proof of concept on a primary storage array before relying on de-dupe and compression. If the data reduction feature is not going to be useful for a specific workload, it can be disabled to optimize performance; otherwise it becomes a burden rather than a real benefit. For a customer, what matters is not whether a primary storage array is feature rich, but how useful a particular product is for that customer's specific needs.

To know more, email: marketing@calsoftinc.com

Contributed by: Santosh Patnaik | Calsoft Inc

Calsoft Storage Expertise

Leveraging years of experience with storage platforms, ecosystems, operating systems and file systems, Calsoft stands as a pioneer in providing storage product R&D services to ISVs. Our service offerings enable storage ISVs/vendors to quickly develop next-generation storage solutions that perform well and cut across enterprise IT needs.

 