File System Testing – A Sneak Peek

The file system is one of the most essential components of any storage appliance, especially NAS appliances. It plays a significant role in data reduction technologies such as compression and deduplication, as well as in thin provisioning, data integrity, security, and data protection. Because of this heavy dependency on the file system, its performance and behavior have been a major focus area for most storage ISVs. This also prompts us to understand some basic rules while designing and planning file system test executions. Doing so not only increases product quality but also turns out to be a good ROI.

What is a file system and why is it important to the storage stack?
A file system is a software component that manages the on-disk layout of storage and facilitates I/O between user applications and the underlying storage subsystems and disks. It mediates between file-level I/O in the top half of the stack and block-level I/O in the bottom half. The file system also plays a vital role in storage performance in particular, since most of the important tasks, such as block allocation, de-allocation, metadata updates, integrity checks, deduplication, and compression, are done at the file system level, and most of them are latency bound. If a file system is poorly designed or configured, chances are I/O will get throttled at the file system layer, putting overall functionality and performance at stake.

There are different types of file systems, and they are an integral part of the storage software offered by different vendors. Some familiar on-disk file systems are Oracle ZFS (earlier Sun's), Linux ext3/ext4, Microsoft Windows NTFS/FAT32, Veritas VxFS, IBM GPFS, Red Hat GlusterFS, Hadoop HDFS, etc.
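As a concrete illustration of the file-level operations a file system must service, the data path (allocation, cache flush) and the metadata path (stat, rename, unlink) can be exercised with a short script. This is a minimal sketch using only Python's standard library; the file name and block size are illustrative:

```python
import os
import tempfile

def exercise_io_path(directory: str) -> dict:
    """Issue basic data and metadata I/O that the file system must service."""
    path = os.path.join(directory, "sample.dat")

    # Data path: the write triggers block allocation; fsync forces it to disk.
    fd = os.open(path, os.O_CREAT | os.O_WRONLY, 0o644)
    os.write(fd, b"A" * 4096)   # one typical 4 KB block
    os.fsync(fd)                # flush through the page cache
    os.close(fd)

    # Metadata path: stat, rename, and unlink are pure metadata operations.
    size = os.stat(path).st_size
    renamed = path + ".renamed"
    os.rename(path, renamed)    # metadata update only, no data I/O
    os.unlink(renamed)          # block de-allocation

    return {"bytes_written": size}

if __name__ == "__main__":
    with tempfile.TemporaryDirectory() as d:
        print(exercise_io_path(d))
```

Running such a script under tracing tools (e.g., strace or blktrace) shows how each file-level call fans out into the block-level I/O mentioned above.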
Again, no two file systems are equal; each is designed with specific objectives and serves a different purpose. In principle, however, every file system is built to meet a common end goal: serving the I/O of user applications. So, understanding the basic semantics behind a file system is not only worthwhile but also useful in planning its successful test executions by formulating a good test strategy.

File system testing considerations
Below are some of the important aspects of file system testing that must be considered for successful test evaluations and executions in a storage software release cycle.

  • I/O path testing: Focus on the complete I/O path, including block allocation, block de-allocation, in-memory operations, cache semantics, disk I/O, etc.
  • Metadata I/O path testing: Since metadata plays a very important role in file I/O, it is equally important to design test cases based on the different metadata I/O operations.
  • Data fragmentation testing: Designing and executing test cases around data fragmentation provides good insight into how the file system behaves under different workloads when data is heavily or sparingly fragmented.
  • Metadata fragmentation testing: Metadata fragmentation often turns out to be an I/O bottleneck and can impede the overall performance of the file system. Considering different metadata fragmentation scenarios along with different data streams and workload patterns helps in evaluating file system behavior.
  • Data dedupe testing: If the file system supports deduplication (which most commercially available file systems do as part of data reduction), it is highly recommended to consider scenarios that ingest different dedupe ratios in the data streams, closely mimicking customer-representative data. Due consideration should also be given to whether the appliance is a generic storage appliance or a backup storage appliance.
  • Data compression testing: If file system compression is enabled, it is good to verify each of the compression algorithms the file system supports; this helps evaluate which one best suits the file system's offerings.
  • Data/metadata integrity checks: Since maintaining data integrity is the first and foremost objective of any file system, it is essential to design test plans around integrity checking. Almost all file systems have integrity check mechanisms; for example, ZFS internally checksums each allocated block to maintain its integrity. Many tools are available for integrity checking, e.g., fsck, which comes with Linux.
  • RAID type consideration: If the file system offers different RAID levels, the test plan should cover the different RAID levels with different disk types (e.g., HDD vs. SSD), and also consider different volume and disk sector sizes (e.g., 512 bytes vs. 4 KB).
  • Stress and load testing: A file system will work as expected under normal scenarios with usual workloads. However, the real world always presents extreme situations, so it is important to put the file system under stress and load tests with good test plans and executions. Suitable tools such as Load DynamiX, Iometer, vdbench, and Jetstress can be run against different data streams and workload patterns over longer periods of time. As part of stress testing, one should also inject negative test scenarios, e.g., removing one of the disks under heavy I/O and observing the file system and overall system behavior.
  • Performance testing: Everyone is concerned about performance, and no one likes an underperforming system. A solid performance test plan around the file system helps in analyzing file system behavior and overall storage system performance, since a file system's poor performance is usually a heavy tax on the overall system and eventually impacts the economics of the business. Performance testing is a vast area; care must be taken to select specific, important criteria when evaluating file system performance so that the testing effort does not go haywire.
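To make the dedupe testing point above concrete, test data with a controlled duplicate ratio can be synthesized before ingestion. The sketch below is hypothetical; the block size, ratio, and seed are illustrative, not tied to any particular appliance:

```python
import random

def make_stream(total_blocks: int, dedupe_ratio: float,
                block_size: int = 4096, seed: int = 42) -> bytes:
    """Build a data stream where ~dedupe_ratio of blocks are duplicates.

    A dedupe_ratio of 0.5 means roughly half the blocks repeat one shared
    block, so a perfect deduplicating file system could store ~50% less.
    """
    rng = random.Random(seed)
    shared = bytes(rng.getrandbits(8) for _ in range(block_size))
    out = bytearray()
    for _ in range(total_blocks):
        if rng.random() < dedupe_ratio:
            out += shared  # duplicate block, dedupable
        else:
            out += bytes(rng.getrandbits(8) for _ in range(block_size))
    return bytes(out)
```

Writing such streams to the file system under test and comparing logical vs. physical space consumed gives a direct measure of the achieved dedupe ratio.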
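The per-block checksum idea behind the integrity-check bullet (the ZFS-style mechanism mentioned above) can be prototyped at user level. The helper names below are hypothetical and the scheme is a simplified sketch, not ZFS's actual on-disk format:

```python
import hashlib

BLOCK_SIZE = 4096

def checksum_blocks(data: bytes) -> list:
    """Record a SHA-256 digest per block, a ZFS-like integrity ledger."""
    return [hashlib.sha256(data[i:i + BLOCK_SIZE]).hexdigest()
            for i in range(0, len(data), BLOCK_SIZE)]

def verify_blocks(data: bytes, ledger: list) -> list:
    """Return the indices of blocks whose content no longer matches."""
    current = checksum_blocks(data)
    return [i for i, (old, new) in enumerate(zip(ledger, current))
            if old != new]
```

In a test harness, the ledger is captured after ingest, the data is re-read after stress or fault injection, and any non-empty result from `verify_blocks` flags silent corruption.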
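A fragmentation-oriented stress case from the bullets above can be sketched as a small harness: interleave creates and deletes to punch holes in free space, then time a sequential write that must reuse those holes. This is a minimal illustration, not a substitute for the dedicated tools mentioned earlier; file counts and sizes are arbitrary:

```python
import os
import tempfile
import time

def fragmentation_workload(directory: str, files: int = 50,
                           block: bytes = b"z" * 4096) -> float:
    """Fragment free space, then return seconds for a sequential write."""
    # Phase 1: create many small files, then delete every other one,
    # leaving holes that push the allocator toward scattered placement.
    paths = []
    for i in range(files):
        p = os.path.join(directory, f"frag_{i}.dat")
        with open(p, "wb") as f:
            f.write(block)
        paths.append(p)
    for p in paths[::2]:
        os.unlink(p)

    # Phase 2: a larger sequential write lands in the freed holes.
    start = time.perf_counter()
    with open(os.path.join(directory, "big.dat"), "wb") as f:
        for _ in range(files):
            f.write(block)
        f.flush()
        os.fsync(f.fileno())
    return time.perf_counter() - start

if __name__ == "__main__":
    with tempfile.TemporaryDirectory() as d:
        print(f"sequential write took {fragmentation_workload(d):.4f}s")
```

Comparing the timing on a fresh file system vs. one aged by many such cycles shows how fragmentation degrades the data path.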

Hope this blog helped you understand the basics of file system testing. As a file system testing practitioner, I recommend having a clear understanding of the architecture and functional aspects of the file system before designing a test plan. Every file system is designed to cater to specific requirements alongside its basic objectives; care must be taken to consider all the aspects and requirements of the file system under test, including the right test environment and the right tools and techniques.

Contributed by: Santosh Patnaik | Calsoft Inc

