Infinidat Blog

What’s the Right Write?

Ten years ago, if you asked storage administrators about their read/write ratios, they would not think too much before saying it’s around 80/20 across most applications. This was especially true when it came to OLTP workloads and, later on, Virtual Machines.

The Change in I/O Patterns
Over the last ten years, this ratio has changed for many reasons:

  • vSphere has led to the virtualization of most applications, changing I/O patterns with features like memory deduplication, which requires fewer page-in/page-out operations (operations that translate into reads at the storage layer)
  • Deploying VMs moved from actual read/write operations to smart XCOPY operations, offloaded to the storage array through VMware’s VAAI, Microsoft’s ODX, and the T10 standard for other operating systems
  • Write-through cache solutions (usually flash-based) reduce reads without affecting writes. This trend is expected to accelerate as next-generation, post-flash technologies like Storage Class Memory (SCM) and NVMe devices continue to improve I/O read caching on the server side
  • VDI was the first common data-center workload with a “majority of writes,” and this trend has grown over the last few years

At the same time, storage systems have grown in capacity, efficiency and performance, thus further increasing the total number of writes that an individual array must handle.

The Bottom Line
The bottom line is that we have a higher ratio of writes today. Some customers already report 70/30 read/write ratios, and the shift is continuing. We also have a higher absolute number of writes per system due to higher consolidation ratios. Writes matter! Ignoring this fact in today’s storage decisions means compromising on a storage system with a shorter life span and limited scale.

The Write Cliff
Write cliffs happen at the point where the write rate exceeds what the storage array’s write cache can sustain, translating into a steep increase in latency that worsens as the user issues more writes. The implications of write cliffs vary by application, from slowness in parallelized applications to complete crashes in latency-sensitive applications.
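
To make the cliff concrete, here is a toy model of a write cache filling up; every number in it (cache size, destage rate, latencies) is an invented assumption for illustration, not a measurement of any real array:

```python
# Toy model of a write cliff: a fixed-size write cache absorbs incoming
# writes at RAM-like latency until a sustained burst outruns the rate at
# which the array can destage to persistent media. All values are
# illustrative assumptions.

CACHE_CAPACITY_MB = 1024     # assumed write-cache size
DESTAGE_MB_PER_SEC = 500     # assumed sustained destage rate
CACHE_HIT_LATENCY_MS = 0.2   # ack latency while the cache has headroom
MEDIA_LATENCY_MS = 8.0       # ack latency once writes queue behind destage

def simulate(ingest_mb_per_sec, seconds=60):
    """Return the per-second write latency for a constant ingest rate."""
    cached = 0.0
    latencies = []
    for _ in range(seconds):
        cached = max(0.0, cached + ingest_mb_per_sec - DESTAGE_MB_PER_SEC)
        if cached >= CACHE_CAPACITY_MB:   # the cliff: the cache is full
            cached = CACHE_CAPACITY_MB
            latencies.append(MEDIA_LATENCY_MS)
        else:
            latencies.append(CACHE_HIT_LATENCY_MS)
    return latencies

print(max(simulate(400)))   # 0.2 ms: ingest stays below the destage rate
print(max(simulate(700)))   # 8.0 ms: the cache fills after a few seconds
```

Note the shape of the failure: latency is flat right up to the point where the cache fills, then jumps by more than an order of magnitude for as long as the burst lasts.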

While write cliffs have profound effects in currently available all-flash arrays (AFAs), they also affect many traditional arrays. A good example is chip-design (EDA) workloads, where write sizes increase as the silicon grows in complexity. These writes often arrive in short bursts that can fill a small write cache, hitting the same write cliff as AFAs. The problem is further exacerbated when the write cache is partitioned into smaller banks and only one bank is active (receiving data) at any given time.

Another challenge for flash technologies is the inconsistent latency that results when reads, writes, and garbage collection take place simultaneously.

[Figure: High Write Rate]

What Is a Good Write?
An ideal write is one that never needs to hit your slow, persistent media; compared to RAM, all other media types are slow. Data written to persistent media may remain there for relatively long periods, but much of it is overwritten and modified repeatedly before actually “cooling down” and becoming appropriate for persistent media. Other writes never need to reach persistent media at all, such as temporary data that is written and then deleted at the end of a transaction. A storage system that keeps data in cache long enough can absorb both cases.
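
The point about overwrites and short-lived data can be shown with a small sketch. The workload below is invented, but it demonstrates how a cache that holds data long enough collapses repeated overwrites into a single destage and never destages deleted temporary data at all:

```python
# Toy illustration of write absorption in a large write cache: repeated
# overwrites of a hot block collapse into one destage, and temporary
# data deleted while still cached never reaches persistent media.
# The workload below is invented for illustration.

def absorb(events):
    """events: sequence of ("write" | "delete", block_id) tuples."""
    dirty = set()          # blocks held in the RAM write cache
    logical_writes = 0
    for op, block in events:
        if op == "write":
            logical_writes += 1
            dirty.add(block)       # an overwrite coalesces in cache
        elif op == "delete":
            dirty.discard(block)   # temp data dies in cache: no destage
    destages = len(dirty)          # only surviving blocks hit media
    return logical_writes, destages

# Block "a" is overwritten four times in cache; block "tmp" is deleted
# before it ever cools down.
workload = [("write", "a")] * 4 + [("write", "tmp"), ("write", "tmp"),
            ("delete", "tmp")]
print(absorb(workload))    # (6, 1): six logical writes, one destage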

Solving the Write Challenge
InfiniBox architecture is based on a single, large pool of RAM-based write cache spread across the three nodes of our Active-Active-Active architecture. Alongside this is a thick flash layer that guarantees high performance for applications. As written blocks “cool down,” InfiniBox cherry-picks them for destage from the RAM cache. That decision is based on many factors: not just how full the write cache is at a given point in time, but also the write patterns InfiniBox’s intelligent mechanisms identify, such as how frequently specific regions of a volume are updated. Frequently rewritten data requires considerably fewer writes to the persistent layer, and InfiniBox is designed to avoid these superfluous write operations. This translates into a dramatic improvement in overall system performance.
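
As a rough sketch of what cherry-picking by “coolness” could look like, consider the policy below. The scoring heuristic, thresholds, and names (DirtyBlock, pick_destage_candidates) are hypothetical illustrations, not InfiniBox’s actual destage algorithm:

```python
# A minimal sketch of "cool-down" destage selection: prefer dirty blocks
# that have sat idle the longest and been overwritten the least, and
# keep hot blocks in RAM. All names and thresholds are hypothetical,
# not InfiniBox's algorithm.

from dataclasses import dataclass

@dataclass
class DirtyBlock:
    block_id: int
    last_write: float   # timestamp of the most recent overwrite
    write_count: int    # overwrites observed while cached

def pick_destage_candidates(dirty, now, batch=64, min_idle_s=30.0):
    """Pick the 'coolest' blocks to destage from the RAM write cache."""
    cooled = [b for b in dirty if now - b.last_write >= min_idle_s]
    # Frequently overwritten blocks stay in cache: destaging them early
    # would only generate superfluous writes to the persistent layer.
    cooled.sort(key=lambda b: (b.write_count, b.last_write))
    return cooled[:batch]
```

The design intuition is that destaging is a choice, not an obligation: a block that keeps changing should keep absorbing overwrites in RAM, while one that has gone quiet is a cheap, safe candidate to persist.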

InfiniBox’s flash-optimized architecture is designed to give applications the lowest possible latency by keeping operations against slower, persistent media to the absolute minimum. The architecture is also designed to prevent a flash latency spike on any single device from appearing as a system-level latency spike. Because InfiniBox is not dependent on destaging data to flash, it can choose to destage certain datasets directly to the disk persistence layer, maintaining consistent application performance. One more characteristic of InfiniBox’s flash optimization is the use of large pages, which avoids the garbage collection issue altogether.
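
As one illustration of how a device-level spike can be hidden from applications in general, consider a hedged read against a redundant copy. This is a generic technique sketched under my own assumptions, not a description of InfiniBox’s internal mechanism:

```python
# Generic hedged-read sketch: if one device stalls (for example, during
# internal garbage collection), race the read against a redundant copy
# so the application never observes the device-level spike. Illustrative
# technique only; not InfiniBox's actual implementation.

import concurrent.futures as cf

HEDGE_AFTER_S = 0.005   # assumed threshold before hedging (5 ms)

def hedged_read(block_id, primary, mirror):
    """primary and mirror are callables that read block_id from a device."""
    pool = cf.ThreadPoolExecutor(max_workers=2)
    first = pool.submit(primary, block_id)
    try:
        result = first.result(timeout=HEDGE_AFTER_S)
    except cf.TimeoutError:
        # The primary device is stalling: issue a second read to the
        # redundant copy and take whichever answer arrives first.
        second = pool.submit(mirror, block_id)
        done, _ = cf.wait({first, second}, return_when=cf.FIRST_COMPLETED)
        result = done.pop().result()
    pool.shutdown(wait=False)
    return result
```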

Data Reduction on Written Data
Another advantage of InfiniBox’s very large write cache is that the system can acknowledge the write to the host with minimal latency, without having to perform data reduction tasks in the I/O path.

Data reduction in the I/O path translates into higher latency and is often used by AFAs to compensate for their small write caches. This, however, defeats the purpose of using fast media to gain low latency in the first place.
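
A minimal sketch of this ordering, assuming a simple queue standing in for the RAM cache and zlib standing in for whatever reduction the array applies: the host’s write is acknowledged immediately, and compression happens later, on the background destage path:

```python
# Sketch of keeping data reduction out of the I/O path: acknowledge the
# write from the RAM cache at once, compress only at destage time.
# Names and the zlib choice are illustrative assumptions.

import queue
import threading
import zlib

write_cache = queue.Queue()          # stands in for the RAM write cache

def handle_write(block):
    """Foreground path: no compression, ack at RAM latency."""
    write_cache.put(block)
    return "ACK"                     # the host sees only the cache insert

def destage_worker(persist):
    """Background path: data reduction happens off the I/O path."""
    while True:
        block = write_cache.get()
        persist(zlib.compress(block))   # reduce only at destage time
        write_cache.task_done()

stored = []
threading.Thread(target=destage_worker, args=(stored.append,),
                 daemon=True).start()
print(handle_write(b"hot transactional data" * 100))   # ACK, instantly
write_cache.join()                    # wait for the background destage
print(len(stored[0]) < 2200)          # True: compressed before persisting
```

The host-visible latency is just the queue insert; the compression cost is paid on the background path, where the application cannot observe it.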

Writes matter. Does your storage write right?

About Eran Brown
Eran Brown is the EMEA CTO at INFINIDAT.
Over the last 14 years, Eran has architected data center solutions for all layers: application, virtualization, networking and, most of all, storage. His prior roles include senior product management, systems engineering, and consulting, working with companies in multiple verticals (financial services, oil & gas, telecom, software, and web) and helping them plan, design, and deploy scalable infrastructure to support their business applications.