Infinidat Blog

Infinidat: Epic Performance for Healthcare and Other Tier 1 Applications

Good-Cheap-Fast?

I saw this image at a restaurant recently, and it reminded me of the state of enterprise storage over the last 20 years or so. To make the analogy clearer, let’s replace the word “Good” with “Reliable.”  Yes, I know this ignores the features dimension, but since Tier 1 enterprise storage has been at parity on the features that matter, let’s explore whether the shoe fits…

Good Fast and Cheap

Some marketers present this same concept as the sides of a triangle, with appropriately sanitized vectors such as “Reliability,” “Cost” and “Performance,” but the concept is the same, and the challenge of delivering across all three dimensions is just as hard.

Thinking about the landscape of storage arrays, some products manage to excel in only one side of the triangle, while more successful products optimize two.  Once in a great while, a truly epic product manages to capture all three sides.

An Epic Story

The term epic, in its original context, describes a story about a hero who, through a series of adventures, achieves an incredibly difficult task.  To the average reader today, however, epic may carry a different connotation.  On social media, the term epic has gone viral.  In the lexicon of the internet meme, epic simply means the epitome, or a spectacular example, of its kind.

We will look at two kinds of “epic” performance. First and foremost, epic as in the astounding real-world performance the INFINIDAT InfiniBox achieves on commodity hardware.  Next, we will look at the performance this architecture delivers on a particularly demanding Electronic Medical Records (EMR) application used by the majority of healthcare institutions – Epic software.

Epic Real World Performance

Customers are reporting that InfiniBox is beating AFAs (All-Flash Arrays) in performance on some of the most demanding workloads.  How is it possible that a hybrid array can beat a much more expensive AFA?  In a nutshell, it’s about using each layer of the storage system to its maximum efficiency.  INFINIDAT has over 100 patented algorithms designed to achieve maximum performance. Let’s take a look at a few bottlenecks seen in every storage architecture.  But first, a word of caution – this blog post confines itself to real-world performance on real-world big data applications, not performance achievable on canned benchmarks that produce synthetic I/O profiles.

The first bottleneck occurs around DRAM cache.  DRAM is the fastest resource the array has. Of course, DRAM must be protected, and to do protection and scalability right, it’s important to have a “shared-everything” architecture rather than the “shared-nothing” design seen in dual-controller or clustered arrays. The primary DRAM bottlenecks center on two activities – cache destage and cache read miss. INFINIDAT has addressed both: vastly increasing the destage rate, and reducing the impact of a DRAM cache read miss.

The destage rate increase comes from the backing store’s RAID-like topology – that is, the way information is stored on, and protected by, the rotating media drives. INFINIDAT is the world’s only array that does 100% dynamic, fine-grained, redirectable mapping. This lets the drives destage writes orders of magnitude faster than designs that force the drives to seek. The combination of write redirection, write bundling, write logging, and out-of-order writing means that, at least on writes, a drive spends almost zero time seeking.  Destages can therefore achieve the equivalent of thousands of random write IOPS per backing HDD, and sustain that rate indefinitely, unlike flash-based SSDs.  But wait – doesn’t INFINIDAT use flash-based SSDs? Indeed, but to avoid bottlenecking under intense write pressure, INFINIDAT bypasses the SSDs for most writes, caching data in the SSDs only selectively – frequently read data being one example.

Of course, when using SSDs to reduce DRAM read misses, the cache is only as good as its caching algorithm. For each 64 KB section, the InfiniBox tracks, via collected metadata, the relative frequency of reference between a block and other blocks – even blocks in other volumes – based on timestamps. This method is far superior to simple buffer-based caching algorithms. As a result, the SSD caching layer can reduce the latency of a DRAM read miss to that of an SSD.
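To make the contrast with simple buffer-based caching concrete, here is a minimal, illustrative sketch in Python. It is not INFINIDAT’s actual algorithm – the class name, thresholds, and window are all invented for illustration – but it shows the general idea of frequency-based admission: a section is promoted into the SSD tier only after its recent access timestamps show it is frequently referenced, whereas a plain LRU buffer would admit every miss.

```python
import time
from collections import OrderedDict, defaultdict

class FrequencyCache:
    """Illustrative sketch only (not INFINIDAT's implementation):
    admit a section into the cache tier only when its recent access
    frequency, tracked via timestamps, crosses a threshold."""

    def __init__(self, capacity, min_hits=2, window=60.0):
        self.capacity = capacity
        self.min_hits = min_hits          # accesses needed before admission
        self.window = window              # seconds of history considered
        self.history = defaultdict(list)  # section -> recent access timestamps
        self.cache = OrderedDict()        # admitted sections, in LRU order

    def access(self, section, now=None):
        now = time.monotonic() if now is None else now
        # record this access and drop timestamps outside the window
        stamps = [t for t in self.history[section] + [now]
                  if now - t <= self.window]
        self.history[section] = stamps
        if section in self.cache:
            self.cache.move_to_end(section)   # refresh LRU position
            return "hit"
        if len(stamps) >= self.min_hits:
            if len(self.cache) >= self.capacity:
                self.cache.popitem(last=False)  # evict least recently used
            self.cache[section] = True
            return "admitted"
        return "miss"  # a plain LRU buffer would have admitted this
```

A one-off scan touches each section once and is never admitted, so scans cannot flush genuinely hot data – one reason frequency-aware policies tend to beat simple buffers on mixed workloads.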

The second bottleneck occurs around the backing storage (hard drives). Let’s compare the write behavior of two media types – flash-based SSDs and typical magnetic hard drives.  An SSD can perform random reads essentially forever at the same latency, and without wear. Writes, however, suffer from a limited lifetime, and erasing a page to prepare for writes is a relatively slow operation. Flash-based SSDs can be forced into a mode where writes slow down due to the erase/write cycle.  In normal operation this is rarely seen unless free pages run low – typically only under demanding enterprise workloads with heavy write activity or write surges.  Magnetic drives pay no such write penalty, but must seek before writing or reading. If, however, we could prevent almost all seeks for writes, and perform moderate, “just right” sized operations, the efficiency of hard drives would increase by orders of magnitude. This is exactly what INFINIDAT’s array does.
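The seek-avoidance idea above can be sketched in a few lines. This is the generic log-structured / write-redirection technique, not INFINIDAT’s implementation (the class and method names are invented for illustration): random logical writes are appended sequentially at the current end of the log, so the disk head never seeks between writes, and a mapping table records where each logical block now lives.

```python
class RedirectingStore:
    """Toy sketch of write redirection (the general log-structured
    technique): scattered logical writes land in strictly sequential
    physical positions, eliminating seeks on the write path."""

    def __init__(self):
        self.log = []      # physical blocks, written strictly in order
        self.mapping = {}  # logical block address -> index in self.log

    def write(self, logical_block, data):
        # Redirect: always append at the log head -> sequential I/O,
        # no seek, regardless of the logical address being written.
        self.mapping[logical_block] = len(self.log)
        self.log.append(data)

    def read(self, logical_block):
        # Reads follow the mapping to wherever the block was last written.
        return self.log[self.mapping[logical_block]]
```

Writing logical blocks 900, 3, and 512 in that order places them in physical positions 0, 1, and 2 – consecutive on disk – which is why a drive handled this way can sustain the equivalent of thousands of random write IOPS.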

In short, the INFINIDAT array uses each storage type – DRAM, SSD and HDD – to its maximum efficiency.  This is the true measure of performance for any storage array. As a result, on many demanding real-world workloads, the INFINIDAT hybrid array outperforms AFAs, and at a massively lower cost.

Epic Software (Electronic Medical Records) Performance

Epic is run in a large percentage of healthcare institutions, and it has a demanding storage workload profile.  As a result, many institutions have spent large sums keeping storage performance acceptable as their Epic workload grows.  Traditionally, Epic has been deployed on Tier 1, shared-everything, high-end (read: expensive) storage.  The particular storage workload problem seen in Epic deployments is the random write surge.  If a random write surge exceeds the DRAM cache of a traditional Tier 1 array, or an AFA runs out of free pages, performance drops by an order of magnitude or more.

To achieve excellent Epic performance, the INFINIDAT array has two advantages – huge DRAM caches (around 3 TB) coupled with the elegant, efficient destage capabilities described earlier.  As a result, Epic is just one more demanding real-world application at which INFINIDAT excels.

About Randy Seamans

Randy Seamans is a veteran of the storage, supercomputer, defense, seismic, and banking industries, at companies including E-Systems, Cray Research, BancTec, Hitachi Data Systems, 3PAR, EMC, XIV and INFINIDAT. Randy holds BS and MS degrees in Computer Science. Someone who enjoys new challenges, Randy has done post-Masters work at the University of Texas at Dallas and at Arlington.