
Solving the Essential Enterprise Business Data Storage Equation – Part 1

Capacity + Performance = Savings

There’s no question about it: everything we do as individuals or businesses that generates information requires a place to store that information. Historically, the only thing we needed was “capacity.” Give me megabytes! There was little concern for performance, features or functions, power consumption or even cost, because those who wanted it had to pay for it. Capacity was a luxury item. Those were the days when datacenters roared with tens and sometimes hundreds of multi-bay storage arrays from famous storage vendors with three-letter names.

Times have changed. First, megabytes turned into gigabytes, which then quickly became terabytes. Now, it’s all about petabytes. The usual storage characteristics we continually ignored are now critical, and we are all scrambling for solutions that can bring all of them together.

Unfortunately, along the path to innovation, we didn’t make our lives any easier. Solutions sprawled: from FC-based SAN connectivity to NAS (file-based) to object storage. From multiple copies of data to space-efficient clones and snapshots. From centralized/scale-up storage to scale-out to converged solutions and back to scale-up. And finally, from intelligent storage solutions, to tiering, to JBOD (just a bunch of disks), and on to the latest and greatest: Software-Defined Storage (SDS).

Infinidat Capacity

In all of this, there’s a simple premise: companies really don’t care about any of that. What they really want is a simple solution that addresses their capacity and performance needs and, more importantly, increases their business value. And the vast majority of these organizations want all of it really, really “cheap.” Easy, right? Bad news, boys and girls: we are not there yet.

So What Are We Missing?
To provide “capacity” and “performance,” we must take a deeper look at the exact metrics that define both of these variables:

  • When we say capacity, we are inherently talking about secure, reliable, scalable, easy to use, easy to learn, green (low-power), easy to integrate and easy to protect USABLE capacity. (Who really cares about raw capacity these days anyway?) See the quick sketch after this list for how much raw capacity typically evaporates before it becomes usable.
  • When we describe performance, we turn to the simple metrics of IOPS, bandwidth and the (until recently, widely ignored) latency. This is where the technical, borderline-geek storage specialists try to work out exactly when and where it is best to use tiering, performance-analysis software and, of course, flash/SSD technology.
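
To make the raw-versus-usable distinction concrete, here is a minimal sketch. The RAID-6 (14+2) geometry and the 5% spare reserve are illustrative assumptions, not figures from this post or from any particular array; real overheads vary by vendor, layout and data services.

```python
# Minimal sketch: how much raw capacity is left once it becomes USABLE.
# The RAID-6 (14+2) geometry and 5% spare reserve are illustrative
# assumptions; real overheads vary by vendor, layout and data services.

def usable_capacity_tb(raw_tb: float,
                       data_drives: int = 14,
                       parity_drives: int = 2,
                       spare_fraction: float = 0.05) -> float:
    """Rough usable capacity after spare and parity overhead."""
    after_spares = raw_tb * (1.0 - spare_fraction)             # reserve hot spares
    efficiency = data_drives / (data_drives + parity_drives)   # RAID-6 data ratio
    return after_spares * efficiency

if __name__ == "__main__":
    raw = 1000.0  # 1 PB raw
    print(f"{raw:.0f} TB raw -> {usable_capacity_tb(raw):.0f} TB usable")
    # Under these assumptions: 1000 TB raw -> 831 TB usable
```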

Combine these two simple yet important requirements and we can infer the following non-mathematical, yet more business-oriented equation: the one that every CIO/CTO has been trying to solve. So far, only a few have managed to solve it, through the massive use of low-cost commodity hardware components and the ingenuity of in-house engineering talent. Yes, we are referring to the Amazons, Facebooks and Googles of the world. To get there just as they did, you must solve this puzzle:

Capacity + Performance = Savings

The Future of Mass Storage Capacity
Capacity and performance generally sit at the top of every CTO/CIO’s technical requirements for storage, especially now that those requirements reach into the upper hundreds of terabytes and into petabytes, and now that applications, massive consolidations and relentless data growth demand high IOPS and bandwidth with consistent, predictable response times (latency).

The focus of this post is to shed light on the realities of storage solutions when it comes to mass capacity combined with optimal performance, all while delivering true savings and cost reduction to the business.

There are plenty of storage vendors and solutions on the market, and many of them, to their advantage, focus on one specific problem they aim to solve. All-flash arrays (AFAs), for example, solve one part of the equation quite well: performance. But truthfully, they are not yet at the point where they can offer all the benefits of the massive capacity seen in “enterprise” arrays. Furthermore, these arrays are not even close to solving the whole business equation stated before: savings. Has anyone seen the all-flash datacenter yet? Of course not.

Infinidat Hybrid Array

There are other solutions that can provide plenty of cheap capacity. Some of them don’t even bother adding sophisticated features or functions and simply let servers, converged architectures or other layered products deal with the enterprise requirements. These solutions can certainly provide capacity, but on their own they cannot meet the whole list of requirements that businesses demand. One of those must-have characteristics is, in fact, a critical one: reliability. And here is where the rubber meets the road, where the dual-controller, ALUA-based, active/passive arrays hit the wall without warning. The market calls them “entry-level” and “mid-range” arrays, and there’s a good reason for that. If money were not a constraint, these solutions wouldn’t even exist! Again, another “almost there,” “would-have-been-nice” class of solution that cannot solve the stated business equation: capacity (if you are willing to sacrifice reliability, features and functionality), performance (if you are willing to sacrifice business operations or to bolt additional layers onto the solution), savings (check!), granted that the other two, capacity and performance, are compromised.

So here’s a fact: to provide massive amounts of capacity at lower cost, we still need spinning disks. They are considerably denser than any other media (DRAM and NAND) and considerably cheaper, too. On the other hand, yes, spinning disk is orders of magnitude slower than DRAM and NAND. So how do we combine the best of both worlds? The answer is a new, smarter equation:

Hybrid Array + Innovation = Capacity + Performance + Savings
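
To see why that equation can work, consider a back-of-the-envelope model of a hybrid array: a small, fast cache tier (DRAM/flash) in front of dense spinning disk. Every latency, price and hit rate below is an illustrative assumption, not a measurement of any specific product.

```python
# Back-of-the-envelope model of the hybrid trade-off: a small, fast
# cache tier (DRAM/flash) in front of dense spinning disk. All latencies,
# prices and hit rates are illustrative assumptions only.

def effective_latency_ms(hit_rate: float,
                         cache_ms: float = 0.1,
                         disk_ms: float = 8.0) -> float:
    """Average I/O latency when a fraction of requests hits the cache."""
    return hit_rate * cache_ms + (1.0 - hit_rate) * disk_ms

def blended_cost_per_tb(cache_fraction: float,
                        cache_usd_tb: float = 400.0,
                        disk_usd_tb: float = 25.0) -> float:
    """Capacity-weighted media cost (USD per TB) of the blended system."""
    return cache_fraction * cache_usd_tb + (1.0 - cache_fraction) * disk_usd_tb

if __name__ == "__main__":
    # The better the caching intelligence, the closer latency gets to flash.
    for hit_rate in (0.50, 0.90, 0.99):
        print(f"hit rate {hit_rate:.0%}: "
              f"~{effective_latency_ms(hit_rate):.2f} ms average latency")
    print(f"5% cache tier: ~${blended_cost_per_tb(0.05):.0f}/TB media cost")
```

The point of the model: once the caching layer captures most of the working set, average latency approaches flash speed while the cost per terabyte stays close to disk.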

Stay tuned for part two of this blog post next week, where I will explain how revolutionary innovation in hybrid arrays can deliver the capacity, performance and cost savings organizations desire, without compromising on any of these key requirements.

About Adrian Flores-Serafin

Adrian Flores-Serafin is General Manager for Mexico and Latin America at INFINIDAT. He has been involved in IT and storage technologies for over 20 years and has performed numerous roles, including storage administrator, tech-support engineer, solutions architect, technical sales manager and worldwide business-unit executive. He is a veteran of Sun Microsystems, Perot Systems, EMC and IBM.