
How to Use TCO Analysis to Separate Storage Hope from Storage Hype

“You Can’t Manage What You Don’t Measure…”

Early in my career, I had the good fortune of being mentored by “A Few Good Men.”  No, I’m not referring to Tom Cruise or Jack Nicholson, but to individuals who were able to condense life lessons into profound “aha” moments for me and my astute colleagues who were focused on learning about Continuous Process Improvement (CPI).  One specific mentor was a visiting graduate school professor who had been fortunate enough to work as an assistant to Peter Drucker, the “Father of Modern Management,” while pursuing his own postgraduate degree(s).  His favorite one-liner to our class was, “You can’t manage what you don’t measure…”


These words have stuck with me throughout my IT career and have been especially valuable when it comes to evaluating the Total Cost of Ownership (TCO) for data storage.  The truth is, TCO is not easy to measure objectively; it requires leadership rigor and a measurement framework that helps you navigate the multidimensional, complex aspects of data storage.  IT leaders should start by looking at these six foundational pillars to analyze a storage solution’s true TCO:

  • Cost – Look at cost in terms of hardware/software/services price per usable TB, not the effective or raw TB figures some vendors use to overhype value.  Luckily, most seasoned storage professionals understand the interrelated dynamics between price, performance, and true capacity.  For example, if a vendor claims a larger data reduction ratio, the perceived price per TB improves.  That said, certain data types should never be expected to yield high ratios, and encrypted data should not be expected to reduce at all.  Furthermore, with most vendors’ data reduction algorithms, the more aggressive the reduction, the greater the performance penalty.  One unscrupulous approach, used far too often, is quoting peak performance and peak capacity per dollar while knowing the two can never occur simultaneously.  (A worked cost-and-capacity sketch follows this list.)
  • Performance – Measure performance in terms of latency, throughput (or bandwidth), and IOPS for your typical workload profile.  A typical workload profile might be referred to as a 70/30/50: seventy percent reads, thirty percent writes, fifty percent cache hits, with varying block sizes of very small (0-8KB), small (8-64KB), medium (64-512KB), or large (over 512KB).  Latency is the time taken to complete a single I/O operation in micro/milliseconds (µs/ms), or how fast a storage system responds to reads and writes.  Throughput, or bandwidth, is the capability of a storage system to transfer a fixed amount of data in a measured time, quantified in gigabytes or megabytes per second (GBps or MBps).  IOPS is a measure of the number of individual read/write requests a storage system can service per second.  (A performance sketch follows this list.)
  • True Capacity – Look at true capacity in terms of usable physical TBs.  Vendors may attempt to skew this value by selling raw capacities, leaving the RAID protection and spares overhead to be calculated by the buyer.  Beware of smoke-and-mirrors marketing tactics that blur usable physical TBs into effective TBs by claiming a 100TB storage configuration with a 5:1 data reduction ratio is equivalent to 500 usable physical TBs.  The problems with this thinking: no one can guarantee that reduction all the time, across all data; most data reduction algorithms negatively impact performance; encrypted data should be excluded; and post-process data reduction algorithms still need usable physical TBs to land on before optimizing to effective TBs.
  • Environmentals – Evaluate operational costs for power, cooling, and floor space.  Continuing data growth means these costs are likely to increase; they can only be positively impacted through consolidation efforts and greater efficiencies that let you do more with less, basically taking advantage of Moore’s Law.  (An environmentals sketch follows this list.)
  • Feature Functionality – Determine the value of the software features licensed and supported by the storage array.  This includes management capabilities like the GUI/CLI, snapshots, clones, replication, capacity/performance monitoring (SRM), third-party integration (e.g., VMware), automated OS/server configuration per best practices, encryption, RESTful APIs, and OpenStack support.
  • Ease of Management – Don’t forget to factor in the value of ease of use, which can be measured by the number of FTEs required to manage 100 PB.
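
To make the Cost and True Capacity pillars concrete, here is a minimal sketch in Python.  Every figure in it (raw capacity, spares, RAID overhead, price, and the claimed reduction ratio) is a hypothetical assumption for illustration, not vendor data.  It shows how raw TB shrinks to usable TB, and why price per usable TB is the honest denominator:

```python
# Hypothetical figures for illustration only -- substitute your own quotes.
RAW_TB = 1000            # raw capacity as quoted by the vendor
SPARES_TB = 40           # capacity reserved for hot spares
RAID_OVERHEAD = 0.20     # e.g., ~20% lost to parity in an 8+2 RAID 6 layout
SYSTEM_PRICE = 350_000   # hardware/software/services price, in dollars
CLAIMED_DRR = 5.0        # vendor-claimed data reduction ratio (5:1)

usable_tb = (RAW_TB - SPARES_TB) * (1 - RAID_OVERHEAD)
effective_tb = usable_tb * CLAIMED_DRR   # a projection, never a guarantee

print(f"Usable TB:                 {usable_tb:,.0f}")
print(f"Price per usable TB:       ${SYSTEM_PRICE / usable_tb:,.2f}")
print(f"Price per 'effective' TB:  ${SYSTEM_PRICE / effective_tb:,.2f}"
      "  (only if the 5:1 claim holds)")
```

Note how dividing the same price by “effective” TBs makes the system look roughly five times cheaper per TB, which is exactly the blur the True Capacity pillar warns about.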
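
Similarly, here is a sketch of the performance math for a hypothetical 70/30/50 profile (the IOPS figure and block-size mix are assumptions for illustration).  It shows that throughput and IOPS are two views of the same workload, linked by average block size, which is also why quoting peak IOPS and peak throughput together is misleading:

```python
# Hypothetical 70/30/50 profile: 70% reads, 30% writes, 50% cache hits.
# The IOPS figure and block-size mix below are illustrative assumptions.
iops = 100_000                 # total read + write operations per second
block_mix_kb = {8: 0.60,       # fraction of I/Os at each block size (KB)
                64: 0.30,
                512: 0.10}

avg_block_kb = sum(size * frac for size, frac in block_mix_kb.items())
throughput_mbps = iops * avg_block_kb / 1024   # MBps

print(f"Average block size:  {avg_block_kb:.1f} KB")
print(f"Read IOPS:           {iops * 0.70:,.0f}")
print(f"Write IOPS:          {iops * 0.30:,.0f}")
print(f"Implied throughput:  {throughput_mbps:,.0f} MBps")
# Note: sustaining 100,000 IOPS at the *largest* block size alone would
# imply roughly 50 GBps of bandwidth, which is why peak IOPS and peak
# block size can never be delivered simultaneously.
```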
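
And finally, a sketch of the environmentals math.  The power draw, PUE, electricity rate, and floor costs are illustrative assumptions; substitute your own data center’s figures:

```python
# Hypothetical environmental figures -- substitute your data center's rates.
POWER_KW = 8.0          # steady-state draw of the array
PUE = 1.6               # power usage effectiveness (adds cooling overhead)
KWH_RATE = 0.12         # electricity cost in $/kWh
FLOOR_TILES = 2         # raised-floor tiles consumed
TILE_COST_YR = 3_000    # fully burdened $/tile/year

power_cooling_yr = POWER_KW * PUE * 24 * 365 * KWH_RATE
floor_yr = FLOOR_TILES * TILE_COST_YR

print(f"Annual power + cooling:  ${power_cooling_yr:,.0f}")
print(f"Annual floor space:      ${floor_yr:,.0f}")
print(f"Annual environmentals:   ${power_cooling_yr + floor_yr:,.0f}")
```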

The Pillars of Storage

Having an established measurement framework makes it easier to evaluate the potential value of the latest “hot” storage initiatives.  It also helps IT leaders balance their staff’s ongoing desire to work with the latest, “not-to-be-left-behind” technologies against their fiduciary responsibilities to the business.  This is especially important amid all the analyst and industry expert hype surrounding the cloud, solid state (e.g., RAM, SSD, or Storage Class Memory (SCM)), software-defined storage, and converged or hyper-converged infrastructure.  Without the kind of leadership rigor and objective measurement described above, our initial reaction may be to double down in these areas to ensure we understand the bleeding-edge technologies and are not left behind.  But there are two pitfalls with this approach that can prevent you from achieving the superior TCO you desire:

  • The “bleeding edge” is labeled as such because of the excessive financial resources needed to stay on it.
  • There are plenty of examples of “shiny object syndrome” where the latest, greatest, shiny object fails (e.g., Betamax vs. VHS, proprietary vs. open standards, or NOR vs. NAND flash).

The real secret to success is cutting through the storage hype and finding out which approach will actually deliver industry-leading TCO.  That requires IT leaders to measure and manage with a rigorous efficiency and effectiveness framework focused on superior value across the foundational pillars of storage TCO.

Hopefully, much like Jack Nicholson’s “aha” (or “oh, no”) moment a few minutes after blurting out, “You can’t handle the truth!” to Tom Cruise in the climactic courtroom scene, this post will open your eyes to true storage TCO, and to how to get there: by diligently measuring every aspect of what you hope to manage in your storage environment.

About Ed Garver

Ed Garver is Technical Sales Director at INFINIDAT.  He has been involved in IT and storage technologies for over 30 years.  He has held various leadership positions at Fortune 500 companies like Verizon, EMC and IBM.