Flash Prices are Dropping. Now What?

Unless you spent the last month in a cave on a remote island near Antarctica, you probably saw the abundance of articles predicting flash price drops. Chris Mellor’s article at The Register is a good example. The obvious question is “Now what?” Will the world move to the ‘all-flash data center’ that some vendors have been calling the only storage solution moving forward?

To answer that question we must first take a look at the wider trends driving the industry. To do that, let’s add a few more dimensions into the discussion:

The Movement to a Data-Centric Architecture

Charlie Giancarlo from Pure Storage wrote an excellent blog about the architectural shift from “big compute” with large, legacy applications, to small, transient/stateless compute (e.g. containers) with big data lakes behind them. I especially liked his point that “we don’t run IT out of ‘Server Centers’ or ’Networking Centers’ – they’re called ‘Data Centers’ for a reason.”

So basically, what Charlie tells us is:

  • You think your data is big now – it’s about to get a LOT bigger.
  • You have multiple needs for your data (block for existing apps, shared storage for modern apps), and you want to avoid managing multiple products.
  • The solution is to buy multiple, expensive AFAs from Pure and you’re good to go.

I think there’s a hole in that logic. The blog argues that we need efficient ways of managing multiple petabytes (PBs) of data, yet it tells customers to increase their cost per PB and buy multiple products. That contradicts the very goal it sets out to achieve!

We at INFINIDAT believe the only way to reduce storage costs and drive higher performance is through innovation. That’s why we’ve invested so many engineering years into developing our learning algorithm – the one proven across our worldwide installed base of over 3.7 exabytes to serve all hot data from DRAM and warm data from flash. With DRAM latency measured in nanoseconds (thanks to its direct connection to the CPU) versus flash media’s microseconds, we have removed most of the latency from reads and writes alike, while at the same time disrupting the price tag customers pay for Tier 1 storage.
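To make the tiering idea concrete, here is a minimal sketch of a two-tier read path in Python – hot blocks served from a DRAM cache, warm blocks fetched from flash. It uses a simple LRU promotion policy purely for illustration; it is not INFINIDAT’s actual learning algorithm.

```python
from collections import OrderedDict

class TieredReadPath:
    """Toy two-tier read path: hot blocks in DRAM, warm blocks on flash.

    Illustrative only - a stand-in for the far more sophisticated learned
    promotion policies used by real arrays.
    """

    def __init__(self, dram_capacity_blocks):
        self.dram = OrderedDict()           # block_id -> data, kept in LRU order
        self.capacity = dram_capacity_blocks

    def read(self, block_id, flash_read):
        if block_id in self.dram:           # DRAM hit: nanosecond-class latency
            self.dram.move_to_end(block_id)
            return self.dram[block_id]
        data = flash_read(block_id)         # miss: fall back to microsecond-class flash
        self.dram[block_id] = data          # promote the block into DRAM
        if len(self.dram) > self.capacity:
            self.dram.popitem(last=False)   # evict the least recently used block
        return data
```

The point of the sketch is the asymmetry: every read served from the DRAM tier avoids a flash round trip entirely, which is where the latency win comes from.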

Where’s the Innovation?

Another point Charlie makes that I agree with is “New technologies are enabling us to potentially re-wire the infrastructure completely.” Well, if innovation in learning algorithms is allowing us to do more with less hardware (a single GPU or TPU processing more data than dozens of more expensive CPUs), shouldn’t storage – the most expensive component in the data-centric age – benefit from these innovations and use learning technology to reduce its cost?

I think Charlie may have confused what Pure offers with what customers are asking for:

  • Improve performance by driving down latency. [Most applications are still very latency sensitive due to their design.]
  • Grow capacity [exponentially] without increasing management overhead.
  • Drive down the cost of storage, since it became the #1 line item almost a decade ago.

While Pure and other AFAs definitely help on the performance front, they are bound by the response time of flash media. They also point to NVMe-oF as the next step in that direction, since they can’t reduce the latency of flash media any further. Talking about a fast fabric to reach media that is still slow compared to DRAM is like buying a sports car and sitting in traffic with the rest of the sedans and minivans: yes, you have the potential to drive faster, but the medium (the road) is slowing you down, just like flash media does!

Data lakes = Business Enabler + Increased Risk

Another important aspect of the data-centric infrastructure is that it aims to consolidate the organization’s view of what it knows about each customer. That means more data, centralized in one place, and the risks of both data breach and data loss increase accordingly.

It highlights a question that has been discussed quite often: to safely put so many eggs in one basket, maybe we should first ask, “How padded is this basket, in case someone drops it?”

Well, in the case of a data lake, the basket is the storage array. We need to address a significant question – “What should you ask your vendor before you put several PB of your data in their array?” When I sold my first storage array in 2006 with a whopping 1.2 terabytes(!) of usable capacity, the dual-controller architecture was a great fit – the risk was proportional to the level of reliability.

The risk you incur as an IT operation can roughly be summed up in the formula:

Risk = Storage Size (a.k.a. Blast Radius) / Resiliency Level

Since then we’ve grown our storage footprint (and blast radius) to multiple terabytes and then multiple petabytes, yet the dual-controller architecture is still with us, offering no increase in resiliency for this new, data-centric age. How many customers trust a dual-controller architecture to store a petabyte of data reliably? The math behind that risk will only get worse…
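Plugging illustrative numbers into the formula shows how far the math has shifted. The resiliency score below is an arbitrary placeholder – only the ratio between the two eras matters.

```python
# Relative risk = blast radius / resiliency level (illustrative numbers only).
def relative_risk(capacity_tb, resiliency_score):
    return capacity_tb / resiliency_score

dual_controller = 2.0                          # same architecture then and now (placeholder score)
print(relative_risk(1.2, dual_controller))     # 2006: 1.2 TB array -> 0.6
print(relative_risk(1000.0, dual_controller))  # today: 1 PB array  -> 500.0
```

Same resiliency, roughly a thousand times the data – and roughly a thousand times the risk.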

At INFINIDAT, we believe consolidation is an enabler of the data-centric architecture, operational efficiency, and the cost reduction customers demand. To achieve that goal we designed a triple-redundant array in which every component critical to system availability – from the power configuration all the way to the data layout – is triple-redundant, allowing customers to securely deploy multi-petabyte arrays without fearing increased risk.

Privacy Regulations – Encryption for the Masses

Contrary to what Mark Zuckerberg likes to say, privacy is still the social norm, and regulators are making sure it stays that way. People often talk about GDPR as the privacy gold standard, but all over the world regulators have realized how much power corporations hold over citizens and are putting checks and balances in place to limit that power. From NYDFS in New York to HIPAA (USA) to the California data privacy legislation enacted earlier in 2018, they all share a common thread: corporations have a legal responsibility to protect the privacy of their customers.

Many of these regulations stipulate that if an organization can show it has acted to prevent private data from leaking, it will either be exempt from punishment or pay a much smaller fine. Many also explicitly call for data to be encrypted so that, if any data is stolen, individuals are not put at risk:

The communication to the data subject referred to in paragraph 1 shall not be required if any of the following conditions are met:

The controller has implemented appropriate technical and organizational protection measures, and those measures were applied to the personal data affected by the personal data breach, in particular, those that render the personal data unintelligible to any person who is not authorized to access it, such as encryption

Source: GDPR, Article 34

IT’s role is to implement tools that serve business needs: firewalls protect and control access, collaboration tools enable better project efficiency and faster time to market, and so on.
Now businesses have new requirements: comply with the new privacy regulations and minimize financial exposure to fines. To meet them, IT has been asked to implement encryption. There are many encryption tools out there, and just as many philosophies about where and how encryption should be implemented.

I believe that VMware’s VM encryption is the gold standard, as it solves most of the IT organization’s needs: it is simple from an operational perspective, integrates easily with existing tools, handles both legacy and new applications, and is granular enough. Wherever encryption lands, it will have a dramatic effect on the way data is stored, because data reduction technologies (deduplication, compression, pattern removal) can’t work on encrypted datasets.
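You can see that effect in a few lines of Python. The sketch below uses the pyca/cryptography package and AES-CTR purely as an example; the buffer contents and exact ratios are illustrative.

```python
import os
import zlib
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

plaintext = b"customer-record;" * 65536                    # ~1 MiB of highly repetitive data
encryptor = Cipher(algorithms.AES(os.urandom(32)),
                   modes.CTR(os.urandom(16))).encryptor()
ciphertext = encryptor.update(plaintext) + encryptor.finalize()

print(len(zlib.compress(plaintext)) / len(plaintext))      # ~0.003: cleartext compresses to almost nothing
print(len(zlib.compress(ciphertext)) / len(ciphertext))    # ~1.0:   ciphertext gets no reduction at all
```

The same goes for deduplication: two identical VMs encrypted with different keys (or different IVs) share no common blocks, so the array has nothing left to reduce.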

Moving encryption up the stack solves the problem of data encryption in flight without requiring re-engineering of the entire data center. However, it also affects AFAs, increasing their costs dramatically. Conservative estimates put the impact of encryption on AFAs at about 20% globally, but the real impact will vary depending on the granularity of the encryption solutions. VM encryption will have a higher impact than Oracle column-level encryption, but will also be much simpler from an operational perspective (thanks to policies).

The impact on AFA costs is the reason for another Pure Storage blog by Ben Woo, explaining their perspective on why encryption shouldn’t move up the stack.

Ben calls encryption at the application layer “…the least effective, both in terms of allocation of resources, and in terms of business efficiency.” I understand why this is scary for Pure Storage, as well as for other companies betting on the “All Flash Datacenter.” Implementing a computational process like encryption where you have the most CPUs available (the hypervisor), and where it is easiest to add more, is a logical choice. Ben also ignores the fact that encryption is no longer a CPU-intensive operation, thanks to the specialized AES instruction sets (AES-NI) that Intel and AMD developed to remove most of the overhead.
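It is easy to sanity-check that claim on your own hardware by timing bulk AES encryption – on any recent x86 CPU with AES-NI (or ARM with its crypto extensions), an OpenSSL-backed library will typically push several gigabytes per second per core. The numbers below are a rough sketch, not a benchmark.

```python
import os
import time
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

buf = os.urandom(64 * 1024 * 1024)                 # 64 MiB of sample data
enc = Cipher(algorithms.AES(os.urandom(32)),
             modes.CTR(os.urandom(16))).encryptor()

start = time.perf_counter()
enc.update(buf)                                    # bulk-encrypt in a single call
elapsed = time.perf_counter() - start
print(f"{len(buf) / elapsed / 1e9:.1f} GB/s")      # typically several GB/s with hardware AES support
```

Running `openssl speed -evp aes-256-ctr` tells the same story at the command line.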

Ben Woo says data should only be encrypted while traversing outside the WAN, not inside the WAN. I have two questions for Ben:

Have You Looked at the Numbers?

According to the 2017 Data Breach Investigations Report, over 1 billion (!) credentials were stolen from websites. Some of those credentials are (unfortunately) reused by the users in your organization as corporate credentials, giving hackers a way in straight through your firewall. If we acknowledge that the threat is already inside your firewall, are you still comfortable with data crossing the WAN in cleartext?

Isn’t the WAN Connected to the Cloud?

A shortage of cloud security skills has led to some major data breaches over the last year, making it even clearer that encrypting your data in the cloud is a good practice, not just a regulatory issue.

If your data sits in cleartext on premises, it is very challenging to encrypt it when it is replicated to the cloud. If your sensitive data is already encrypted, however, it remains protected over the WAN and in the cloud. Are you suggesting customers should implement three separate encryption solutions (storage + WAN + cloud) instead of one?

Encrypting at the application layer is “the least effective”? I disagree!

Conclusion

The price of flash storage is coming down; that’s no secret. But the price of HDDs continues to come down as well, and flash and HDDs are still roughly 10x apart in $/TB. Even a 4:1 data reduction capability only narrows that gap to 2.5x.

Source: Western Digital
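The back-of-the-envelope math is easy to check yourself. The $/TB figures below are placeholders, not quotes – only the 10x ratio and the 4:1 reduction assumption matter.

```python
# Hypothetical media prices, illustrative only.
flash_per_tb = 100.0                   # $/TB placeholder
hdd_per_tb = 10.0                      # $/TB placeholder -> ~10x gap on raw media
data_reduction = 4.0                   # assume 4:1 reduction on the flash tier

effective_flash_per_tb = flash_per_tb / data_reduction
print(effective_flash_per_tb / hdd_per_tb)   # 2.5 -> the gap shrinks, but doesn't close
```

At petabyte scale, that residual 2.5x is exactly the gap the next paragraph is talking about.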

For a smaller, 10-50 terabyte array, the price gap has a limited economic impact.
For a larger array of a petabyte or more, the price gap is huge! It is the difference between an efficient IT budget with room for innovation and an inefficient one that only focuses on “keeping the lights on.”

Getting performance from DRAM and capacity from disk is the winning strategy for the next decade. Data privacy and accelerated data growth are only two of the reasons why, especially for anyone who understands that an efficient, data-centric infrastructure is essential for a competitive business.

About Eran Brown
Eran Brown is the EMEA CTO at INFINIDAT.
Over the last 14 years, Eran has architected data center solutions across all layers – application, virtualization, networking and, most of all, storage. His prior roles span senior product management, systems engineering and consulting, working with companies in multiple verticals (financials, oil & gas, telecom, software, and web) and helping them plan, design and deploy scalable infrastructure to support their business applications.
