The Case for Flash-Optimized Storage Arrays
Capacity, performance and reliability are the top technical requirements of every CTO and CIO when vetting a storage solution. As capacities move into the multi-PB range, a system architecture that can handle all existing business application needs, and also accommodate the build-out of a cloud infrastructure, is becoming a more pressing requirement. Maintaining multiple storage platforms for different solutions does not scale and adds management overhead. The hybrid approach can accommodate most, if not all, application needs (in terms of availability, IOPS and bandwidth), yet does so at a far more competitive and sustainable price point than all-flash arrays.
Enterprise storage solutions must meet a variety of technical and business requirements.
Many storage vendors in the market tend to select specific storage characteristics to focus on when delivering their solution. All-flash array vendors, for instance, tend to focus almost exclusively on performance. When these vendors speak with customers, they typically frame the conversation around the customer's most performance-intensive workloads (which generally represent roughly 5–10% of the customer's total data footprint). They do this primarily because flash is so expensive compared to other storage media that customers simply can't afford to have all of their data residing on flash storage. So the all-flash data center, while desirable on paper, will not be a viable option for the vast majority of organizations in the foreseeable future: it would simply "break" their IT budgets, since 90% or more of their aggregate capacity has much more modest performance requirements. In addition, write endurance and wear remain issues for flash.
There are, of course, alternative solutions that provide more economical capacity, although they often lack the more sophisticated storage services required to manage large volumes of dynamic data effectively, relying instead on servers, converged building blocks or other layered software products to fill those gaps. These solutions can certainly provide capacity, but on their own they cannot meet all of the high performance and availability requirements businesses demand. Reliability tends to be the most prominent weakness within this class of solutions.
As the figure above shows, solutions spanning all capacity ranges need the highest levels of reliability and availability possible. Many contemporary all-flash solutions have yet to achieve levels of reliability that would classify them as truly "enterprise ready." Recall that reliability in arrays built upon legacy controller architectures (as most all-flash arrays are) requires large numbers of discrete, independent drives in a RAID configuration to ensure data protection. For all-flash arrays to achieve similar levels of reliability, they must rely upon some creative engineering, since increasing the number of discrete SSD media devices and leveraging RAID is not economically viable given the cost of flash media. One common practice is to reduce the amount of usable capacity on the expensive SSDs in order to keep additional copies of data for reliability purposes. In an intelligent hybrid array, by contrast, the SSD serves as just another layer of cache: it introduces no such overhead and does not limit capacity, yielding more effective capacity for the same investment in SSD media. Also, although SSD is expensive, DRAM is even more so, and as such all-flash arrays tend to be equipped with modest amounts of DRAM as a means of reducing overall hardware costs. Hybrid arrays utilize more DRAM and often drive better performance as a result; the cost of DRAM per unit of usable capacity is much smaller in a flash-optimized array, owing to its higher aggregate capacity and more efficient use of each storage tier.
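The capacity trade-off described above can be made concrete with some rough arithmetic. The following is an illustrative sketch only; the copy count, protection overhead and capacities are assumed example figures, not vendor specifications:

```python
def usable_capacity_afa(raw_ssd_tb, copies=3):
    """All-flash array that keeps extra copies of data on SSD for
    reliability: usable capacity shrinks by the copy factor.
    (3-way copies is an assumed example, not a vendor figure.)"""
    return raw_ssd_tb / copies

def usable_capacity_hybrid(raw_hdd_tb, hdd_overhead=0.2):
    """Hybrid array: SSD acts purely as a cache layer, so it does not
    reduce usable capacity; HDDs carry the data, here with an assumed
    20% protection overhead."""
    return raw_hdd_tb * (1 - hdd_overhead)

# Illustrative comparison.
print(usable_capacity_afa(100))        # 100 TB raw SSD -> ~33.3 TB usable
print(usable_capacity_hybrid(1000))    # 1 PB raw HDD -> ~800 TB usable
```

The point of the sketch is structural: when SSD holds redundant data copies, every TB of flash bought yields a fraction of a TB usable, whereas when SSD is a cache in front of HDD, the full flash investment goes toward performance rather than capacity overhead.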
In addition, many all-flash arrays are still based upon legacy dual-controller architectures. These architectures have long been the Achilles' heel of the storage industry, with performance problems during failovers and numerous other issues rooted in poorly designed upgrade practices (which very often fail). They also suffer from lengthy disk rebuild times (and thus greater vulnerability to multiple concurrent component failures), and typically deliver no more than five nines of uptime/availability.
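"Five nines" translates into a concrete downtime budget. A quick sketch of the standard arithmetic:

```python
MINUTES_PER_YEAR = 365.25 * 24 * 60  # ~525,960 minutes

def downtime_minutes_per_year(nines):
    """Allowed downtime per year for an availability of N nines
    (e.g. 5 nines = 99.999% uptime)."""
    unavailability = 10 ** (-nines)
    return MINUTES_PER_YEAR * unavailability

print(round(downtime_minutes_per_year(5), 2))  # ~5.26 minutes/year
print(round(downtime_minutes_per_year(6), 2))  # ~0.53 minutes/year
```

Each additional nine cuts the permitted downtime by a factor of ten, which is why the gap between a five-nines and a six-nines architecture matters in practice.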
They also rely heavily upon a media type that is, by its very nature, statistically less reliable than spinning disk. Consequently, they must utilize complex, multi-point mechanisms to raise reliability and utilization to levels that approach, but do not reach, those of HDDs.
Most all-flash arrays (AFAs) are only optimized for performance, unlike INFINIDAT’s flash-optimized hybrid arrays which are built to deliver value, density, and RAS, in addition to performance.
When CIOs examine their storage and its associated budget, decisions are usually predicated upon what sacrifices the business is willing to make regarding capacity, performance and reliability. Maximizing all three usually requires an unreasonably high level of investment. This is certainly the case with the all-flash array approach, especially in the capacity/density arena, where the storage is still too expensive (even with realistically attainable deduplication and compression ratios) to be a viable option. Reliability, availability and serviceability (RAS) continue to be downplayed even where the capacity cost can be addressed. Performance may be great, but it is overkill if only 5% of your data requires that level of service.
In addition to all the technical and operational reasons why all-flash arrays are not the right solution for storing all the data in the enterprise, storage analysts also observe that the cost per GB of flash capacity is still nearly 10x that of spinning media, and it is predicted to remain at least 3–5x for several years to come. Given this, a hybrid solution that can deliver all-flash-array-class performance, but with much higher capacity and reliability, and at a reasonable, more predictable price, is much better suited to the enterprise storage landscape of today and tomorrow. This is why INFINIDAT has resonated so strongly with many top C-level executives who have a good understanding of the needs of their data and how it supports their businesses.
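The cost argument above can be sketched numerically. Assuming the roughly 10x price multiple quoted in the text and a workload where only about 5% of data is performance-hot (the $/GB baseline below is purely illustrative):

```python
def blended_cost_per_gb(hot_fraction, hdd_cost=0.03, flash_multiple=10):
    """Blended $/GB for a tiered layout: the hot fraction is priced at
    flash rates, the remainder at HDD rates. The $0.03/GB HDD baseline
    is an illustrative assumption, not a market quote."""
    flash_cost = hdd_cost * flash_multiple
    return hot_fraction * flash_cost + (1 - hot_fraction) * hdd_cost

all_flash = blended_cost_per_gb(1.0)   # everything on flash
hybrid = blended_cost_per_gb(0.05)     # 5% hot data on flash, rest on HDD
print(f"all-flash: ${all_flash:.4f}/GB, hybrid: ${hybrid:.4f}/GB")
print(f"hybrid advantage: ~{all_flash / hybrid:.1f}x cheaper per GB")
```

Under these assumptions the hybrid blend comes out several times cheaper per GB than all-flash, which is the core of the economic case: the multiple shrinks as flash prices fall, but remains substantial as long as flash carries a 3–10x premium.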
Figure 5 – The cost of flash is expected to exceed the cost of spinning media for years to come.1
So, the one indisputable fact we can derive from all this is that, in order to provide the massive amounts of capacity required by the enterprise at a sustainable cost, the storage industry still very much needs spinning disks. HDDs are considerably denser than other media (DRAM and NAND), have far superior serviceability and wear characteristics, and are considerably less expensive per unit of addressable capacity. While spinning disks are well known to be slower than DRAM and NAND, the technological breakthroughs at INFINIDAT have enabled us to hide that slowness from the end user by creating a superior flash-optimized hybrid array. This is why it is so important to have a skilled, veteran engineering team that can solve these difficult design problems with highly optimized software running on off-the-shelf components.
This piece was excerpted from the executive brief titled “INFINIDAT Flash-Optimized Hybrid Array vs. Current All-Flash Arrays (AFAs).” Download the executive brief to learn where and how these solutions are best utilized in the data center.
1 David Floyer, Enterprise Flash vs. HDD Projections 2012-2026, Wikibon, http://wikibon.com/enteprise-flash-vs-hdd-forecasts-2012-2026/ (August 11, 2015)