Infinidat Blog

The Myths and Realities of Software-Defined Storage

Software-Defined Storage (SDS) is not a new technology; it is a go-to-market strategy that decouples storage hardware from storage software purchases. Claimed SDS benefits include more agility, more scalability, more performance, more fault tolerance, a standardized hardware infrastructure that creates economies of scale, faster hardware refresh cycles, no vendor lock-ins, and lower purchase prices. However, the relative lack of success for SDS, outside of a handful of very specific use cases, highlights the fact that obtaining these promised benefits is not without risks and costs for both vendors and users.

Executive Summary
  • SDS is not going to displace on-premises primary storage arrays supporting mission-critical applications for the foreseeable future
  • Hyper-Converged Infrastructures (HCIs) will continue to account for the bulk of on-premises SDS usage for the foreseeable future
  • Whether SDS becomes a significant share of the secondary storage market remains to be determined

Let's walk through how I arrived at these and other conclusions...

Analysis
  • User interest in SDS is driven by the desire to build software-defined infrastructures and to lower storage acquisition and ownership costs
  • Vendor interest is driven by the need to offer a cost-effective alternative to the cloud, create an appealing hybrid cloud infrastructure vision, and protect existing storage array revenue streams as well as customers' investments in their technology

All storage arrays sold today are software-defined because their backend media is virtualized and abstracted into LUNs, file systems, or object stores. However, that is sidestepping the real questions about where and how SDS is being purchased and used. 

The three primary use cases for SDS are:

  1. As a foundational technology within hyper-converged integrated systems (HCIS)
  2. At the edge of IoT infrastructures
  3. Building inexpensive scale-out secondary storage arrays

To understand why SDS is not displacing storage arrays in mission-critical environments, we need to understand the tension that exists between on-premises storage vendors' business needs and the market realities.

On-Premises Vendor Business Needs
  • Vendors have no choice but to compete against HCIs, SDS, cloud providers, and other storage array vendors
  • The need to protect existing storage array revenue streams and their customers' investments in these arrays has forced established storage vendors to sell SDS defensively. In other words, the user has to motivate their incumbent storage vendors to bid their SDS solutions
  • Established storage vendors do not want to introduce new storage architectures into the market that will put them in the position of having to recompete for their customers
Market Realities
  • Businesses are rethinking their acquisition procedures to take full advantage of CapEx, capacity-on-demand (COD), and consumption-based pricing models
  • Many privately held SDS vendors lack the credibility, marketing, sales bandwidth, ecosystem support, and testing capabilities needed to displace arrays supporting mission-critical applications in petabyte-scale data centers
  • CIOs, storage administrators, and operations directors are not going to risk their ability to meet mission-critical SLAs when most data growth is in unstructured data that can be stored on inexpensive on-premises storage or in the cloud
End User Realities
  • Deploying storage arrays is a safer alternative to deploying standalone SDS solutions
  • Large enterprises are evaluating the operational and efficiency advantages of building more cloudlike infrastructures 

For InfiniBox customers and petabyte-scale prospects ready to deploy InfiniBox arrays, there is no need to read further: InfiniBox delivers many of the promised benefits of SDS without the risks inherent in becoming your own systems integrator.

For those who need or want a deeper understanding of SDS pros and cons versus storage arrays, let's begin by acknowledging two things: storage failures are longer lasting, and therefore more painful and expensive to recover from, than compute or networking failures; and interest in SDS is driven primarily by the desire to lower storage acquisition and ownership costs. If an application aborts, it's restarted from a checkpoint. Recovering from a Software-Defined Networking (SDN) failure can take more time than recovering and restarting an aborted application, but it's still short compared to recovering from an SDS loss of data integrity, which can mean hours to days of hobbled operations while data is restored, or declaring a disaster and moving operations to a disaster recovery site.

Observations
  • Established storage vendors will continue selling SDS defensively until they can sell SDS without damaging their revenue streams, or competition from on-premises and cloud providers forces them to sell SDS proactively
  • Current storage capacity growth forecasts mean that discounting SDS by more than 10% to 15% below the equivalent array solution is a money loser for established storage vendors: the more they sell, the more they lose (illustrated in the sketch below)
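
To see why, here is a purely illustrative back-of-the-envelope sketch; every figure is a hypothetical assumption, not any vendor's actual economics:

    # All figures are hypothetical assumptions chosen only to illustrate the argument.
    array_price = 1_000_000      # street price of the equivalent array solution ($)
    hw_cost = 400_000            # commodity hardware the customer now sources itself ($)
    support_rd_burden = 480_000  # vendor's per-sale share of support, QA, and R&D ($),
                                 # which barely shrinks when only software is sold

    array_profit = array_price - hw_cost - support_rd_burden  # $120,000 on the array sale

    for discount in (0.10, 0.15, 0.20):
        solution_price = array_price * (1 - discount)  # total SDS solution price
        sw_revenue = solution_price - hw_cost          # what is left for the software vendor
        sds_profit = sw_revenue - support_rd_burden
        print(f"{discount:.0%} below the array price -> SDS profit ${sds_profit:+,.0f}")

Under these assumptions the crossover from profit to loss sits between a 10% and a 15% discount, which is the range cited above.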
User Guidance
  • Installing SDS requires that you become your own systems integrator. You are buying the hardware, managing software and firmware upgrades across all nodes, and integrating the hardware with licensed software
  • Put a dollar value on additional support and downtime costs to ensure that SDS savings do not evaporate over time or with a single outage
  • Do not buy SDS hardware and software from different vendors unless there is a compelling financial or strategic reason to do so, because doing so can:
    • Create finger-pointing situations when problems arise
    • Force established storage vendors to try to claw back lost hardware margins by raising software prices
Claimed SDS Advantages

Agility – Moving data between arrays or data centers is an inherently time-consuming, resource-intensive, error-prone process. Moving bits from one location to another is real work, and the reason why data generally stays where it was first stored even when it's no longer cost-effective. One of InfiniBox's advantages is that it moves data between DRAM, Flash, and HDDs in real time without human intervention to optimize price/performance, and it does so without consuming SAN bandwidth.

Scalability – For "SDS-washed" array software instantiations this claim is a non sequitur, because it is the same software that runs the vendors' arrays. For scale-out SDS instantiations the claim is mostly true, but it rests on many assumptions, among the more prominent of which are:

  • Scale-out SDS has lower acquisition and ownership costs than storage arrays
  • Performance will scale linearly and provide a consistent performance experience with high node counts
  • Intermixing asymmetric and different-generation nodes in the same cluster will not cause problems. This is almost inevitable if the organization pursues a just-in-time upgrade strategy, given the short market lives of servers, HDDs, and SSDs.
  • Scale-up arrays cannot satisfy the organization's performance and capacity needs.
  • SDS vendors, without the gross margin of hardware sales, can build and sustain an effective support organization and fund R&D at a high enough level to maintain large compatibility support matrices and product competitiveness.

While InfiniBox cannot compete on price against small SDS configurations, at petabyte-scale the cost comparisons between scale-out SDS and InfiniBox demand closer analysis. InfiniBox's scale-up integrated mixed-media architecture lowers InfiniBox controller and media costs vs a scale-out SDS solution that relies heavily on Flash to deliver performance. InfiniBox arrays also reduce total ownership costs by automating data placement and being complemented by a broad set of productivity tools that includes InfiniVerse, InfiniMetrics, and Host Power Tools.

Performance – More performance sounds great, but without a baseline reference it's meaningless; furthermore, if it's 30% or more beyond the organization's worst-case forecasted need over the planned service life of the storage solution, it doesn't make a difference. If maintaining consistent performance requires frequent or episodic tuning, the higher ownership costs can easily wash away any initial acquisition savings. InfiniBox's DRAM-centric data flows, coalesced writes, and QoS capabilities essentially eliminate the creation of hotspots while scaling to up to 4 PB of usable capacity and up to 10 PB of effective capacity when best-practice configuration techniques are followed.
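
To make the headroom point concrete, here is a minimal sketch; the workload and growth figures are hypothetical assumptions, not measurements:

    # Hypothetical workload figures; only the arithmetic matters.
    current_peak_iops = 200_000   # assumed measured peak workload
    worst_case_growth = 0.25      # assumed worst-case annual growth rate
    service_life_years = 5

    worst_case_need = current_peak_iops * (1 + worst_case_growth) ** service_life_years
    useful_ceiling = worst_case_need * 1.30   # the ~30% buffer discussed above

    print(f"Worst-case need in year {service_life_years}: {worst_case_need:,.0f} IOPS")
    print(f"Capability beyond ~{useful_ceiling:,.0f} IOPS buys nothing over this service life")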

More Fault Tolerance – Since scale-out architectures generally have more components than scale-up arrays of comparable performance and capacity, they need more fault tolerance to match a scale-up array's usable availability. Moving beyond this simple, common-sense observation, however, there are more fundamental and important points. More specifically:

  • HDD and SSD failures account for the bulk of hardware failures in a scale-up or scale-out array. Hence it is the MTBFs of the HDDs and SSDs, the number of failures that can be tolerated while still guaranteeing data integrity, and rebuild times that most influence an array's mean time between data losses (a standard approximation is sketched after this list). InfiniBox HDD groups can tolerate two HDD failures before losing the ability to guarantee data integrity, and its rebuild times are measured in minutes, not hours or days
  • Human errors and software bugs account for approximately 80% of all downtime
    • InfiniBox's self-management capabilities and superb ergonomics reduce the opportunities for human error
    • InfiniBox's low frequency of patch activity reflects high software quality
    • InfiniBox software is market validated by our customers
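
The rebuild-time point can be made quantitative with a standard back-of-the-envelope mean-time-to-data-loss (MTTDL) approximation for a protection group that tolerates two concurrent drive failures. The drive count and MTBF below are hypothetical inputs, not InfiniBox specifications:

    # Standard dual-parity MTTDL approximation; all inputs are hypothetical.
    def mttdl_two_failures(n_drives: int, mtbf_h: float, mttr_h: float) -> float:
        """Approximate mean time to data loss (hours) when two concurrent
        failures are survivable: MTBF^3 / (N * (N-1) * (N-2) * MTTR^2)."""
        return mtbf_h ** 3 / (n_drives * (n_drives - 1) * (n_drives - 2) * mttr_h ** 2)

    MTBF = 1_200_000          # assumed drive MTBF in hours
    N = 60                    # assumed drives in the protection group
    HOURS_PER_YEAR = 24 * 365

    for mttr in (24.0, 0.5):  # day-long rebuild vs. minutes-scale rebuild
        years = mttdl_two_failures(N, MTBF, mttr) / HOURS_PER_YEAR
        print(f"rebuild time {mttr:>4} h -> MTTDL ~ {years:,.0f} years")

Because rebuild time enters the approximation squared, cutting rebuilds from a day to minutes improves MTTDL by three or more orders of magnitude, which is why rebuild times can matter more than headline MTBF figures.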

Standardized Hardware Infrastructure Creates Economies of Scale – This is a true but limited statement, because server margins are lower than storage array margins and there are limits to discounting whether you are buying servers or storage arrays. Stated differently, going beyond the purchase quantities that deliver the best possible prices does not result in even better prices. Since Infinidat's go-to-market strategy is focused on install base growth rather than protecting existing revenue streams, its use of price/performance as a value proposition greatly diminishes SDS's purported price advantage.
Because data tends to stay where it first landed, the flexibility of moving data between servers within a standardized infrastructure is of more academic than practical interest: moving data between arrays or storage nodes is a time-consuming, error-prone, high-impact process.

Faster Hardware Refresh Cycles – These are dictated by the vendor, not the user, unless the user is willing to run uncertified hardware. Asset management and internal qualification policies can also slow the adoption of new hardware platforms. Many enterprises will not buy a new storage solution until it has been market validated, meaning that it has been actively used in the market for 6 to 12 months. If there is an internal qualification procedure, that can add 3 more months before deployment into production, with the result that users know they will be 9 to 15 months behind the newest technology. Infinidat, as a matter of historical record, has done two hardware refreshes since launching InfiniBox in early 2013, and on May 7, 2019, guaranteed that its newest InfiniBox is both NVMe-oF and Storage Class Memory (SCM) ready.

No Vendor Lock-Ins – This is an unobtainable fantasy, because once you buy hardware or software from a vendor, you create a relationship that requires resources to break. The lock-ins can be weak or strong; technical, financial, procedural, or legal; visible or invisible; even emotional. Array replication technologies have historically been the most difficult to break because changing them can break so many things. The right business decision is to accept lock-ins where they provide needed capabilities or create competitive advantage. That said, if we acknowledge that NFS is perhaps the least sticky protocol in ubiquitous use, InfiniBox's NFS implementation can help users minimize lock-ins and potentially delay or avoid converting applications to S3 or other RESTful object protocols, because it can store billions of files without performance imploding.

Lower Purchase Price – Without lower ownership, upgrade, and refresh costs, a low-cost storage solution can easily become an expensive one. Scale-up arrays usually just add media and enclosures when adding capacity, whereas SDS scale-out arrays usually add whole nodes, complete with HBAs, microprocessors, DRAM, power supplies, blowers, and media; the scale-out cost of goods (COGS) is therefore generally going to be higher than the COGS of an equivalent scale-up architecture such as InfiniBox. This means that acquisition and maintenance costs are going to be set by the vendor's go-to-market strategy, competitive pressures, and the user's negotiating skills – ditto for upgrade costs.
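
A minimal sketch of that incremental COGS argument, with every price a hypothetical assumption:

    # Illustrative incremental-capacity comparison; every price is an assumption.
    media_cost_per_tb = 20.0   # assumed raw media cost ($/TB)
    node_overhead = 8_000.0    # assumed per-node cost of CPU, DRAM, HBAs, PSUs, blowers ($)
    tb_per_node = 200.0        # assumed usable capacity each new SDS node adds (TB)

    scale_up_per_tb = media_cost_per_tb                    # adds media and enclosures only
    scale_out_per_tb = media_cost_per_tb + node_overhead / tb_per_node

    print(f"scale-up incremental cost : ${scale_up_per_tb:.0f}/TB")
    print(f"scale-out incremental cost: ${scale_out_per_tb:.0f}/TB")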

However, ownership costs are determined by more than acquisition, maintenance, power, and cooling costs. Personnel, backup/restore, downtime, and lost opportunity costs (i.e., the lack of agility) are also significant contributors, and these are heavily influenced by array automation, scripting capabilities, GUI ergonomics, data flows, code quality, and fault tolerance – all areas of InfiniBox excellence. While downtime costs are always hard to calculate, it helps to reframe the question as: "How many extra downtime events need to occur before the inexpensive storage solution becomes a false saving?" If the answer is "one," it's probably not the best decision.
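
That reframing reduces to one line of arithmetic; the dollar figures here are hypothetical:

    import math

    # Hypothetical figures for the break-even question above.
    acquisition_savings = 150_000        # assumed savings from the cheaper solution ($)
    cost_per_downtime_event = 200_000    # assumed fully loaded cost of one outage ($)

    extra_events = math.ceil(acquisition_savings / cost_per_downtime_event)
    print(f"Savings are erased after {extra_events} extra outage(s)")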

Bottom Line Conclusions
  • SDS is NOT going to displace on-premises primary storage arrays supporting mission-critical applications for the foreseeable future
  • HCIs will continue to account for the bulk of on-premises SDS usage for the foreseeable future
  • Whether SDS becomes a significant share of the secondary storage market remains to be determined
  • SDS instantiations of on-premises storage arrays, running on shared cloud infrastructures, will become a common hybrid cloud deployment model because it solves the technology translation problem at the infrastructure level that plagues current hybrid cloud deployment models
  • Deployments at the edge will remain a popular SDS use case because of its ability to improve data availability while reducing environmental footprints and costs
  • InfiniBox, InfiniGuard, and Neutrix Cloud are delivering many of the values SDS promises, without the accompanying risks, and with satisfaction so high that 99% of Infinidat customers say they would recommend Infinidat storage to their peers
About Stanley Zaffos

Stanley is the Sr. VP of Product Marketing at Infinidat.
Prior to joining Infinidat, he was a Research VP with Gartner focused on Infrastructure and Operations Management. His areas of expertise cover storage systems, emerging storage technologies, software-defined storage, hyper-converged infrastructure, and hybrid cloud infrastructure. He's worked with numerous clients to develop messaging and collateral that maximizes the impact of their product announcements and sales training, as well as helping to define roadmaps that ensure ongoing competitive advantage.