
Struggles and Frustrations of OpenStack Storage

OpenStack is a growing trend across the IT industry, offering clear advantages in standardization across vendors, accelerated application deployment, and increased efficiency, among the other benefits documented in the recently released OpenStack User Survey – in short, agility, flexibility, and direct TCO gains. We at INFINIDAT firmly believe in the power of OpenStack, and many of our potential clients ask us about OpenStack support. Those clients typically fall into three categories:

  1. Organizations that recognize the agility, flexibility, and cost savings possibilities OpenStack offers, but haven’t actually deployed an OpenStack environment in production
  2. Organizations that have OpenStack environments in production built on whitebox storage hardware – a build-it-yourself approach with a storage software layer such as Ceph or, for smaller deployments, LVM
  3. Organizations that have OpenStack environments in production which incorporate branded storage hardware from traditional and startup vendors

That third category of organizations we encounter is dramatically smaller than the others, and I think that says a lot about both prevailing client philosophies and the state of OpenStack capabilities among the storage vendor community. I’ll leave the philosophical battles aside to discuss over a beer at the OpenStack Summit in Austin later this month. For now, let’s focus on a few key elements anyone looking for OpenStack persistent storage should be thinking about: simplicity, efficiency, and resiliency – all with an eye toward TCO, of course, because most technology decisions are fundamentally economic decisions.

Looking at the approach taken by organizations in (or contemplating) category 2 – the DIY crowd – we absolutely appreciate the flexibility that comes with assembling a system from different functional components, and custom-built systems can certainly be incredibly powerful. However, there are costs and risks that come with configuring software, testing systems, designing complex data protection schemes, and manually rack’n’stacking potentially dozens of quasi-independent servers – and these costs add up quickly at multi-petabyte scale. Such indirect costs have a huge negative impact on the TCO of large-scale OpenStack solutions, and we’ve seen plenty of organizations miss those factors in their budgetary estimates.
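
To make that configuration surface concrete, here is a minimal sketch of wiring a Ceph RBD backend into Cinder – assuming an already-running Ceph cluster with a "volumes" pool and a "cinder" cephx user (all names here are illustrative, not a reference deployment):

```ini
# /etc/cinder/cinder.conf – minimal Ceph RBD backend (illustrative values)
[DEFAULT]
enabled_backends = ceph

[ceph]
volume_driver = cinder.volume.drivers.rbd.RBDDriver
volume_backend_name = ceph
rbd_pool = volumes                       # pre-created Ceph pool for Cinder volumes
rbd_ceph_conf = /etc/ceph/ceph.conf      # cluster config, distributed to every node
rbd_user = cinder                        # cephx client with rwx rights on the pool
rbd_secret_uuid = <libvirt-secret-uuid>  # secret registered with libvirt on computes
```

And this snippet is just the Cinder side – standing up and tuning the Ceph cluster behind it (monitors, OSDs, CRUSH rules, replication settings) is where most of the DIY effort actually goes.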


Meanwhile, for organizations in or considering category 3 – OpenStack storage built on branded vendor hardware – many of the more traditional storage vendors offer solutions that sound simple: traditional arrays now updated with OpenStack Cinder support, for example. It’s only when you start mapping out all the components required to get Cinder volumes out of the box that you discover the box needs a separate management server to translate to OpenStack, the features usable through the vendor’s OpenStack driver are significantly restricted, the authentication model is different, and so on. Suddenly, what sounded like the easier option ends up requiring twice the effort and significantly more cost – you have to manage both the traditional array’s feature set and the OpenStack environment.
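
For contrast, this is what the consumption side should look like once a backend is properly wired up – a short sketch using standard openstack CLI commands (the type, volume, and server names are hypothetical):

```bash
# Map a volume type to the backend name defined in cinder.conf
openstack volume type create --property volume_backend_name=ceph ceph-backed

# Create a 100 GB volume of that type and attach it to an instance
openstack volume create --type ceph-backed --size 100 demo-vol
openstack server add volume demo-server demo-vol
```

Getting to that three-command experience is the whole point – the question is how much hidden machinery each vendor puts between you and it.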

Crossing both of these categories is a common thread of efficiency and resiliency challenges. On the DIY side, expectations for scale-out software-defined storage have long included several replicas of data to reach anything resembling four- or five-nines resiliency; a forced choice between fast or cheap’n’deep; and a pretty high controller-to-capacity ratio. On the branded vendor side, fortunately, fewer replicas are needed for decent resiliency, but the other factors remain – and of course there is a massive price uplift that comes from putting a traditional vendor’s logo on the box. In today’s world, the costs of raw storage capacity and compute still add up dramatically, workload profiles can vary minute by minute, and four- to five-nines availability is unacceptable at scale. An ideal solution for today’s large-scale OpenStack environments needs to challenge the traditional expectations at both ends of the spectrum.
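
A quick back-of-envelope comparison shows why the replica count matters so much at scale – a sketch assuming three-way replication on the DIY side versus a roughly 1.25x dual-parity overhead, which are typical but not universal figures:

```python
def raw_pb_needed(usable_pb: float, overhead_factor: float) -> float:
    """Raw capacity required for a given usable capacity under a protection scheme."""
    return usable_pb * overhead_factor

# Three-way replication stores every byte three times.
print(raw_pb_needed(1.0, 3.0))   # 3.0 PB raw per usable PB (200% overhead)

# A dual-parity / erasure-coded layout typically runs closer to ~1.25x.
print(raw_pb_needed(1.0, 1.25))  # 1.25 PB raw per usable PB (~25% overhead)
```

At multi-petabyte scale, the difference between those two lines is often the entire TCO argument.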

INFINIDAT believes the lack of storage solutions that address all of these challenges for large-scale OpenStack environments is a major reason why so many organizations are:

  • stuck dreaming of the value of OpenStack – category 1
  • struggling with scaling DIY solutions – category 2
  • just plain frustrated with the clash of old vendor architectures trying to adapt to new platforms – category 3

And we think we have a better way to move forward. For more about INFINIDAT’s storage solutions for OpenStack, please read our white paper, and stay tuned during the Summit next week for a post from my colleague Gregory about our specific integrations and the value they deliver. If you’re heading to OpenStack Summit in Austin, we’d love to meet you and learn more about your OpenStack storage challenges – comment on this post, talk to your local INFINIDAT rep, or tweet to us @INFINIDAT to book a private meeting.

About Erik Kaulberg

Erik Kaulberg is Vice President Strategy and Alliances at Infinidat, leading Infinidat’s overall strategic development, key alliance partnerships including VMware, and special projects. He has broad expertise in enterprise storage and frequently engages key customers, partners, and analysts. Erik previously ran worldwide enterprise storage strategy and business development for IBM, after he sold all-flash array innovator Texas Memory Systems to the company.