OpenStack and INFINIDAT Unified Storage
Everything is big in Texas. So is the OpenStack Summit happening this week in Austin. So many opportunities to meet old – and make new – friends and partners. It’s going to be a very busy week for us in Austin – and here we are, ready for action!
The proliferation of OpenStack creates an incredible growth opportunity for our unified, highly reliable, dense, high-performing storage solution. In last week’s post, my colleague Erik explained some of the challenges that clients deploying OpenStack think about as they consider storage solutions. This post discusses how INFINIDAT’s unified solution can serve the storage needs of an OpenStack cluster.
Each OpenStack cluster includes multiple components. I will list only the subset relevant to this post:
- Compute (Nova): lets the user create and manage virtual servers
- Block Storage (Cinder): persistent block storage for running instances
- Object Storage (Swift): stores and retrieves unstructured data objects through HTTP-based APIs
- Image Service (Glance): discovery, registration, and delivery services for disk and server images
- File Storage (Manila): file share management
INFINIDAT unified storage within an OpenStack cluster – catering to all storage needs
Let’s cover each one of these components and explain how INFINIDAT plays a role within it:
Block storage (Cinder)
Since OpenStack Havana back in 2013, INFINIDAT has provided a Cinder driver for InfiniBox which supports both FC and iSCSI connectivity. This driver provides all the necessary functionality, including volume management, snapshot management, migration, and more. In addition to INFINIDAT’s Cinder driver, we offer a CLI utility to simplify the configuration. Our OpenStack CLI is a command line tool for adding the InfiniBox system and volume backend types to Cinder’s configuration file. Instead of manually editing OpenStack configuration files, the administrator just needs to run one simple command:
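The command writes a backend stanza into /etc/cinder/cinder.conf on the administrator’s behalf. The result looks roughly like the following sketch; the backend name, address, credentials, and pool are placeholders, and option names may differ between driver releases:

```
# Illustrative cinder.conf fragment -- values are placeholders
[DEFAULT]
enabled_backends = infinibox-pool1

[infinibox-pool1]
volume_driver = cinder.volume.drivers.infinidat.InfiniboxVolumeDriver
volume_backend_name = infinibox-pool1
san_ip = infinibox.example.com
san_login = admin
san_password = secret
infinidat_pool_name = pool1
infinidat_storage_protocol = fc
```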
This interface simplifies volume driver configuration, which would otherwise require manual edits to the Cinder configuration file.
Object storage (Swift)
InfiniBox offers a high-performance (up to 1M IOPS), highly reliable (99.99999% uptime) storage solution with built-in data redundancy. Swift can be configured to use InfiniBox volumes, connected over iSCSI or FC to the Storage Nodes and formatted with XFS. Swift normally relies on internal data replication for availability, which wastes disk space, while the erasure-coding alternative consumes extra CPU cycles. An InfiniBox backend reduces the demand for extra disk space and allows less powerful CPUs to be used, while providing higher data availability. In addition, InfiniBox offers data encryption for security and SSD caching for increased performance out of the box, as opposed to bolt-on functionality that has to be manually architected in common Swift deployments. Deeper integration with Swift is planned in our future releases.
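On a storage node, an InfiniBox LUN attached over iSCSI or FC is formatted with XFS and mounted under Swift’s devices directory. A minimal sketch of the mount entry, with a placeholder device path:

```
# /etc/fstab on a Swift storage node (device path is a placeholder)
/dev/mapper/infinibox-lun1  /srv/node/infinibox-lun1  xfs  noatime,nodiratime,logbufs=8  0 0
```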
Compute (Nova)
Every VM running on a Nova compute server has a root volume and potentially additional ephemeral drives. In many installations, a local disk of the physical compute node stores these ephemeral drives in the /var/lib/nova/instances directory. In this scenario, live migration of a running VM may take time and impact performance, and by default such migration is not supported at all.
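For reference, the ephemeral-disk location is controlled by a standard nova.conf option; the path shown is the usual default:

```
[DEFAULT]
# Directory where instance ephemeral disks are stored (the default).
# Live migration expects this path to be on shared storage unless
# block migration is explicitly requested.
instances_path = /var/lib/nova/instances
```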
Some of our customers boot virtual machines with root disks on iSCSI or FC-connected Cinder volumes.
However, it’s also possible to use an InfiniBox NFS filesystem for this purpose. In this case you can treat compute hosts as stateless: as long as no instances are running on a compute host, you can take it offline or wipe it completely without any effect on the rest of your environment, which simplifies the maintenance of compute hosts. If a compute node fails, instances are usually easy to recover. The capacity of this shared drive can be increased seamlessly, and InfiniBox’s highly scalable NAS capability provides optimal performance. It also simplifies live migration of VMs between Nova compute nodes – as long as they’re using the same shared InfiniBox storage.
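As a sketch, each compute node would mount the shared InfiniBox NFS export over the instances directory; the server name and export path below are placeholders:

```
# /etc/fstab on each Nova compute node (server and export path are placeholders)
infinibox-nas.example.com:/nova_instances  /var/lib/nova/instances  nfs  rw,hard,vers=3  0 0
```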
InfiniBox APIs can be used to further automate provisioning or resizing of the instance filesystems as part of your OpenStack automation environment. For example, an NFS filesystem for Nova instances can be provisioned and exported on InfiniBox with a couple of API calls:
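As a minimal sketch, the two provisioning calls can be composed as a pair of REST request bodies. The endpoint paths and field names here are assumptions for illustration only; consult the InfiniBox API reference for the exact schema of your release:

```python
# Sketch of the two REST payloads used to provision and export an NFS
# filesystem on InfiniBox. Endpoint paths and field names are assumptions;
# consult the InfiniBox API reference for the exact schema.

def filesystem_request(name, pool_id, size_bytes):
    """Body for POST /api/rest/filesystems (assumed endpoint)."""
    return {"name": name, "pool_id": pool_id, "size": size_bytes}

def export_request(filesystem_id, export_path):
    """Body for POST /api/rest/exports (assumed endpoint)."""
    return {"filesystem_id": filesystem_id, "export_path": export_path}

# Create a 10 TiB filesystem in pool 1, then export it over NFS
fs = filesystem_request("nova_instances", pool_id=1, size_bytes=10 * 2**40)
exp = export_request(filesystem_id=42, export_path="/nova_instances")
print(fs["name"], exp["export_path"])
```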
Image service (Glance)
In the basic Glance setup, the local disk of the OpenStack controller node stores images in the /var/lib/glance/images directory. However, an InfiniBox NFS mount can be used to store images instead. This makes the capacity available to Glance scalable and removes the need to back up or replicate the images separately. The highly scalable InfiniBox NAS capability takes performance concerns out of the picture, and provides benefits similar to those outlined above for Nova.
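With the NFS mount in place, Glance itself needs no special driver – its standard filesystem store simply points at the mounted directory. A sketch of the relevant glance-api.conf fragment, assuming /var/lib/glance/images is the NFS mount point:

```
# glance-api.conf -- the filesystem store writes to a directory that is
# NFS-mounted from InfiniBox (mount point is a placeholder)
[glance_store]
stores = file
default_store = file
filesystem_store_datadir = /var/lib/glance/images
```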
INFINIDAT offers an easy-to-understand REST API and Python SDK, which can be used to control everything within our system. This provides tremendous flexibility to our customers who are interested in integrating InfiniBox management and monitoring within their environment using Ansible, Chef, Puppet and the like.
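As a hedged sketch of what such integration can look like, a configuration-management tool could wrap REST calls in a small helper. The host name and endpoint path below are assumptions, not the documented API:

```python
import json
from urllib.request import Request

# Minimal helper for composing InfiniBox REST requests. Endpoint paths
# are assumptions -- consult the API documentation for your release.
class InfiniBoxClient:
    def __init__(self, host):
        self.base = "https://{}/api/rest".format(host)

    def request(self, method, path, body=None):
        data = json.dumps(body).encode() if body is not None else None
        return Request(self.base + path, data=data, method=method,
                       headers={"Content-Type": "application/json"})

client = InfiniBoxClient("infinibox.example.com")
req = client.request("GET", "/system")  # e.g. query system information
print(req.full_url)
```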
OpenStack is growing in multiple environments, and we see more and more projects coming up under the OpenStack umbrella. Part of the INFINIDAT mission is to help clients with all of their storage needs. As we think about data at scale and storing the future, the fundamental requirements clients need are management simplicity, reliability, performance, scale and, most of all, better economics. We built our InfiniBox solution to ensure that clients building data center architectures of all kinds get these benefits, and OpenStack environments are certainly high on our priority list. And, as we evolve, our philosophy is to include all future features with no extra licenses required.
If you’re interested in learning more about our product and integration with OpenStack, contact me at the Summit or after it at gtouretsky At infinidat Dot com or @gregnsk.