Infinidat Blog

Infinidat CSI Driver

Two years ago, I wrote a blog post about containers and our first integration with Kubernetes. Back then, we released a dynamic persistent volume provisioner for InfiniBox. This solution was quickly adopted by some of our advanced customers and helped us gain more experience with stateful Kubernetes workloads.

In containerized environments, the pace of change is incredible. We see more demand for scale, flexibility, and standardization. We’ve decided that the best route to offer new functionality for our customers in the Kubernetes ecosystem is through CSI (Container Storage Interface) support.
 

CSI is a specification for orchestrating control plane operations on file and block storage; the specification is described here. The interface is supported not only by Kubernetes, but also by other containerized ecosystems, such as Cloud Foundry and Mesos.

It’s important to understand that CSI is an evolving standard, and new functionality is added with every Kubernetes release. To gauge the maturity of each feature, Kubernetes defines three stages of API readiness for production:

●    Alpha: recommended only for short-lived test clusters, due to an increased risk of bugs and lack of long-term support
●    Beta: recommended only for non-business-critical use, because of the potential for incompatible changes in subsequent releases
●    General Availability (GA): stable and suitable for production use
    
The table below can help gauge the readiness of a specific CSI feature for production use, based on the Kubernetes release version:
 

For example, the Kubernetes 1.17 release has CSI provisioning capabilities at the GA level. Several additional features, such as clones, snapshots, and volume expansion, are at the beta level and should become generally available soon.

More and more of our customers are asking about CSI support, and we’re glad to announce General Availability of our first InfiniBox CSI Driver release today. With this release, our customers can:

●    Manage Kubernetes Persistent Volumes using iSCSI, NFS, or Fibre Channel connectivity
●    Control multiple InfiniBox arrays within a single Kubernetes cluster
●    Manage InfiniBox snapshots and restore from snapshots
●    Clone PVs
●    Extend (grow) PVs
●    Manage raw block PVs
●    Import PVs created outside of our CSI driver
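
As a sketch of how dynamic provisioning typically looks with a CSI driver, an administrator defines a StorageClass that points at the driver. The provisioner name and parameters below are illustrative assumptions, not the driver’s actual values — consult the InfiniBox CSI driver documentation for the exact syntax:

```yaml
# Illustrative StorageClass for CSI-based dynamic provisioning.
# Provisioner name and parameters are assumptions for this sketch.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: ibox-nfs
provisioner: infinibox-csi-driver   # hypothetical driver name
parameters:
  storage_protocol: nfs             # hypothetical parameter
allowVolumeExpansion: true          # enables PV extend (grow) via the claim
reclaimPolicy: Delete
```

Any PersistentVolumeClaim that references this StorageClass will then be provisioned dynamically on the InfiniBox.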

Our CSI driver supports all access modes available with Kubernetes persistent volumes:
●    ReadWriteOnce - the volume can be mounted as read-write by a single node
●    ReadOnlyMany - the volume can be mounted read-only by many nodes
●    ReadWriteMany - the volume can be mounted as read-write by many nodes
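
For example, an NFS-backed claim might request shared read-write access across many nodes. A minimal sketch, assuming a StorageClass named ibox-nfs exists (the name is hypothetical):

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: shared-data
spec:
  accessModes:
    - ReadWriteMany            # shared read-write, typical for NFS-backed volumes
  resources:
    requests:
      storage: 10Gi
  storageClassName: ibox-nfs   # hypothetical StorageClass name
```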

We love to work with customers at scale. They can create hundreds of thousands of persistent volumes on a single InfiniBox, meeting the requirements of very large Kubernetes deployments.

  $ kubectl get pvc -n ibox | wc -l
  100124
  $ kubectl get pv | wc -l
  100124

A sample Kubernetes cluster with over 100k PVs on a single InfiniBox

One of the important features is the ability to take over an existing PV and then control it like any other “natively” created volume. Customers can use it to migrate PVs created with our old Dynamic Volumes provisioner, or PVs from legacy storage systems, into their new InfiniBox array. It can also be used to recover an environment at a DR site, leveraging InfiniBox replication. And it can be used in conjunction with our Neutrix Cloud to migrate containerized workloads between Kubernetes clusters on-premises and in the public clouds, including managed offerings such as Amazon Elastic Kubernetes Service (EKS).
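
Importing an existing volume follows the standard CSI static-provisioning pattern: an administrator creates a PersistentVolume object that references the pre-existing volume, then binds a claim to it. A rough sketch — the driver name and volumeHandle format are assumptions; see the driver documentation for the exact import procedure:

```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: imported-vol
spec:
  capacity:
    storage: 100Gi
  accessModes:
    - ReadWriteOnce
  csi:
    driver: infinibox-csi-driver   # hypothetical driver name
    volumeHandle: "12345"          # hypothetical ID of the existing InfiniBox volume
  persistentVolumeReclaimPolicy: Retain   # keep the data if the PV is released
```

Once bound, the imported volume is managed like any dynamically provisioned PV.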

Customers can leverage all standard InfiniBox functionality to enrich their Kubernetes experience. For example, QoS policies can be applied to all PVs associated with a Kubernetes StorageClass to ensure proper performance for stateful applications. InfiniBox real-time performance metrics, as well as historical performance information, can be tracked using the InfiniMetrics and InfiniVerse tools.

We see customers using different distributions of Kubernetes. Some run open-source Kubernetes, while others choose commercially supported distributions such as Red Hat OpenShift, VMware Tanzu Kubernetes Grid Integrated Edition (formerly known as Pivotal Container Service and VMware Enterprise PKS), Docker Kubernetes Service, or Google Anthos. Users can choose between a Helm chart and an Operator to deploy and manage the InfiniBox CSI driver.

While we’re just announcing the first release of the InfiniBox CSI driver, it is already in use by some of our valued beta customers. A Senior Systems Engineer from a major multinational software development company in the US had this to say: “Our testing looks great. Everything is fully functional. Thank you for working with us on this! I’m very excited to get my hands on the final product in Operator form.” An application engineer from a multinational IT and electronics company based in Japan noted, “It’s better and simpler than [a CSI driver from a competitor].” “The Infinidat CSI Driver helps us in our efforts to bring our infrastructure and development teams closer together by merging the great features Infinidat has to offer with the ease of use of this driver,” said a Technology Architect for Unix, Storage and Data Center at a global consulting organization with 18.5PB of InfiniBox capacity across three locations.

We’re also getting recognition from the analyst community; for example, see the recent GigaOm Radar for Data Storage for Kubernetes report.

I’m excited about our new offering for this fast-growing ecosystem. I’d love to talk about our new capabilities, whether at KubeCon or elsewhere, or via Twitter @gregnsk. Happy containers!

 

About Gregory Touretsky

Gregory Touretsky (@gregnsk) is a Senior Director, Product Management at INFINIDAT. He drives the company’s roadmap around NAS, cloud and containers topics. Before that Gregory was a Solutions Architect with Intel, focused on distributed computing and storage solutions, data sharing and the cloud. He has over twenty years of practical experience with distributed computing and storage. Gregory has an M.S. in Computer Science from Novosibirsk State Technical University and an MBA from Tel-Aviv University.