Escaping Public Cloud Lock-in with Fast, Reliable Cloud-adjacent Storage
My 8-year-old son overheard me talking to someone about cloud storage. He asked me, “How can one really store anything in the clouds? Aren’t they just a bunch of droplets, and if you put something there, wouldn’t it fall down to Earth?”
In computing clouds, it is actually the opposite – once you put your data in the cloud, it usually stays there. And by “there,” I mean “with that specific cloud provider you originally selected.” One of the main reasons for such cloud vendor lock-in is the prohibitive costs of data transfer between different public cloud providers.
Would it be possible to offer an alternative and make it easier for customers to transition between different cloud offerings? Wouldn’t that allow greater freedom of choice and the ability to leverage the best of all worlds?
What if someone could place reliable storage outside of the public cloud and keep full control over their data, while still benefiting from the compute elasticity of the cloud? Would it ease transitions between clouds as business needs change?
One of the main gating factors usually raised against such environments is the performance impact of higher network latency. But is it really that bad?
INFINIDAT is exploring these questions because our clients are interested in hybrid cloud strategies and are painfully aware of the tradeoffs of centralizing with individual cloud providers. To test our thesis, INFINIDAT installed an InfiniBox storage system in a colo facility adjacent to the major public cloud regions. We enabled a Direct Connect link to AWS public cloud and ExpressRoute to Microsoft Azure, as shown:
Block storage is exposed via iSCSI and file storage via NFS to AWS VPC and Azure VNet environments.
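From a cloud instance, connecting to such cloud-adjacent storage looks like ordinary iSCSI and NFS setup. The following command sketch illustrates the idea; all IP addresses, device names, and mount points are hypothetical placeholders, not our actual configuration:

```shell
# Discover and log in to the iSCSI target from a cloud instance
# (10.0.1.10 is an example portal address on the private link):
sudo iscsiadm -m discovery -t sendtargets -p 10.0.1.10
sudo iscsiadm -m node --login

# Create and mount an XFS file system on the new iSCSI LUN
# (the device name depends on the instance):
sudo mkfs.xfs /dev/sdb
sudo mkdir -p /mnt/infinibox-block
sudo mount /dev/sdb /mnt/infinibox-block

# Mount an NFS export from the storage system
# (10.0.1.11:/export/fs1 is an example export path):
sudo mkdir -p /mnt/infinibox-nfs
sudo mount -t nfs 10.0.1.11:/export/fs1 /mnt/infinibox-nfs
```

Once the volume and export are mounted, cloud workloads use them like any other local block device or NFS share.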
With this setup, we compared the performance of the native block and file offerings of the major public cloud providers against the performance available from a cloud-adjacent InfiniBox, using the standard vdbench utility and the following infrastructure:
- INFINIDAT storage: InfiniBox F2260, running InfiniBox 3.0 software. For block storage comparisons, XFS file systems were created on top of native InfiniBox iSCSI volumes. For file storage comparisons, native InfiniBox NFS storage was used.
- AWS components:
- client: m4.10xlarge instance, running Red Hat Enterprise Linux 7.2
- block storage: 400 GB EBS Provisioned IOPS SSD volume with an XFS file system
- file storage: EFS file system
- network: 500 Mbps Direct Connect
- Azure components:
- client: DS5_v2 instance, running Red Hat Enterprise Linux 7.2
- block storage: 200 GB Premium SSD (5000 IOPS) with read caching and an XFS file system
- file storage: Azure Files (SMB) file share
- network: 2 Gbps (redundant) ExpressRoute
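The comparisons above can be sketched with vdbench parameter files along these lines. Device paths, anchor directories, transfer sizes, and run durations here are illustrative placeholders, not the exact parameters we used:

```
* Block test against a raw device (device path is an example)
sd=sd1,lun=/dev/sdb,openflags=o_direct
wd=wd1,sd=sd1,xfersize=4k,rdpct=70,seekpct=100
rd=rd1,wd=wd1,iorate=max,elapsed=300,interval=5

* File test against an NFS mount (anchor path is an example)
fsd=fsd1,anchor=/mnt/infinibox-nfs/vdbench,depth=1,width=4,files=100,size=100m
fwd=fwd1,fsd=fsd1,operation=read,xfersize=4k,threads=8
rd=rd2,fwd=fwd1,fwdrate=max,format=yes,elapsed=300,interval=5
```

The same parameter file can be pointed at a native cloud volume or file service and at the cloud-adjacent storage, so both sides see an identical workload.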
Here’s what we found:
- Throughput between public cloud instances and InfiniBox was limited only by the Direct Connect and ExpressRoute links. We were able to fully utilize these links even with IPsec tunnels between our storage and the cloud virtual private networks.
- Latency for InfiniBox NFS access was far better than that of the native public cloud file services.
- For block access, latency was usually comparable to, and for some workloads significantly lower than, the native offerings – despite an extra 2 ms of network latency between the InfiniBox and the cloud servers.
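To put latency figures like these in context, per-I/O latency is simply the wall-clock time of each synchronous read. Here is a minimal Python sketch that times 4 KiB reads and reports average and p99 latency; it reads a local scratch file for illustration, whereas a real test would target the block device or NFS mount under evaluation (and this is a simplification, not the vdbench methodology above):

```python
import os
import statistics
import tempfile
import time

BLOCK = 4096     # 4 KiB, a common small-I/O transfer size
SAMPLES = 1000   # arbitrary sample count for illustration

# Create a scratch file to read from; a real test would open the
# device or file on the storage system being measured.
with tempfile.NamedTemporaryFile(delete=False) as f:
    f.write(os.urandom(BLOCK * SAMPLES))
    path = f.name

latencies_ms = []
fd = os.open(path, os.O_RDONLY)
try:
    for i in range(SAMPLES):
        os.lseek(fd, i * BLOCK, os.SEEK_SET)
        start = time.perf_counter()
        os.read(fd, BLOCK)
        latencies_ms.append((time.perf_counter() - start) * 1000)
finally:
    os.close(fd)
    os.unlink(path)

latencies_ms.sort()
print(f"avg: {statistics.mean(latencies_ms):.3f} ms")
print(f"p99: {latencies_ms[int(0.99 * len(latencies_ms))]:.3f} ms")
```

Averages alone can hide tail behavior, which is why percentile latencies matter when comparing storage over a network link.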
This performance is impressive. When you add in the ability to share data across several clouds, along with the economics, simplicity, and functionality of INFINIDAT storage, it becomes a compelling solution for just about any large enterprise considering public cloud environments.
Cloud storage reliability is another concern for some traditional enterprise customers – especially in light of recent Amazon, Google, and Microsoft Azure storage service disruptions. No solution is bulletproof, but INFINIDAT’s reliability record is impressive.
Our beta environment is available now, and we are interested in working with clients to demonstrate this value for their use cases. I am also happy to share actual numbers and discuss our experience – contact me at [email protected] for more details.