BURST TO THE… PRIVATE CLOUD?
For years we sized our infrastructure for the busiest time of the year, which meant low utilization most of the time. Virtualization has eased this problem, but it is far from resolved, which is why cloud bursting has been such a topic of interest.
No one wants to pay for infrastructure in advance, especially in times of financial uncertainty. At the same time, your stakeholders are frustrated when you don’t have enough resources to serve a spike in demand, and that same uncertainty makes demand harder to predict. Companies that successfully build their infrastructure with the ability to burst when they need to will spend a minimal amount up front and the optimal amount during their seasonal peak. If that’s the case, how is it that only a small percentage of companies do it successfully?
Before We Dive In
The root cause of this challenge is at the heart of solving it:
Traditionally, the only way to build IT infrastructure was to prepay for what you estimated you’d need for the near term, accepting that:
- Undersizing will delay new projects
- Oversizing will increase infrastructure costs and lower ROI
- Private cloud consumption models are cheaper, but lack the elasticity of the public cloud
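To make the oversizing penalty concrete, here is a minimal back-of-the-envelope sketch. Every number in it (capacities, the $/TB/month rate, the peak duration) is a made-up assumption for illustration, not real pricing:

```python
# Hypothetical sizing trade-off. All figures below are invented
# for illustration only; they are not real capacities or prices.

PEAK_TB = 500            # capacity needed during the seasonal peak (assumed)
BASELINE_TB = 200        # capacity needed the rest of the year (assumed)
COST_PER_TB_MONTH = 20   # assumed private-cloud cost, $/TB/month
PEAK_MONTHS = 2          # assumed months per year at peak demand

# Traditional model: prepay for the peak and carry it all year.
prepaid = PEAK_TB * COST_PER_TB_MONTH * 12

# Burstable model: pay each month for what you actually use.
burstable = (BASELINE_TB * COST_PER_TB_MONTH * (12 - PEAK_MONTHS)
             + PEAK_TB * COST_PER_TB_MONTH * PEAK_MONTHS)

print(f"Sized for peak:   ${prepaid:,}/year")
print(f"Pay for usage:    ${burstable:,}/year")
print(f"Overspend from sizing for peak: {prepaid / burstable - 1:.0%}")
```

Under these toy numbers, sizing for the peak costs twice as much per year as paying for actual usage; the point is the shape of the gap, not the specific figures.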
Let’s talk about the challenges of bursting to the public cloud:
Bursting to the cloud requires operations teams to maintain their SLAs and to meet compliance, governance and all other requirements. Mixing cloud infrastructures creates new challenges:
- How to get the data to the cloud, without disrupting the service?
- How to get the data out of the cloud when the peak is over?
Most cloud providers focus on tools to on-ramp workloads to the cloud, not to get workloads out of it.
- How to keep backups consistent when they are fed from multiple sources?
What happens if your backup from last week is on-premises and you now need to restore it to the public cloud?
- What is the egress cost of moving the data back on premises?
For large datasets this could impact the TCO of the entire project.
- What governance tools are required? Are they implemented in the public cloud?
- What security tools are required, as we increase the attack surface?
- How do you test inherently more complex DR solutions?
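On the egress point above, a quick estimate shows how fast the cost of moving data back on premises grows with dataset size. The per-GB rate below is a placeholder assumption, not any specific provider’s price list:

```python
# Back-of-the-envelope egress estimate. The per-GB rate is a
# placeholder assumption, not any specific provider's price.

EGRESS_RATE_PER_GB = 0.09  # assumed $/GB for data leaving the public cloud

def egress_cost(dataset_tb: float, rate_per_gb: float = EGRESS_RATE_PER_GB) -> float:
    """Rough cost of moving a dataset back on premises after the peak."""
    return dataset_tb * 1000 * rate_per_gb  # 1 TB ~ 1000 GB (decimal units)

for tb in (10, 100, 500):
    print(f"{tb:>4} TB -> ${egress_cost(tb):>9,.0f}")
```

At the assumed rate, a 500 TB dataset costs tens of thousands of dollars just to bring home, which is why egress can dominate the TCO of a large bursting project.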
Some of these operational challenges become simpler with microservices, which let customers run the same application in two places at once, but they introduce other challenges:
- How to guarantee data consistency across multiple platforms?
- Orchestration / automation tools vary between private and public clouds.
- How do you control what portion of your service runs in each location?
What is the implication of each such decision on the overall TCO?
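The last two questions above can be made concrete with a toy calculation: given a traffic split between locations and an assumed relative unit cost for each (the 30% public-cloud premium below is purely illustrative), the blended cost per unit of work follows directly:

```python
# Toy model of split-placement economics. The unit costs are invented
# for illustration; they are not measured or vendor figures.

UNIT_COST = {"private": 1.00, "public": 1.30}  # assumed relative cost per unit of work

def blended_cost(split: dict) -> float:
    """Blended cost per unit of work for a given traffic split.

    split maps location name -> fraction of the workload it serves.
    """
    assert abs(sum(split.values()) - 1.0) < 1e-9, "fractions must sum to 1"
    return sum(frac * UNIT_COST[loc] for loc, frac in split.items())

print(blended_cost({"private": 1.0, "public": 0.0}))  # all on premises
print(blended_cost({"private": 0.8, "public": 0.2}))  # burst 20% to public cloud
```

Every placement decision shifts the split, and with it the blended cost, so the orchestration question and the TCO question are really the same question.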
At every event at which I’ve spoken (back in the good old days of face-to-face events), I would ask enterprise IT managers the same three questions:
- Are you taking new applications to the cloud?
All hands would go up.
- Are you doing it because it’s saving you money?
No hands would go up, and people would look at me as if I’d lost my mind.
- Are you doing it to gain business agility?
All hands would go up again, and people would realize I hadn’t lost my marbles.
Unlike SMBs, enterprises enjoy economies of scale in their private clouds, so they end up paying a premium when going to the public cloud, known as the “cloud tax” (I will skip the discussion of how big that tax is; suffice it to say that every customer I asked estimated double-digit percentages).
So enterprises are faced with the following trade-off:
- Public cloud = more agile but costly.
- Private cloud = less agile but cost effective.
Since enterprises need both cost efficiency and agility they have three potential avenues:
- Make the public cloud more cost effective.
Until someone actually meets the tooth fairy in person, that avenue is not possible.
- Consume hybrid cloud services, leveraging the agility of the public cloud only when necessary, and therefore minimizing the additional costs.
This approach is usually limited by the operational challenges mentioned above.
- Make your private cloud burstable.
To do that, we need to go back to the root cause that started all of this.
Making Your Private Cloud More Burstable
We started from the root cause of our problem: the consumption model of the private cloud requires you to size in advance, which means you bear the risk. But just as the public cloud can absorb that risk by pooling demand across many customers, your private cloud vendors should offer you the same capability.
Compute is easy to scale once it is virtualized or containerized, and the network rarely needs to scale, so it is seldom the limiting factor. The limiting factor is the data / storage layer. Solving storage bursting not only addresses the “cloud tax” financial problem but also removes the technical issues of bursting to a public cloud.
There are storage vendors offering this burstable private cloud storage, but with large caveats:
- It is expensive, effectively bringing the cloud tax into your private cloud.
- It’s only applicable to a small percentage of your storage.
- It forces the customer into OpEx consumption models, which not every customer wants or can use.
- It relies on physically shipping new capacity when you grow, which adds operational overhead as well as the risk of delays.
These vendors don’t really de-risk the infrastructure investment; they roll the cost of that risk back onto the customer, so the problem remains unsolved.
A Truly Burstable Storage
Infinidat’s Elastic Pricing model removes these vendor limitations, allowing customers to burst up to 500% without having to ship in new capacity, and lets them use CapEx, OpEx or both simultaneously as they see fit, even within a single system. It reduces the cost of storage and lets customers pay only for what they use, avoiding the penalties of inaccurate capacity planning. Most importantly, it requires no compromises on performance, reliability or availability. Learn more.