Among the challenges of loading applications with large data sets into cloud-based workloads is the sheer cost of storing that data. Until recently, that cost was a fixed and immutable difficulty. I’ve said it before: the cost of storing the data is painful, but worse is the adjacent cost of egress. Say you’ve decided to re-home your application inside your own datacenter. The deletion or movement of that data out of the larger cloud providers carries a significant cost of its own. Certainly, that cost must be factored into the overall value of housing a workload, even temporarily, in the cloud. It has also drastically affected the decisions so many companies have made about migrating some of these workloads out of the cloud and back home.
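To make that egress cost concrete, here’s a back-of-the-envelope sketch. The per-GB rate below is a hypothetical placeholder I’ve assumed for illustration; actual egress pricing varies by provider, region, and tier, so check current price sheets.

```python
# Rough, back-of-the-envelope egress cost estimate.
# The rate used below is a hypothetical assumption, not a quoted price.

def egress_cost(dataset_tb: float, rate_per_gb: float) -> float:
    """Estimate the one-time cost of moving a dataset out of a cloud provider."""
    return dataset_tb * 1024 * rate_per_gb

# Example: repatriating a 500 TB dataset at an assumed $0.09/GB egress rate.
cost = egress_cost(500, 0.09)
print(f"Estimated egress cost: ${cost:,.0f}")  # → Estimated egress cost: $46,080
```

Even at modest per-GB rates, the bill scales linearly with dataset size, which is exactly why egress weighs so heavily on repatriation decisions.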
Recently, many of the major storage vendors have taken steps to mitigate these costs by leveraging what I refer to as “proximate,” or locationally close, datacenters. The idea is that a storage vendor stands up large storage architectures in a datacenter near the cloud provider’s own, with so little latency to the processing workloads housed there that the datasets can interact with the applications with minimal performance degradation, yet without being housed on the cloud provider’s far more costly storage. In ideal circumstances, the application functions at a similar level, or even with no impact, as it would were the entire application housed within the cloud provider. The benefits of the chosen storage platform come along as well: the platform’s own insights and tooling, plus a more opex-related model based on real usage, rather than building that storage overhead on-premises and purchasing an array on-site.
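A rough way to see why the “proximate” part matters: per-I/O latency directly caps achievable IOPS. The sketch below applies Little’s Law with illustrative numbers I’ve assumed (not vendor figures) to compare a nearby array against storage a full region away.

```python
# Toy model of how network round-trip time between cloud compute and an
# external storage array limits throughput. All numbers are illustrative
# assumptions, not measured or vendor-published figures.

def effective_iops(service_time_ms: float, network_rtt_ms: float,
                   queue_depth: int) -> float:
    """Little's Law: throughput = concurrency / latency per I/O."""
    total_latency_s = (service_time_ms + network_rtt_ms) / 1000
    return queue_depth / total_latency_s

# Assumed 0.5 ms array service time, queue depth of 32 outstanding I/Os.
proximate = effective_iops(0.5, 1.0, 32)   # nearby datacenter, ~1 ms RTT
distant = effective_iops(0.5, 20.0, 32)    # a region away, ~20 ms RTT
print(f"proximate: {proximate:,.0f} IOPS vs distant: {distant:,.0f} IOPS")
```

The point of the model: a nearby facility at ~1 ms RTT stays in the same performance neighborhood as in-cloud storage, while the same array placed a region away would throttle the workload by an order of magnitude.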
Many existing operational tasks carry over unchanged. Take backup: a company can keep its existing backup/recovery/replication protocols by simply adding another peer to its existing backup architecture.
In addition, the benefits of a particular storage platform carry over. InfoSight on Nimble and 3PAR, for example, can provide the same level of data analytics and predictive analysis against data on the proximate array that it would against a non-hosted Nimble or 3PAR architecture.
HPE offers this technology as “Cloud Volumes” for Nimble and “CloudBank” for 3PAR, and was certainly the first to market with it. More recently we’ve seen similar solutions from Infinidat with their product “Neutrix”; from Pure Storage with ES2, the Evergreen Storage Service, which adds SLAs and covers both FlashArray X and FlashBlade; and from NetApp and Dell/EMC. A number of these solutions presented at #SFD16 and at Pure Accelerate just this past year.
These benefits go well beyond moving data off the traditional large cloud providers, or off on-premises arrays: the vendors bring their inherent, particular use-cases to the table, along with a more “leased” opex model that adds a SaaS-like approach to consuming the storage.
I anticipate this becoming a new paradigm and a service-based approach, benefiting customers in their choice to migrate to the cloud, and helping to make these architectures more robust and cost-effective for those customers. I also believe we’ll be seeing more approaches and options like these across the spectrum of major storage providers.