This is the first of a five-part series on HCI. I hope you enjoy it and that it prompts some discussion. Please feel free to reach out to me, either here or via Twitter @MBLeib.
HyperConverged Infrastructure is one of the hot new areas of technology in the IT datacenter space. But, like most areas of technology, there's the marketing term and then there's the definition. So, what is the definition? I've written about this before. There's no industry-wide meaning, but in my opinion, it involves the management of an architecture based on hardware, software, storage, and a hypervisor. As this audience is familiar with hypervisors, I'm happy to skip the "What is a Hypervisor" conversation, but note that in certain cases the architecture supports, for example, VMware, but not KVM, Hyper-V, or some flavor thereof. In other cases, the architectures will support all of them.
Let’s be clear here, though: I believe that the original concept of this category is all built around the hypervisor, hence the term HyperConverged.
I don't believe that a compute/storage environment without the hypervisor qualifies. So, for example, in the backup space, Rubrik and Cohesity, with no disrespect, are converged but not hyperconverged. And, believe me, there are many advantages in the converged arena as well, but by my definition, this is not that! I lay no claim, by the way, to the veracity of my definition.
The history of such an architecture goes back to the launch of the EMC/Cisco product, the vBlock. The idea when this was created was that it would be a complete package: a compute environment powered by VMware on Cisco servers (UCS), a switched environment powered by Cisco Nexus, and, of course, storage by EMC. The product was chosen based on sizing requirements: your compute engagement would be built around supporting the VMware load, and the storage would cover all your storage requirements. Seems easy, right? It wasn't. These were first-generation builds, and they required much in the way of fine tuning and technical support. At the same time, NetApp introduced their answer to this, the FlexPod. These were also generation-1 products, and though they were built quite robustly, they were tougher to manage than was ever intended.
Soon came the launch of products from Nutanix and SimpliVity. These were designed around industry-standard x86 hardware and, initially, a shared spinning-disk storage environment, with a virtual SAN architecture spread across nodes. This became a far more viable build, sized around three- or four-node x86 clusters. Scalability was initially difficult, though: once you outgrew your compute sizing or your storage, you would have to spend once again on a full cluster.
Alternative builds arrived on the scene from brands like Datrium and NetApp, as well as VMware's VxRail and others, built around the idea of using storage nodes and compute nodes as separate components. This gave the customer far friendlier ways in which to grow the architecture. No longer were you limited by fixed storage/compute ratios: if you needed more storage, you would place a storage node into the cluster, and if you needed compute, well, that would be easy as well. I do find these architectures compelling.
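To make the scaling difference concrete, here is a rough back-of-the-envelope sketch in Python. All node prices and capacities are hypothetical round numbers I've made up for illustration, not vendor figures; the point is only the shape of the cost curve when storage and compute can grow independently rather than in lockstep.

```python
import math

# Hypothetical, illustrative numbers only -- not vendor pricing.
HCI_NODE = {"cost": 30_000, "tb": 20, "vms": 100}    # classic HCI: storage + compute bundled
STORAGE_NODE = {"cost": 15_000, "tb": 20, "vms": 0}  # disaggregated storage-only node
COMPUTE_NODE = {"cost": 20_000, "tb": 0, "vms": 100} # disaggregated compute-only node

def classic_hci_cost(tb_needed, vms_needed):
    """Every expansion buys a full node, even if only one resource is short."""
    nodes = max(math.ceil(tb_needed / HCI_NODE["tb"]),
                math.ceil(vms_needed / HCI_NODE["vms"]))
    return nodes * HCI_NODE["cost"]

def disaggregated_cost(tb_needed, vms_needed):
    """Storage and compute nodes are purchased independently."""
    storage_nodes = math.ceil(tb_needed / STORAGE_NODE["tb"])
    compute_nodes = math.ceil(vms_needed / COMPUTE_NODE["vms"])
    return storage_nodes * STORAGE_NODE["cost"] + compute_nodes * COMPUTE_NODE["cost"]

# A storage-heavy workload: 100 TB of data but only 100 VMs.
print(classic_hci_cost(100, 100))     # five full HCI nodes, compute mostly idle
print(disaggregated_cost(100, 100))   # five storage nodes plus one compute node
```

With these made-up numbers, the storage-heavy workload forces the classic HCI buyer to pay for four nodes' worth of compute they don't need, while the disaggregated design adds only the resource that is actually short.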
As you can see, there are many approaches to converged architecture, each designed to solve a different set of inherent issues. The beauty of having so many options to draw from when pursuing this path is that your data center needs will likely be met by one of them.
I'd also like to stress, as has always been my opinion, that the idea of convergence may not actually be appropriate for some scenarios. Orchestration elements have become far more sophisticated, such that "pools" of resources can be provisioned from the whole using a variety of methods, depending on the hardware to be leveraged. Sizing, needs, scalability, and other particular variables can be used to achieve the same or similar goals. The traditional build of servers, fabric, storage, and network is still a viable option. Some needs can also be solved by a newer generation of the converged architectures available, as HPE has done with the fully managed Synergy architecture.
Before endeavoring to implement an approach, be sure that your goals are actually being met by the solutions you pursue.