Some thoughts on choosing Hyper-Converged Platform

Let me please start by saying that Software Defined Storage is not Hyper-Converged. However, in my opinion, storage is the key to HCI. Some of these architectures use what I've previously described as Software Defined Storage models, and some can use both internal and external storage as their approach to providing storage to the virtual environment.

The goal in the Hyper-Converged environment is not necessarily to create a small version of a more traditional converged architecture (a la vBlock or FlexPod); the idea is that the hypervisor is the glue that holds the virtual environment together. We're really not discussing bare-metal servers here, as you can with the larger converged architectures. On those, you can carve out some servers to perform bare-metal services, though they didn't really start out that way.

Hyper-converged really means an appliance-type architecture leveraging a pod model, in which each chassis provides some level of storage, compute, and even some networking, and relies on a specific hardware reference architecture. Today, most of these pods follow a scale-out style in which growing the architecture by adding more chassis adds storage, compute, and networking in predictable increments. More often than not, there is some reliance on consistency of architecture; by this cryptic statement I mean that there are some differences between chassis models, and quite often the addition of another chassis requires the same configuration as the existing ones.
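That scale-out model can be sketched in a few lines. The per-chassis figures below are made-up assumptions purely for illustration, not any vendor's specifications:

```python
# Hypothetical per-chassis resources in a pod-style HCI cluster.
CHASSIS = {"usable_tb": 20, "cores": 32, "net_gbps": 40}

def cluster_totals(n_chassis):
    """Each added chassis contributes a fixed slice of storage, compute, and network."""
    return {resource: amount * n_chassis for resource, amount in CHASSIS.items()}

# Growing from one chassis to four grows every resource in lockstep.
print(cluster_totals(4))  # -> {'usable_tb': 80, 'cores': 128, 'net_gbps': 160}
```

The point of the sketch: in this model you cannot add storage without also adding compute and networking, which is exactly the trade-off the pod approach makes.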

One big use case for HCI is the Remote Office/Branch Office (ROBO) environment, wherein the remote office(s) get built-in redundancy and replication of their virtual workloads back to the central office. These architectures can, and often do, deliver DR plus backup and recovery environments in a single shot, an element that many data centers lack even today.

Many of these HCI vendors hope to replace the entire existing environment in a rip-and-replace approach. Others (my preferred architectures) recognize that these environments grow, but that the customer shouldn't be required to abandon an existing environment, even with the inefficiencies inherent in non-hyper-converged architectures. I do think the migration to hyper-converged is not a bad idea in the long term, but it can be untenable without strong planning.

Remember that before choosing ANY platform, do your best to identify your potential issues, understand your goals, and ensure that the equipment you choose has the capacity to solve your problems both now and in the future.

A number of these systems are proprietary, and their goal is to lock you into their stack. Is this something you're willing to tolerate? I make no judgements. If "In for a Penny, In for a Pound" is acceptable to you, please go ahead.

In terms of considerations both now and into the future, there are some questions I recommend asking before deciding whose platform to choose, should you decide that moving to Hyper-Converged is something you're motivated to do for your environment.

Storage: What storage is used in your environment, and what relies on it? Do you support only block or file? What connectivity do you use, Fibre Channel or Ethernet? If Ethernet, what connection speed, and do you want that to stay the same? Where does object storage fit into your plans? What is the total size of your virtual environment today? Do you have some idea how large your environment will grow in terms of storage over the course of the next five years? How expandable/scalable is the storage delivered in the Hyper-Converged architecture? Nobody wants to run out of space before processor capacity. In some cases, external storage can be leveraged to add capacity outside the internal system; the key with that last point is that not all of these vendors support it.
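The five-year sizing question above is simple compound-growth arithmetic. The figures here (40 TB today, 25% annual growth) are illustrative assumptions, not a recommendation; plug in your own numbers:

```python
def projected_capacity_tb(current_tb, annual_growth_rate, years):
    """Compound today's usable footprint by a yearly growth rate."""
    return current_tb * (1 + annual_growth_rate) ** years

# Example: 40 TB used today, growing roughly 25% per year, five-year horizon.
need = projected_capacity_tb(40, 0.25, 5)
print(f"Projected usable capacity in 5 years: {need:.1f} TB")  # -> 122.1 TB
```

If the platform you're evaluating tops out below that projection, and it can't attach external storage, you have your answer before the purchase order is written.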

Compute: What workloads are you virtualizing? Are you using a specific hypervisor? Xen, Hyper-V, and VMware all have differing specifications in terms of processor compatibility. How many processors/cores are in use by your existing environment? While Intel keeps increasing the number of cores per processor, you must ensure that the new environment can support the old workloads plus the machines you plan to add.
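The core-count question can be sanity-checked with back-of-the-envelope math. The vCPU total, oversubscription ratio, and cores-per-node figure below are all hypothetical placeholders for your own inventory:

```python
import math

def nodes_needed(total_vcpus, vcpus_per_core, cores_per_node):
    """Estimate how many HCI nodes cover an existing vCPU footprint."""
    physical_cores = total_vcpus / vcpus_per_core  # allow for oversubscription
    return math.ceil(physical_cores / cores_per_node)

# Example: 800 vCPUs allocated today, 4:1 oversubscription, 32 cores per chassis.
print(nodes_needed(800, 4, 32))  # -> 7
```

Remember to leave headroom for node failure (N+1) and for the growth you projected on the storage side; the minimum that fits today is rarely the right number to buy.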

Network: Do you want to use anything specific in terms of Software Defined Networking, such as OpenFlow, ACI, or NSX? This deserves some consideration in your decision-making.

How about your hypervisor? Some of these hyper-converged systems support only one hypervisor. Is that acceptable to you?

Honestly, I could go on about this for a long time. Many of these questions are mission-critical and can make or break the architecture, or add a level of complexity that you certainly don't want. As an architect, my first question to a customer is: You want to go Hyper-Converged? Why? I'm not questioning the choice; I just want to ensure that the customer's goals are well established and well addressed, and that the best choice is made.

