This is the second in a series of five posts on the market space for HyperConverged Infrastructure.
To be clear, as the previous blog post outlined, there are many options in this space. The decision to go with a particular solution should not be made lightly: evaluate your needs, develop a clear understanding of what you hope to accomplish and why, and let that understanding guide your decision-making process.
Here’s a partial listing of current industry players in hyper-converged:
- HPE Simplivity
- Hitachi Vantara (formerly Hitachi Data Systems)
There are actually quite a few more, but these are among the biggest names as of today.
Each of these technologies lands toward the top of Gartner’s HCI Magic Quadrant. But the real question is: which one is right for you?
The questions an organization asks when determining which vendor(s) to pursue should be based not on placement in the Gartner Magic Quadrant, but on your organization’s direction, its requirements, and what is already in use. Don’t ignore the knowledge base of your technical staff. For example, I wouldn’t want to put a KVM-only hypervisor in the hands of a historically VMware-only staff without accounting for the learning curve and the mistakes that may follow. Are you using, or planning to use, containers? That carries its own considerations. What about a cloud element? Most architectures support cloud, but then the questions become which cloud platform and which applications you will be using.
One of the biggest variables to consider is, and always should be, backup/recovery/DR. Do you have a plan in place? Will your existing environment support this vendor’s approach? Have you evaluated the full spectrum of how this will be done? The elements that truly set one platform apart from another are how the storage in the environment handles tasks like replication, deduplication, redundancy, fault tolerance, encryption, and compression. In my mind, how these are handled, and how well the platform can integrate into your existing environment over the long term, must be considered.
I would also be quite concerned about how the security regulations your organization faces are addressed in the architecture of your choice. Will that affect the vendor you choose? It can, though it may not even be relevant to your situation.
I would also be concerned about the company’s track record. We assume Cisco, NetApp, or HPE will be around, as they’ve delivered support and solutions for decades. To be fair, that’s not the only method for evaluating a company, but it is a very reasonable concern when it comes to supportability for the life of the environment, future technology breakthroughs and enhancements, and perhaps the next purchase, should it be appropriate.
Now, my goal here is not to make recommendations, but to urge my readers to narrow down what is a daunting list and then evaluate the features and functions most relevant to their organization. If you undertake a true evaluation, my recommendation would be to research the depth of your company’s needs and which of them can actually be resolved by placing one of these devices, or a series of them, in your environment. It’s a decision that can last years and change the direction of how your virtual servers exist in your environment, and it shouldn’t be undertaken lightly. With that being said, HyperConverged infrastructure has been one of the biggest shifts in the market over the last few years.
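To illustrate one way such a narrowing exercise might be structured, here is a minimal weighted decision-matrix sketch in Python. The criteria are drawn from the considerations above (backup/DR, staff expertise, security and compliance fit, cloud support, and vendor track record), but every weight and vendor score below is a placeholder assumption, not a real product rating; substitute your own evaluation data.

```python
# A minimal sketch of a weighted decision matrix for narrowing an HCI shortlist.
# All weights, vendor names, and scores are illustrative placeholders.

# Weight each criterion by how much it matters to your organization (sums to 1.0).
weights = {
    "backup_dr": 0.30,        # backup/recovery/DR fit
    "staff_expertise": 0.20,  # match with existing hypervisor skill set
    "security": 0.20,         # regulatory/compliance fit
    "cloud_support": 0.15,    # support for your target cloud platform
    "track_record": 0.15,     # vendor longevity and support history
}

# Hypothetical 1-5 scores per vendor per criterion (placeholders).
scores = {
    "Vendor A": {"backup_dr": 4, "staff_expertise": 5, "security": 3,
                 "cloud_support": 4, "track_record": 5},
    "Vendor B": {"backup_dr": 5, "staff_expertise": 2, "security": 4,
                 "cloud_support": 5, "track_record": 3},
}

def weighted_score(vendor_scores, criterion_weights):
    """Sum each criterion score multiplied by its weight."""
    return sum(vendor_scores[c] * w for c, w in criterion_weights.items())

# Rank vendors from highest to lowest weighted score.
ranked = sorted(scores, key=lambda v: weighted_score(scores[v], weights),
                reverse=True)
for vendor in ranked:
    print(f"{vendor}: {weighted_score(scores[vendor], weights):.2f}")
```

A sketch like this won’t make the decision for you, but writing the weights down forces the team to agree on which of the considerations above actually matter most before any vendor demos begin.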