Should it really be “Hybrid Cloud Infrastructure”?
At NetApp Insight last week, Gabriel Chapman presented on the NetApp HCI offering, and it was not just another “Join me in celebrating the magic of Hyper-Converged” conversation. The conversation had a lot to do with the features and functionality of NetApp’s solution, but we also spent a fair amount of time discussing the reality of where and what HCI actually is. Oddly enough, I wrote a piece on this not too long ago: Words mean things… I also wrote, in the lead-up to this show, that I’d hoped to hear a deeper dive into the HyperConverged offering from NetApp. Suffice it to say, I was not disappointed.
Gabe made a compelling case for why, particularly as the space has matured, the “traditional” market definition of HCI is not really valid today. Let’s simply agree that HCI can describe any prescriptive or reference architecture designed to serve up workloads in a predictable way, wherein the functionality is defined and reliable.
In the more traditional model, a more pod-like design, when, for example, storage or compute begins to run low, a whole new device needs to be acquired. That makes for a large jump in outlay to achieve scalability. The model, while truly functional, doesn’t work in many cases. The approach promoted by NetApp allows storage or compute to be added individually, as needed.
A while back, I wrote about Datrium and their more modular approach. It seems that NetApp has taken this model into account and expanded it into a functional model that works within the NetApp ecosystem. The idea is that you can expand either of these functional categories as needed. And, using simple iSCSI initiators, you can even add storage to the environment with standard NetApp FAS devices, so many existing customers will be able to leverage their existing storage base to expand their storage platform without adding storage nodes to the environment. A rough sketch of what that looks like in practice follows below.
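To make that iSCSI expansion path a little more concrete, here’s a minimal sketch of what attaching an existing FAS iSCSI target to a node might look like from Linux using the standard open-iscsi tooling. The portal address and target IQN below are hypothetical placeholders, not values from the presentation:

```python
# Minimal sketch: discover and log in to a (hypothetical) NetApp FAS
# iSCSI target from a Linux host, using the standard open-iscsi CLI.
# Requires open-iscsi installed and root privileges.
import subprocess

PORTAL = "192.168.10.50:3260"                     # hypothetical FAS iSCSI data LIF
TARGET = "iqn.1992-08.com.netapp:sn.0123456789"   # hypothetical target IQN

def run(cmd):
    """Echo a command, run it, and return its stdout."""
    print("$", " ".join(cmd))
    return subprocess.run(cmd, check=True, capture_output=True, text=True).stdout

# SendTargets discovery against the FAS portal.
print(run(["iscsiadm", "-m", "discovery", "-t", "sendtargets", "-p", PORTAL]))

# Log in to the discovered target; the LUN then appears as a block device.
print(run(["iscsiadm", "-m", "node", "-T", TARGET, "-p", PORTAL, "--login"]))
```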
In all, I’d have to say it was an engaging and interesting conversation, well presented and insightful regarding the state of the HCI nation. Thanks, as always, to @SFoskett and the Tech Field Day crew for including me, and to Gabriel (@Bacon_Is_King) for the interesting presentation. Here’s a link to the full conversation.
I was wondering what your sentiment is toward the NetApp HCI entry-point level… Do you think they should try to lower it by replacing the SolidFire part of the solution with something like E-Series storage? Curious to have your thoughts on this.
Is my opinion valid enough to change corporate direction? Let’s leave that aside… I have a lot of confidence in the SolidFire approach, and considering that the AFA/SolidFire backbone is the foundation this was originally built on, I feel it’s still the best approach. It does make for a pricier solution, but let’s face it, most, if not all, HCI is based on all-flash today. I simply feel that the SolidFire approach is more robust and feature-rich, and that it matches the direction the industry is moving in terms of SSD versus spinning rust. Rather than building an alternative, I’d rather see that R&D put into enhancing the existing configuration along these lines.
But they could go with an all-flash E-Series… They would lose the sexiness of interacting with SolidFire via APIs that were really created with DevOps in mind… yet this would help us, partners, to compete against Nutanix, for example…
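For context on those APIs: the SolidFire Element API is JSON-RPC over HTTPS, which is a large part of what makes it friendly to DevOps tooling. Here’s a minimal sketch of a call; the cluster address, credentials, and API version are hypothetical placeholders, and real automation would more likely use the official SolidFire SDKs:

```python
# Minimal sketch of a SolidFire Element API call (JSON-RPC over HTTPS).
# MVIP, credentials, and API version below are hypothetical placeholders.
import requests

MVIP = "https://cluster.example.com"   # hypothetical management virtual IP
AUTH = ("admin", "password")           # hypothetical credentials

def element_call(method, params=None, version="9.0"):
    """POST a JSON-RPC request to the Element API endpoint and return the result."""
    payload = {"method": method, "params": params or {}, "id": 1}
    resp = requests.post(f"{MVIP}/json-rpc/{version}", json=payload,
                         auth=AUTH, verify=False)  # verify=False for lab use only
    resp.raise_for_status()
    return resp.json()["result"]

# Example: list the cluster's active volumes, the sort of call DevOps
# tooling makes constantly. Field names should be checked against the docs.
for vol in element_call("ListActiveVolumes").get("volumes", []):
    print(vol["volumeID"], vol["name"], vol["totalSize"])
```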
Honestly, if you cannot come up with a more compelling reason for this architecture to compete with Nutanix or SimpliVity, then the cost factor will never be enough. I believe one must have both types of solutions in their arsenal, depending on how the conversation goes. Of course, I’m in the channel, and no longer working at the vendor level.