Much like me over the past few months, X-IO dropped off the face of the earth. And, like me, they'd gone radio silent while working diligently to reinvent themselves. As I stated in my last posting, I was working in the limbo of the unemployed, but I have since found myself a fantastic new role. X-IO, likewise, has reinvented itself.
In the early days of solid state storage, Xiotech impressed me with their approach to the newly minted world of the all-flash array. I thought they were unsung heroes of the space, but for some reason the world at that time moved around them. Other players were a bit more glitzy, a bit more public, and claimed the pole position. It became clear to these storage scientists that they needed to remake themselves into something new and compelling. Stories like these intrigue me. It's exciting to watch an organization re-evaluate the market and come out with a retooled product set that is differentiated not only from its previous stance, but from the rest of the vendors in the space as well.
So, like the phoenix, Xiotech is now X-IO, with the Intelligent Storage Element. Retooled, rebranded, and newly focused, their Axellio platform rethinks high-powered, high-IO architectures, combining processing, storage, and rapid IO in a small form factor, with the stated goal of a petabyte of NVMe storage in a 2U device by the end of 2017.
One of the key takeaways from their presentation was, as Bill Miller said, that while "Big Data is great, we need Big Fast Data." We need streaming analytics "in memory." But we need to remember that "in memory" doesn't necessarily mean the underlying storage can feed those databases at a matching speed.
These are the keys toward which X-IO built the Axellio platform.
One of the problems they've set out to address is scalability. The thought that it comes down to Ethernet versus PCIe is like thinking it's a choice between apples and helicopters. (Love that quote.) The thing is, PCIe is really tough to get right at scale. Scaling out can be quite compelling, but if you've got the ability to do more in a single node, then the interconnect issues (packet encapsulation, Ethernet trunking, etc.) become a non-issue. You'd be able to read the data as fast as you can write it.
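To make the encapsulation point concrete, here is a rough, back-of-the-envelope sketch (my own illustrative arithmetic, not X-IO's numbers) of the per-frame tax that standard Ethernet imposes on every byte that leaves the chassis, overhead that simply doesn't exist when the data stays on the local PCIe fabric:

```python
# Illustrative only: standard Ethernet per-frame overhead at a 1500-byte MTU.
# Preamble (8) + header (14) + FCS (4) + inter-frame gap (12) = 38 bytes/frame.
PAYLOAD_BYTES = 1500
OVERHEAD_BYTES = 8 + 14 + 4 + 12

# Fraction of the wire actually carrying your data.
efficiency = PAYLOAD_BYTES / (PAYLOAD_BYTES + OVERHEAD_BYTES)
print(f"Ethernet wire efficiency at 1500B MTU: {efficiency:.2%}")

# Usable payload rate on a hypothetical 100 Gb/s link.
link_gbps = 100
print(f"Usable payload: {link_gbps * efficiency:.1f} Gb/s")
```

And that's before counting the CPU cycles spent on protocol processing at each hop, which is the real cost at scale; inside a single node, none of this framing exists.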
They've not rewritten the storage stack; instead they use the one native to the Linux OS, which reduces their engineering overhead and lets them concentrate on the price/performance metric.
They've also made a key determination on the question of deduplication. There is a penalty to having dedupe turned on. Yes, there's space optimization, which reduces the storage footprint, but the cost in IO and processor cycles can cripple the system. The goal here is speed, so with that in mind, no dedupe has been implemented.
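To see where that penalty comes from, here is a minimal, generic sketch of inline block dedupe (not X-IO's design, just the textbook shape of the technique): every single write must be fingerprinted and looked up before it can land, which is exactly the CPU and latency tax a speed-first array wants to avoid.

```python
# Generic inline dedupe sketch: hash every write, look it up, then store.
import hashlib

seen: dict[str, int] = {}     # fingerprint -> physical block id
physical_blocks: list[bytes] = []

def write_block(data: bytes) -> int:
    """Write one logical block; return its physical block id."""
    fp = hashlib.sha256(data).hexdigest()  # CPU cost paid on EVERY write
    if fp in seen:                         # index lookup paid on EVERY write
        return seen[fp]                    # duplicate: no new physical write
    physical_blocks.append(data)
    seen[fp] = len(physical_blocks) - 1
    return seen[fp]

first = write_block(b"x" * 4096)
second = write_block(b"x" * 4096)   # identical block dedupes to the same id
print(len(physical_blocks))          # 1 physical block for 2 logical writes
```

The space win is real, but the hash and index lookup sit squarely in the write path, so dropping the feature entirely is a defensible trade when raw IO speed is the whole point.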
While I've been in this space for a long time, when Richard Lary took the podium and delved into some of the math and science, I have to acknowledge I was so far under water and beyond my skill set that, though I tried to assimilate his information, boy, did I not. There's a reason his role is that of Chief Scientist. Still, the interesting math he presented hinted to the Storage Field Day crew that at some point he may be able to diminish the dedupe penalty and, who knows, maybe get that functionality into the environment. I believe he can do it, if I could just get my eyes to stop spinning.
I was continually impressed with the passion that these folks, from Bill Miller to David Gustavsson, Richard Lary, and Gavin McLaughlin, brought to their presentations. Passion borne of a superior approach, solid technology, and an accessible interface makes the future seem bright for these guys indeed.
I’ve only just scratched the surface.
Videos of these presentations are available on the Tech Field Day YouTube channel, on Vimeo, and at TechFieldDay.com: http://techfieldday.com/appearance/x-io-technologies-presents-at-storage-field-day-13/
Learn more at http://xiostorage.com and on Twitter at @XIOEdge.