One of the most interesting areas in computing today is memory as storage and storage as memory. The use case here is compelling. Begin with the premise that the constant reading from and writing to storage, even local storage, is inefficient, slow, and cumbersome, and then recognize that databases are growing so vast that they outgrow available capacities. We've used clustered servers, larger and faster storage, and broken databases into component parts in an effort to access the data efficiently.
Technologists have put tremendous effort into handling this, trying different approaches and pushing capacities to their greatest reaches, but fragmentation of the dataset has always been an issue. The only way to fit more of our databases into the server environment was to extend RAM as much as possible. The problem was that while we were often willing and able to pay for as much RAM as we wanted, what we could actually add was limited by the servers themselves.
At Tech Field Day 10 in Austin, we met with Diablo Memory, which had a very interesting approach to extending memory: a new form of memory that is still volatile but far larger in capacity. It's very interesting, and I wrote about it here.
However, while this was a very interesting approach, it is by no means the only way to address the issue.
Plexistor (http://www.plexistor.com/) uses a software-based tool, essentially a driver within the (currently only) KVM hypervisor, that takes hold of local, network, and even cloud storage and presents it all as memory to the workload in question.
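Plexistor's internals aren't public here, but the general idea of presenting storage as memory can be sketched with an ordinary memory-mapped file. This is only an illustrative analogy, not Plexistor's implementation: the backing file stands in for whatever storage tier sits underneath, and the workload touches it with plain memory operations while the OS pages data to and from storage behind the scenes.

```python
import mmap
import os
import tempfile

# A backing file on disk stands in for the local/network/cloud storage tier.
path = os.path.join(tempfile.gettempdir(), "backing.dat")
size = 4096
with open(path, "wb") as f:
    f.write(b"\x00" * size)

# Map the file into the process address space. From here on, the "workload"
# reads and writes it like memory; the kernel handles paging to storage.
with open(path, "r+b") as f:
    mem = mmap.mmap(f.fileno(), size)
    mem[0:5] = b"hello"   # an ordinary memory write
    mem.flush()           # push dirty pages back to the backing store
    mem.close()

# The write survives in the file: storage behaving as memory, and vice versa.
with open(path, "rb") as f:
    restored = f.read(5)
```

The real trick, of course, is doing this transparently across storage tiers and at database scale, which is exactly the hard part a product like this has to solve.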
This struck me as a revolutionary thought process, but, in all fairness, it also seemed that, while not impossible, the model might carry far more overhead than it could bear. Skeptical, I would say, was how I viewed the viability of this approach.
Oddly, storage presented as memory in this virtual manner seems to work beautifully. From all I've seen, it may actually reduce load on the processor and potentially improve the performance of the database, requiring fewer cores and delivering greater efficiency on the server side.
We had presentations from both Amit and Sharon of this very interesting, way-cool startup out of Israel. The product is still in pre-beta, and some work will need to be done before it reaches general availability, but companies taking new approaches to the problems facing IT today point to a really interesting and bright future.