Diablo Technologies: A unique approach to In Memory Databases

At the Tech Field Day vendor presentations, I was privileged to see a presentation from the group at Diablo.

At first glance, Memory1 appeared to me to be yet another DIMM (dual in-line memory module) based memory technology. To be fair, there's a ton of activity in that area. We've seen ULLtraDIMM from SanDisk extend the DIMM socket into what is effectively a memory-socket-based solid-state disk.

The idea here is that with incredibly fast storage, we've realized that much of the remaining latency comes from the connectivity between that storage and the processor. Traditional SSDs are typically connected via SATA or SAS, reaching the motherboard through some level of PCIe bus adapter or motherboard-based controller. We've been thrilled with the speed and durability of these disks, for the most part.

The goal of moving storage closer to the processor, thereby eliminating the gap between the processor and the storage, brought us to the need for new storage paradigms. A number of years back, FusionIO developed the PCIe-based flash card, which eliminated the cable connection from the PCIe bus to the storage and improved latency numbers. A number of the companies who manufacture solid-state storage have since come to market with solutions similar to the original FusionIO product. Intel and Samsung, as well as SanDisk, have achieved this tech, which, while improving and coming down in cost, has removed the typical SATA bottleneck, historically limited by Serial ATA's 6 Gb/s (roughly 600 MB/s) theoretical maximum. These vendors' products are by no means the only ones in this space, but they are a representative sample of the most obvious choices.
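As a quick sanity check on that ceiling: 6 Gb/s is the raw SATA III signaling rate, and SATA uses 8b/10b line encoding (10 bits on the wire per byte of payload), so the usable bandwidth works out to about 600 MB/s. A minimal back-of-the-envelope sketch:

```python
# Back-of-the-envelope check on the SATA III bandwidth ceiling.
# 6 Gb/s is the signaling rate; 8b/10b encoding spends 10 wire bits
# per payload byte, leaving roughly 600 MB/s of usable bandwidth.
line_rate_bps = 6e9            # SATA III signaling rate, bits/second
bits_per_payload_byte = 10     # 8b/10b encoding overhead
effective_mb_s = line_rate_bps / bits_per_payload_byte / 1e6
print(f"Effective SATA III bandwidth: {effective_mb_s:.0f} MB/s")
```

Real-world throughput lands a bit below that once protocol overhead is counted, which is exactly the gap PCIe-attached and socket-attached storage set out to close.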

So the ability to create a storage tech sitting directly on the motherboard, close to the processor bank, brought us these newer technologies. In all fairness, the cost of these truly "memory bank" storage platforms can be prohibitive.

Enter Diablo Technologies. What they've done here is bridge the gap between volatile DRAM and storage. While the product from Diablo (Memory1) appears to be simply slower volatile memory, it's actually quite a bit more elegant than that. This is not a storage product. It is truly positioned as memory, but with a very different approach than standard DRAM: less expensive, and with hugely greater capacities.

Memory1 sits in the DIMM socket on the motherboard, as close to the processor as standard memory, but through the magic of Diablo's tech, the modules are substantially larger in capacity than standard memory. By a factor of many multiples, actually.

Currently, you can put 128 GB in a traditional memory socket with Memory1, which could extend the potential of a server's onboard RAM to 4 TB in a 2-socket server. If you're running SAP or Spark databases as your application, the ability to load the entire database into system memory, without constant read/write activity to storage, will massively improve the performance of your database.
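The 4 TB figure follows directly from the module size and slot count. A minimal sketch of the math, assuming a fully populated two-socket board with 16 DIMM slots per socket (a common high-end configuration, not a number Diablo quoted):

```python
# Hypothetical capacity math for a two-socket server fully populated
# with 128 GB Memory1 modules; 16 DIMM slots per socket is assumed.
module_gb = 128
slots_per_socket = 16
sockets = 2
total_tb = module_gb * slots_per_socket * sockets / 1024
print(f"Total addressable memory: {total_tb:.0f} TB")
```

In practice some of those slots would hold conventional DRAM alongside the Memory1 modules, so the usable total depends on the mix.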

In addition, this ability to load the entire database into memory can easily reduce the sheer number of devices required to feed that database to the rest of the system. In so doing, one can easily imagine reducing the number of servers existing within that architecture. Diablo claims up to 90% fewer servers in this environment. Disclaimer: I have not seen the reality here, but I do see it as a reasonable possibility.

They have also claimed that, owing to the limitations of data paths on the motherboard, a 2-socket server outperforms a 4-socket server in many cases. This is because memory feeds through substantially fewer paths between processors than in a quad-socket architecture, reducing latencies. Again, the numbers here remain to be validated, but I can definitely see the likelihood.

I've tried to envision the benefits of this technology in environments that require lots of memory, for example virtualization hosts, but honestly, I haven't seen a lot of evidence that this is as powerful a use case. I imagine that we'd see distinct improvements wherever and whenever the server experiences memory pressure.

Please note that the use cases for this memory are specific, but if you think about it, there's a compelling reason for each of them. I think this approach is unique and quite an interesting evolution in server-based memory/storage.

