NetApp’s Max Data at #NetAppInsight and #TFDx

Quietly, this past June, NetApp acquired the cool memory product I’d written about back in March of 2016, following a Storage Field Day event. At the time, it opened my eyes to the possibility of adding some amazing functionality to the handling of larger databases. I had the pleasure of being an ad-hoc delegate to this Tech Field Day presentation while attending #NetAppInsight last week in Las Vegas.

The tech expanded the available onboard memory capacity of the motherboard, so that much larger chunks of a database, or even entire databases, could be loaded into memory. That means far less swapping of read/write cache in and out of those databases, and potentially eliminates those barriers to database performance altogether.

Fast forward a couple of years to find that not only have they carried on along that path, but technology has allowed for even greater capacities. For example, non-volatile memory, and the inclusion of 3D XPoint memory on the motherboard, gives far more capability to the often cumbersome loads of massive SAP, SPARC, and JDE-type databases, and even Oracle workloads. My belief is that the chunking of those databases in the past, wherein discrete functions had to be broken into components, left the database fragmented across portions of compute and memory. The promise that these shards will disappear in the future adds even more promise to the functionality of the x86 world, particularly as it relates to databases.

Such a smart acquisition by NetApp, though the name “Max Data” feels a little markety to me. It seems that they’ve hit the ground running with a product that had significant maturity even before the acquisition.

The conversation we had at Insight was relevant. One of the things I found out, which I’d misunderstood previously, is that Max Data manages this data via an intelligent, embedded tiering architecture; I’d previously believed it to be a caching architecture. This opens the door for even larger workloads, with NVMe SSD and Optane leveraged as “near” memory-bus tiers, promising far larger data control while keeping very quick processor interaction.
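To make the caching-versus-tiering distinction concrete: in a cache, the fast layer holds copies of data whose authoritative home is the slow layer; in a tier, each piece of data lives in exactly one layer, and intelligence moves it between layers as access patterns change. Here is a minimal, hypothetical sketch of that tiering idea — the class and promotion policy are my own illustration, not Max Data’s actual design or API:

```python
# Illustrative tiering sketch (hypothetical, not NetApp's implementation).
# Each key lives in exactly ONE tier; hot keys are moved (not copied) up,
# cold keys are demoted to make room.

class TieredStore:
    def __init__(self, fast_capacity):
        self.fast = {}             # e.g., persistent-memory tier
        self.slow = {}             # e.g., NVMe SSD tier
        self.fast_capacity = fast_capacity
        self.hits = {}             # simple per-key access counter

    def put(self, key, value):
        # New data lands in the fast tier while there is room, else slow.
        if len(self.fast) < self.fast_capacity:
            self.fast[key] = value
        else:
            self.slow[key] = value

    def get(self, key):
        self.hits[key] = self.hits.get(key, 0) + 1
        if key in self.fast:
            return self.fast[key]
        value = self.slow[key]
        # Promotion policy: once a slow-tier key is hot enough, move it up
        # and demote the coldest fast-tier key in exchange.
        if self.hits[key] >= 2 and self.fast:
            coldest = min(self.fast, key=lambda k: self.hits.get(k, 0))
            self.slow[coldest] = self.fast.pop(coldest)
            self.fast[key] = self.slow.pop(key)
        return value
```

The point of the sketch is the invariant a cache does not have: the fast and slow dictionaries never hold the same key at once, so the fast tier adds real capacity rather than duplicating the slow tier.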

Please note, I’m making reference to 3D XPoint in both memory and disk-based configurations. What this means is that stacking 3D memory layers gives far more capacity both to memory (RAM) and to disk (Optane), and puts this expanded architecture directly on the memory bus as RAM. The development of this new technology has allowed the promise of these database handlings to become a reality. To me, the dovetailing of the software with this cutting-edge hardware is mission-critical to ensuring that these types of approaches become more vital, cost-efficient, and functional.

I think that this was an incredible acquisition, and truly look forward to the product as it grows both in tech and in significance to the market.


Clearly, NetApp is no longer a “Filer” company, but far more. Not just the acquisition of this product, but its rapid integration into the NetApp family and product stack shows corporate agility, as also demonstrated by the rapid integration of SolidFire into the storage family. I look forward to more interesting things on the horizon for NetApp.

