By now, we’re used to the concept of HyperConverged Infrastructure, a model in which storage, compute, and often networking are built into a modular appliance. Once the hypervisor has reached a node’s capacity, expansion is as simple as purchasing another building block, or node. Each node comprises a number of compute units and a series of disks. Players in that space include Nutanix, SimpliVity (now an HPE product), and similar architectures. The concept is often referred to as the SDDC, or Software Defined Data Center, because regardless of the model, the software managing the environment is the critical component. Storage is often the differentiator. Elements like replication, deduplication, and compression in the storage layer, along with fault tolerance across nodes, redundancy, and failover, can be the keys to the overall storage platform. Another key component is hypervisor support. While VMware is table stakes in most of these, KVM, Hyper-V, and Xen, as well as Nutanix’s custom Acropolis hypervisor, are options.
Enter Datrium, a platform built very differently. It may seem to be one of diverse parts and pieces: the compute or speed layer is built on standard x86 servers, while the storage or capacity layer is built on x86-based storage nodes. The result is that scaling can be approached at a much more granular level. The idea is one of “Reference Architectures,” but with support for mixed-vendor servers. If you need more speed, you simply build another x86 server with the Datrium software installed on bare metal, incorporate it into your infrastructure, and you’ve got more speed. If you need more capacity, you take the same approach with a storage node. In contrast to the other brands in the space, growth is far more organic, much like the trusted cloud architectures we used to build prior to the advent of converged infrastructure: an as-needed, brick-by-brick approach rather than block by block.
To be sure, while VMware is the hypervisor of choice here, containers are also genuinely supported within the model.
Now, one of the categories that distinguishes Datrium is its backup process, which takes a unique approach. Backing up data that is already deduplicated and encrypted (in flight as well as at rest) makes for a very light storage footprint, whether that backup lands on premises or in the cloud. An administrator may choose to back up that data to another Datrium storage infrastructure, either on-premises or in another data center on their network, or alternately into the cloud. One might expect cloud-based storage to carry a large footprint, but in practice the deduplication model Datrium employs against an AWS infrastructure yields a much smaller data footprint than one would usually anticipate. One could just as easily store that data, in encrypted form, on virtual or bare-metal images deployed inside the AWS architecture. This is an emerging and incredibly valuable approach. Imagine not just replication, but true backup and recovery from within the same interface, without any need to add personnel or additional management layers.
As can easily be inferred, the concept is different from HyperConverged Infrastructure as we’ve known it, and yet in many ways far more efficient. Datrium has grown from a use-case-driven approach (a VDI project, or a remote office, exclusively) toward a fully fleshed-out approach to data center automation, whether on-premises or in the public cloud.