At Tech Field Day 10 (#TFD10), I had a chance to sit down with Chris Evans (@ChrisMEvans) to discuss, off channel, the storage concepts we’d seen presented, and we agreed that SDS, or Software Defined Storage, was a concept that had yet to be truly defined, even though so many storage vendors had claimed it as their own. We had conversations with others in the delegate group, and while opinions differed, we all came away hopeful that we could overlay a definition on it. As a result, he and I worked together to try to clarify, and hopefully define, the concept.
Back in May of 2013, when it was a brand new concept, I wrote a post about Software Defined Storage ( http://bit.ly/1KX1OnX ). At the time I was working at Nexenta, one of the definers of the space, and I still believe they had come to a real understanding of what it meant to be truly software defined. Yet the world has moved on. Today more players exist in the space, and the definition of the concept is, if anything, even muddier.
On a recent @InTechWeTrust podcast, the brilliant Marc Farley (@GoFarley), long a bit of a pundit on storage, worked hard to arrive at a definition; to paraphrase, his ultimate statement had to do with the separation of the control plane from the underlying hardware. Of course, hardware is necessary for the data to sit on, but the software is where the magic is.
Think about things like deduplication, metadata, tiering, file type, replication, etc. These concepts almost universally reside in the management software and rely far less upon the hardware on which they sit. The industry is moving quickly away from the monolithic array toward a reference architecture of one or more servers connected to what is essentially JBOD behind them, in which high availability and commodity components (disk, enclosure, connection, etc.) are simply part of the equation. I like to think of it as a Separation of Church and Solid State. Please forgive the pun.
Again, the key here is the management plane being removed from the physical infrastructure. We’ve seen this recently in the networking world with manufacturers like Accton and network operating systems like Cumulus. We have networking built at full scale entirely in software with NSX and ACI, though again there is an underlying reliance on the hardware. Servers will need connections to switches, after all, but much of this can take place at the virtual layer in software. For whatever reason, SDN as a concept has won adoption more easily than SDS has.
So, in which vendors’ cases do we feel that software defined storage truly is an entity? In the case of StoreVirtual (previously LeftHand Networks, acquired by Hewlett-Packard) and of vSAN from VMware, I think we can safely place these aggregated, server-based storage platforms in that category. Essentially, with some differences, these platforms take the disks within a cluster of physical hosts, combine them into a sort of virtual SAN, and allow the administrator to present that capacity to the virtual infrastructure. This qualifies, I think, as separation of the physical layer from the control-plane management software. These are probably the best examples of the concept, in my opinion.
But I believe the amorphous nature of the phrase SDS can be expanded to include other approaches. Previously I mentioned Nexenta, and I still believe their concept falls into the category of SDS. Theirs is an approach to ZFS-based storage that utilizes Commodity Off The Shelf (COTS) equipment on the physical end: a Nexenta cluster consists of two x86 servers, connected by SAS controllers to a JBOD stuffed with SAS-based spinning and solid-state disks. The key is that the operating system running and managing those disks is an illumos (OpenSolaris)-based OS that provides provisioning, analytics, orchestration, replication, and management of these pools of disk. Should an upgrade be required, say on the server side, the cluster is broken and the servers are updated one at a time, allowing the storage to remain available to the user community the entire time, non-disruptively. Because the magic of the OS/ZFS side of the equation is handled entirely within the servers themselves, with no reliance on a proprietary, vendor-specific hardware build, this can fairly be deemed Software Defined.
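To make that concrete, here is a minimal sketch of the kind of ZFS-side work such a platform automates: pooling commodity disks, layering on software features like compression and dedup, and replicating between heads. The pool name, device paths, and hostnames below are placeholders, not Nexenta’s actual tooling.

```shell
# Build a mirrored pool from commodity disks, with a hot spare
# (device names /dev/sdb..sdf and the pool name "tank" are placeholders)
zpool create tank mirror /dev/sdb /dev/sdc spare /dev/sdd

# Add SSDs as read cache (L2ARC) and intent log (ZIL) -- tiering in software
zpool add tank cache /dev/sde
zpool add tank log /dev/sdf

# Software-defined features live in the filesystem layer, not the hardware
zfs create tank/vols
zfs set compression=lz4 tank/vols
zfs set dedup=on tank/vols

# Snapshot and replicate to a standby head over SSH
zfs snapshot tank/vols@baseline
zfs send tank/vols@baseline | ssh standby zfs recv backup/vols
```

Everything above runs on whatever disks happen to be in the JBOD; none of it depends on a particular array vendor, which is exactly the point.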
At this point it’s valuable to look more granularly at the history of SDS and attempt to define the steps along the way toward a fully mature model. Chris, drawing on his own history with the space, has broken it out into the following five discrete steps, and I believe his demarcations make for accurate generational milestones. They demonstrate how SDS evolved from the original tenet of simply putting software onto commodity hardware into a more advanced definition of the genre.
SDS 1.0 – typified by the dual-controller architecture on commodity hardware. Examples include Nexenta, Open-E, DataCore, StorMagic and StarWind. In many cases these are a simple VSA (Virtual Storage Appliance) and offer no intrinsic benefit other than separating the hardware and software planes from each other. However, end users can benefit from overall savings with the build-and-buy strategy.
SDS 2.0 – a move to more scale-out rather than scale-up designs. These introduced other protocol support including object and file. Examples include HPE StoreVirtual, Scality, Cleversafe, Caringo, Ceph, Gluster and SwiftStack.
SDS 2.5 – hardware comes back to prominence with vendors delivering software-defined features through an appliance model. Examples include SolidFire, Zadara and Nasuni. What’s interesting about this stage of development is how these solutions are starting to leverage the public cloud and move end users away from the comfort of understanding the hardware configuration.
SDS 3.0 – coming toward the present day, we’re starting to see full hardware abstraction, management via API, and full shared-nothing scale-out architectures like StorPool, Primary Data and ScaleIO. Vendors are pivoting their solutions toward hyper-convergence and containers; vendors here include Springpath, Hedvig, Maxta, Plexistor and CloudByte.
SDS 3.5 – moving right up to date, software-only vendors are now moving further towards hyper-convergence and releasing their own reference architectures and predefined appliances.
What should we expect from SDS today? The function of hardware still needs to be separated from the software, as more features move into the software layer and disks (HDD and SSD) simply provide storage capacity and IOPS. SDS solutions should deliver capacity and performance based on the application rather than the underlying hardware, and remain consistent as that hardware is upgraded or changed. This means supporting multiple media types, offering quality of service, and, from a management perspective, delivering resources through simple APIs.
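A quick sketch of what “application-defined, API-driven” provisioning might look like. The endpoint, field names, and QoS knobs here are purely illustrative, not any specific vendor’s API; the point is that the request describes what the application needs, not which shelf of disks it lands on.

```python
import json

def build_volume_request(name, capacity_gb, min_iops, max_iops, media="any"):
    """Describe a volume by application need (capacity, QoS) rather than
    by the hardware it will be placed on. Hypothetical schema."""
    return {
        "name": name,
        "capacity_gb": capacity_gb,
        "qos": {"min_iops": min_iops, "max_iops": max_iops},
        # the platform, not the administrator, decides HDD vs SSD placement
        "media_hint": media,
    }

req = build_volume_request("app-db-01", 500, min_iops=2000, max_iops=8000)
print(json.dumps(req, indent=2))
# A real SDS platform would accept a request like this via something such as
# POST https://sds.example.com/api/v1/volumes  (hypothetical endpoint)
```

Notice that nothing in the request names a controller, an array, or a RAID group; honoring the QoS floor and ceiling as hardware is swapped out underneath is the platform’s problem, which is exactly the consistency requirement described above.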