Storage Differentiators, and Food for Thought from Duncan Epping

Duncan Epping (@DuncanYB) was recently voted one of the top 100 “Cloud Experts on Twitter” by the Huffington Post (http://goo.gl/9aodcl). He blogs at Yellow-Bricks.com and is an architect in VMware’s R&D department.

Well, Duncan did it again… A very simple query in a tweet of his a few weeks ago, which I happened upon late at night, got me thinking about all the “Storage 2.0” vendors out there, Nexenta included: what are we doing to manage this critical aspect of what it means to be a storage vendor, and how will it drive the market moving forward?

As an SE at Nexenta, I think about the mode of differentiation often. The weakest argument I can make, and the one I always save for last, is price; I bring it up earlier only if the customer or salesperson drives the conversation that way. While price can be a very big determining factor in why an organization chooses one vendor over another, it can also be the thinnest of rationales. In all fairness, the conversation should be about technical requirements first, not money, for if the storage doesn’t meet the technical requirements, then no cost savings can really make sense.

*) On to the technical decision-making process. Many of these decisions are driven by existing technical issues and infrastructure, and many by future requirements. Those requirements may be infrastructural, perhaps a VDI project coming up, or dictated by an architecture already in place. For example, if the previous storage sat on an 8Gb Fibre Channel fabric, there may be no impetus to replace that fabric with Ethernet.

So, what kind of connectivity do you need? This question alone will rule out many pure NAS devices, since many NAS vendors have no support for Fibre Channel. Or perhaps 16Gb FC has a role in your environment; though that technology is still emerging, if your RFI calls for 16Gb it may rule out some traditional SAN providers as well. Much the same applies to Ethernet connectivity: if you’re looking at iSCSI and want 40Gb Ethernet, that could be an issue on many infrastructures.
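When the connectivity conversation comes up, I find a back-of-the-envelope throughput comparison helps frame it. The sketch below is a minimal illustration: the usable-throughput figures are approximate, commonly quoted values, and the 3,000 MB/s workload is an assumed example, not anyone’s real requirement.

```python
# Back-of-the-envelope link throughput comparison (illustrative only).
# Usable-throughput figures are approximate, commonly quoted values and
# ignore protocol overhead (SCSI/iSCSI/TCP), queue depth, and multipathing.
APPROX_THROUGHPUT_MBPS = {
    "8Gb FC":  800,    # ~800 MB/s per link, each direction
    "16Gb FC": 1600,   # ~1600 MB/s per link
    "10GbE":   1250,   # raw line rate; iSCSI/TCP overhead reduces this
    "40GbE":   5000,   # raw line rate
}

def links_needed(required_mbps, link):
    """How many links of a given type cover a required MB/s figure?"""
    per_link = APPROX_THROUGHPUT_MBPS[link]
    return -(-required_mbps // per_link)   # ceiling division

# Example: a workload that needs ~3000 MB/s of sustained bandwidth
for link in APPROX_THROUGHPUT_MBPS:
    print(f"{link:8s}: {links_needed(3000, link)} link(s)")
```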

*) Do you have a protocol that you require, and is it a hard requirement? For example, is there an application that requires a specific version of SMB? Or is your requirement an “Object Storage” interface such as OpenStack Swift? These have emerged in recent years, and many storage subsystems do not support them. However, it would be wise to check the vendor’s roadmap, as your implementation of any of these technologies may very well correspond with the storage vendor’s support of the same. There’s a great Wikipedia article that lists the filesystems currently available, along with some additional specifications: http://en.wikipedia.org/wiki/Comparison_of_file_systems

*) Scalability is another issue. Some platforms have the ability to scale outward, and others scale upward. To scale upward means that a single system can accommodate a larger and larger environment, so that workloads can be consolidated onto fewer, more sizable systems. That doesn’t necessarily mean the filesystem itself scales upward; often it only means that the disc capacities can grow larger. In an ideal world the two would correspond, hence the need for diligent research by the purchaser.

*) Scaling outward is a different issue. Here, as a purchaser, you need to understand the overall storage footprint of your environment and how it will grow. Today your requirement might be 500TB, but down the road you may need upwards of 5PB. Will your chosen vendor scale to that size? Can they federate their management layer so that all of these disparate devices are handled as one distinct infrastructure? That too is a distinct consideration. It may not be a huge issue today, but as your environment grows it will become one, so federation as a roadmap item is very important. A rough sizing sketch follows below.
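To put some rough numbers on the scale-out question, here is a minimal sketch. The per-node usable capacity is an assumption for illustration, not any vendor’s specification; the 500TB and 5PB figures come from the example above.

```python
# Rough scale-out sizing sketch: how many storage nodes does it take to
# grow from today's requirement to a projected future one?  All capacity
# figures are illustrative assumptions, not vendor specifications.

def nodes_required(capacity_tb, usable_tb_per_node):
    """Ceiling of required capacity divided by per-node usable capacity."""
    return -(-capacity_tb // usable_tb_per_node)

today_tb = 500          # 500 TB required today (from the text)
future_tb = 5000        # 5 PB a few years out
node_usable_tb = 250    # assumed usable capacity per scale-out node

print("Nodes today: ", nodes_required(today_tb, node_usable_tb))   # 2
print("Nodes later: ", nodes_required(future_tb, node_usable_tb))  # 20
```

Whether those twenty nodes can then be managed as a single federated namespace is exactly the roadmap question raised above.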

*) Perhaps one of the biggest issues is the ability to handle IO. Many older or legacy storage environments have retrofitted their arrays to handle it, and burst versus sustained IO, and write versus read IO under different loads and at different block sizes, can very possibly be the single biggest determining factor in choosing a storage environment. The one thing I draw attention to in this conversation is the difference between a pure backup environment and a VDI environment: the former has huge storage sizes with relatively low IO requirements, while the latter very likely has low storage requirements but, at scale, a hugely significant IO requirement. This is not a trivial assessment.

I’ve done many presentations about the IO issues surrounding VDI. Imagine that at a 50/50 read/write ratio, your POC runs 100 virtual desktops at 25 IOps per desktop. That’s a total of 2,500 IOps: 1,250 read and 1,250 write. This can easily be accomplished on spindles; with 10kRPM discs (roughly 100 IOps per disc), that’s no more than 25 discs dedicated to the project, which is not a huge investment in a POC. But scale that up to a full deployment of 10,000 virtual desktops and suddenly your environment becomes far different: at that same rate, 250,000 IOps, or 125,000 read and 125,000 write, which on spindle math alone is roughly 2,500 discs. The question is, how would a spinning-disc environment handle that?
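A quick sizing sketch makes the jump from POC to production obvious. This is a minimal illustration using the assumptions from the example above (25 IOps per desktop, a 50/50 read/write split, and roughly 100 IOps per 10k spindle); real workloads, and especially boot storms, can look much worse.

```python
# VDI IOps sizing sketch using the assumptions from the example above.

def vdi_sizing(desktops, iops_per_desktop=25, read_ratio=0.5, iops_per_spindle=100):
    total = desktops * iops_per_desktop
    reads = int(total * read_ratio)
    writes = total - reads
    spindles = -(-total // iops_per_spindle)   # ceiling division
    return total, reads, writes, spindles

for count in (100, 10_000):
    total, reads, writes, spindles = vdi_sizing(count)
    print(f"{count} desktops: {total} IOps "
          f"({reads} read / {writes} write), ~{spindles} 10k spindles")

# Prints roughly: 100 desktops -> 2,500 IOps, ~25 spindles;
#                 10,000 desktops -> 250,000 IOps, ~2,500 spindles.
```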
So the newer “Storage 2.0” firms have more modern filesystems that handle the IO load in different ways. My personal favorite (and, full disclosure, I work for a firm that produces a product based on it) is ZFS. These architectures usually leverage solid state discs or PCIe flash cards with IO capacities that put spinning disc to shame. Flash has historically been cost-prohibitive, but at this point, while still pricier than spinning disc, it has become quite a bit more commoditized and is in many ways an awesome form of leverage against the requirements of some of these workloads.

Then the question is, how do these firms leverage these solid state technologies? In some cases, the firm will build arrays that are exclusively SSD or PCIe flash. These are amazingly fast, but tend toward lower capacities. Other firms build tiered architectures that leverage SSD for reads and writes while leveraging spinning disc for the bulk storage. This tiering puts the IO where it is best utilized, and the bulk storage where it is best used. Again, the buyer should understand their needs and ensure that the architecture matches their requirements.
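One way to reason about the tiered approach is to estimate the blended read latency for a given cache hit rate. The sketch below is purely illustrative; the latency figures are assumptions, not measurements of any particular device.

```python
# Illustrative blended-latency estimate for a tiered (SSD + spinning disc)
# architecture.  Latency figures are rough assumptions, not vendor specs.

SSD_LATENCY_MS = 0.2    # assumed flash read latency
HDD_LATENCY_MS = 8.0    # assumed 10k RPM disc read latency

def blended_latency_ms(cache_hit_rate):
    """Average read latency when a fraction of reads hit the SSD tier."""
    return cache_hit_rate * SSD_LATENCY_MS + (1 - cache_hit_rate) * HDD_LATENCY_MS

for hit_rate in (0.0, 0.5, 0.9, 0.99):
    print(f"hit rate {hit_rate:.0%}: ~{blended_latency_ms(hit_rate):.2f} ms per read")
```

The takeaway is that a tiered design only pays off if the working set actually fits in the SSD layer, which is why understanding the workload matters as much as the architecture itself.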

**) Small plug for ZFS: the architecture ZFS employs supports both of the above approaches, all-SSD as well as tiered. This flexibility within a unified architecture is a huge bonus for many customers.

*) Tunability is another important concept within storage tiering. Sure, you’ve built your architecture as a finely designed, well-contained environment. But what happens when things change? You’ve decided to add an HPC database or a VDI environment to your suite of corporate apps. Can you easily modify the architecture to accommodate the heavily increased IO load, even if you’re not necessarily increasing the requirement for space? Many environments may not have the ability to do this. This is a key consideration, and one that I feel makes for a compelling rationale when evaluating a vendor.

*) What about replication, or DR capabilities? How difficult or costly will it be to ensure that your environment can replicate to another site? What if the storage at the other location is from the same vendor? What if it’s a different vendor’s solution, and how will you accommodate that? What are the costs of a disc-to-disc replication solution? What kind of speeds or fault-tolerance methodologies can you anticipate given your bandwidth or distance limitations? Does your vendor of choice charge for their replication software? Do they give you the option to selectively choose which segments of data within the environment are replicated, or do they force you to replicate the entire volume?
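The bandwidth and distance questions in particular are worth a rough calculation before you commit. The sketch below is a minimal illustration; the daily change rate, link speeds, and efficiency factor are all assumptions, not measurements.

```python
# Rough replication-window estimate: how long does one day's change rate
# take to push over a WAN link?  All figures are illustrative assumptions.

def replication_hours(changed_gb_per_day, link_mbps, efficiency=0.7):
    """Hours to transfer a daily delta over a link running at the given
    fraction of its nominal rate (compression/dedup not modelled)."""
    usable_mbps = link_mbps * efficiency
    seconds = (changed_gb_per_day * 8 * 1000) / usable_mbps   # GB -> megabits
    return seconds / 3600

daily_delta_gb = 2000    # assumed 2 TB of changed data per day
for link_mbps in (100, 1000, 10000):
    print(f"{link_mbps:>5} Mb/s link: "
          f"~{replication_hours(daily_delta_gb, link_mbps):.1f} hours")
```

If the daily delta can’t be pushed across the link in well under a day, either the link, the design, or the vendor’s ability to replicate selected datasets rather than entire volumes has to change.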

*) While we’re on that subject, what is your chosen vendor’s methodology for embracing OpenStack? Do they, or will they, have support for Nova, Swift, or Cinder? Is that support on their roadmap, and what is the timeframe? Most organizations have no firm plan for what their move to the cloud will actually look like, or, for that matter, whether they’ll even need a cloud-storage model. But if they do, will this be important? And if it is, how will your storage vendor accommodate it?
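To make the Swift part of that question concrete, this is roughly what application-side access to a Swift-compatible endpoint looks like using the python-swiftclient library. This is a generic sketch, not any particular vendor’s integration; the endpoint, credentials, container, and object names are all placeholders.

```python
# Minimal sketch of talking to a Swift-compatible object store with
# python-swiftclient.  Endpoint, credentials, and names are placeholders.
from swiftclient.client import Connection

conn = Connection(
    authurl="https://swift.example.com/auth/v1.0",  # placeholder auth endpoint
    user="account:user",                            # placeholder credentials
    key="secret",
)

conn.put_container("archive")
conn.put_object("archive", "report.pdf", contents=open("report.pdf", "rb"))
headers, body = conn.get_object("archive", "report.pdf")
print(headers.get("content-length"), "bytes retrieved")
```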

*) Perhaps one of the most practical issues a potential customer faces is the licensing model. One of my biggest pet peeves is when a vendor’s storage appliance goes “End-of-Life” and you decide to replace it, because the hardware is outdated or a lease is expiring, and then, if you choose to replace it with the same vendor’s equipment, you end up having to pay for the licensing all over again. In my opinion, this simply isn’t fair. Why can’t you simply pay for the fresh hardware and move your licenses over to it? This is a very important issue, is it not?
