IT Trends for 2016 Part 3 – Network Virtualization

Among the key trending technologies in the enterprise and data center space is the virtualization of the network layer. Seems a little ephemeral in concept, right? So I’ll explain my experience with it, its benefits, and its limitations.

First, what is it?

NFV (Network Functions Virtualization) is intended to ease the requirements placed on physical switch layers. Essentially, the software for the switch environment sits on the servers rather than on the switches themselves. Historically, when implementing a series of physical switches, an engineer must use the language of each switch’s operating system to create an environment in which traffic goes where it is supposed to, and doesn’t where it shouldn’t. VLANs, routing tables, port groups, etc. are all part of these command sets. These operating systems have historically been command-line driven, arcane, and quite often difficult to reproduce consistently. A big issue, without disparaging the skills of the network engineer, is that the potential for human error can be quite high. It’s also quite time-consuming. But when it’s right, it simply works.

Now take that concept, embed the task into software that sits on standardized servers, and the configuration can be rolled out to the entire environment in a far more rapid, standardized, and consistent manner. In addition to that added efficiency and consistency, NFV can also reduce the company’s reliance on physical switch ports, which lowers the cost of switch gear, of heating and cooling, and of data center space.
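To make that contrast concrete, here is a minimal sketch of the idea: define one standard configuration once and push it identically to every host. The names here (VLAN_PLAN, apply_config, the host list) are purely hypothetical and don’t represent any particular vendor’s API; the point is only that the per-switch typing disappears.

```python
# Hypothetical sketch: render one standard config and "apply" it everywhere.
# Names (VLAN_PLAN, apply_config) are illustrative, not a real vendor API.
from string import Template

VLAN_PLAN = [
    {"vlan_id": 110, "name": "web", "ports": "1-12"},
    {"vlan_id": 120, "name": "app", "ports": "13-24"},
    {"vlan_id": 130, "name": "db",  "ports": "25-36"},
]

CONFIG_TEMPLATE = Template(
    "vlan $vlan_id\n"
    " name $name\n"
    " assign ports $ports\n"
)

def render_config(plan):
    """Build the same configuration text for every host in the environment."""
    return "".join(CONFIG_TEMPLATE.substitute(entry) for entry in plan)

def apply_config(hosts, config):
    """Stand-in for whatever transport the platform actually uses (API, agent, etc.)."""
    for host in hosts:
        print(f"--- pushing to {host} ---\n{config}")

if __name__ == "__main__":
    hosts = ["host-01", "host-02", "host-03"]  # every hypervisor gets identical rules
    apply_config(hosts, render_config(VLAN_PLAN))
```

Because the plan is defined once and rendered by software, every host receives the same rules, which is exactly where the consistency and reduced human error come from.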

In addition to the ease of rolling out new sets of rules, with the added consistency across the entire environment, there comes a new degree of network security. Micro-segmentation, in its traditional switching sense, is defined as the process of dividing a collision domain into smaller segments, mainly to enhance the efficiency or security of the network; the segmentation performed by the switch reduces each collision domain until only two nodes remain, the switch port and the device attached to it. In the network virtualization sense, the idea is pushed further still: security policy is applied at the level of the individual workload, so traffic between any two workloads can be permitted or denied.

So micro-segmentation, probably the most important function of NFV, doesn’t actually save the company money in a direct sense, but what it does do is allow for far more controlled traffic flow management. I happen to think that this security goal, coupled with the all-important ability to roll these rules out globally and identically with a few mouse clicks, makes for a very compelling product stream.
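As a rough illustration of why that matters, here is a hedged sketch, again with invented names rather than the actual NSX or ACI policy model, of micro-segmentation rules expressed as data and evaluated identically alongside every workload, so that only explicitly allowed flows get through:

```python
# Hypothetical sketch of workload-level (micro-segmentation) rules.
# The rule shape and evaluation order are illustrative, not a vendor's policy model.
RULES = [
    {"src": "web", "dst": "app", "port": 8443, "action": "allow"},
    {"src": "app", "dst": "db",  "port": 5432, "action": "allow"},
    # default: anything not explicitly allowed is dropped
]

def evaluate(src_tier: str, dst_tier: str, port: int) -> str:
    """Return the action for a flow; the same logic runs next to every workload."""
    for rule in RULES:
        if (rule["src"], rule["dst"], rule["port"]) == (src_tier, dst_tier, port):
            return rule["action"]
    return "drop"

if __name__ == "__main__":
    print(evaluate("web", "app", 8443))  # allow
    print(evaluate("web", "db", 5432))   # drop: web may not reach the database directly
```

The rules live in one place and apply everywhere at once, which is the “globally and identically with a few mouse clicks” part of the appeal.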

One of the big barriers to entry in the category, at the moment, is the cost of the products, along with the somewhat differing approach of each product stream. Cisco’s ACI, for example, attempts to address similar security and consistency goals but has a very different modus operandi than NSX from VMware. Beyond those differentiations, one of the open issues is how the theoretical merging of both ACI and NSX within the same environment would work. As I understand it, the issues could be quite significant… A translation effort, or an API to bridge the gap, so to speak, would be a very good idea.
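To gesture at what such a bridge might look like, here is a purely hypothetical sketch (neither output resembles the real ACI or NSX schemas; the shapes are invented for illustration) of translating one abstract rule into two vendor-flavored dialects:

```python
# Purely hypothetical: translate one abstract intent into two vendor-styled dialects.
# Neither output matches the real ACI or NSX object models.
ABSTRACT_RULE = {"src": "web", "dst": "app", "port": 8443, "action": "allow"}

def to_dialect_a(rule):
    """Contract-style rendering (loosely inspired by group/contract models)."""
    return {
        "consumer_group": rule["src"],
        "provider_group": rule["dst"],
        "filter": {"dest_port": rule["port"]},
        "permit": rule["action"] == "allow",
    }

def to_dialect_b(rule):
    """Firewall-rule-style rendering (loosely inspired by distributed firewall models)."""
    return {
        "source": rule["src"],
        "destination": rule["dst"],
        "service": f"TCP/{rule['port']}",
        "action": rule["action"].upper(),
    }

if __name__ == "__main__":
    print(to_dialect_a(ABSTRACT_RULE))
    print(to_dialect_b(ABSTRACT_RULE))
```

The hard part in practice wouldn’t be the rendering, but agreeing on the abstract intent that both platforms can faithfully express, which is why the gap remains significant.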

Meanwhile, the ability to isolate traffic, and to do it consistently across a huge environment, could prove quite valuable to enterprises, particularly where compliance, security, and scale are issues. I think of multi-tenant data centers, such as service providers, where the data being housed must be controlled, the networks must be agile, and changes must take place in an instant; those requirements make this category of product absolutely key. However, I also think that healthcare, higher education, government, and other markets will see significant adoption of these technologies.
