Server virtualisation has played a major part in business technology for several years now, and is still growing in popularity. Gartner re-emphasised its importance a few months ago by highlighting the top five virtualisation trends for 2012, stating that virtualisation is approaching 50% penetration and still growing, albeit not at the pace it once did [1]. According to a PC World poll of IT managers, over a third of businesses are already using server virtualisation. However, as managers leverage virtual machines to maximise efficiencies, there are a number of issues they need to bear in mind.
Despite the notable advantages that server virtualisation presents, its rapid growth and development have revealed a number of problems that will block its progression if left unaddressed. Last year, Gartner noted: “Most organisations will be well-served by focusing attention on the challenges within the server rack.” In my experience, a major challenge to the expansion of server virtualisation is that as the number of virtual machines increases, so does the number of cables associated with the increased levels of I/O. This can cause headaches for data centre managers during server maintenance, hinder the ability of the business to adapt as its requirements change, and even result in hardware failure.
To illustrate: if there are six or seven cables connected to each server, it’s easy to unplug the wrong one during maintenance. Cables can become damaged through frequent handling, making faults difficult to diagnose. Bundles of cables at the back of servers can also obstruct cooling airflow, and the resulting overheating can be disastrous.
The server virtualisation market leader, VMware, which currently holds around 84% of the market, offers two solutions to the increased levels of I/O experienced by servers that are consistently working at full capacity.
The first involves traffic segmentation using a series of 1Gb Ethernet links. This allows the server to keep traffic flows physically separate and prioritise them as appropriate. However, it drives up port, cable and switch counts and costs, and limits the number of applications that can economically be deployed on VMware. By its very nature, it also increases the complexity of server management.
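As a rough illustration of how quickly this adds up, the sketch below tallies cables and switch ports per rack for the 1Gb segmentation approach; the link count per server and the rack density are assumed figures for illustration, not vendor numbers.

```python
# Back-of-envelope cabling arithmetic for the 1GbE segmentation approach.
# The figures below (links per server, servers per rack) are illustrative
# assumptions, not vendor-published numbers.

LINKS_PER_SERVER = 6    # separate 1GbE uplinks to keep traffic classes apart
SERVERS_PER_RACK = 20   # assumed 1U server density

cables_per_rack = LINKS_PER_SERVER * SERVERS_PER_RACK
switch_ports_per_rack = cables_per_rack  # every cable terminates on a switch port

print(f"Cables per rack:       {cables_per_rack}")         # 120
print(f"Switch ports per rack: {switch_ports_per_rack}")   # 120
print(f"Aggregate bandwidth per server: {LINKS_PER_SERVER} Gb/s")
```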
The alternative offered by VMware is to use heavy-duty 10Gb links alongside its traffic QoS feature. While this method does reduce the cable count, it can prove expensive: no data centre manager wants to deploy 10Gb connections knowing they will be underutilised. The other issue is that physical separation no longer exists in the server environment; multiple traffic flows are pushed down the same pipe, adding management complexity to a process that would normally be segmented.
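To make that logical-rather-than-physical separation concrete, here is a simplified, shares-proportional model of how a single 10Gb link might be carved up between traffic classes under contention. It is a sketch of the general QoS idea only, not VMware’s actual Network I/O Control implementation, and the share values are assumptions.

```python
# Simplified shares-proportional carving of one 10 Gb/s link between traffic
# classes. Illustrative model of QoS-style logical separation only; the
# share values are assumed, not taken from any vendor default.

LINK_GBPS = 10.0

shares = {
    "virtual machine traffic": 100,
    "IP storage (iSCSI/NFS)":  100,
    "live migration":           50,
    "management":               25,
}

total_shares = sum(shares.values())
for traffic_class, share in shares.items():
    guaranteed = LINK_GBPS * share / total_shares
    print(f"{traffic_class:<26} {guaranteed:.2f} Gb/s under contention")
```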
So while infrastructure virtualisation is a 21st-century necessity, as no business wants to waste its data centre resources (not when the difference in server utilisation can be as much as 75%), its very nature causes associated I/O problems. The question becomes: how do you optimise the optimised? The answer lies in how the resulting traffic is handled. While VMware offers two solutions, each brings either more cabling or higher capital and operational costs. Neither was ever the goal of virtualisation, so it’s only right that the same principles VMware started with are applied to the resulting traffic. So what’s the solution?
There are two major strands of thinking on I/O consolidation and where it should take place: at the access layer or at the network layer. With access layer consolidation, I/O is consolidated between the server and the network; with network layer consolidation, it happens within the fabric itself. Various vendors have proposed network layer solutions that rely on Ethernet and FCoE to reduce the number of network switches and adaptors, but that methodology does not support InfiniBand, SAS or other host adapters. That leaves the prudent, forward-thinking data centre manager with access layer solutions: dedicated top-of-rack devices designed to address the I/O excess.
So let’s think about it logically. I/O has always relied on dedicated PCI Express adapters residing inside the server. Up to eight cables per server connect to them, with multiple switches piping data back and forth between the data centre and the network. Server virtualisation increased the traffic, and the cables increased in proportion. Yet what if all that traffic could be directed through a single PCI Express connection, which then segments and reroutes it virtually? That is a modern-day reality.
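As a minimal conceptual sketch of that idea, the following models a single physical connection being presented to the server as several virtual devices, each carrying a different traffic type. The device names, reservations and link capacity are all hypothetical, and the model deliberately ignores oversubscription.

```python
# Conceptual sketch only: one physical connection to a top-of-rack I/O pool
# is presented to the server as several virtual devices. Names, protocols
# and capacities are hypothetical illustrations.

from dataclasses import dataclass, field

@dataclass
class PhysicalLink:
    name: str
    capacity_gbps: float
    virtual_devices: list = field(default_factory=list)

    def attach(self, device_name: str, protocol: str, reserved_gbps: float) -> None:
        allocated = sum(d["reserved_gbps"] for d in self.virtual_devices)
        if allocated + reserved_gbps > self.capacity_gbps:
            raise ValueError("link oversubscribed")  # this simple model forbids oversubscription
        self.virtual_devices.append(
            {"name": device_name, "protocol": protocol, "reserved_gbps": reserved_gbps}
        )

# Assumed capacity roughly in line with a PCIe 2.0 x8 connection (~32 Gb/s per direction).
link = PhysicalLink("pcie-slot-1", capacity_gbps=32.0)
link.attach("vnic0", "10GbE", 10.0)   # LAN traffic
link.attach("vhba0", "8Gb FC", 8.0)   # SAN traffic
link.attach("vnic1", "1GbE", 1.0)     # management

for device in link.virtual_devices:
    print(device)
```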
Instead of multiple 1Gb lines or heavyweight 10Gb Ethernet cables, data centre managers can deploy a single PCI Express cable connecting the server to a pool of I/O resources at the top of the rack. Different vendors offer different rack-based solutions, but the principle is the same. So, how do you deal with an increase in traffic born of server virtualisation? Virtualise the traffic as well. The beauty of this approach is that PCI Express cables are universal and can support a number of different protocols (1Gb or 10Gb Ethernet, Fibre Channel, InfiniBand, SAS/SATA, FCoE, iSCSI, GPUs, etc.), and even better, if you need more bandwidth the link can easily be upgraded.
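To give a sense of the bandwidth headroom involved, the following back-of-envelope comparison sets a handful of dedicated 1Gb links against a single consolidated PCI Express connection; the lane counts and generations are assumptions chosen purely for illustration, and the per-lane rates are approximate.

```python
# Rough bandwidth comparison: several dedicated 1GbE links versus one
# consolidated PCIe connection. Lane counts and generations are assumed;
# effective per-lane rates are approximate, per direction.

dedicated_1gbe_links = 6
dedicated_total_gbps = dedicated_1gbe_links * 1.0

# Approximate usable bandwidth per lane: PCIe 2.0 ~4 Gb/s (8b/10b encoding),
# PCIe 3.0 ~7.9 Gb/s (128b/130b encoding).
pcie_gen2_x8 = 8 * 4.0
pcie_gen3_x8 = 8 * 7.9

print(f"Six dedicated 1GbE links: {dedicated_total_gbps:.0f} Gb/s")
print(f"One PCIe Gen2 x8 cable:   {pcie_gen2_x8:.0f} Gb/s")
print(f"Upgraded to Gen3 x8:      {pcie_gen3_x8:.0f} Gb/s")
```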
I/O virtualisation is only starting to gain traction in the market, but it’s essential that data centre managers consider aspects such as the type of transport used and protocol adaptability. You need to make sure a solution supports your storage protocols (SATA, for example, or those of an incumbent SAN), as these factors will all affect the time it takes to deploy new servers and, in turn, the cost savings and efficiencies your business will see.
As virtualised data centre environments become the norm, virtualised I/O will follow as the next piece of the efficiency puzzle.
1. Gartner, Top Five Server Virtualization Trends, 2012, 21st March 2012:
Tags: Design & Facilities Management