Recently we were implementing a backup site for a web-based application. The primary site uses a blade chassis, with multiple 1Gb Ethernet switches, and 8Gb fibre channel SAN connectivity. The backup site requires less processing resource, so a few beefy rackmount servers were used, rather than a whole blade chassis. These servers run a bunch of virtual machines, using around 5 different VLANs for various functions.
We’re not big fans of fibre channel, and adding HBAs plus switches was looking expensive, so we started looking at iSCSI. IBM have recently added 10Gb connectivity to the Storwize V7000, so this seemed a good option. The recommendation was to use Emulex dual-port 10Gb Ethernet cards in the servers, connected to a pair of BNT G8124 24-port 10GbE switches. We could use the Virtual Fabric feature to carve the NICs up into vNICs, applying different bandwidth and security policies to each vNIC. Brilliant. Well OK, so they were Nortel/BNT switches, but they’re not that bad. Much better price/performance than certain other large vendors.
The plan was to have redundant connections from the servers, storage array and firewalls all going to the same pair of switches. vNICs would be used to present multiple NICs to the operating system. 2Gb could be allocated for data, and 8Gb allocated to storage traffic. We wanted to run a trunk over the 2Gb vNIC, breaking out to different VLANs on the switch, for connectivity to the upstream firewalls, and other devices. The 8Gb would be straight access mode for our iSCSI traffic. Good clean separation of traffic types. The presentation of multiple NICs to the OS gave us a few more options, letting us play around with multipathing for the iSCSI traffic, but active/passive bonding for our normal traffic. So we’ll have the network design we want, but reduced cabling and administration. Great, let’s do that!
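The intended carve-up of each 10GbE port is simple enough to write down as data. This is just a sanity check of our plan expressed in Python — the names and structure are mine, not anything generated by the Emulex or BNT tooling:

```python
# Planned vNIC split for one 10GbE physical port (illustrative only).
vnics_per_port = [
    {"name": "data",    "gbps": 2, "mode": "trunk"},   # tagged; breaks out to our ~5 VLANs
    {"name": "storage", "gbps": 8, "mode": "access"},  # untagged iSCSI towards the V7000
]

# The vNIC shares have to fit within the physical port's 10Gb.
allocated = sum(v["gbps"] for v in vnics_per_port)
assert allocated <= 10, "vNIC shares must fit within the 10Gb physical port"
```

With a dual-port card per server, that gives two of each vNIC, which is what lets us multipath the storage traffic and bond the data traffic separately.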
Yeah, not so fast. It turns out that the vNIC implementation has some annoying limitations if you’re working in a small network. vNICs work by adding an outer VLAN tag to frames. This is done by the card; the OS never sees this outer tag. vNICs, and other non-vNIC ports, are placed into “vNIC groups,” each with an associated VLAN ID. The switch looks at that outer tag and uses it, along with the destination MAC address, to work out which port to switch the frame out of. If the frame is to be switched out another vNIC port, the outer tag is left intact. The receiving card strips the outer tag, and presents the frame to the OS via the right interface. There may also be an inner VLAN tag, which the OS will handle as if it had arrived via a normal trunk port. This diagram from the BNT Application Guide shows the flow:
If the frame is forwarded out a non-vNIC port, the switch will strip the outer tag as it leaves. At no time does the switch look for the presence of any inner tag. If there is an inner tag, it will be left intact, and will not form part of any forwarding decision.
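That forwarding behaviour is easier to see in code than in prose. Here’s a toy Python model of it — all the names and structures are my own for illustration, not anything from BNT’s actual implementation:

```python
# Toy model of vNIC-group forwarding: the switch uses only the outer
# (vNIC group) tag plus the destination MAC; it never looks at any
# inner 802.1Q tag.

class VNICSwitch:
    def __init__(self, vnic_ports, group_of_port, mac_table):
        self.vnic_ports = vnic_ports        # set of ports facing vNICs
        self.group_of_port = group_of_port  # port -> vNIC group VLAN ID
        self.mac_table = mac_table          # (group, dst_mac) -> egress port

    def forward(self, frame, in_port):
        group = self.group_of_port[in_port]
        out_port = self.mac_table[(group, frame["dst_mac"])]
        egress = dict(frame)
        if out_port not in self.vnic_ports:
            # Non-vNIC egress: the switch strips the outer tag on the way
            # out, but leaves any inner tag exactly as it arrived.
            egress.pop("outer_vlan", None)
        # vNIC egress: outer tag stays intact; the receiving card strips it.
        return out_port, egress

# A frame from a vNIC, carrying an inner 802.1Q tag, heading for an
# access-mode (non-vNIC) port in the same vNIC group:
sw = VNICSwitch(
    vnic_ports={"vnic1.1"},
    group_of_port={"vnic1.1": 4001, "access7": 4001},
    mac_table={(4001, "aa:bb:cc:00:00:01"): "access7"},
)
port, out = sw.forward(
    {"dst_mac": "aa:bb:cc:00:00:01", "outer_vlan": 4001, "inner_vlan": 20},
    in_port="vnic1.1",
)
```

The outer tag is gone by the time the frame leaves, but `inner_vlan` 20 is still there — so the “access” port actually emits a tagged frame, which is exactly the problem described next.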
So what happens if you have your server trying to send normal 802.1Q tagged frames via the vNIC to the switch, and you want those frames to have the normal 802.1Q tag stripped, and the frame delivered to an access mode port? You can’t do it. Not unless you deliver the frame to another switch, stripping the vNIC tag as it leaves the switch. The other switch can then receive normal 802.1Q-tagged frames, and strip tags as it forwards out access-mode interfaces.
Or what if, for some reason, you had another trunk port that you wanted to receive traffic from different vNIC groups? You can’t do that either – a non-vNIC port can only be associated with one vNIC group. A vNIC group can have multiple vNIC members, but a vNIC can only be a member of one vNIC group.
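The membership rules boil down to one constraint: any port or vNIC belongs to at most one vNIC group. A quick sketch of that rule, in my own modelling rather than BNT config syntax:

```python
# Membership rule: a vNIC, or a non-vNIC port, can belong to at most
# one vNIC group (illustrative model, not BNT configuration).

class VNICGroups:
    def __init__(self):
        self.group_of = {}  # port or vNIC name -> group VLAN ID

    def add(self, group_id, member):
        if member in self.group_of and self.group_of[member] != group_id:
            raise ValueError(
                f"{member} is already in vNIC group {self.group_of[member]}")
        self.group_of[member] = group_id

cfg = VNICGroups()
cfg.add(4001, "vnic1.1")   # a vNIC joins group 4001 - fine
cfg.add(4001, "port17")    # a non-vNIC trunk port joins the same group - fine
try:
    cfg.add(4002, "port17")  # but the same port can't also feed group 4002
    clash = False
except ValueError:
    clash = True
```

So the trunk port you hoped would carry several vNIC groups upstream is stuck serving just one of them.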
Obviously, It Depends (TM) on your network how well vNICs will work for you. But if you’ve only got a few devices, and you’re trying to do something ‘interesting’, then be aware that they may not be for you. Or maybe someone out there has some other ideas on how this could be made to do what I want to do?