So we all know that the vSwitch is a software connection between guest VMs in a single VMware server. And it seems natural, even intuitive, that communication between two VMs connected to the same vSwitch must be good.
Well, yes, but mostly no. Here is my logic.
Let's take this scenario of four VMs in a single server chassis:
and there are traffic flows between VMs that are on the same vSwitch / VLAN. Like so:
Now this seems like a Good Thing™ and all is well. Traffic flows are localised in the chassis and the packets are all sorted. You'd think that there is nothing to worry about: no need to tell the network team, just leave it at that, and so on. The server will burn a bit of CPU & memory to handle the packet forwarding, but that's easy.
Let's move on a bit as your VM farm grows into something resembling a cloud, and you start migrating your VMs from chassis to chassis as you need more resources. Maybe a bit more memory for VM B, or more CPU cycles for VM D. Then your system looks like this:
This is still a very simple deployment with just a single pair of Ethernet switches, each chassis connected to the same edge or top-of-rack switches. So let's consider a tougher east-west switching challenge (more on this in a post on bisectional bandwidth):
In this case, the traffic no longer uses the vSwitch, and the network must therefore be able to cope with the projected traffic load between VM-A & VM-B and between VM-C & VM-D.
Design Failure for Scaling
In design terms, this is a scaling failure. The single-chassis use case looks good, it's simple, and it works. It doesn't require any hardware or any team engagement. But any implementation that depends on the vSwitch within a chassis is not a scalable design.
The next phase of growth breaks the design. Any design that relies on the vSwitch as its primary networking technology is therefore a design failure. Ergo, the VMware vSwitch is not a networking technology in the proper sense.
If any of those guest VMs need more memory or CPU in the future, and that chassis cannot deliver it, the value of virtualisation is that moving VMs between chassis creates a pool of resources. This is, of course, the supposed value of the 'Cloud'. If you want a cloud, then you have to design for it from the very beginning.
I've not mentioned any storage issues here, but they are equally important, because the storage networking issues are identical. If you want to understand why FCoE / iSCSI are so important, this is the same use case.
The EtherealMind view
In my view, the correct perspective on the vSwitch is to think of it as a shared network adapter. It's not a switch. It has no networking features: no STP, limited QoS, no SPAN or RSPAN, no NetFlow / sFlow, no filtering, no VACLs and so on. The vSwitch features it does have, such as VLAN trunking, link bonding and frame forwarding, are features of any network adapter as much as of an Ethernet switch.
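As a quick illustration (this is a sketch, assuming an ESXi host with a standard vSwitch; "vSwitch0" is an example name), listing a standard vSwitch with esxcli shows how thin the feature set is. What you get back are adapter-level settings such as uplinks, MTU and port groups, plus a few policies, rather than switch-level protocol features:

```shell
# List the standard vSwitches on an ESXi host (run on the host shell).
# Output shows Name, Num Ports, Uplinks, MTU, Portgroups and the like --
# adapter-style properties, not STP / NetFlow / SPAN style features.
esxcli network vswitch standard list

# The policies that *are* configurable -- security, traffic shaping --
# for a vSwitch named "vSwitch0" (example name, substitute your own):
esxcli network vswitch standard policy security get -v vSwitch0
esxcli network vswitch standard policy shaping get -v vSwitch0
```

Note how short the list of tunables is compared to even a modest physical Ethernet switch.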
Don't fall into the trap of thinking that the vSwitch is a high-performance connection or a feature-complete technology. The day you move those two servers apart, that connection cannot be sustained and you will need to redesign the network, which you should have done the first time. A vSwitch is not an Ethernet switch, it doesn't replace the network, and what's worse, it leads to poor design choices if you are not paying close attention to the overall design.
With that said, there are developments in networking that may change this, so stay tuned and I'll see if I can write some more about it.