Yeah, so I’m at VMworld in Copenhagen. It’s nice here.
VMware vCloud and networking
VMware vCloud is an interesting spin on the virtualisation story that focuses on automated provisioning and deployment of virtual servers, and part of provisioning servers is their network connectivity. Most people will know that VMware already has the vSwitch and Distributed vSwitch as part of the platform, which removes a lot of networking configuration from the physical devices. Let's assume for the moment that this is acceptable from a security and performance perspective. Note that Cisco's Nexus 1000V rounds out the software switching options with further feature enhancements over the Distributed Switch.
In the vCloud Director platform, you must create pools of network resources that are then used by the servers to connect to the physical platform. For this to work, the network resources need to be deployed across many servers, which implicitly means the physical servers must be trunked into many VLANs.
Network pools can be one of these types:
- VLAN-backed – a range of VLAN IDs and a vNetwork distributed switch are available in vSphere. The VLAN IDs must be valid IDs that are configured in the physical switch to which the ESX/ESXi servers are connected.
- vCloud isolated networks – An isolation-backed network pool does not require pre-existing port groups in vSphere, but it does need a vSphere vNetwork distributed switch. It uses dynamically created port groups. A vCloud isolated network spans hosts, provides traffic isolation from other networks, and is the best source for vApp networks.
- vSphere port groups – Unlike other types of network pools, a network pool that is backed by port groups does not require a vNetwork distributed switch. This is the only type of network pool that works with Cisco Nexus 1000V virtual switches.
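To make the VLAN-backed pool idea concrete, here's a minimal sketch of how a provisioning layer might hand out VLAN IDs from a configured range. This is purely my own illustration — vCloud Director's internals aren't public, and all the names here are invented:

```python
# Hypothetical sketch of a VLAN-backed network pool. Not VMware code;
# class and method names are my own invention for illustration.

class VlanPool:
    def __init__(self, first_id, last_id):
        # The VLAN IDs in this range must already be configured on the
        # physical switches the ESX/ESXi hosts connect to.
        self.available = set(range(first_id, last_id + 1))
        self.in_use = {}  # network name -> allocated VLAN ID

    def allocate(self, network_name):
        """Carve a VLAN out of the pool for a new organisation network."""
        if not self.available:
            raise RuntimeError("VLAN pool exhausted")
        vlan_id = min(self.available)
        self.available.remove(vlan_id)
        self.in_use[network_name] = vlan_id
        return vlan_id

    def release(self, network_name):
        """Return a VLAN to the pool when its network is deleted."""
        self.available.add(self.in_use.pop(network_name))

pool = VlanPool(100, 109)
vid = pool.allocate("org-net-1")  # -> 100
```

The point of the sketch is the constraint it encodes: the pool can only hand out IDs that Network Ops has already trunked everywhere, which is exactly the coupling the isolated-network type tries to remove.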
Let's pretend, for a minute, that it's NOT a really bad idea to enable all VLANs on all ports because of STP/MSTP/RSTP/PVST, or security, or backbone performance, or broadcast storms created by ARP floods, and so on. Right, got that? Let's pretend that. As Ivan Pepelnjak at IOS Hints notes (and as we have often discussed), the size of L2 domains is a major concern in scaling out VMware clouds, and watching senior VMware engineers simply ignore these issues is somewhat chilling.
So, you would want to be able to deploy a server to any VLAN, anywhere. OK, I can kind of go with this (KIND OF).
vCloud Isolated Networks
Of the three types of vCloud networks, the most interesting is the ISOLATED network. This means that vCloud servers can have their own private network, defined independently of the network admins. To quote: “it’s too painful to ring up Network Ops and get VLANs configured, so we set this up so you don’t need to”.
The slides then go on to describe this network as using MAC-in-MAC encapsulation. This means that a private Layer 2 network is overlaid on the physical network between the VMware physical hosts and the vSwitches.
Yeah, that’s a problem waiting to happen. I’ve got some research to do to understand the processing impact of de-encapsulating Ethernet frames and what VMware is using for loop prevention. Until now, I think VMware has used software rules to ensure that vSwitch configurations can never build looped topologies, but creating an L2 overlay on top of an existing L2 network is going to pose some interesting challenges for Spanning Tree, switch loads and frame MTUs.
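On the MTU point, the arithmetic is simple enough to sketch. The slides didn't say which encapsulation format is used, so the numbers below assume the most basic case: a plain outer Ethernet header (plus an 802.1Q tag) wrapped around a full-sized inner frame. Real 802.1ah-style MAC-in-MAC carries more overhead again:

```python
# Back-of-envelope MTU arithmetic for MAC-in-MAC, assuming a minimal
# outer header. The actual overhead depends on the encapsulation format,
# which hasn't been specified.

ETH_HEADER = 14        # outer dst MAC (6) + src MAC (6) + EtherType (2)
VLAN_TAG = 4           # 802.1Q tag on the outer frame
INNER_PAYLOAD = 1500   # standard Ethernet MTU for the inner frame

inner_frame = ETH_HEADER + INNER_PAYLOAD            # 1514 bytes, header + payload
outer_frame = ETH_HEADER + VLAN_TAG + inner_frame   # 1532 bytes

print(outer_frame)  # 1532
```

Every switch and NIC in the path has to carry that oversized frame, so either the physical network runs jumbo (or at least "baby giant") frames end to end, or the inner MTU gets squeezed down — neither of which happens without the network team's involvement.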
The road to the cloud isn’t getting any easier.
HP have provided travel, accommodation and entry to VMworld Copenhagen 2010; however, my opinions and thoughts remain stubbornly my own.