Quick Look at Networking Features in vSphere 5.1

TL;DR: this looks like the first version of vSphere with the least amount of compromise for large corporates. In other words, it's more usable than before. Importantly, VMware has delivered a lot of networking features in this release, and it would be fair to say they are either "overdue" or "much anticipated". Take your choice.

References are linked – quick reads, and recommended.

iSCSI Storage Driver Upgrade for Jumbo Frames

The emergence of 10Gb Ethernet in the datacenter and the popularity of network-based storage have created the need to provide features that enable users to fully utilize larger networking pipes. Jumbo frame support was available in previous releases of vSphere for virtual machine, NFS, and software-based iSCSI network traffic. vSphere 5.1 adds jumbo frame support to all hardware iSCSI adapters, including both dependent and independent hardware iSCSI adapters. Utilizing 10Gb networking and the new vSphere 5.1 jumbo frame support, hardware-based iSCSI throughput can be significantly increased.
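For context, enabling jumbo frames on an ESXi 5.x host is still a per-vSwitch and per-vmknic MTU setting. A minimal sketch from the ESXi shell – the names vSwitch0 and vmk1 below are placeholders, not from the source:

```shell
# Raise the MTU on the standard vSwitch carrying iSCSI traffic (name is an example)
esxcli network vswitch standard set --vswitch-name=vSwitch0 --mtu=9000

# Raise the MTU on the VMkernel interface bound to the iSCSI adapter (name is an example)
esxcli network ip interface set --interface-name=vmk1 --mtu=9000

# Verify the MTU took effect
esxcli network ip interface list
```

Remember that jumbo frames only help end to end: the physical switches and the storage array ports must also be set to MTU 9000 or larger.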

I’m guessing that the performance improvement is measured against the previous version, 5.0.

Throughput Improvements When Using Jumbo Frames with 10Gb Ethernet (64KB Block Size, 100% Sequential)

HW iSCSI   +88%   +20%
SW iSCSI   +11%   +40%
NFS         +9%   +32%

vDS Scalability Improvements and Enhancements

The vSphere Distributed Switch is a centrally managed, datacenter-wide virtual switch. It offers the same raw performance as the standard virtual switch but includes advanced networking features and scalability. Having one centrally managed virtual switch across the entire vSphere environment greatly simplifies networking and network management in the datacenter. vSphere 5.1 increases the manageability and recovery of the VDS by providing automatic rollback from misconfigurations as well as recovery and reconfiguration capabilities directly from the local console of the vSphere host.
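As a side note, the host-local view of a VDS – which is what the new recovery and reconfiguration options operate on – can be inspected from the ESXi shell:

```shell
# Show each distributed switch as this host sees it: uplinks, MTU, client ports
esxcli network vswitch dvs vmware list
```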

In addition to the manageability and recovery enhancements made to the VDS, vSphere 5.1 greatly increases the number of switches supported per server and doubles the number of port groups supported per vCenter Server.

Metric                                            vSphere 5.0   vSphere 5.1
Number of VDS per vCenter Server                       32            128
Number of Static Port Groups per vCenter Server     5,000         10,000
Number of Distributed Ports per vCenter Server     30,000         60,000
Number of Hosts per VDS                               350            500

Reference: What’s New in VMware vSphere® 5.1 – Performance

Link Aggregation Control Protocol Support

LACP enables a network device to negotiate an automatic bundling of links by sending LACP packets to the peer. As part of the vSphere 5.1 release, VMware now supports this standards-based link aggregation protocol.

My note – finally. Few people understand why this took so long to ship. Bear in mind, though, that a physical switch supports only a limited number of LACP bundles, and we might find ourselves running out of them real soon.

This dynamic protocol provides the following advantages over the static link aggregation method supported by previous versions of vSphere:

* Plug and Play – automatically configures and negotiates between the host and the access-layer physical switch
* Dynamic – detects link failures and cabling mistakes and automatically reconfigures the links
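LACP only negotiates if the physical switch side is configured to speak it as well. A minimal sketch of the matching configuration on a Cisco IOS access switch – the interface numbers, description and trunking are examples, not from the source:

```
interface range GigabitEthernet1/0/1 - 2
 description ESXi host uplinks
 channel-group 1 mode active
!
! "mode active" sends LACPDUs unprompted; "mode passive" only responds.
! At least one side of the bundle must be active.
interface Port-channel1
 switchport mode trunk
```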

Port Mirroring (RSPAN and ERSPAN)

Users can employ the RSPAN and ERSPAN features when they want to centrally monitor network traffic and have a sniffer or network analyzer device connected multiple hops away from the monitored traffic.
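ERSPAN wraps mirrored frames in GRE, so the remote analyzer just needs IP reachability from the host. As a hedged sketch, on a Linux analyzer box you could confirm mirrored traffic is arriving with tcpdump – the interface name and source address below are placeholders:

```shell
# Capture GRE-encapsulated mirror traffic (IP protocol 47 = GRE)
# arriving from the ESXi host at 192.0.2.10 (example address)
tcpdump -ni eth0 'ip proto 47 and src host 192.0.2.10'
```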

Notably, VMware has decided to interoperate at the hardware level, i.e. work with physical switch SPAN features.

Enhanced SNMP Support

ESXi hosts provide support for running an SNMP agent. Looks like standards-compliant support for IF-MIB and the expected SNMP tables, including LAG, IP and Bridging MIBs. This opens the way for a lot of existing tools to get visibility into vSphere hosts.
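For reference, the embedded agent is configured from esxcli. A minimal sketch – the community string is a placeholder (pick something better than public):

```shell
# Set a read community and enable the ESXi SNMP agent
esxcli system snmp set --communities public
esxcli system snmp set --enable true

# Send a test notification to confirm the agent is working
esxcli system snmp test
```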

Single Root I/O Virtualization

I still don’t understand SR-IOV’s applications to server architecture and I’m still looking for documentation that can explain it to me. Any help would be appreciated.

Single Root I/O Virtualization (SR-IOV) is a standard that enables one PCI Express (PCIe) adapter to be presented as multiple separate logical devices to virtual machines. The hypervisor manages the physical function (PF) while the virtual functions (VFs) are exposed to the virtual machines. In the hypervisor, SR-IOV-capable network devices offer the benefits of direct I/O, which include reduced latency and reduced host CPU utilization. The ESXi platform’s VMDirectPath passthrough functionality provides similar benefits to the user, but it requires a physical adapter per virtual machine. In SR-IOV, this functionality can be provided from a single adapter to multiple virtual machines through virtual functions.
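In practice, enabling SR-IOV on an ESXi 5.1 host means telling the NIC driver how many virtual functions to expose, then rebooting. A sketch assuming an Intel 82599-based adapter using the ixgbe driver – the driver name and per-port VF counts are examples:

```shell
# Expose 4 virtual functions on each of two ixgbe ports (requires a host reboot)
esxcli system module parameters set -m ixgbe -p "max_vfs=4,4"

# Confirm the parameter is set
esxcli system module parameters list -m ixgbe | grep max_vfs
```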

Reference: What’s New in VMware vSphere® 5.1 – Networking – http://www.vmware.com/files/pdf/techpaper/Whats-New-VMware-vSphere-51-Network-Technical-Whitepaper.pdf

  • Paul Jones

    Single Root I/O Virtualization – Most likely was included to add support for FusionIO devices.

  • thehevy

    SR-IOV adds additional functionality to DirectPath I/O by increasing the number of VMs that can be directly assigned to a single network port. Before SR-IOV we had a 1:1 ratio; now we have a many:1 ratio. Dedicating a single 10GbE port to a single VM was hard to justify, but now we can assign more VMs based on load requirements.

    We are doing this for network and security appliances, especially workloads that have a lot of small-packet traffic or that require lower latency than a vDS can provide.

    It does not make sense for most applications but it does for some that have not been able to be virtualized in the past due to poor software virtual switch performance.

    Brian Johnson

  • http://twitter.com/kurns Matt Hobbs

    And the SR-IOV gotcha:

    Users who are concerned about network latency can employ this feature and offload the processing onto the hardware of the PCIe adapter. However, they can’t make use of the VMware vSphere® vMotion® (vMotion), VMware vSphere® Fault Tolerance (VMware FT), and vSphere HA features while using this functionality.

    • thehevy

      I would like to point out that the limitations or incompatibilities are not necessarily related to SR-IOV specifically; they are limitations of VMware DirectPath I/O (PCI pass-through) direct assignment. SR-IOV provides the capability of assigning more than one VM to a single port using a standards-based method. It is up to the hypervisor vendor to determine how best to use the VFs. VMware’s model focuses on performance instead of feature compatibility. Intel and VMware worked on a different model a few years back, called Network Plug-in Architecture (NPA), that did allow vMotion and other features, but it impacted the performance benefit, so we moved to the model you see in vSphere 5.1.

  • http://twitter.com/vDanBarr Dan Barr

    No mention of the addition of BPDU Filter (though some have mistakenly been calling it BPDU Guard)? While I agree with Ivan that Guard would have been the better feature to implement, at least this does prevent a DoS of your entire vSphere cluster if BPDUGuard is enabled at the physical switch level.

    • http://etherealmind.com Etherealmind

      No. Is it really worth talking about features that should have been available in 2009?