TL;DR – this looks like the first version of vSphere with the least amount of compromise for large corporates. In other words, it’s more usable than before. Importantly, VMware has delivered a lot of networking features in this release, and it would be fair to say they are either “overdue” or “much anticipated”. Take your choice.
References are linked – they’re a quick read and recommended.
iSCSI Storage Driver Upgrade for Jumbo Frames
The emergence of 10Gb Ethernet in the datacenter and the popularity of network-based storage have created the need to provide features that enable users to fully utilize larger networking pipes. Jumbo frame support was available in previous releases of vSphere for virtual machine, NFS, and software-based iSCSI network traffic. vSphere 5.1 adds jumbo frame support to all hardware iSCSI adapters, including both dependent and independent hardware iSCSI adapters. Utilizing 10Gb networking and the new vSphere 5.1 jumbo frame support, hardware-based iSCSI throughput can be significantly increased.
I’m guessing that the performance improvement is measured against the previous version, 5.0.
(Table: read and write throughput by protocol.)
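Worth remembering that jumbo frames are still an end-to-end exercise: the physical switches, the vSwitch and the VMkernel port all need MTU 9000. Here is a minimal pyVmomi sketch of the VMkernel side, assuming a vCenter at vcenter.example.com and that vmk1 is the iSCSI VMkernel port – all placeholder names, not anything from the whitepaper.

```python
# A minimal sketch, assuming pyVmomi; host names, credentials and the choice
# of vmk1 as the iSCSI VMkernel port are placeholders for illustration.
import ssl
from pyVim.connect import SmartConnect

si = SmartConnect(host="vcenter.example.com",
                  user="administrator@vsphere.local", pwd="secret",
                  sslContext=ssl._create_unverified_context())
content = si.RetrieveContent()
host = content.searchIndex.FindByDnsName(dnsName="esxi01.example.com",
                                         vmSearch=False)

netsys = host.configManager.networkSystem
for vnic in netsys.networkConfig.vnic:
    if vnic.device == "vmk1":                 # assumed iSCSI VMkernel port
        spec = vnic.spec
        spec.mtu = 9000                       # jumbo frames on the vmkernel port;
        netsys.UpdateVirtualNic("vmk1", spec) # the vSwitch and physical path need 9000 too
```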
vDS Scalability Improvements and Enhancements
The vSphere Distributed Switch is a centrally managed, datacenter-wide virtual switch. It offers the same raw performance as the standard virtual switch but includes advanced networking features and scalability. Having one centrally managed virtual switch across the entire vSphere environment greatly simplifies networking and network management in the datacenter. vSphere 5.1 increases the manageability and recovery of the VDS by providing automatic rollback from misconfigurations as well as recovery and reconfiguration capabilities directly from the local console of the vSphere host.
In addition to the manageability and recovery enhancements made to the VDS, vSphere 5.1 greatly increases the number of switches supported per server and doubles the number of port groups supported per vCenter Server.
| VDS PROPERTIES | 5.0 LIMIT | 5.1 LIMIT |
|---|---|---|
| Number of VDS per vCenter Server | 32 | 128 |
| Number of Static Port Groups per vCenter Server | 5,000 | 10,000 |
| Number of Distributed Ports per vCenter Server | 30,000 | 60,000 |
| Number of Hosts per VDS | 350 | 500 |
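To make the “centrally managed” point concrete, here’s a hedged pyVmomi sketch that adds a distributed port group to an existing VDS from vCenter. The switch name dvSwitch01 and the port group settings are illustrative assumptions.

```python
# A hedged sketch, assuming pyVmomi and a VDS named dvSwitch01 (placeholder).
import ssl
from pyVim.connect import SmartConnect
from pyVmomi import vim

si = SmartConnect(host="vcenter.example.com",
                  user="administrator@vsphere.local", pwd="secret",
                  sslContext=ssl._create_unverified_context())
content = si.RetrieveContent()

# Simple container-view walk to find the switch by name.
view = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.DistributedVirtualSwitch], True)
dvs = next(s for s in view.view if s.name == "dvSwitch01")

pg_spec = vim.dvs.DistributedVirtualPortgroup.ConfigSpec(
    name="pg-prod-vlan10",
    numPorts=128,
    type="earlyBinding")   # static binding; these count against the per-vCenter limits above
dvs.AddDVPortgroup_Task([pg_spec])
```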
Link Aggregation Control Protocol Support
LACP enables a network device to negotiate an automatic bundling of links by sending LACP packets to the peer. As part of the vSphere 5.1 release, VMware now supports this standards-based link aggregation protocol.
My note – finally. Few people understand why this took so long to ship. That said, a physical switch supports only a limited number of LACP bundles, and we might find ourselves counting them real soon.
This dynamic protocol provides the following advantages over the static link aggregation method supported by previous versions of vSphere:
* Plug and Play – automatically configures and negotiates between host and access layer physical switch
* Dynamic – detects link failures and cabling mistakes and automatically reconfigures the links
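As a rough illustration of where LACP lives in the 5.1 object model, here’s a hedged pyVmomi sketch that enables active-mode LACP on a VDS uplink port group. The property names follow my reading of the API and should be checked against the vSphere API reference; the physical switch still needs a matching LACP port channel.

```python
# A hedged sketch, assuming pyVmomi; verify the type names against the
# vSphere API reference before relying on this.
from pyVmomi import vim

def enable_lacp_active(uplink_pg):
    """uplink_pg: the vim.dvs.DistributedVirtualPortgroup used for uplinks."""
    lacp = vim.dvs.VmwareDistributedVirtualSwitch.UplinkLacpPolicy(
        enable=vim.BoolPolicy(value=True),
        mode=vim.StringPolicy(value="active"))   # "active" or "passive"

    setting = vim.dvs.VmwareDistributedVirtualSwitch.VmwarePortConfigPolicy(
        lacpPolicy=lacp)

    spec = vim.dvs.DistributedVirtualPortgroup.ConfigSpec(
        configVersion=uplink_pg.config.configVersion,
        defaultPortConfig=setting)
    return uplink_pg.ReconfigureDVPortgroup_Task(spec)
```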
Port Mirroring (RSPAN and ERSPAN)
Users can employ the RSPAN and ERSPAN features when they want to centrally monitor network traffic and have a sniffer or network analyzer device connected multiple hops away from the monitored traffic.
Notably, VMware has decided to interoperate at the hardware level, i.e. work with the physical switch SPAN features.
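If you want to see what mirror sessions a VDS is carrying, a small pyVmomi sketch like the one below should do it. Session types such as encapsulatedRemoteMirrorSource are the ERSPAN-style ones, while remoteMirrorSource/remoteMirrorDest map to RSPAN-style use; treat the property names as my reading of the API rather than gospel.

```python
# A hedged sketch, assuming pyVmomi and an already-located VDS object "dvs"
# (e.g. found via a container view as in the earlier sketch).
def list_mirror_sessions(dvs):
    for session in dvs.config.vspanSession or []:
        print(session.name, session.sessionType, "enabled:", session.enabled)
```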
Enhanced SNMP Support
ESXi hosts provide support for running an SNMP agent. It looks like standards-compliant support for IF-MIB and the expected SNMP tables, including LAG, IP and Bridging MIBs. This opens the way for a lot of existing tools to get visibility into vSphere guests.
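Configuration is exposed through the host’s SNMP agent object in the API (most people will probably just drive it from the shell with esxcli system snmp). A hedged pyVmomi sketch, where the host object is assumed to have been looked up as in the earlier sketch and the community string is a placeholder:

```python
# A hedged sketch, assuming pyVmomi; "host" is a vim.HostSystem and the
# community string is a placeholder.
def enable_snmp(host, community="public"):
    snmp = host.configManager.snmpSystem
    cfg = snmp.configuration              # current agent settings
    cfg.enabled = True
    cfg.readOnlyCommunities = [community] # read-only community for pollers
    snmp.ReconfigureSnmpAgent(cfg)
```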
Single Root I/O Virtualization
I still don’t understand SR-IOV’s applications to server architecture and I’m still looking for documentation that can explain it to me. Any help would be appreciated.
Single Root I/O Virtualization (SR-IOV) is a standard that enables one PCI Express (PCIe) adapter to be presented as multiple separate logical devices to virtual machines. The hypervisor manages the physical function (PF) while the virtual functions (VFs) are exposed to the virtual machines. In the hypervisor, SR-IOV–capable network devices offer the benefits of direct I/O, which include reduced latency and reduced host CPU utilization. The ESXi platform’s VMDirectPath passthrough functionality provides similar benefits to the user, but it requires a physical adapter per virtual machine. In SR-IOV, this functionality can be provided from a single adapter to multiple virtual machines through virtual functions.
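For what it’s worth, one way to make the PF/VF split concrete is the stock Linux sysfs interface – nothing to do with ESXi, just a generic illustration. Here eth0 is an assumed SR-IOV capable NIC, and writing sriov_numvfs needs root.

```python
# Not ESXi at all: the generic Linux sysfs SR-IOV interface, used only to show
# the PF/VF relationship. "eth0" is an assumed SR-IOV capable NIC.
from pathlib import Path

pf = Path("/sys/class/net/eth0/device")

total = int((pf / "sriov_totalvfs").read_text())   # VFs the adapter can expose
print(f"PF supports up to {total} virtual functions")

# Ask the PF driver to create 4 VFs; each appears as its own PCIe function
# that a hypervisor could hand to a different virtual machine.
(pf / "sriov_numvfs").write_text("4")
```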