Here is a block diagram showing the functional areas of private and public cloud that I use when working with clients. I often need to explain the full picture of cloud building, especially how the network can be orchestrated to accelerate the cloud deployment process. I hope you find it useful.
When working with server and VMware people, there is a fair amount of misunderstanding about what is happening in the network. The best technical explanation of what happens in LAG is, of course, at Ivan Pepelnjak's IPSpace Blog – vSphere Does Not Need LAG Bandaids – The Network Does – while Chris Wahl covers the server side for VMware, but I wanted to add something to the debate.
The VMware versus Cisco thing is overstated. It's easy to conflate issues amid all the excitement. The reality is that many customers have Cisco networks and will use VMware. They want Cisco and VMware to be partners. Customer first is corporate policy at both companies, therefore Cisco and VMware will be partners. VMware has a software defined […]
Bob Plankers is making the point that purchasing proprietary virtualization software (such as VMware vCloud or Citrix Cloudstack) has its own value in avoiding having to build and test your own software: There is an attitude among some now that OpenStack is, or at least will be, our savior from vendor lock-in […]
I was going to call this article "Ethernet Switches for Virtualisation Engineers" but, really, everyone should have some understanding of the internals of an Ethernet switch. In particular, I want to focus on how multicast and broadcast frames are handled in a high speed, low latency environment like a Data Centre Network.
It's vital to understand that latency is critical to your application performance. It is common for a single transaction to take hundreds of round trips, so a small increase in latency on each round trip has a large impact on perceived performance. The client will send a chunk of data and wait for acknowledgement. Even setting up the TCP connection takes a few round trips – remember that TCP sessions are set up, and each data transfer is confirmed.
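The multiplication effect is easy to show with a back-of-envelope sketch. The numbers below are hypothetical, chosen only to illustrate how per-round-trip latency compounds across a chatty transaction:

```python
def transaction_wait_ms(round_trips: int, rtt_ms: float) -> float:
    """Total time spent waiting on the network for one transaction."""
    return round_trips * rtt_ms

# Assumed figures for illustration: a transaction needing 300 round trips
# at a 0.5 ms baseline round-trip time, then with 0.1 ms (100 microseconds)
# of extra latency added to every round trip.
round_trips = 300
baseline = transaction_wait_ms(round_trips, 0.5)        # 150.0 ms
degraded = transaction_wait_ms(round_trips, 0.5 + 0.1)  # 180.0 ms

print(f"baseline: {baseline} ms, degraded: {degraded} ms")
```

A tenth of a millisecond per round trip turns into 30 ms of extra wait on this hypothetical transaction – a 20% slowdown the user can perceive, from a latency increase no single ping would reveal.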
A modern network switch will have latency around 10 microseconds. The Cisco Nexus 7000 is about 8 microseconds, and the Brocade VDX 8770 claims less than 4 microseconds. There are many reasons why a switch can be faster or slower, but I'll look at a specific example.
Remember, the latency interval is the time taken to receive a packet, decode the address, look up the forwarding table, switch the packet (and copy it if needed) and transmit it out of an Ethernet interface. That's really fast processing. How does an Ethernet switch do this?
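The forwarding decision itself can be sketched in a few lines. This is a minimal model of a learning switch – not any vendor's implementation, and real switches do this in hardware at line rate – showing the lookup-then-forward-or-flood logic described above:

```python
from typing import Dict, List

BROADCAST = "ff:ff:ff:ff:ff:ff"

class LearningSwitch:
    """Toy model of the Ethernet forwarding decision (software sketch only)."""

    def __init__(self, ports: List[int]) -> None:
        self.ports = ports
        self.mac_table: Dict[str, int] = {}  # MAC address -> egress port

    def receive(self, in_port: int, src_mac: str, dst_mac: str) -> List[int]:
        # Learn: record which port the source MAC was seen on.
        self.mac_table[src_mac] = in_port
        # Broadcasts and unknown unicast destinations are flooded
        # out of every port except the one the frame arrived on.
        if dst_mac == BROADCAST or dst_mac not in self.mac_table:
            return [p for p in self.ports if p != in_port]
        # Known unicast: forward out of the single learned port.
        return [self.mac_table[dst_mac]]

sw = LearningSwitch(ports=[1, 2, 3])
print(sw.receive(1, "aa:aa:aa:aa:aa:aa", BROADCAST))            # flooded: [2, 3]
print(sw.receive(2, "bb:bb:bb:bb:bb:bb", "aa:aa:aa:aa:aa:aa"))  # learned: [1]
```

Note how the flood path is where multicast and broadcast cost appears: the switch must replicate the frame to many ports, which is exactly why their handling matters so much in a low-latency data centre network.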