Here is a block diagram showing the functional areas in private & public cloud that I use when working with clients. I often find myself explaining the full picture of cloud building, especially how the network can be orchestrated to fully accelerate the cloud process. I hope you find it useful.
When working with Server and VMware people, there is a fair amount of misunderstanding about what is happening in the network. The best technical explanation of what is happening in LAG is, of course, described at Ivan Pepelnjak’s IPSpace Blog – vSphere Does Not Need LAG Bandaids – The Network Does – while Chris Wahl talks about the server side for VMware, but I wanted to add something to the debate.
The VMware versus Cisco thing is overstated. It’s easy to conflate issues amid all the excitement. The reality is that many customers have Cisco networks and will use VMware. They want Cisco & VMware to be partners. Customer first is corporate policy at both of these companies, therefore Cisco & VMware will be partners. VMware has a software defined […]
Bob Plankers is making the point that purchasing proprietary corporate software for virtualization (such as VMware vCloud or Citrix CloudStack) has its own value by avoiding having to build & test your own software: There is an attitude among some now that OpenStack is, or at least will be, our savior from vendor lock-in […]
I was going to call this article “Ethernet Switches for Virtualisation Engineers” but, really, everyone should have some understanding of the internals of an Ethernet switch. In particular, I want to focus on how multicasts and broadcasts are handled in a high-speed, low-latency environment like a Data Centre Network.
It’s vital to understand that latency is critical to your application performance. It is common for a single transaction to take hundreds of round trips, so a small increase in latency on each round trip has a large impact on the perceived performance. The client will send a chunk of data and wait for acknowledgement. Even setting up the TCP connection takes a few round trips – remember that TCP sessions are set up first, and each data transfer is confirmed.
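The compounding effect is easy to demonstrate with simple arithmetic. This is an illustrative sketch (not from the original post, and the round-trip count is an assumed example figure) showing how a per-hop latency increase multiplies across a chatty transaction:

```python
# Illustrative sketch: small per-round-trip latency increases compound
# across a transaction that needs many serialized round trips.

def transaction_time(latency_s: float, round_trips: int) -> float:
    """Total network wait for a transaction of serialized round trips."""
    return latency_s * round_trips

ROUND_TRIPS = 300  # assumed example: a "chatty" transaction of hundreds of round trips

fast = transaction_time(10e-6, ROUND_TRIPS)  # 10 microseconds per round trip
slow = transaction_time(50e-6, ROUND_TRIPS)  # 50 microseconds per round trip

print(f"fast: {fast * 1000:.1f} ms, slow: {slow * 1000:.1f} ms")
# a 40-microsecond increase per round trip adds 12 ms to every transaction
```

The point is not the exact numbers but the multiplier: whatever extra latency each hop adds, the application pays it hundreds of times per transaction.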
A modern network switch will have latency of around 10 microseconds. The Cisco Nexus 7000 is about 8 microseconds & the Brocade VDX 8770 claims less than 4 microseconds. There are many reasons why a switch can be faster or slower, but I’ll look at a specific example.
Remember, the latency interval is the time taken to receive a packet, decode the address, look up the forwarding table, switch the packet (and copy it if needed) and transmit it out of an Ethernet interface. That’s really fast processing. How does an Ethernet switch do this?
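The forwarding decision itself can be sketched in a few lines. This is a hypothetical, simplified model for illustration (real switches do this in hardware, in nanoseconds, with ageing timers and VLANs omitted here): learn the source MAC, look up the destination, and either forward to a single port or flood for broadcast and unknown unicast frames.

```python
# Minimal sketch of the per-frame decision in a learning Ethernet switch.
# Hypothetical model for illustration; real switches implement this in ASICs.

BROADCAST = "ff:ff:ff:ff:ff:ff"

class LearningSwitch:
    def __init__(self, ports):
        self.ports = set(ports)
        self.mac_table = {}  # MAC address -> egress port (learned dynamically)

    def receive(self, in_port, src_mac, dst_mac):
        """Return the list of ports this frame is transmitted on."""
        self.mac_table[src_mac] = in_port       # learn/refresh the source MAC
        if dst_mac != BROADCAST and dst_mac in self.mac_table:
            return [self.mac_table[dst_mac]]    # known unicast: one egress port
        # broadcast or unknown unicast: flood out every port except ingress
        return sorted(self.ports - {in_port})

sw = LearningSwitch(ports=[1, 2, 3, 4])
print(sw.receive(1, "aa:aa:aa:aa:aa:aa", BROADCAST))            # floods to [2, 3, 4]
print(sw.receive(2, "bb:bb:bb:bb:bb:bb", "aa:aa:aa:aa:aa:aa"))  # known dst: [1]
```

Note the flooding branch: this is exactly why broadcast and multicast handling matters so much in a data centre network – unknown and broadcast frames are replicated out of every port, not switched to just one.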
Arista has announced the 7150S device. It’s low latency, 10 Gigabit and VXLAN-terminating. What’s interesting to me is that Brocade and Arista are solving the same problem in different ways. Ivan has determined that Arista have decided to use the Intel chipset (I’m guessing the FM6000?) and then enable the tunnel termination features in the software.
I have to agree in part with Trevor Pott at The Register and object to VMware’s solution to the vCenter client platform problem. He got to ask VMware why they are using Flash instead of HTML5, and he runs down the list of options. Java – too much versioning, not much customer love (ie none, […]
TL;DR: this looks like the first version of vSphere with the least amount of compromises for large corporates. In other words, it’s more usable than before. Importantly, VMware has delivered a lot of networking features in this release, and it would be fair to say that they are either “overdue” or “much anticipated”. Take your choice.
VMware testing shows dramatic performance improvements, especially in CPU reduction, when performing vMotion over RoCEE networks. The implications for future network designs are enormous.
The VMware networking blog talks about the new networking features in vSphere 5. Well, talk would be overstating it. Mention, maybe. Post It Note, perhaps.
In fact, you could probably burp out the feature list on a single beer.