VMware “vFabric” and the Potential Impact on Data Centre Network Design – The “Network Trombone”

There has been some speculation about VMware announcing a new hypervisor core capability at the upcoming VMworld. Stu Miniman at Wikibon first noticed it, and I worked with Stephen Foskett on this article about how a hypervisor for networking could be used to deliver network services (as opposed to server or application services) from within a virtual machine.

The Real Impact

However, I was mulling over what the purpose of this would be, and what would drive VMware to invest what are obviously significant resources to develop a new hypervisor kernel that would support real-time guest operating systems. What is the business motivation?

I have a hypothesis. And it’s related to vMotion.

The Problem with vMotion

Today we have a situation where a given Guest OS can be readily migrated between servers, but the data flow over the network must follow a fixed path. That is, you can move the server but the network equipment doesn't move. The keystone problem is that the DNS name resolution mechanism does not allow for sub-second IP address changes, OS vendors are highly resistant to developing new TCP/IP implementations and, equally, customers are resistant to change.
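
As a rough illustration of why DNS cannot paper over a moving IP address: record lifetimes are measured in whole seconds and are cached by every resolver between the client and the authoritative server. A minimal sketch, assuming the dnspython library (2.x) is available and using example.com purely as a stand-in name:

```python
# Rough illustration only: DNS record lifetimes are measured in whole seconds
# and are cached by intermediate resolvers, so a moved IP address cannot be
# re-published in sub-second time. Assumes dnspython (2.x) is installed;
# example.com is just a stand-in name.
import dns.resolver

answer = dns.resolver.resolve("example.com", "A")
print("Addresses:", [rr.address for rr in answer])
print("TTL (seconds):", answer.rrset.ttl)

# Even if the authoritative TTL were dropped to one second, recursive resolvers,
# OS caches and long-lived application connections would keep clients pointed
# at the old address for far longer than a vMotion event takes to complete.
```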

Therefore, if you follow the VMware methodology, the network needs to become dynamic as well. As things stand, the paths that Ethernet frames take are always fixed from the network appliance to the server. So let's take a reasonably simple, service-oriented data centre design with a firewall, a load balancer and the servers. In the following diagram, you can see the typical packet flow: traffic from, say, the Internet comes through the front end and down through the stack to the servers.

[Figure: typical packet flow from the Internet through the firewall and load balancer stack to the servers]

Introducing vMotion

Now, consider what happens to your data centre if you extend the Layer 2 domain between data centres.


[Figure: Dual Data Centre with L2 Extensions]

But the real problem is how the client application accesses that server. Consider a flow that comes in from the Internet and attempts to connect to the server that has moved to an alternate data centre:


[Figure: The Traffic Trombone]

At this point you have a lot of bandwidth and latency problems. If the distance between those data centres is more than, say, 100 milliseconds (who cares how many kilometres that is, it's the propagation delay that matters), then EVERY USER SESSION is going to become significantly delayed (meaning calls to the help desk complaining about slow performance).
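
To put a rough number on it, here is a back-of-envelope sketch; the 100 millisecond round trip and the per-page request count are illustrative assumptions, not measurements:

```python
# Back-of-envelope estimate of the extra delay a "tromboned" session picks up.
# Every figure here is an illustrative assumption, not a measurement.

inter_dc_rtt = 0.100        # assumed round-trip time between the data centres, seconds
tcp_handshake_rtts = 1      # SYN / SYN-ACK / ACK costs roughly one round trip
http_requests = 20          # assumed requests needed to render one typical page

# Every round trip that used to stay inside one data centre now also crosses
# the inter-DC link, so each one picks up the extra RTT on top of its normal time.
extra_delay = (tcp_handshake_rtts + http_requests) * inter_dc_rtt
print(f"Extra delay per page view: roughly {extra_delay:.1f} seconds")
```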

Where the VMware Real-Time Hypervisor Comes In

So let’s make the following assumptions:

  • That VMware is ready to announce and ship a hypervisor that offers some form of CPU scheduling that is acceptable to network appliance manufacturers such as Bluecoat, F5, Citrix, Vyatta etc. (yes, Vyatta is a big winner here, and their CEO even hinted at this in a recent blog post).
  • That customers actually believe it will work.
  • That VMware doesn't overprice the licensing, because I think this is a way for VMware to charge for the hypervisor again, which is what they want. Giving away VMware for free really bites their CEO.

Now let's project what happens if the routers, load balancers and firewalls are all virtual appliances on a VMware infrastructure, and what could happen if they were capable of using vMotion to migrate at the same time as the servers.


[Figure: the virtual network appliances migrating together with the servers]

So, with a bit of service orchestration, a service could have its own instances of the network appliances that handle security, load balancing and routing, and those appliances could be virtualised using VMware and migrated as a unit with the servers they front. A rough sketch of that grouping follows the diagram below.


[Figure: per-service network appliances virtualised and migrated with their servers]
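
Purely as a thought experiment, here is a minimal sketch of what "move the service as a unit" might look like. Everything in it is hypothetical: the group definition, the migrate_vm() helper and whatever vCenter-style call it would wrap are assumptions for illustration, not a real VMware interface.

```python
# Hypothetical sketch only: neither the group definition nor migrate_vm()
# corresponds to a real VMware API. The point is simply that the appliances
# and the servers would have to move together, as one orchestrated unit.

SERVICE_GROUP = {
    "name": "web-shop",                                        # made-up service
    "appliances": ["fw-webshop-01", "lb-webshop-01", "rtr-webshop-01"],
    "servers": ["web-01", "web-02", "app-01", "db-01"],
}

def migrate_vm(vm_name: str, target_dc: str) -> None:
    """Placeholder for whatever vMotion call the orchestration layer would make."""
    print(f"vMotion {vm_name} -> {target_dc}")

def migrate_service(group: dict, target_dc: str) -> None:
    # Appliances move first so the data path exists when the servers arrive;
    # the ordering itself is an orchestration decision.
    for vm in group["appliances"] + group["servers"]:
        migrate_vm(vm, target_dc)

migrate_service(SERVICE_GROUP, target_dc="DC-2")
```

Whether the appliances move before, with or after the servers, and how storage follows, is exactly the orchestration problem discussed below.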

The EtherealMind View

There is a lot of guesswork here, but while vMotion is suitable within a data centre, it is not suitable for use between data centres because of its reliance on L2, in the absence of a better name resolution process or better TCP/IP protocols (and I agree completely with Ivan Pepelnjak on his points).

For VMware to continue to grow as a cloud technology, they have to solve, or help to solve, the networking problem. This might be a useful technique for certain workloads in private clouds (as hinted at by Vyatta's CEO here).

The missing link is the orchestration. That is, you would have to define rules for the group failover or migration of the network devices as the servers migrate in the internal compute space. Therefore you would need a unique set of network devices for every application. That's a lot of network appliances, and a lot of licenses.
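
To get a crude sense of the scale, with every number a made-up assumption:

```python
# Crude scale illustration; every number here is a made-up assumption.
applications = 200         # applications that each need their own appliance set
appliances_per_app = 3     # say one firewall, one load balancer, one router each
data_centres = 2           # each set deployed (and licensed) in both sites

appliance_instances = applications * appliances_per_app * data_centres
print(appliance_instances) # 1200 virtual appliances to deploy, license and manage
```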

Still, it could work for certain limited use cases. I wonder if this is what VMware has in mind?

IOSHints Live San Jose – Meet Greg Ferro and Ivan Pepelnjak

NIL Data Communications is organizing the first ever IOSHints Live event with technology blogger and Cisco Press book author Ivan Pepelnjak (CCIE#1354) in San Jose, California on September 15th 2010. IOSHints Live is your chance to meet Ivan, discuss emerging technologies and review typical network designs using them. Greg Ferro (CCIE#6920) from the Packet Pushers Podcast and EtherealMind Blog will also be joining Ivan to discuss wider issues, including servers and storage networks and their integration with modern network designs.

The morning session will cover Data Center design and migration from the traditional data center toward private and public cloud solutions. The afternoon session will focus on resilient and highly available VPN solutions needed to connect remote sites with the redesigned data center.

The sessions will focus on the design aspects of real-life issues relevant to the session's participants. It's highly recommended that you submit a network design you'd like to discuss or challenges you're facing in your network at least a week before the event; this will ensure that their key components will be discussed during the session.

You can Book the Event or find more information at http://www.ioshints.info/IOSHints_Live_San_Jose.

  • Pingback: VMware vFabric: A Hypervisor For Networkers? – Stephen Foskett, Pack Rat

  • http://blog.ioshints.info Ivan Pepelnjak

    Inter-DC vMotion still doesn’t make sense. If you want to solve the “traffic trombone” you have to pull the whole app stack with you (DB server, App server, Web server …)

    Likewise, if you want to pull the appliances with the VM, you either get low-granularity mobility chunks (you have to move a lot of things all at once) or high-granularity appliances (one appliance per VM or per few VMs) … in which case you’ll have numerous network appliance managed entities and a management nightmare.

    Last but not least, I’m still not convinced the general-purpose silicon is cheap enough to be wasted on networking tasks.

    • http://etherealmind.com Greg Ferro

      In general terms I agree, but this is sheer speculation at the moment. But considering these options creates some interesting ideas for multi-tenant data centres.

  • Tim H

    They say that all computer science problems were solved back in the '70s and that all we do today is reinvent the problems.

    Novell solved this some 10+ years ago with what they called virtual IP, http://www.novell.com/documentation/bcc/bcc11_admin_nw/data/anaz7cg.html whereby the host itself becomes a router for the target LAN. This enables site-to-site failover without the service changing the IP address of the target.

    • http://etherealmind.com Greg Ferro

      This looks like the loopback address solution. Novell and Microsoft both deployed this technology at about the same time. It had some significant limitations: the OS was performing routing and often burned a lot of CPU, and asymmetric routing was also a problem.

      Either way, this isn’t the answer as the migration occurs at Layer 2 not at Layer 3.

  • Pat M

    One thing to keep in mind with vMotion: the server configuration doesn't change, just the hypervisor it is running on. So DNS wouldn't change, since the IP address of the server didn't change.

    Pat

  • http://sabotage-networks.blogspot.com Matt Bennett

    What kind of options are there to avoid the traffic trombones? You could inject a /32 for the vMotioned address, but that'd make for some interesting routing challenges, especially if you lose a data centre and 10,000 VMs flip over to the backup site!

    I guess that's one problem that could be resolved by the vFabric stack: automatic address summarisation when it realises multiple VMs are moving, since the hypervisor can talk directly to the router.

    • http://etherealmind.com Greg Ferro

      Injecting a /32 is part of the answer, but you also need to manage the default gateway by using ARP filtering. The problem is how to automate the route injection so that the IP route is propagated according to its location. How do you detect that an L2 device is located locally instead of remotely?

      I still don’t have a solution for that part yet.

      • Josh Gant

        I would think that if vCenter was coordinating the injection of a /32 to the ISP then it could selectively GARP each vMotioned guest for the local gateway. Good article.

        • http://etherealmind.com Greg Ferro

          An ISP will not let you inject a /32; it must be at least a /24 and possibly only a /20. Internally you could inject a /32 into the IGP, but ARP is the underlying problem, not the IP address.

          • Josh Gant

            It still seems that vCenter could manage the GARP in the vSwitch/vDs for the current DC. If you can only inject a /24 how would you move over a handful of guests/vApps and keep the sessions live? Isn’t that a bigger problem?

        • http://etherealmind.com Greg Ferro

          VMware cannot interfere with the Guest OS and force a GARP in Windows or Linux; that is something that only the OS can do reliably. Until the OS becomes VM 'aware' (unlikely), that's not a solution.

  • knujlla

    Have you thought about LISP, which Cisco has been talking about? It seems to address several of these challenges of inter-DC vMotion. That said, I am with Ivan that the granularity of the vMotion event will be critical in determining how useful this use case is.

  • Mark T

    I’ve seen you post in the past on F5’s – have you seen http://www.f5.com/pdf/deployment-guides/vmware-vmotion-dg.pdf ?

    VMware hooks into LTM/GTM to initiate the vMotion. EtherIP is used to tunnel Layer 2 across the sites while the live vMotion occurs. GTM sits on top of the VIPs and controls DNS, pointing to the other site once the vMotion has occurred.

    As Ivan says, it doesn't yet solve the problem of the DB/storage tier, unless you have the $$ for an active/active DB. Some shops would be able to maintain small async delays on storage replication, with homegrown levers to pull to switch the master.

  • solomon abavire kobina

    I am Solomon Abavire Kobina a Network/Design/Security/Systems/VSAT Engineer from Ghana working as a consultant for most large companies and I just want to tell Greg Ferro to keep it up and he’s doing a good job. Thanks

  • Pingback: Technology Short Take #5 - blog.scottlowe.org - The weblog of an IT pro specializing in virtualization, storage, and servers

  • Pingback: Long-distance vMotion and the traffic trombone – Gestalt IT

  • Pingback: Show 27 – Layer 2 Data Centre Interconnection – Packet Pushers

  • Pingback: Show 27 – Layer 2 Data Centre Interconnection – My Etherealmind

  • Pingback: Show 28 – vCloud Network Overlays, OTV, VEPA and networking appliances – My Etherealmind

  • Pingback: Responding: On optimizing traffic for network virtualization – My Etherealmind