EtherealMind

Software Defined & Intent Based Networking

The TRILLing brain split

6th July 2010 By Ivan Pepelnjak Filed Under: Rant

The split personality Cisco has exposed at Cisco Live 2010 is amazing: on one hand you have the Data Center team touting the benefits of Routing at Layer 2 (an oxymoron if I've ever seen one), on the other hand you have Russ White extolling the virtues of good layer-3 design in the CCDE training (the quote I like most: "It all meets at Layer 3 … that's why CCDE is layer-3 centric"). If you're confused, you're not the only one, so let's try to analyze what's going on.

The clash of interests. Let's be perfectly clear: the best design of your data center network is not the focus of vendors' activities. Having a well-designed and stable network is definitely in your best interest, and it might be in the interest of your external consultants or your system integration partner (assuming they are able to focus beyond quarterly results), but what the vendors want most is to sell you more boxes and/or services. Cisco would love you to upgrade from Catalyst 6500 to Nexus. Introducing a new technology that supposedly brings world peace to data center networks but only runs on a Nexus 7000 could be an enticing motivation for a forklift upgrade.

Virtualization and convergence. This is nothing new. Servers are getting virtualized. Storage is moving from embedded drives or SAN into the converged LAN. The LAN and the servers are getting tightly coupled. However, Cisco almost owns the LAN market, Brocade is big in the SAN market and HP is a major player in the server market. After the three components converge, someone is bound to lose big. That's why Cisco has launched UCS, Brocade is preaching that the Earth is flat and HP is trying to sell you high-end switches.

They need large-scale bridging. You don't need large-scale bridging in your Data Center. Your server team might think they need it to support inter-site vMotion, but even that can be solved (assuming it's a good idea to start with). Vendors need large-scale bridging if they want to sell you FCoE (remember: it's bridged) or if they want to sell the server managers a vision of seamless private clouds. We all know the drawbacks and complexities of spanning tree, so they're introducing a magic technology that will solve all those problems. It doesn't matter that it hasn't been tested, and it doesn't matter that it requires new hardware (even the Nexus 7000 requires TRILL-enabled blades).

Who are they talking to? In most organizations, the "server+storage" budget is bigger than the "LAN" budget (and the server team is bigger than the networking team). If you want to sell unified solutions, you have to sell them to the server managers. Their view of the network is exceedingly simple: it should be transparent. Now go and read the Scaling Data Center with FabricPath white paper and tell me whose sore spots it's addressing.

What can you do? If you have a feud with the server team, end it. You will have to work very closely with them, or they will go over your head and install something you'll be forced to support anyway. Try to understand their concerns and priorities. And, most importantly, start from the business perspective: what is your company actually trying to solve, and what are the true business requirements?

Last but not least, if you need a comprehensive overview of data center, server and storage technologies, you might consider registering for my Data Center 3.0 for Networking Engineers webinar.

TRILL: it's déjà vu all over again

1st June 2010 By Ivan Pepelnjak Filed Under: Featured, Opinion

If you're old enough to remember what the ZX-81 was all about, you'll probably experience a weird sense of déjà-vu when exposed to the beauties of TRILL. For those of you who have never been exposed to brouters, here's a short summary:

In the early 1990s we started building large WAN networks, first with host-to-host links, then with WAN bridges and finally with routers. Not surprisingly, networks built with WAN bridges experienced catastrophic failures (extending a single broadcast domain over low-speed links is never a good idea). Unfortunately, some networking engineers love to fail multiple times, so they've reinvented WAN bridging again and again (if you're interested in VPLS woes, read the VPLS article I wrote for SearchTelecom).

Learning from the failures of WAN bridging in the early 1990s, network designers turned to routers. However, some companies trying to enter the game without the prerequisite engineering prowess tried to cut corners by introducing Layer-2 Protocol-Independent Routers, which looked and acted very much like what TRILL is trying to introduce: they used SPF algorithms to compute the shortest path to individual MAC addresses and used all paths in the network (not just the spanning tree) to forward traffic. Alas, a bridge remains a bridge even when you call it a brouter or a switch, and I've seen several spectacular meltdowns of brouter-based networks.
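
To make that mechanism concrete, here's a minimal Python sketch of the idea (the switch names, link costs and MAC addresses are invented for illustration; real implementations obviously run a proper link-state protocol): run SPF over the full switch topology and forward towards the switch behind which a MAC address was learned, instead of constraining traffic to a spanning tree.

```python
import heapq
from collections import defaultdict

# Hypothetical switch-to-switch links (the full physical topology, not a spanning tree).
LINKS = {("S1", "S2"): 1, ("S1", "S3"): 1, ("S2", "S4"): 1, ("S3", "S4"): 1}

def adjacency(links):
    adj = defaultdict(list)
    for (a, b), cost in links.items():
        adj[a].append((b, cost))
        adj[b].append((a, cost))
    return adj

def spf(root, links):
    """Dijkstra SPF from one switch: cost and first hop towards every other switch."""
    adj = adjacency(links)
    dist, first_hop = {root: 0}, {}
    queue = [(0, root, None)]
    while queue:
        cost, node, via = heapq.heappop(queue)
        if cost > dist.get(node, float("inf")):
            continue
        for neighbor, link_cost in adj[node]:
            new_cost = cost + link_cost
            if new_cost < dist.get(neighbor, float("inf")):
                dist[neighbor] = new_cost
                first_hop[neighbor] = via or neighbor
                heapq.heappush(queue, (new_cost, neighbor, via or neighbor))
    return dist, first_hop

# MAC addresses learned behind edge switches (made-up hosts).
MAC_ATTACHMENT = {"00:00:5e:00:53:01": "S4", "00:00:5e:00:53:02": "S2"}

dist, first_hop = spf("S1", LINKS)
for mac, edge in MAC_ATTACHMENT.items():
    print(f"{mac}: forward via {first_hop[edge]} towards {edge} (cost {dist[edge]})")
```

A real implementation would keep all equal-cost first hops (that's the "use all paths" part) and rerun the computation on every topology change; the sketch only shows that the forwarding decision becomes a link-state computation towards MAC addresses rather than spanning-tree flooding.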

The idea of SPF-based bridging got such a bad name that nobody even tried to resurrect it for over 15 years, but with fading memories and a (supposedly) completely different landscape, the same technology has made a Phoenix-like reappearance. Its designers added interesting bells and whistles (support for VLANs and a hierarchical bridging structure similar to 802.1ah), but it's the same story: a bridge remains a bridge.

The proponents of TRILL are positioning it within the Data Center, and it's probably a valuable addition to the Data Center designer's toolbox, but I'm positive that once TRILL gets standardized and implemented, some vendors will go out and sell it as a plug-and-play low-cost replacement for routers … and generate a few more spectacular failures.

The sad part of the whole saga is that we have had the technologies that could solve the fundamental Data Center issue that requires large-scale bridging (live migration of virtual machines between physical servers) for almost 15 years: Cisco IOS has supported Local Area Mobility since (at least) IOS release 11.0, and we implemented a LAN with hundreds of hosts using Local Area Mobility in the mid-1990s. Properly designed, proven technologies combined with a few boring solutions introduced in recent years (for example, the inter-chassis link bonding available in Cisco's Virtual Switching System) could solve most of the Data Center virtualization problems, but of course it's more interesting to develop yet another complex technology.
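
For contrast, here's a toy Python sketch of the principle behind Local Area Mobility (router names and addresses are invented, and the real feature is configured on the routers rather than implemented like this): when a host shows up on a segment outside its home subnet, the local router originates a host route for it, so the host keeps its address without the layer-2 domain being stretched between sites.

```python
from ipaddress import ip_address, ip_network

# Hypothetical home subnets: the prefix each router normally advertises.
HOME_SUBNETS = {
    "rtr-a": ip_network("192.0.2.0/25"),
    "rtr-b": ip_network("192.0.2.128/25"),
}

# Host routes each router originates for "visiting" hosts it has discovered locally.
host_routes = {router: set() for router in HOME_SUBNETS}

def host_seen(router, host_ip):
    """A router noticed traffic (e.g. ARP) from a directly attached host."""
    addr = ip_address(host_ip)
    if addr not in HOME_SUBNETS[router]:
        # The host moved here from another subnet: originate a /32 for it,
        # which the routing protocol would then propagate to the rest of the network.
        host_routes[router].add(ip_network(f"{addr}/32"))

# A virtual machine from rtr-a's subnet migrates to a segment behind rtr-b.
host_seen("rtr-b", "192.0.2.10")
print(host_routes["rtr-b"])   # {IPv4Network('192.0.2.10/32')}
```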

Maybe we should finally grow up and stop playing MacGyver, trying to save the world with Rube Goldberg-like contraptions. Maybe we should admit every once in a while that we can't work around every stupidity thrown at us, impose some structure and sound engineering practices on our networks, and tell the host/OS/application vendors how networking is done properly. Until then, people will gladly keep telling us that networking is not even close to being a science.

