If you're old enough to remember what the ZX-81 was all about, you'll probably experience a weird sense of déjà vu when exposed to the beauties of TRILL. For those of you who have never been exposed to brouters, here's a short summary:
In the early 1990s we started building large WAN networks, first with host-to-host links, then with WAN bridges, and finally with routers. Not surprisingly, networks built with WAN bridges experienced catastrophic failures (extending a single broadcast domain over low-speed links is never a good idea). Unfortunately, some networking engineers love to fail multiple times, so they've reinvented WAN bridging again and again (if you're interested in VPLS woes, read the VPLS article I wrote for SearchTelecom).
Learning from the failures of WAN bridging in the early 1990s, network designers turned to routers. However, some companies trying to enter the game without the prerequisite engineering prowess tried to cut corners by introducing Layer-2 Protocol-Independent Routers (brouters), which looked and acted very much like what TRILL is trying to introduce: they used SPF algorithms to compute the shortest path to individual MAC addresses and used all paths in the network (not just the spanning tree) to forward traffic. Alas, a bridge remains a bridge even when you call it a brouter or a switch, and I've seen several spectacular meltdowns of brouter-based networks.
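To make the mechanism concrete, here's a minimal sketch (in Python, not any actual brouter or TRILL implementation) of the general idea: run SPF (Dijkstra) over the switch topology, then forward each MAC address toward the switch where it was learned, using the shortest path instead of a spanning tree. The topology, switch names and MAC table below are entirely made up for illustration.

```python
import heapq

def spf(graph, source):
    """Dijkstra's SPF: return the first hop from source toward every other node."""
    dist = {source: 0}
    next_hop = {}
    # Priority queue entries: (cost, node, first hop on the path from source)
    pq = [(0, source, None)]
    while pq:
        cost, node, hop = heapq.heappop(pq)
        if cost > dist.get(node, float('inf')):
            continue  # stale entry
        if hop is not None:
            next_hop[node] = hop
        for neigh, link_cost in graph[node].items():
            new_cost = cost + link_cost
            if new_cost < dist.get(neigh, float('inf')):
                dist[neigh] = new_cost
                # The first hop is the neighbor itself when leaving the source
                first = neigh if hop is None else hop
                heapq.heappush(pq, (new_cost, neigh, first))
    return next_hop

# Hypothetical four-switch topology with equal-cost links (a square)
topology = {
    'S1': {'S2': 1, 'S3': 1},
    'S2': {'S1': 1, 'S4': 1},
    'S3': {'S1': 1, 'S4': 1},
    'S4': {'S2': 1, 'S3': 1},
}

# Hypothetical MAC addresses, each learned at some edge switch
mac_table = {'00:00:5e:00:53:01': 'S2', '00:00:5e:00:53:02': 'S4'}

# From S1's perspective: forward each MAC toward its edge switch via SPF
hops = spf(topology, 'S1')
for mac, edge_switch in mac_table.items():
    print(mac, '-> next hop', hops[edge_switch])
```

Note that the link S1-S4 traffic can use either equal-cost path; real link-state protocols keep all equal-cost next hops, which this sketch simplifies to one.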
The idea of SPF-based bridging got such a bad name that nobody even tried to resurrect it for over 15 years, but with fading memories and a (supposedly) completely different landscape, the same technology has made a Phoenix-like reappearance. Its designers have added interesting bells and whistles (VLAN support and a hierarchical bridging structure similar to 802.1ah), but it's the same story: a bridge remains a bridge.
The proponents of TRILL are positioning it within the Data Center, and it's probably a valuable addition to the Data Center designer's toolbox, but I'm positive that once TRILL gets standardized and implemented, some vendors will go out and sell it as a plug-and-play low-cost replacement for routers … and generate a few more spectacular failures.
The sad part of the whole saga is that we've had technologies that could solve the fundamental Data Center issue requiring large-scale bridging (live migration of virtual machines between physical servers) for almost 15 years: Cisco IOS has supported Local Area Mobility since (at least) IOS release 11.0, and we implemented a LAN with hundreds of hosts using Local Area Mobility in the mid-1990s. Properly designed, proven technologies combined with a few boring solutions introduced in recent years (for example, inter-chassis link bonding available in Cisco's Virtual Switching System) could solve most Data Center virtualization problems, but of course it's more interesting to develop yet another complex technology.
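For readers who have never seen Local Area Mobility: it works by detecting a host that has moved into a foreign subnet (via ARP) and injecting a host route for it. A rough configuration sketch follows; the addressing is made up, the syntax is from my recollection of old IOS documentation, so verify it against your IOS release before relying on it.

```
! Enable the Mobile ARP routing process
router mobile
!
! Watch for hosts from other subnets appearing on this LAN interface
interface FastEthernet0/0
 ip address 192.0.2.1 255.255.255.0
 ip mobile arp
!
! Propagate the resulting host routes to the rest of the network
router ospf 1
 redistribute mobile subnets
 network 192.0.2.0 0.0.0.255 area 0
```

The result: a virtual machine keeps its IP address after moving to another segment, and the network follows it with a /32 route instead of stretching a bridged domain across the Data Center.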
Maybe we should finally grow up and stop playing MacGyver, trying to save the world with Rube Goldberg-like contraptions. Maybe we should admit every once in a while that we can't work around every stupidity thrown at us, impose some structure and sound engineering practices on our networks, and tell the host/OS/application vendors how networking is done properly. Until then, people will gladly keep telling us that networking is not even close to a science.