In this blog post I’ll attempt to summarise Overlay Networking in a couple of paragraphs, to act as a reference for upcoming blog posts that discuss the nature of Tunnel Fabrics in Physical Network environments. It also has pictures.
I’ve been reading a presentation from Sharkfest 2012 where engineers from Microsoft present their Demon – Datacenter Scale Distributed Ethernet Monitoring appliance. The whole presentation is interesting, but this particular slide caught my attention:
I’ve written about OpenCompute hardware standards a few times. Today has seen a few announcements that make me think networking could be about to change significantly. According to this post on Gigaom, Rackspace is planning to build their own servers based on the OpenCompute standard: Rackspace is contracting with Wistron and Quanta, two server manufacturers that also […]
Ivan Pepelnjak posts an outstanding summary of the myriad networking challenges when designing a dual data centre. Complete with cynical commentary and live action diagrams, he explains the problem and some suggestions for the solution. Recommended for everyone! We have a network with two data centers (connected with a DCI link). How could we […]
I’ve talked a bit about the problems of using Category 6 copper cabling in the data centre. The sheer size and weight of the cable is a serious problem. Here are some photos comparing Category 5 and Category 6 cable bundles.
There is an old saying “A man with his eyes fixed on Heaven doesn’t see where he is going”. It’s an almost perfect description of how the major vendors are bringing Software Defined Networking to the market.
The consistent message from all the vendors, and especially Cisco, Juniper and Brocade, is that there are “no use cases for SDN”. In the last three months, this has been a constantly repeated statement, both publicly and privately. It beggars belief that vendors can’t see immediate needs that deliver long-term gains.
I suspect that the root of this problem is that big companies want to solve big problems. And by solving big problems they figure that they can make big revenue. Alright, I get that. It’s understandable that large organisations need a constant revenue stream to feed the insatiable maws of their shareholders. However, the vendors are also missing the most real and immediate problem of networking today. Simply put, networking is too hard.
Vendors haven’t developed tools that keep the complexity of networking under control. Complexity can be reduced to this: “I don’t have big problems, I have lots of small problems.” You can have debates about addressing complexity and how to attack it, but it nearly always boils down to this: start small.
In this post, I’m looking at network designs with ECMP cores using TRILL or SPB, and realising that STP is equally improved in terms of risk and performance: reducing the size of the STP domain leads to better stability, lower risk and reduced failure impact.
A couple of weeks back I posted this article comparing pricing and features of Cisco Fabric Extender Transceivers as a low-cost option compared to 10GbaseSR SFP+ optics when building 10GbE networks – Cisco Nexus 5000 / 2000 Pricing Bundles and Fabric Extension Transceivers (FETs) vs 10GbaseSR SFPs.
I was reading a white paper by Panduit that claims that 10GBaseT is suitable for data centre use. I’ve been critical of Cat6A cable and believe that it’s not suitable for the data centre.
With all the talk about Layer 2 Multipath (L2MP) designs going on, I just want to point out a fundamental change in the way many people approach network design. It seems that this point has been lost somewhere in the discussion of protocols.
The Spanning Tree Protocol blocks looped paths, and in a typical network this means that bandwidth is unevenly distributed. Of course, we might use PVST or MST to provide a rough sharing of load by splitting the spanning tree preferences for different VLANs, but the design still doesn’t change overall. The basic point is that there is a LOT of bandwidth that is never evenly utilised – and that means wasted power, space and cooling (which costs more than the equipment itself).
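To make the VLAN-splitting idea concrete, here’s a minimal sketch of what that rough load sharing looks like in Cisco IOS terms. This isn’t from any particular design – the VLAN numbers are hypothetical – it just shows the standard technique of alternating root bridge preference between two core switches:

```text
! Core switch A: preferred root for VLANs 10 and 30
spanning-tree vlan 10,30 root primary
spanning-tree vlan 20,40 root secondary
!
! Core switch B: the mirror image, preferred root for VLANs 20 and 40
spanning-tree vlan 20,40 root primary
spanning-tree vlan 10,30 root secondary
```

Each access switch then forwards VLANs 10 and 30 towards one core and VLANs 20 and 40 towards the other, so both uplinks carry traffic. But note the limitation: any single VLAN still uses exactly one path, and the blocked links for that VLAN sit idle – which is exactly the wasted bandwidth being described above.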