TL;DR: A recent project bought a low-cost network for the data centre. It cost less than one-third of the market leader’s price and half the cost of a well-known merchant silicon vendor’s. As a result, it is planned to last for two, maybe three years before being replaced. From this project I learned that “fast & cheap networking” could make a big impact on new data centre designs and business attitudes. Plus, it was much more satisfying as a professional project. I’m now wondering – is networking too expensive?
Most people refer to the “Data Centre Network” as though it were a single network. In practice, data centres have a number of individual networks, each designed for a specific purpose and function. A typical design has about five individual networks that connect together to form the “data centre network” – something many people fail to recognise. I’ll define these networks and then look at the future of data centre networks with overlays. What seems clear today is that networking will provide different networks for different use cases, and the customer will decide.
Overlay networking has been around for a year or so now and the ideas behind it are well established. About three or four weeks ago, while researching VTEP functionality in Dell and Arista switches, I realised I could build manually configured tunnels with VXLAN and get the same result as an EoMPLS x-connect with almost zero effort. More importantly, I don’t have to pay for expensive hardware with MPLS functions, or pay again for software licences to add MPLS features.
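To make the idea concrete, here’s a minimal sketch of what a manually configured point-to-point VXLAN tunnel looks like on a Linux VTEP, bridged to a local port so it behaves like an EoMPLS x-connect. The interface names, VNI and addresses are invented for illustration – the post itself is about Dell and Arista hardware VTEPs, not Linux.

```python
# Hypothetical sketch: a point-to-point VXLAN "pseudowire" on a Linux VTEP.
# Interface names, VNI and IP addresses are invented for illustration.
import subprocess

def sh(cmd: str) -> None:
    """Run an iproute2 command and fail loudly if it errors."""
    subprocess.run(cmd.split(), check=True)

VNI = 100                                   # VXLAN Network Identifier for this "x-connect"
LOCAL, REMOTE = "192.0.2.1", "192.0.2.2"    # VTEP endpoints (documentation address range)

# Create the VXLAN interface: unicast, statically pointed at the far-end VTEP,
# so no multicast underlay or control plane is required.
sh(f"ip link add vxlan{VNI} type vxlan id {VNI} local {LOCAL} remote {REMOTE} dstport 4789 dev eth0")

# Bridge the tunnel to a local access port to emulate the x-connect behaviour.
sh("ip link add br0 type bridge")
sh(f"ip link set vxlan{VNI} master br0")
sh("ip link set eth1 master br0")
for ifname in (f"vxlan{VNI}", "br0"):
    sh(f"ip link set {ifname} up")
```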
I’ve been reading a presentation from Sharkfest 2012 in which engineers from Microsoft present Demon, their Datacenter Scale Distributed Ethernet Monitoring Appliance. The whole presentation is interesting, but this particular slide caught my attention:
I’ve written about OpenCompute hardware standards a few times. Today has seen a few announcements that make me think networking could be about to change significantly. In this post on Gigaom, Rackspace is planning to build its own servers based on the OpenCompute standard: Rackspace is contracting with Wistron and Quanta, two server manufacturers that also […]
Ivan Pepelnjak posts an outstanding summary of the myriad networking challenges in designing a dual data centre. Complete with cynical commentary and live-action diagrams, he explains the problem and some suggestions for the solution. Recommended for everyone! We have a network with two data centers (connected with a DCI link). How could we […]
I’ve talked a bit about the problems of using Category 6 copper cabling in the data centre. The sheer size and weight of the cable is a serious problem. Here are some photos comparing Category 5 and Category 6 cable bundles.
There is an old saying “A man with his eyes fixed on Heaven doesn’t see where he is going”. It’s an almost perfect description of how the major vendors are bringing Software Defined Networking to the market.
The consistent message from all the vendors, and especially Cisco, Juniper and Brocade, is that there are “no use cases for SDN”. In the last three months this statement has been constantly repeated, both publicly and privately. It beggars belief that vendors can’t see immediate needs that deliver long-term gains.
I suspect that the root of this problem is that big companies want to solve big problems. And by solving big problems they figure that they can make big revenue. Alright, I get that. It’s understandable that large organisations need a constant revenue stream to feed the insatiable maws of their shareholders. However, the vendors are also missing the most real and immediate problem of networking today. Simply, networking is too hard.
Vendors haven’t developed tools that keep the complexity of networking under control. Complexity can be reduced to this: “I don’t have big problems, I have lots of small problems.” You can have debates about addressing complexity and how to attack it, but it nearly always boils down to this: start small.
In this post I’m looking at network designs with ECMP cores using TRILL or SPB, and realising that STP is equally improved in terms of risk and performance by reducing the STP domain size, which leads to better stability, reduced risk and impact mitigation.
A couple of weeks back I posted this article comparing pricing and features of Cisco Fabric Extender Transceivers as a low-cost option compared to 10GbaseSR SFP+ optics when building 10GbE networks – Cisco Nexus 5000 / 2000 Pricing Bundles and Fabric Extension Transceivers (FETs) vs 10GbaseSR SFPs.
I was reading a white paper by Panduit that claims 10GBaseT is suitable for data centre use. I’ve been critical of Cat6A cabling and believe it isn’t suitable for the data centre.
With all the talk about Layer 2 Multipath (L2MP) designs going on, I just want to point out a fundamental change in the way many people approach network design. It seems that this point has been lost somewhere in the discussion of protocols.
The Spanning Tree Protocol blocks looped paths, and in a typical network this means that bandwidth is unevenly distributed. Of course, we might use PVST or MST to provide rough load sharing by splitting the spanning tree preferences for different VLANs, but the overall design still doesn’t change. The basic point is that there is a LOT of installed bandwidth that is never utilised – and that means wasted power, space and cooling (which costs more than the equipment itself).
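As a back-of-the-envelope illustration (the numbers here are invented, not from any particular design): take an access switch with four 10GbE uplinks split across two distribution switches and compare what STP leaves usable against an L2MP/ECMP fabric.

```python
# Back-of-the-envelope sketch with invented numbers: an access switch with
# four 10GbE uplinks, two to each of a pair of distribution switches.
UPLINKS = 4
LINK_GBPS = 10
installed = UPLINKS * LINK_GBPS

# Classic STP: the uplinks towards the non-root distribution switch are
# blocked, so only half the installed capacity can ever forward traffic.
stp_usable = installed // 2

# L2MP / ECMP fabric (TRILL, SPB, etc.): all uplinks forward simultaneously.
ecmp_usable = installed

print(f"Installed uplink capacity : {installed} Gbps")
print(f"Usable with STP           : {stp_usable} Gbps ({stp_usable/installed:.0%})")
print(f"Usable with L2MP/ECMP     : {ecmp_usable} Gbps ({ecmp_usable/installed:.0%})")
```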
I was doing a data centre design recently and ran some numbers on how many 10 Gigabit Ethernet ports need to be deployed. I got a bit of a shock.
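For a rough idea of how the numbers stack up, here is the kind of arithmetic involved – the rack counts, NIC counts and oversubscription ratio below are assumptions for illustration, not the figures from the actual design.

```python
# Rough port-count arithmetic with assumed figures (not the numbers from the
# original design): how quickly 10GbE ports add up in a modest data centre.
RACKS = 20
SERVERS_PER_RACK = 30
NICS_PER_SERVER = 2          # dual-homed 10GbE servers
OVERSUBSCRIPTION = 3         # 3:1 edge-to-core oversubscription

server_ports = RACKS * SERVERS_PER_RACK * NICS_PER_SERVER
uplink_links = server_ports // OVERSUBSCRIPTION   # each link consumes a ToR port and a core port

total_10g = server_ports + 2 * uplink_links
print(f"Server-facing 10GbE ports   : {server_ports}")
print(f"Uplink 10GbE ports (both ends): {2 * uplink_links}")
print(f"Total 10GbE switch ports    : {total_10g}")
```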
I tweeted “Can I haz a ‘I ♥ OpenFlow’ sticker pleez” – just got a call…
Someone made a comment that Packet Pushers hasn’t discussed SPB as an alternative to TRILL or other fabric solutions. Here’s why.
Kurt Bales has a customer who wants to buy a new data centre network, and the three main networking vendors (Juniper, Cisco & Brocade) have pitched to him and the customer. Kurt then contacted the Pushers and said: “This would make a great podcast to talk about how it looks, how it works and the reality of the so-called ‘Data Centre Fabric’ networks, plus I’ve got some questions that I’d like to get some second opinions on.”
So we rounded up Ivan from IOS Hints and Greg from EtherealMind to record a fast, furious and focussed look at the state of play with the three data centre fabrics today. Lots of speculation, wild guesses and deep diving followed. I learned heaps.
Breaking down the definition of North/South and East/West Bandwidth with some nice pictures and examining Layer 2 Multipath and why it fits virtualisation so well.
I was intrigued and excited about Juniper’s announcement last week of QFabric. I was vaguely aware of TRILL and Cisco’s implementation (FabricPath), but came to the table (so to speak) with no preconceptions of what I might expect. SCI-FI – Is this just me? Is the Q in QFabric taken from sci-fi […]
The current technologies of data centre networks don’t address the fundamental scaling issues. You can’t scale to hundreds of independent switches; we need fewer control planes performing more coherent functions. Here is my take on the next wave of networking in the data centre, beyond DCB and TRILL.