In a recent project, I noted an astonishing difference in the price of the optical and copper cabling assemblies associated with 10 Gigabit networking. I conclude that some companies are including a “license fee” in the cost of these components, making the overall cost harder to determine.
TL;DR: A recent project bought a low-cost network for the data centre. It cost less than one-third of the price of the market leader and half the cost of a well-known merchant silicon vendor’s offering. As a result, it is planned to last for two, maybe three years before it will be replaced. From this project I learned that “fast & cheap networking” could make a big impact on new data centre designs and business attitudes. Plus, it was much more satisfying as a professional project. I’m now wondering – is networking too expensive?
Most people refer to the “Data Centre Network” as though it was a single network. In practice, data centres have a number of individual networks, each designed for a specific purpose and function. A typical data centre design has about five individual networks that connect together to form the “data centre network” – a distinction many people fail to recognise. I’ll define these networks and then look at the future of data centre networks with overlays. What seems clear, today, is that networking will provide different networks for different use cases and the customer will decide.
When working with Server and VMware people, there is a fair amount of misunderstanding of what is happening in the network. The best technical explanation of what is happening in LAG is, of course, described at Ivan Pepelnjak’s IPSpace Blog – vSphere Does Not Need LAG Bandaids – The Network Does – while Chris Wahl covers the server side for VMware, but I wanted to add something to the debate.
Overlay networking has been around for a year or so now and the ideas behind it are well established. About three or four weeks ago, while researching VTEP functionality in Dell and Arista switches, I realised I could build manually configured tunnels with VXLAN and get the same result as an EoMPLS x-connect with almost zero effort. More importantly, I don’t have to pay for expensive hardware that has MPLS functions, or pay again for software licenses to add MPLS features.
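To make the idea concrete, here is a minimal sketch of what a statically configured VXLAN tunnel can look like on an EOS-style switch, using head-end replication to a single remote VTEP instead of any MPLS machinery. The VLAN, VNI, loopback and peer address below are illustrative placeholders, not values from the project.

```
! Map VLAN 100 into VNI 10100 and flood unknown traffic
! to one manually specified remote VTEP – no MPLS required.
interface Loopback0
   ip address 192.0.2.1/32
!
interface Vxlan1
   vxlan source-interface Loopback0
   vxlan udp-port 4789
   vxlan vlan 100 vni 10100
   vxlan flood vtep 192.0.2.2
```

Configure the mirror image on the far-end switch (source 192.0.2.2, flood to 192.0.2.1) and the two VLAN 100 segments behave like a point-to-point x-connect over any plain IP underlay.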
I stumbled over the “AgilePorts” feature in Arista products this week: Arista’s AgilePorts technology enables the combination of four 10GbE SFP+ interfaces into a single 40GbE interface, leveraging the parallel lane technology present in the 40GBASE-CR4 and 40GBASE-SR4 standards. With AgilePorts, each 10GbE interface emulates one of the four parallel lanes, which are then driven by a 40GbE […]
I’ve been working on a lot of diagrams lately and pondering how to represent network architectures. I’ve been reading The Visual Display of Quantitative Information to get some inspiration on different approaches. I continue to be fascinated by the power of a network diagram that is well thought out and visually pleasing, and this fascination has led to my own focus on different network diagrams. In this post I’m thinking out loud on the different ways to represent information.
I was commissioned by GigaOmPro to write a report on “SDN Challenges in Large Scale Deployments”. I spoke with a number of network and virtualization engineers about their perspectives on SDN, the challenges they faced and how they would use Software Defined Networking in their data centres. It was evident during the research phase that many people are not clear on what Overlay Networking is, or just how deeply Overlay Networking will change Data Centre architecture – especially the nature of the networking and security domains.