For the last 20 years, Layer 2 tree-based network topologies meant that the only practical design methodology was to buy large, vertically scaled switch chassis for the core of the data centre. This limitation was largely due to the tree structure that Spanning Tree Protocol forces on LAN networking. For every new device at point Access/1 we […]
Ethernet Hard Drives are coming. Nothing to do with networking.
Lede: In discussions with a stealthy networking startup today, we talked about how their overlay network technology for the SDN WAN was able to detect network blackouts and brownouts in the physical network. Their answer was to run Bidirectional Forwarding Detection (BFD) inside the overlay tunnels. The result is effective path quality and service detection in the overlay network.
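The same idea can be tried with open-source tooling. As a rough sketch, FRRouting's bfdd can run a BFD session across a tunnel to a remote endpoint; if the tunnel path degrades or fails, the session goes down and the overlay can reroute. The peer addresses and timers below are placeholders, not anything from the startup's product:

```
! Minimal FRRouting bfdd sketch: a single-hop BFD session to the far
! end of an overlay tunnel. Intervals are in milliseconds; three missed
! packets at 300 ms gives sub-second failure detection.
bfd
 peer 203.0.113.2 local-address 203.0.113.1
  detect-multiplier 3
  receive-interval 300
  transmit-interval 300
  no shutdown
 !
!
```

With 300 ms intervals and a multiplier of 3, a brownout or blackout on the tunnel path is declared within roughly a second, which is the kind of detection the overlay needs to steer traffic away from a failing underlay path.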
In a recent project, I noted an astonishing difference in the price of the optical and copper cabling assemblies used in 10 Gigabit networking. I conclude that some companies are including a “license fee” in the cost of these components, making the overall cost harder to determine.
The performance of appliances is a complex topic. There are four broad ways in which dedicated hardware provides higher performance, and three reasons why vendors prefer selling appliances over software VMs. I cover them here.
TL;DR A recent project bought a low-cost network for the data centre. It cost less than one-third of the market leader’s price and half the cost of a well-known merchant silicon vendor’s. As a result, it is planned to last for two, maybe three years before it will be replaced. From this project I learned that “fast & cheap networking” could make a big impact on new data centre designs and business attitudes. Plus, it was much more satisfying as a professional project. I’m now wondering – is networking too expensive?
Most people refer to the “Data Centre Network” as though it were a single network. In practice, data centres have a number of individual networks, each designed for a specific purpose and function. A typical design has about five individual networks that connect together to form the “data centre network” – something many people fail to recognise. I’ll define these networks and then look at the future of data centre networks with overlays. What seems clear today is that networking will provide different networks for different use cases, and the customer will decide.
When working with Server and VMware people, there is a fair amount of misunderstanding about what is happening in the network. The best technical explanation of LAG is, of course, at Ivan Pepelnjak’s IPSpace Blog – vSphere Does Not Need LAG Bandaids – The Network Does – while Chris Wahl covers the server side for VMware, but I wanted to add something to the debate.
Aesthetically appealing diagrams with visual impact make for better documentation. Choosing the right fonts will improve your network diagrams significantly. Here is some work on how to choose a good font, plus some recommendations on the best free fonts for your machine.
Overlay networking has been around for a year or so now and the ideas behind it are well established. About three or four weeks ago, while researching VTEP functionality in Dell and Arista switches, I realised I could build manually configured VXLAN tunnels and get the same result as an EoMPLS cross-connect with almost zero effort. More importantly, I don’t have to pay for expensive hardware that has MPLS functions, or pay again for software licenses to add MPLS features.
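To make the idea concrete, here is a sketch of a manually configured point-to-point VXLAN tunnel using standard Linux iproute2 – a unicast “cross-connect” between two sites, roughly equivalent to an EoMPLS pseudowire. The addresses, VNI, and interface names are placeholders for illustration, not any vendor’s configuration:

```
# Create a unicast VXLAN tunnel to a single remote VTEP (no multicast,
# no control plane) - the remote address is configured statically.
ip link add vxlan100 type vxlan id 100 \
    remote 198.51.100.2 local 198.51.100.1 \
    dstport 4789 dev eth0

# Bridge the tunnel with the local customer-facing port so Ethernet
# frames are carried transparently between the two sites.
ip link add br100 type bridge
ip link set vxlan100 master br100
ip link set eth1 master br100
ip link set vxlan100 up
ip link set eth1 up
ip link set br100 up
```

Apply the mirror-image configuration (with local and remote swapped) on the far end and you have a Layer 2 cross-connect over any IP transport, with no MPLS hardware or licensing involved.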