Steven emailed me with the following:
Just curious, I’ve been reading many articles and you seem to contradict conclusions between articles. In one article you tout the benefit of network agents running on the hypervisor and using the power of the CPU for tunnels; in another article you dislike this model because of scalability issues (more CPU load = slower network packet processing).
Just curious if you’ve changed your mind after many bad experiences or if you’re suggesting a lighter weight agent would be better in some cases than a vSwitch type module.
I don’t think software switches on shared resources will ever scale to what dedicated silicon can deliver, such as the ASR1000 with its 40-core Quantum Flow CPU.
These are good points. I started to write back and it turned into this post.
Originally I was of the opinion that server hardware running general-purpose software such as VMware ESX, Microsoft Windows or Linux KVM would not be able to handle the packet-processing load of encapsulation and forwarding. I’ve used and tested software routing on Linux in the past and found it relatively slow, clunky and hard to justify. And, of course, ITIL prevents networking engineers from using servers to deliver network services.
However, I’ve seen information from Intel roadmaps that clearly shows that current generations of server hardware can easily handle 10Gbps of traffic, and probably more than 40Gbps, with just a single CPU core. Recent server CPUs typically have 8 to 16 cores. Getting there requires some co-operation with Intel to use the correct network drivers and software, but this work is already underway with a number of vendors.
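To put the single-core claim in perspective, here is a rough back-of-the-envelope calculation. The numbers are my own illustrative assumptions, not vendor figures: the worst case for a 10GbE link is a stream of minimum-size frames, and the cycle budget per packet on a single core follows directly.

```python
# Worst-case packet budget for a single core on a 10GbE link.
# All figures are illustrative assumptions, not vendor benchmarks.
LINK_BPS = 10e9          # 10GbE line rate
FRAME_BYTES = 64         # minimum Ethernet frame
OVERHEAD_BYTES = 20      # preamble (8 bytes) + inter-frame gap (12 bytes)
CPU_HZ = 3.0e9           # assumed 3GHz core

# Packets per second at line rate with minimum-size frames.
pps = LINK_BPS / ((FRAME_BYTES + OVERHEAD_BYTES) * 8)

# CPU cycles available to process each packet on one core.
cycles_per_packet = CPU_HZ / pps

print(f"{pps / 1e6:.2f} Mpps, {cycles_per_packet:.0f} cycles per packet")
```

At roughly 14.9 Mpps there are only about 200 cycles per packet, which is exactly why the driver work and DMA support matter so much; with typical Internet packet sizes (hundreds of bytes) the budget per packet is many times larger.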
When you dig deeper, it is clear that Intel is investing heavily in technology that improves network performance in order to attract network appliance manufacturers to its hardware: faster buses, new memory architectures, DMA from the NIC directly to memory, and CPU bypass for certain network operations. In addition, Intel purchased Fulcrum Microsystems (a network silicon manufacturer) a few years back, and this technology is on the roadmap to be added to server motherboards. Think about what that means: network technology inside the CPU or its support chips.
In short, server hardware is much more capable at networking than before because there is now hardware support for it. Intel is building server hardware that performs at the same level as many of the routers on the market today, and Linux, VMware and Microsoft all co-operate with Intel to improve their networking performance on the newer generations of CPUs and motherboards.
It’s worth noting that the networking performance of Intel x86 servers is highly dependent on motherboard features, including bus speed, network adapter chipset and I/O chipsets. Older x86 technology does not perform well, but modern servers such as the Cisco UCS and HP Gen8 series have newer hardware that delivers far greater networking performance.
Given that there are many more servers than switches, edge networking in the server is a practical solution. The scaling model is the same as MPLS: the network edge performs the largest amount of processing, and because there are many “network edges as servers”, the scaling is performed in server hardware rather than network hardware.
Custom Hardware Is Just One of Many Solutions or Options
I agree that an ASR1000 with the “40-core Quantum Flow CPU” will always have a performance advantage, but the question is whether this technology is relevant in the future. For example, an IBM mainframe has significantly better performance than an x86 server, yet the majority of companies choose to run x86 servers for efficiency and cost effectiveness. Companies can choose a mainframe if that solution meets their needs, but most, in fact the vast majority, choose other systems because they are better suited to their requirements.
How many companies have purchased a Cisco ASR1000 or other hardware router that could be replaced with an x86 server? How many ASR1000s run at heavy load? Does every customer need the performance of an ASR1000? I don’t think so. I think that many companies could use alternate solutions.
Consider the following scenario: do you really need a custom hardware device to connect to the Internet?
For example, I know several sites that use an ASR1000 to connect a single 10GbE Internet connection and nothing else, because of their existing security isolation practices. These companies could seriously consider using a recent model of whitebox x86 server running Brocade Vyatta router software at 10% of the cost (over a 5-year TCO) of the Cisco ASR1000. A simple Internet connection requires no “advanced” routing features (for most deployments). Exactly what would be lost in this kind of design when using a software router instead of a hardware router?
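For illustration, here is a minimal sketch of what that single-uplink configuration might look like in Vyatta’s CLI. The interface names and the addresses (RFC 5737 documentation ranges) are my own placeholder assumptions, not taken from any real deployment:

```
set interfaces ethernet eth0 address 192.0.2.2/30
set interfaces ethernet eth0 description 'ISP uplink'
set interfaces ethernet eth1 address 198.51.100.1/24
set interfaces ethernet eth1 description 'inside LAN'
set protocols static route 0.0.0.0/0 next-hop 192.0.2.1
commit
save
```

Two interfaces and a default static route is the entire routing requirement for many of these sites; everything beyond that is capacity, which is exactly where modern x86 hardware has caught up.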
One other enabler for x86 routers is that Ethernet is everywhere. Carriers are moving away from ATM, DS3 and Frame Relay interfaces and delivering Ethernet as WAN connectivity, so I no longer need dedicated hardware modules to connect to legacy WAN protocols the way I did ten years ago. From what I can research and discover, an x86 server has all the Ethernet hardware you will ever need, and the features are determined by the software running on the server.
Using x86 hardware also changes the way we operate networks in interesting ways. Hardware maintenance contracts on custom silicon are comparatively expensive. There are other opportunities too: better testing, shorter lead times, and predictable, reasonable capital costs.
The EtherealMind View
In the end, I take the view that “software networking” will replace a significant percentage of the network market. But, in the same way that TV did not replace radio and x86 servers did not replace the mainframe, the networks of tomorrow will contain both hardware and software routers. I believe there will be a lot more software routing than people expect, but it is still just networking. Networking is not about a specific piece of hardware, chipset or device. Networking is a group of technologies that forward data between connected systems, and it doesn’t care about the hardware that makes that happen.
The good news is that there will be more networking devices than ever before. Low-cost software routers on x86 make it possible to use ten “software” routers instead of one big hardware device, and this will make operation and change control a lot easier.
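As a sketch of why ten small routers can be easier to live with than one big one, consider availability. Assuming, purely for illustration, that each box is up 99.9% of the time and that eight of ten software routers are enough to carry the load, a simple binomial model gives the cluster’s availability:

```python
from math import comb

def availability_k_of_n(n: int, k: int, p: float) -> float:
    """Probability that at least k of n independent routers are up,
    where each router is up with probability p (binomial model)."""
    return sum(comb(n, i) * p**i * (1 - p) ** (n - i) for i in range(k, n + 1))

single_box = 0.999                           # assumed uptime of one big router
cluster = availability_k_of_n(10, 8, 0.999)  # need any 8 of 10 small routers up

print(f"single box: {single_box}, cluster: {cluster:.9f}")
```

Under these assumed numbers the ten-router cluster is far more available than the single box, and any one router can be taken out for maintenance or change without an outage window.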
At least, that is what I think today. I don’t have access to the resources, research and discussions that vendors are having; I can only comment on what I can observe in the marketplace.
Bring on software networking. We need more choices, more options, more solutions to give us more networking.