On the whole, the x86 technology in data centre servers is quite predictable. Intel gives everyone plenty of warning about its plans, and vendors, lacking confidence about what to build, talk to customers, media and analysts before making any final decisions.
The result is that everyone knows pretty much what is coming.
The PCI bus is becoming unworkable as it reaches the limits of signal propagation in an electrical medium. Moore’s law may predict ever-increasing silicon performance, but it doesn’t hold for communication rates over copper tracks on a PCB.
The PCIe 4.0 bus runs at 15.75 Gbps per lane and supports up to 16 lanes, for 252 Gbps in total. 1 This could easily be handled by standard Ethernet silicon today using 25GbE on the motherboard, possibly over fibre optics using silicon photonics. Consider that the Cavium XPliant 2 Ethernet switch fabric has 128 x 25 GbE ports and 3.2 Tbps of capacity today, which could support 64 PCIe 4.0 devices.
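A back-of-the-envelope sketch of the bandwidth arithmetic, using only the rates quoted above. Note that a fully loaded x16 link at line rate consumes a sizeable slice of the fabric, so the 64-device figure presumably assumes narrower links or some oversubscription:

```python
# Back-of-envelope bandwidth arithmetic: PCIe 4.0 vs a 25GbE switch fabric.
PCIE4_LANE_GBPS = 15.75                  # effective per-lane rate quoted above
PCIE4_X16_GBPS = PCIE4_LANE_GBPS * 16    # a full x16 slot -> 252 Gbps

FABRIC_PORTS = 128                       # Cavium XPliant: 128 x 25GbE ports
PORT_GBPS = 25
FABRIC_GBPS = FABRIC_PORTS * PORT_GBPS   # 3200 Gbps = 3.2 Tbps

print(f"PCIe 4.0 x16 slot:  {PCIE4_X16_GBPS} Gbps")
print(f"Fabric capacity:    {FABRIC_GBPS} Gbps")
# Raw capacity only -- ignores encapsulation overhead and oversubscription:
print(f"x16 devices at full line rate: {FABRIC_GBPS / PCIE4_X16_GBPS:.1f}")
```

Running this shows a fabric of that size carries roughly a dozen x16 devices at full line rate; most devices never drive a x16 link flat out, which is where the larger device counts come from.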
The Intel Rack Scale Architecture project is a step in this direction 3 and shows that a strong networking technology could disaggregate server components. Instead of buying 20-odd HP or Dell servers to put into a rack, you might end up buying a single rack that contains shelves of components 4.
And think of how this impacts virtualization and overlay networking. Instead of using software for encapsulation / tunnelling, the switch fabric could perform all of these functions in hardware.
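To see what the fabric would be offloading, here is a minimal sketch of the per-packet cost of overlay encapsulation, using VXLAN over IPv4 as an illustration (the header sizes are standard; the 1500-byte frame is just an example):

```python
# Sketch: per-packet overhead that overlay encapsulation adds -- the work
# a hardware fabric would offload from software vSwitches.
OUTER_ETH = 14   # outer Ethernet header
OUTER_IP  = 20   # outer IPv4 header
OUTER_UDP = 8    # UDP header
VXLAN_HDR = 8    # VXLAN header
OVERHEAD = OUTER_ETH + OUTER_IP + OUTER_UDP + VXLAN_HDR  # 50 bytes/packet

inner_frame = 1500  # example inner Ethernet frame size
efficiency = inner_frame / (inner_frame + OVERHEAD)
print(f"VXLAN adds {OVERHEAD} bytes per packet; wire efficiency {efficiency:.1%}")
```

The bytes are the small part; the real cost in software is doing this lookup-and-rewrite per packet on a CPU core, which is exactly what switch silicon does at line rate.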
Now, think about having an Ethernet network in the data centre that could support a rack of CPUs, a rack of memory, a rack of storage ……
The EtherealMind View
I surely do not know what will happen, but the abuse of the Ethernet standard and protocol seems to be limitless. Replacing PCI seems practical as it reaches its physical limits, but I am not well enough informed or expert enough to be sure that the practical design issues are solvable.
And there will be a great deal of work to get customers to adopt the idea of buying a rack at a time. That said, the VCE Vblock and EVO:Rack product strategies show that it can be done.
I’ll be here waiting. Networking will never die, it just keeps changing.