Most people refer to the “Data Centre Network” as though it were a single network. In practice, a data centre contains a number of individual networks, each designed for a specific purpose and function. A typical data centre network design has about five individual networks that connect together to form the “data centre network”, a fact that many people fail to recognise. I’ll define these networks and then look at the future of data centre networks with overlays. What seems clear today is that networking will provide different networks for different use cases, and the customer will decide.
The Five Networks In Your Data Centre
Let’s take a typical enterprise data centre, where it’s common to have five different networks, each with a specific function and use case. For example, the Core Network is designed for maximum reliability and capacity because the reliance on L2 networking for “mobility” forces vertical scaling of the core network hardware. The DMZ network is operated with security as the primary goal and is used to connect firewalls and security appliances. Some companies will have more, but in my experience five is a good enough number. Add in a couple more, like Pre-Production and Storage, and even the Top-of-Rack is often considered separately from the others (the network switches are physically very different and operated differently).
At this point, I’ve found that most people agree that the data centre is not a single network but many individual network types joined together into a single whole.
The Networks of Tomorrow
This leads to the question of what the data centre network will look like in the near future, in a time of disruption.
Overlay and underlay networks, software-defined and application-defined networks are just a few of the technologies that we know are planned, and there is no clear leadership position. What if the data centre network had a different type of network for each use case? An ECMP network optimised for virtualization, built from low-cost, simple devices with high performance, to support software overlays from VMware NSX, Midokura and others.
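To make the ECMP idea concrete: the fabric spreads traffic by hashing each flow’s 5-tuple onto one of several equal-cost uplinks, so any cheap device that can hash consistently can carry overlay traffic. A minimal sketch in Python, assuming illustrative path names and an arbitrary hash choice, not any vendor’s actual implementation:

```python
import hashlib

def ecmp_next_hop(src_ip, dst_ip, src_port, dst_port, proto, paths):
    """Pick an equal-cost path by hashing the flow's 5-tuple.

    Hashing the whole tuple keeps every packet of a flow on the
    same path (preserving per-flow packet ordering) while spreading
    different flows across all available links.
    """
    key = f"{src_ip}|{dst_ip}|{src_port}|{dst_port}|{proto}".encode()
    digest = hashlib.sha256(key).digest()
    index = int.from_bytes(digest[:4], "big") % len(paths)
    return paths[index]

# Hypothetical spine uplinks in a leaf/spine fabric.
paths = ["spine-1", "spine-2", "spine-3", "spine-4"]
hop = ecmp_next_hop("10.0.1.5", "10.0.9.9", 49152, 80, 6, paths)
```

The key property is determinism: the same flow always lands on the same uplink, which is why an overlay running on top never sees reordering from the underlay.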
A DMZ using the Cisco ACI product strategy, with its physical orchestration capabilities, would combine well with VMs and appliances in a high-security deployment.
Or a more open solution based on industry standards, best suited to Pre-Production, using OpenDaylight to manage OpenFlow devices.
And finally, the legacy network, where the sleeping dragons lie, with Spanning Tree in traditional L2 operation.
The network market is fracturing. Instead of “one way” there are now “many best practices”, which makes it more likely that customers will deploy different types of network to address the use case of each application. Examples of this today can be seen in vendor offerings: Oracle customers are effectively forced to buy Exalogic infrastructure using Oracle’s InfiniBand networking, VCE demands that customers use Cisco ACI, while VMware is less aggressively promoting the use of its NSX platform.
The following diagram shows a possible outline of the future of data centre networking. The legacy networks that use VLAN/L2 Extension will remain for existing applications.
Let’s discuss the different areas and what they mean.
L2 Legacy and Storage
I don’t expect that the existing networks in data centres will change significantly or be removed. In 2014, many companies will be upgrading from L2/STP designs to MLAG designs that simplify network operation and remove the risk of STP protocol instability. However, the complexity of operating MLAG is very high, and the vendor software behind proprietary technologies like Cisco vPC & VSS and HP IRF is still maturing. And while growth will continue over the next year or two to meet immediate needs, the long-term trend for L2 networking is down.
Although we are deploying more security applications to reduce risk, I think that the concepts behind the DMZ are largely finished. I expect hardware firewalls to be replaced with software appliances contained securely within the multi-tenancy features of a virtualization platform. DMZ networks will exist in the overlay networks, and I believe this will happen rapidly because the operational cost of DMZ networks is very high.
The use of FibreChannel or dedicated Ethernet networks for the transport of storage data is reasonably common today. Although FibreChannel is making valiant efforts, the growth of IP Storage platforms is inexorably extinguishing its long-term future. New technologies like Object Storage and NoSQL distributed databases such as RIAK & MongoDB are removing the need for these obsolete technologies. The decline of storage networking will be gradual, though, because the storage industry is slow to change.
Overlay and Underlay Networking
I’ve talked previously about overlay & underlay networking. I take the view that the momentum behind overlay networking is significant now that many of Cisco’s SDN strategies are overlay-centric. Cisco has a strong incumbent position in the networking market and is likely to move the market with it.
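The mechanism underneath most of these overlays is simple: the hypervisor wraps a tenant’s Ethernet frame in a header carrying a 24-bit virtual network identifier and ships it over UDP across the IP underlay. A rough sketch of VXLAN encapsulation in Python, with the header layout taken from RFC 7348 and everything else (VNI value, inner frame) purely illustrative:

```python
import struct

VXLAN_FLAG_VNI_VALID = 0x08  # "I" bit: the VNI field is valid

def vxlan_encap(vni, inner_frame):
    """Prepend the 8-byte VXLAN header (RFC 7348) to an inner
    Ethernet frame.

    Header layout: 1 flags byte, 3 reserved bytes, then a 32-bit
    field holding the 24-bit VNI in its upper three bytes. In a
    real deployment the result is carried in a UDP datagram
    (destination port 4789) across the IP underlay.
    """
    if not 0 <= vni < 2**24:
        raise ValueError("VNI must fit in 24 bits")
    header = struct.pack("!B3xI", VXLAN_FLAG_VNI_VALID, vni << 8)
    return header + inner_frame
```

The 24-bit VNI is the reason overlays scale past the 4096-segment limit of VLANs: roughly 16 million tenant segments can share one physical underlay.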
The EtherealMind View
I believe it is human nature to desire simple things that aid understanding and reduce time to learn, and sometimes we let that desire shape our mental image of a problem. The desire to simplify “the” data centre network into a single system is understandable but wrong. We should recognise that any network is made from many smaller networks. The WAN, the Campus and WiFi are also networks of networks.
Is it time to recognise that the Enterprise Data Centre is a “Network of Networks”, with the best solution applied to each use case in the data centre? Do you really have just one Data Centre Network? How much cost and how many resources would it take to build a single network that could support every possible function?
Answers in the comments. Let’s take up the debate.