Cisco is known for shipping products early to deliver new features quickly, but this has earned it a reputation for buggy code, with customers reporting the bugs and Cisco fixing them. It means you should never buy a newly released Cisco product unless you are willing to take this risk. This post looks at my process for analysing that risk and then selecting an IOS version by performing a bug scrub. In this case, I’ve been asked whether the Cisco C3750-X switches are ready for live deployment.
Juniper QFabric is a new approach to Ethernet Switch Fabrics. When it was announced last year, it was noted that the underlying physical design is a completely different approach to building Switch Fabrics. Here I’m taking a loosely research-based approach to understanding how Juniper QFabric differs from all other approaches to the problem, and also looking at some of the challenges ahead.
In this post, I’m considering whether the Open Networking Foundation is the correct process for managing and developing the "open standards" for OpenFlow. The Open Networking Foundation is owned and funded by a cabal of large corporations whose requirements for improving their hyper-scale data centres are the primary motivation. But what about the wider marketplace, including the Campus and the Enterprise? I also look at what open means at the controller layer.
I’ve had a few conversations, and seen some articles, where comparisons are made between Embrane and Nicira, and I wanted to point out that there are few similarities between these companies.
The short answer is “It depends, but usually yes.” The long answer follows, with a discussion of launch power, receiver sensitivity, and cable losses.
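The trade-off between those three factors can be sketched as a link-budget check: the link should work when launch power minus cable and connector losses stays above the receiver sensitivity. All figures below are illustrative examples I’ve chosen, not values from any particular transceiver datasheet.

```python
# Illustrative optical power-budget check. Every number here is an
# example value, not taken from any specific transceiver datasheet.
def link_margin_db(launch_dbm, rx_sensitivity_dbm, fibre_km,
                   loss_per_km_db, connectors, loss_per_connector_db):
    """Return the margin (dB) left after cable and connector losses."""
    budget = launch_dbm - rx_sensitivity_dbm            # total power budget
    losses = (fibre_km * loss_per_km_db
              + connectors * loss_per_connector_db)     # path losses
    return budget - losses

# Example: -5 dBm launch, -15 dBm sensitivity, 300 m of fibre at
# 3.5 dB/km, and two patch-panel connectors at 0.5 dB each.
margin = link_margin_db(-5.0, -15.0, 0.3, 3.5, 2, 0.5)
print(round(margin, 2))  # a positive margin means the link should work
```

A negative result means the receiver is below sensitivity and the link will take errors or fail entirely.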
I’m responding to Brad Hedlund’s post “On optimizing traffic for network virtualization”, where he seems to have missed a key point: it’s about the cost of ownership in terms of the ability to troubleshoot.
Embrane uses the concept of IP flows to scale virtual appliances: it manages IP flows and directs them to other appliances, in effect creating what I would call two-tier load balancing.
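As a rough illustration of flow-based dispatch (my sketch, not Embrane’s actual design), a front tier can hash each flow’s 5-tuple so that every packet of a given flow lands on the same back-end appliance. The appliance names below are hypothetical.

```python
# Minimal sketch of flow-based dispatch: hash the 5-tuple so all
# packets of one IP flow map to the same back-end appliance.
# Names and structure are illustrative, not Embrane's design.
import hashlib

APPLIANCES = ["vfw-1", "vfw-2", "vfw-3"]  # hypothetical appliance pool

def pick_appliance(src_ip, src_port, dst_ip, dst_port, proto):
    key = f"{src_ip}:{src_port}-{dst_ip}:{dst_port}/{proto}".encode()
    digest = hashlib.sha256(key).digest()
    return APPLIANCES[int.from_bytes(digest[:4], "big") % len(APPLIANCES)]

# Every packet of the same flow maps to the same appliance:
a = pick_appliance("10.0.0.1", 33000, "192.0.2.9", 443, "tcp")
b = pick_appliance("10.0.0.1", 33000, "192.0.2.9", 443, "tcp")
print(a == b)  # True: the mapping is deterministic per flow
```

Keeping a flow pinned to one appliance matters because stateful devices (firewalls, load balancers) must see both directions of a session.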
I got this question, and I guess it may not be obvious to everyone, so I’ll have a shot at answering it.
Technology advances in ASIC hardware have resulted in substantial improvements in the switching performance of routers and switches. However, the routing processes are still dependent on CPU speeds. What are the existing limitations in router/switch designs that prevent route computations from being performed in hardware?
I was reading a white paper by Panduit that claims 10GBaseT is suitable for data centre use. I’ve been critical of Cat6A cable and believe that it’s not suitable for the data centre.
Recently I noticed that Cisco is selling “Fabric Ethernet Transceivers” for the Nexus switch family. Some research shows that these are replacements for 10GBaseSR SFP+ modules. Importantly, it’s cheaper to install new cabling than to buy 10GBaseSR SFP+ modules.
A short summary of fibre cable connectors, with descriptions and some notes on usage. These are summary notes, intended for reference.
There is a significant camp of software developers who are developing software switching solutions for hypervisors. Which is nice, I guess. The use of software switching in the hypervisor has some good points but, in my view, they are heavily outweighed by the bad. I present the use case, and show why I think the bad outweighs the good.
These all suggest that the time for planning and designing Service Modules is over. There are no suggestions that I can see that service modules for the Nexus 7000 will be developed. My guess is that developing them would slow down work on the core switching / routing / performance functions, and it will be some years before those core capabilities are complete enough for service modules to become viable product development tasks — they might be in development, but there’s not much chance of them going into production. [^1]
Do I sound bitter about Service Modules? A bit. I’ve had a number of hard-to-solve problems that lasted months before code fixes arrived. I’ve been a fan of the NAM, but the price is now far removed from its practical value. USD $30K list is way overpriced for its capabilities, and even with a 30% discount you can buy a lot of network management systems that deliver much better functions and features for that price.
With all the talk about Layer 2 Multipath (L2MP) designs going on, I just want to point out a fundamental change in the way many people approach network design. It seems that this point has been lost somewhere in the discussion of protocols.
The Spanning Tree Protocol blocks looped paths, and in a typical network this means that bandwidth is unevenly distributed. Of course, we might use PVST or MST to provide rough load sharing by splitting the spanning tree preferences across different VLANs, but the overall design still doesn’t change. The basic point is that there is a LOT of bandwidth that is never utilised – and that means wasted power, space and cooling (which cost more than the equipment itself).
VMware: Let’s get logical – the case for OpenFlow network virtualization (and their failed network plans)
VMware has made several strategic moves to implement dynamic networking – the vSwitch, the vDS, the Nexus 1000V (in partnership with Cisco), and vCloud External Networks (using MAC-in-MAC, of all things) – and has basically failed to deliver an overlay technology without implementing technology in the network itself. Equally, VMware hasn’t been willing to engage with the networking vendors to develop technologies that would solve this problem – VNtag / VEPA / VEP combined with TRILL / SPBB – instead letting them argue amongst themselves. VMware’s attempt at vCloud networking using MAC-in-MAC encapsulation seems to have stalled and failed, and is getting another attempt using MAC-in-IP. VMware, Xen and Hyper-V are all desperate for a more dynamic network that can be controlled from their software, and this might be where OpenFlow gets a big lift – as a configuration engine.
I stumbled across an old diagram I made a long time ago about the direction of flows on a BlueCoat PacketShaper. Since I’ve been looking for it for about three years, I’ve redrawn it quickly so that it’s here for future reference when I’m working with PacketWise. PacketShaper’s PacketWise is one of my very favourite tools for managing traffic flows, and much preferable to PHB QoS (aka DiffServ) for many types of use cases.
A TCP flow has four possible directional attributes, relating to the use of the inside and outside networks, and whether the flow was initiated from the client to the server, which sets the “direction” of the flow relative to the Packeteer. The direction is determined by who initiated the three-way handshake. For our purposes here, the client always initiates the TCP connection, and the server terminates it.
TCP Session and Direction
Most people understand the three-way handshake, but few consider the direction of the session.
The connection from the client to the server is outbound at the client, but inbound at the server. And vice versa: the server’s return traffic is outbound at the server and inbound at the client.
On its own, that’s not enough to define the direction of a flow.
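The initiator-based rule described at the start of this section can be sketched in a few lines. The “inside”/“outside” labels are mine (inside being the LAN behind the shaper), not Packeteer terminology, and the function is purely illustrative.

```python
# Sketch of initiator-based direction labelling at the shaper,
# assuming "inside" is the LAN side and "outside" faces the Internet.
# My labels and structure, not Packeteer's actual implementation.
def shaper_direction(src_side):
    """Direction of a packet whose sender sits on src_side:
    leaving via the outside interface is outbound, arriving is inbound."""
    return "outbound" if src_side == "inside" else "inbound"

# Enumerate the four cases: who initiated x which way a packet travels.
for initiator in ("inside", "outside"):
    for sender in ("inside", "outside"):
        leg = "client->server" if sender == initiator else "server->client"
        print(f"initiated {initiator}, {leg}: {shaper_direction(sender)}")
```

The four printed cases are exactly the four directional attributes mentioned above: the initiator fixes which side is the client, and each leg of the session then has its own direction at the shaper.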
Why is direction important?
For an FTP upload server, you have the reverse of the usual condition: the inbound traffic is far greater than the outbound.
To make the most of your Internet connection in this case, you could partition the inbound bandwidth on your Internet connection as 80% FTP / 20% HTTP, and the outbound bandwidth as 20% FTP / 80% HTTP. This gives far better utilisation, especially with regard to TCP windowing and overall TCP goodput.
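The arithmetic of that split is trivial but worth seeing per direction. The 100 Mbit/s link speed below is a hypothetical figure of my own; the 80/20 shares come from the example above.

```python
# Worked example of the 80/20 partition above. The 100 Mbit/s link
# speed is an assumed example figure, not from the original post.
def partition(link_mbps, shares):
    """Split link capacity by fractional shares (must sum to 1.0)."""
    assert abs(sum(shares.values()) - 1.0) < 1e-9
    return {app: frac * link_mbps for app, frac in shares.items()}

inbound = partition(100, {"ftp": 0.80, "http": 0.20})   # uploads dominate
outbound = partition(100, {"ftp": 0.20, "http": 0.80})  # room for HTTP replies
print(inbound)   # {'ftp': 80.0, 'http': 20.0}
print(outbound)  # {'ftp': 20.0, 'http': 80.0}
```

Note the shares differ per direction: each direction of the link is partitioned independently, which is the whole point of knowing a flow’s direction.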
Exposing cloud failures: the Amazon EC2 failure this week has exposed a number of technology strategies in cloud infrastructure as being less than perfect. Complex systems have complex failures, and the most vexing problem of Cloud Computing is that these systems are complex – the more complex the system, the more complex the failure. […]
A lot of people regard VLAN Trunking Protocol (VTP) as nothing but trouble. Indeed, it’s hard to find many people who will implement it on their network. I find this baffling – a great tool that dramatically reduces time, errors and troubleshooting is something we should all embrace and use wherever we can. Naturally, with great power comes great evil. So, let’s be clever instead.
I was doing a Data Centre design recently and ran some numbers on the number of 10 Gigabit Ethernet ports that needed to be deployed. I got a bit of a reality shock.
Recently I’ve come across some interesting terms for variants of common network topologies, so I decided I’d try to list as many of them as I can for reference. Please suggest others to add.