With the recent debate around Cisco's proprietary FabricPath version of TRILL in the twit-o-verse, and Kurt Bales' recent post Proprietary Cometh Before the Standard, I want to make the opposing case. He isn't looking deep enough, or at enough history, to see the damage that proprietary technology causes.
To try and summarise Kurt’s perspective:
- vendors develop proprietary features
- then those features get standardised
- what’s the problem ?
The Problem
Kurt's view is not a bad assessment, but it's overly simple. The real world is much more complex: it involves real money and loads of corporate stupidity (Hanlon's Razor).
Incompatibility
Let's look back at some Ethernet-era examples of proprietary versus standard protocols, where the story isn't so clear-cut.
HSRP / VRRP
- While it's true that HSRP eventually evolved into VRRP, you should note that they are not compatible (a quick sketch of the wire-level differences follows this list).
- Thus, anyone who adopted the proprietary version got ‘stuck’ with HSRP.
- in fact, VRRP was a poor standard, and no one used it much.
- until much later when it finally got features.
- why did it take so long to get worthwhile features ?
- because Cisco didn't participate; it didn't need to.
- until customers started forcing VRRP support.
- It became a tick box on tenders and RFPs.
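Here's a minimal Python sketch of just how different they are on the wire. The constants are the well-known HSRPv1 and VRRPv2 values (multicast group, transport, and virtual MAC prefix); the classifier itself is purely illustrative, not how any router actually works.

```python
# Illustrative only: HSRP and VRRP don't even share the same on-wire identifiers,
# so a router speaking one protocol never hears the other's hellos.
HSRP_V1 = {"dest_ip": "224.0.0.2",  "transport": "UDP port 1985",
           "virtual_mac_prefix": "00:00:0c:07:ac"}
VRRP_V2 = {"dest_ip": "224.0.0.18", "transport": "IP protocol 112",
           "virtual_mac_prefix": "00:00:5e:00:01"}

def classify(dest_ip, transport):
    """Guess which first-hop redundancy protocol a hello packet belongs to."""
    for name, proto in (("HSRPv1", HSRP_V1), ("VRRPv2", VRRP_V2)):
        if dest_ip == proto["dest_ip"] and transport == proto["transport"]:
            return name
    return "unknown"

print(classify("224.0.0.2", "UDP port 1985"))     # HSRPv1
print(classify("224.0.0.18", "IP protocol 112"))  # VRRPv2 - nothing in common to interoperate on
```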
LACP / PAgP
- PAgP wasn't a bad protocol, although it was not well documented and was poorly understood by most engineers.
- LACP is a far superior protocol.
- because Cisco got involved in the standards to help interoperability.
- because customers demanded it
- absolutely insisted on it as a prerequisite for purchase in fact.
VTP
- VTP is Cisco proprietary
- Cisco owns patents on VTP
- there are no standards for VTP (that I know of)
PVST
- great feature
- but not the best solution
- MSTP still not well implemented by Cisco products
- because most customers don’t need it
- because Cisco has market dominance and quasi-monopoly
- standards not developing because there is no need
- because Cisco has no interest in it.
Abusing Dominance
- Cisco has 80% or more of some market categories
- Cisco can choose to create proprietary features even when not needed.
- Because it locks customers into their technology and products in the early adoption phase
- they can claim “increased” sales and “successful” adoption
- They can claim “market leadership” and “technical developments” and “R&D benefits”
- and Cisco management can then stop funding the standards projects.
- and tell customers that this is a “better solution”
The Impact
- This forces the standards body to accept technology that aligns with Cisco’s choices
- because people want interoperability
- which we would have had anyway, if the standards had simply been created in the first place.
- but why are Cisco’s choices better than anyone else’s ?
- standards allow for competing visions & solutions to be resolved.
Playing Politics
- standards are created by flawed processes
- individuals who create the standards are mainly drawn from the vendors
- thus Cisco actually helps to create most standards.
- And Brocade/Foundry, Avaya/Nortel, Huawei, 3Com and many more
- there are a few unaligned people involved, but not many
- vendors sometimes choose to play politics with standards to suit their commercial needs.
- e.g. It might suit Cisco just fine to delay TRILL, ship their own proprietary alternative, and drag out the ratification process.
- e.g. It might suit Brocade just fine to slow down the DCB standards, and drag out ratification, because their own feature development isn't hitting its deadlines.
Too Many Standards Bodies
- It’s also true that there are too many standards bodies, with different processes, methods, politics and history.
- IEEE for Ethernet
- ANSI for FibreChannel
- IETF for IP
- And now that these technologies are "melding", there are competitive aspects for each organisation.
VEPA vs VNTag
- VNTag modifies the Ethernet frame (see the frame sketch after this list).
- it is Cisco proprietary
- requires new silicon to support the modified frame
- VEPA achieves similar capabilities using out-of-band signalling
- AFAICT, VNTag features could be replaced with out-of-band techniques
- Therefore VNTag is a deliberate lock-in strategy.
- VEPA needs no change to existing switch silicon, and no change to the Ethernet frame
- VEPA work was progressing until Cisco released ‘FabricPath’
- now there is little progress
- AFAICT, Cisco isn’t blocking the progress, but they aren’t helping to progress it either.
- Cisco’s dominant market position means that they must be a part of the process
- They are potentially abusing their market dominance.
- But the IEEE process is closed and we cannot see what’s happening.
- further proving that proprietary and closed processes are bad.
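To make the difference concrete, here's a rough Python sketch of where a VNTag-style tag lands in the frame. The tag EtherType and field layout below are placeholders, not the real values from Cisco's specification; the point is only that the frame grows and everything in the path must parse a new header, whereas VEPA leaves the frame exactly as it was.

```python
import struct

TAG_ETHERTYPE = 0x9999  # placeholder, NOT the real VNTag EtherType

def plain_frame(dst_mac, src_mac, ethertype, payload):
    """Standard Ethernet frame: DST | SRC | EtherType | payload (what VEPA forwards untouched)."""
    return dst_mac + src_mac + struct.pack("!H", ethertype) + payload

def tagged_frame(dst_mac, src_mac, ethertype, payload, vif_id):
    """Same frame with an extra tag inserted after the source MAC, the way a
    port-extension tag works; older switch silicon has no idea what this is."""
    tag = struct.pack("!HI", TAG_ETHERTYPE, vif_id)  # placeholder layout: type + virtual interface id
    return dst_mac + src_mac + tag + struct.pack("!H", ethertype) + payload

dst, src = bytes.fromhex("ffffffffffff"), bytes.fromhex("0000c0ffee01")
print(len(plain_frame(dst, src, 0x0800, b"data")))       # 18 bytes
print(len(tagged_frame(dst, src, 0x0800, b"data", 42)))  # 24 bytes - six new bytes mid-frame
```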
Proprietary Always Loses
- in the twenty-five years of IT that I have seen, proprietary networking always loses.
- Always.
- There will always be a few exceptions to this, but don't think that "winning the lottery is an employment strategy".
- and it stops industry growth
- it retards product sales
- Remember the early versions of proprietary IPsec ? You should. It’s educational.
- The Internet is the greatest success of standards based networking.
The EtherealMind View
- no vendor lasts forever
- no product strategy lasts for more than a couple of years.
- new technologies are only new for a couple of years.
- never accept that proprietary is acceptable.
- proprietary will always waste your time and money.
- your networks will last longer than this year's fashion in technology.
- you want to go home on time, after an organised and planned day.
- that’s what standards are for.
“in the twenty-five years of IT that I have seen, proprietary networking always loses. Always.”
…
“Cisco has 80% or more of some market categories”.
=)
Because Cisco adopts open standards in its products, and very few people now use their proprietary solutions: EIGRP, PAgP, Tag Switching, PVST, etc.
Boy, are you off, EtherealMind. EIGRP is the most common protocol used within Cisco CORPORATE networks because it's superior to OSPF in all ways, except that it's not a standard. OSPF's area issues, and summarization only on area boundaries, are big limitations, and it takes a lot of tuning to get close to EIGRP's convergence time and stability. Rapid-PVST is superior to MST. Cisco is the leader in network technology, but the IEEE standardization process is rightfully slow to respond. So Cisco comes out with their proprietary protocols. Should they wait for the IEEE when they have a great idea? Heck no! When the rest of the world and the IEEE catch up, then Cisco supports that too, and Cisco deployments will move in the IEEE direction. Such is the case for LACP. Now it is best to deploy LACP in Cisco networks because the IEEE finally caught up.
Because my business relies on it, I'd rather have EIGRP, a more stable, simpler to design, and easier to debug routing protocol, deployed in my corporate network than anything else. If the world catches up someday, then Cisco could move towards that protocol. But to deploy an inferior OSPF routing protocol into a CORPORATE network only makes sense if you are forced into it.
For firewalls and other "edge" devices that do not support EIGRP, proven EIGRP-to-OSPF redistribution will cover those. There is no need to do that if you are a Cisco environment, because Cisco firewalls support EIGRP.
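For anyone who hasn't run it, the convergence claim largely comes down to EIGRP's feasibility condition: a neighbour whose reported distance is lower than your current best metric is a pre-validated, loop-free backup that can take over without any recomputation. A toy Python sketch of that selection rule, with invented numbers:

```python
# Toy sketch of EIGRP's feasibility condition; all distances are invented.
# neighbour -> (reported_distance, metric_via_that_neighbour)
neighbours = {
    "R2": (1000, 1500),  # best path -> the successor
    "R3": (1200, 2100),  # RD 1200 < FD 1500 -> guaranteed loop-free backup
    "R4": (1600, 1800),  # RD 1600 >= FD      -> not provably loop-free
}

feasible_distance = min(metric for _, metric in neighbours.values())
successor = min(neighbours, key=lambda n: neighbours[n][1])
feasible_successors = [n for n, (rd, _) in neighbours.items()
                       if n != successor and rd < feasible_distance]

print(successor, feasible_successors)  # R2 ['R3'] - R3 takes over instantly if R2 fails
```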
Perhaps I'm naive, but I really like EIGRP.
Yes, you are naive. EIGRP looks like a good idea until the day you learn why it's a bad idea.
All closed and proprietary solutions have something that makes them look sweet as honey, but taste like lemon.
I’m curious as to why you say EIGRP is a bad idea. To be clear, I’m currently involved in a large EIGRP to OSPF migration, and it’s being done for various reasons, not the least of which is that it locks you to one vendor. I get that. However, from a pure technical perspective I feel there are a lot of things to like about EIGRP, and distance vector protocols in general.
Are you commenting solely on the proprietary aspect of EIGRP? Or are there also technical reasons you dislike it?
In general terms, EIGRP doesn’t scale well in real life. Not because the protocol is bad, but because it encourages lazy practices around summarisation, route filtering, etc. Secondly, EIGRP isn’t well supported for MPLS (because Cisco isn’t dominant in the carrier market).
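Just to make "summarisation discipline" concrete (and it matters for any IGP): contiguous more-specific prefixes should collapse into one advertisement at a boundary, instead of every /24 being flooded and queried across the whole domain. A quick sketch with Python's standard ipaddress module, using made-up prefixes:

```python
import ipaddress

# Hypothetical more-specifics from one site; without summarisation each one
# is advertised (and, in EIGRP's case, queried) across the entire domain.
site_prefixes = [ipaddress.ip_network(f"10.1.{i}.0/24") for i in range(4)]

# Good discipline collapses them into a single advertisement at the boundary.
summary = list(ipaddress.collapse_addresses(site_prefixes))
print(summary)  # [IPv4Network('10.1.0.0/22')]
```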
But more importantly, OSPF is supported on firewalls, mainframes, Solaris clusters, and other products that need to have multiple paths. And redistribution is much worse than having a native protocol.
Ivan at http://iosints.info does talk about EIGRP and why it’s not so good today.
Those arguments for EIGRP not scaling apply to OSPF too. You can just as easily configure everything in area 0 as you can skip stub features and summarization in EIGRP. It is less about the protocol and more about the implementation.
Support for Opaque LSAs and TLVs was written well before LDP; to say that EIGRP isn't well supported for MPLS because Cisco isn't dominant in the carrier market is incorrect. http://www.networkworld.com/community/blog/cisco-lost-share-routing-q4
Don't get me wrong, I'm not arguing for the use of EIGRP; the choice of IGP is a religious argument. I believe there are unique circumstances to every network, including the people supporting it, and there's no "right" answer. Calling people naive for using EIGRP is not a fair comment.
All fair points. Just that all companies are driven by their interests. Vendor lock-in may be bad for everyone else, but it's good for whoever is locking you in, because they get to keep market share, keep shareholders happy, and ultimately keep their jobs plus their crazy compensation packages. Can't really blame Cisco for protecting their interests.
Hahaha! I hoped it was only a matter of time before I elicited a response from someone.
I will admit that each of the points you make here is valid, but I can't help but feel there is a market waiting for the various standards bodies to ratify their work so that vendors will move forward.
In the meantime their networks are waiting for a rebuild on these hopes and dreams. As a consultant I need to pick which is the best solution for my customers' needs. Granted, those designs and recommendations do have to take into account the life of the network, but all we hear from vendors currently is "Sure, we will probably do some of that – when and if a standard is made that we like".
That's correct. And those standards bodies are made up of the vendors themselves, who are arguing over different aspects. And the vital standards bodies, such as the IEEE and ANSI, hold their meetings in secret, so we don't know what goes on, or what is said, or why someone votes against someone.
If the vendors recognised the brutal reality that new standards are part of releasing new features, we would all be happier, because many customers won't pay for proprietary and closed solutions.
Let's face it, DCB & TRILL started in 2004/2005. What's the hold-up ?
Having the proprietary and the standardised options available is one thing. I can live with that because there's at least a choice. When only the proprietary options are available, despite the standardised ones being mature (and in some cases better established), I get annoyed.
PVST+ on low-end integrated switches is a primary example. Cisco loves their PVST+ and it has many technical merits, but it would be awesome to be *able* to tell the device to use CST when having to work with non-Cisco switches. Unfortunately, the option to use the standard is a “feature” reserved for higher-end equipment. This becomes even more of an issue now that former Linksys (CST-only) switches are being branded as Cisco products. Now we can’t even assume that two products from the same vendor can speak to each other properly.
Yes, there are workarounds, but they shouldn’t be necessary.
Great posts on the part of both yourself and Kurt, BTW.
Jody
IMHO it starts out as a way to differentiate your product in the market, or as a way to bypass the inefficiency of the bodies listed above. But I agree that it becomes a way to lock in customers, or to push out the competition by forcing your customer base to follow your way.
Of course vendors all eventually fall in line if the customer base screams enough.
Just curious… what exactly is badly implemented in Cisco’s MST?
-Marko.
I would ask the same thing. I have found Cisco’s implementation better than some!
On HP E-Series (ProCurve) switches, you can only assign VLANs to instances that exist within the VLAN database (or whatever HP call it). This means that you cannot pre-allocate VLANs to instances in the hope of having a stable MST region.
This also means that every single time you add a new VLAN to the database, it causes an inconsistency that requires an STP recompute. Very annoying.
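For anyone wondering why one new VLAN causes that: switches agree they are in the same MST region partly by comparing a digest of the whole VLAN-to-instance mapping table, so if even one entry differs between two switches, the region splits and the topology recomputes at the boundary. A rough Python sketch of the idea (placeholder key, not the exact 802.1s digest procedure):

```python
import hmac, hashlib

PLACEHOLDER_KEY = b"not-the-real-802.1s-key"  # the standard defines a fixed key; this is a stand-in

def region_digest(vlan_to_instance):
    """Digest over the full VLAN-to-instance table, roughly how MST decides
    whether two switches belong to the same region."""
    table = b"".join(vlan_to_instance.get(vlan_id, 0).to_bytes(2, "big")
                     for vlan_id in range(1, 4095))  # unmapped VLANs default to instance 0 (the IST)
    return hmac.new(PLACEHOLDER_KEY, table, hashlib.md5).hexdigest()

pre_mapped  = {v: 2 for v in range(100, 200)}  # VLANs 100-199 mapped to instance 2 up front
late_mapped = {v: 2 for v in range(100, 199)}  # identical, except VLAN 199 not yet mapped

print(region_digest(pre_mapped) == region_digest(late_mapped))  # False -> different regions
```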
Partially true on the E-Series. For some time now, the higher end products (8200/5400/3500) have allowed you to pre-assign VLANs to instances before the VLANs are configured on the switch.
Release K.12.44 Enhancements
Release K.12.44 includes the following enhancement:
- Enhancement (PR_1000457691): This enhancement allows the mapping of all theoretically available VLAN IDs (1-4094) to an MSTP instance, even if some of the VLANs are not currently configured on the switch.
For the lower end switches though, it is as you describe.
On the topic of STP (which has burned me too many times), how do people feel about using MSTP but just running everything in the CST/IST and not bothering with mapping VLANs to instances? My personal opinion is that mapping VLANs to instances only serves one purpose: to make use of the bandwidth of the redundant links. If, however, your network will never utilise that additional bandwidth, I would argue that the complexity of running instances is not worth it. (Especially when running those lower-end ProCurve switches that do not support the enhancement mentioned above.)
How do others feel about this? Does anyone believe that if the bandwidth required will always be very low it is still a good idea to run instances to take the ‘load’ off the routing cores? I for one don’t buy it and feel the additional complexity will cause you more problems in the long run.
You seem to forget that these companies are in business to make a profit, not give away all their bright ideas for free. Especially not give them away to their competition!
Interoperability is one thing, but you can’t expect a business to ignore a competitive advantage it has over its competition.
You said yourself that Cisco will get involved when their customers demand it. If the customers are happy with the proprietary solution, what’s the problem? Especially when the competition aren’t always very good at implementing the standards!
Greg,
Interesting and I largely agree. I read this article immediately after reading the one posted here:
http://blogs.cisco.com/datacenter/do-you-really-want-a-nanny/
The first thought that came to mind after reading the Cisco article was:
Why is it taking so long to develop TRILL?
Maybe it's just because I'm a GONER http://etherealmind.com/network-dictionary-goners/ but I cannot help but think that, yes, while Cisco will eventually support TRILL, they realise that by pushing their own proprietary technology first, they will capture a lot of people and will lock them in.
As anyone who has been involved in a technology migration (EIGRP -> OSPF, etc…) will understand, it's not a simple exercise and can sometimes be cost-prohibitive.
Also, as Greg mentions, since the standards process is closed/hidden we don't know what is happening, but I cannot help but wonder: if Cisco can develop a protocol such as FabricPath, and sit on the standards board, then what is the hold-up with TRILL from a technical perspective…
Funny how TRILL completely stopped progress when Cisco announced FabricPath.
Surely, it’s just an unfortunate coincidence.