Are you ready to purchase a Brocade Ethernet switch? What?

One of the more amusing byproducts of the FCoE marketing push is that Brocade has announced that they will be producing FCoE switches for their customers. So, are you ready to buy a Brocade Ethernet switch? In their report to investors, Brocade announced that “FCoE is Fibre Channel, not legacy Ethernet”(1). Aside from the breathtaking disingenuousness of that statement, let’s focus on the fact that Brocade will be developing and selling Ethernet switches.

Now Brocade seems to be indicating that these are actually Fibre Channel switches and that is why they cost more. If the Cisco Nexus range is any guide, they will cost a truckload more. But these are still ETHERNET SWITCHES first, and some other function second.

Converged Enhanced Ethernet (CEE) is going to deliver some real benefits in your data centres, with the ability to throttle traffic flows at the source i.e. at the server. This feature has been mooted within the IEEE for a number of years, and early attempts haven’t been completely successful. Remember 802.3x PAUSE flow control? FCoE wants to claim this feature as its own, as a way of making storage possible.
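For the curious, the per-priority behaviour that CEE adds (Priority Flow Control, 802.1Qbb) differs from old link-level PAUSE in that only the congested traffic class stops, not the whole link. A toy sketch of the idea, with illustrative class names that are not any vendor’s implementation:

```python
# Toy model of Priority Flow Control (802.1Qbb). With PFC, only the congested
# priority class is paused; other classes keep transmitting. With old 802.3x
# PAUSE, the entire link would stop instead.

class Link:
    def __init__(self, queue_depth=4):
        self.queues = {"storage": [], "lan": []}  # per-priority queues
        self.depth = queue_depth

    def offer(self, prio, frame):
        """Return True if the frame is accepted, False if this class is paused."""
        q = self.queues[prio]
        if len(q) >= self.depth:
            return False  # PFC: pause ONLY this priority class
        q.append(frame)
        return True

link = Link(queue_depth=2)
link.offer("storage", "f1")
link.offer("storage", "f2")
assert link.offer("storage", "f3") is False  # storage class is paused...
assert link.offer("lan", "web") is True      # ...but LAN traffic still flows
```

The point of the sketch is simply that a lossless lane for storage can coexist with ordinary best-effort lanes on the same wire.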

This, conveniently, seems to ignore the fact that CEE will equally enable the current Storage-over-IP technologies, and the one that really matters is iSCSI. (More on this another time, perhaps.)

But trying to sell these Ethernet features as a Fibre Channel “feature” seems like gilding the lily.

Still, I have every confidence that all those folks with millions invested in Brocade FC switches are going to buy Fancy Ethernet from Brocade. Who, of course, with their years of switching experience, are going to produce a quality, feature-rich product.

And Cisco are going to be right there, shaking the cage and ensuring that any crumbs from this market disruption fall on their plate. And don’t forget Juniper, who could use the opportunity to make a grab for Data Centre market share.

So, are you ready to buy an Ethernet switch from Brocade? Or is it just me?

(1) Brocade TechDay 2008 report

  • Jason

    In the end FCoE rides on top of enhanced Ethernet. I’m betting Cisco makes a better Ethernet switch than Brocade (if they ever ship one). Oh, and FCoE will be a standard too. So as long as my Ethernet switch can guarantee me a per-priority loss-less fabric for my FC frames and provides enhanced services for the rest of my IP traffic, I’m pretty happy. When and if Brocade ever ships anything Ethernet I will look at it, but until then someone tell Brocade to stop trying to slow down innovation. How does anyone believe any of the marchitectures Brocade produces?

    • Greg Ferro

      I think that iSCSI will make a better choice on top of Enhanced Ethernet myself. The other thing is that CEE is years away; it will probably not be until 2010 before we see it really available.

  • Jason

    iSCSI has different mgmt tools, different drivers, gateways, and performance issues, so why would it make a better choice than FCoE over CEE/DCE? FCoE has the same mgmt tools as FC, the same drivers, and no performance issues. Also, not sure where you get that CEE isn’t going to be available till 2010? Cisco is selling the Nexus 5k today, and it will require a software upgrade for the few items that are not yet standardized, not a hardware upgrade. Seems to me you’re listening to Brocade trying to stall the market. In the end FCoE is not complex at all. However, implementing an Enhanced Ethernet fabric to provide a per-priority loss-less service is… So, I will base my decision not on whose implementation of FCoE is better (because it will be a standard by the end of 2008), but instead on who provides the best low-latency, reliable, fast-converging and loss-less Ethernet fabric.

    • Greg Ferro

      OK, here is my logic in point form.

      Because iSCSI is proven, standardised and stable, while FCoE is new and currently not standardised.

      CEE is still in the first phases of standardisation. The committee states that standards are expected in mid-2009, and products will start to ship about six months later. Products before then will be ‘pre-standard’ and will (most likely) need forklifting before you have usable hardware. Already Cisco has announced DCE, their proprietary extensions to CEE, which is highly likely to confuse the marketing process and thus slow down acceptance.

      Of course, this assumes that HP / EMC / IBM / NetApp / Competitor X do not try to slow the process down, in which case, it will take as long as it takes.

      Because iSCSI will last longer than FCoE, since FCoE is a transition technology to Storage over IP, as clearly stated by Cisco and Brocade.

      FCIP appears to have failed, and the market is choosing iSCSI instead.

      The mid-size market is overwhelmingly choosing iSCSI, and this will create market share and momentum by the time FCoE comes out of the garage.

      You still have to have an IP stack for replication. FCoE cannot do this, which means a multiprotocol installation.

      Mmm, that’ll do for a start.

  • Jason

    Not sure where you get “will (most likely) need forklifting before you will have usable hardware. Already Cisco has announced DCE, their proprietary extensions to CEE, which is highly likely to confuse the marketing process, and thus slow down acceptance.” It’s quite the opposite: Cisco will not require a forklift upgrade. I would love to know where you get your information from.

    DCE is the acronym Cisco has chosen to describe an architectural collection of Ethernet extensions, based on open standards, designed to improve Ethernet networking and management in the Data Center. The industry-standard term now being used to describe this architectural collection of extensions to Ethernet is Data Center Bridging (DCB). While most of the standards-body work is happening within the IEEE, various other draft specifications are being worked on within the IETF and INCITS T11 (FCoE) as well. Cisco has co-authored many of the standards referenced above and is focused on providing a standards-based solution for a Unified Fabric in the data center. While the acronym “DCE” is a Cisco trademark, the technology includes a series of standards-based technologies that reside in the IEEE, IETF, and INCITS T11.

    So Qlogic, Emulex, and Intel all support the Nexus 5k with CNAs or 10G NICs running a software FCoE stack. EMC has publicly stated that they will qualify FCoE on the Cisco Nexus 5000, with customer deployments by the end of calendar 2008. Oh, you mean the NetApp that offers an FCoE target today and has demonstrated a unified fabric with Cisco and the Nexus 5k? Oh, you mean these 14 vendors are competitors?

    So, why hasn’t iSCSI started dominating in 5 years compared to FC? And are you saying I should throw away all my disk and buy new disk with an iSCSI target? I would much prefer to keep my investment, add FCoE at the access layer, and have it seamlessly integrate with my existing SAN.

    FCIP has failed? Are you kidding me? Thousands of customers are using FCIP today for storage replication between geographically dispersed Data Centers. FCIP and iSCSI address two different issues within the DC.

    You are finally correct that FCoE is an L2 technology today. However, with FCoE integrating with my existing FC deployments, FCoE can leverage any FCIP infrastructure that has already been deployed for replication or mirroring of disk, etc. iSCSI will require another gateway. Oh, and if you try replicating iSCSI across the WAN, good luck with performance, as the TCP windows are going to need tuning; where are you going to get error recovery or compression?
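    On the TCP-window point, the tuning target is the bandwidth-delay product: the window must hold everything in flight on the WAN. A back-of-the-envelope sketch with made-up numbers, not figures from this thread:

```python
# The TCP window needed to keep a WAN replication link full is the
# bandwidth-delay product (BDP): link speed multiplied by round-trip time.

def bdp_bytes(bandwidth_bps: int, rtt_seconds: float) -> int:
    """Bytes in flight on the path; the minimum useful TCP window."""
    return int(bandwidth_bps * rtt_seconds / 8)

# Hypothetical example: 1 Gbps inter-DC link with a 40 ms RTT.
window = bdp_bytes(1_000_000_000, 0.040)
print(window)  # 5000000 bytes, far beyond the classic 64 KB default window
```

    Without window scaling enabled and buffers sized to the BDP, a single replication flow cannot fill the pipe no matter how fast the link is.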

    The beauty of I/O consolidation with FCoE is that I wire once. Even if I don’t need access to storage today, it is a configuration change to give a server access to storage, as opposed to iSCSI, which requires new drivers, a new mgmt interface, new gateways, and potentially TOE cards to offset the CPU impact of the drivers.

    mmm… Need I say more…

  • Greg Ferro

    FCoE will need new line cards on the Cisco Nexus – see here in the comments from Omar Sultan from Cisco. The same applies if you want MPLS and many other such features.

    Also, in an offline discussion Cisco confirmed that new supervisors will probably also be required, though it’s not clear yet. The current generation of supervisors is quite limited.

    The reason iSCSI hasn’t achieved dominance is possibly that Storage has been in an ivory tower, pretending to have some secret knowledge. Not entirely unlike the mainframe and telephony people of the past. The Storage industry seems to have taken a ‘better than thou’ approach to storage networks. iSCSI has been unfashionable since 2001, but now the market is grabbing hold.

    Qualifying FCoE on Nexus will make a lot of noise, and probably a lot of sales. But that doesn’t necessarily mean it will sweep the marketplace. That said, the Cisco marketing machine is an all-devouring, all-encompassing beast, and it probably will convince a lot of companies to put a lot of money into the technology. Still doesn’t make it right. George Bush was quite well regarded once as well.

    FCIP _only_ has thousands of customers. That ain’t a lot, considering there are millions of networks. I am sure that they are big customers too, but, really, did they have any other choice, with the entire Storage industry looking down their nose at iSCSI?

    Indeed, FC took the GigE PHY layer to make their standard. It was good business for Brocade etc. to have their own storage switches and thus not compete with Cisco / Nortel etc.

    If you look carefully at the Data Center blog for Cisco, you will notice that Cisco also agrees that FCoE is a transition technology and IP storage will be successful in the long haul.

    I would suggest that iSCSI in the WAN will 1) be solved by onboard switch software technologies with WAAS-type features, not unlike the FCoE features in Nexus, and 2) be much less of an integration problem than FCIP / FC / FCoE.

    The beauty of iSCSI consolidation is that I wire once, and protocol once, in my entire data centre. I do not have to run two networks as FCoE proponents suggest.

    iSCSI HBAs require no more or less work than FCoE, will cost the same or less, have fewer configuration hassles and a more certain future. Why would you choose FC? FCoE is I/O consolidation, but not network consolidation, which is the ultimate outcome.

    I don’t suppose my view is popular, and possibly those in the Storage industry will never have a positive view of iSCSI. To be realistic, I expect the FCoE marketing machine to make its impact and be quite successful; it has a lot of momentum. I guess I remain a holdout from the days of IP over everything, and everything over IP. It hasn’t failed me in 10 years, and I don’t suppose it will.

  • Jason

    The Nexus 7k will require new line cards for DCE/CEE. The Nexus 5k will not require new hardware, only a software upgrade when DCB is ratified.

    So, millions of networks compared to customers that have SANs and DR/BC strategies are two completely different conversations. Customers with a DR/BC strategy are either implementing FCIP or leveraging a metro service for extending native FC with extended buffer-to-buffer credits.
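    As a side note, the buffer-to-buffer credits mentioned here scale with distance: the sender needs enough credits to cover every frame in flight during the round trip before an R_RDY returns. A rough estimate under assumed numbers (about 5 µs/km propagation in fibre, a 2148-byte maximum FC frame, and nominal line rates):

```python
import math

# Rough buffer-to-buffer (BB) credit estimate for an extended native FC link.
# One credit is consumed per frame sent and is returned (via R_RDY) a round
# trip later, so line rate needs enough credits to cover the frames in flight.

def bb_credits(distance_km: float, link_gbps: float,
               frame_bytes: int = 2148, us_per_km: float = 5.0) -> int:
    rtt_us = 2 * distance_km * us_per_km                  # round-trip time
    frame_time_us = frame_bytes * 8 / (link_gbps * 1000)  # serialization time
    return math.ceil(rtt_us / frame_time_us)

# Hypothetical example: 100 km metro link at 4 Gbps.
print(bb_credits(100, 4))  # 233 credits
```

    Too few credits and the link throttles itself long before line rate, which is why extended-distance FC needs switches with deep credit pools.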

    Don’t get me wrong, everything over IP is the right direction, but for the foreseeable future customers have a great opportunity to reduce cabling and access-layer infrastructure if they go with FCoE or iSCSI. If I have an existing SAN today, FCoE makes sense as it migrates seamlessly. If I have iSCSI deployed today, deploying on top of DCE/CEE/DCB makes much more sense for the ability to provide a loss-less fabric and preferential services, just like a SAN.

  • Greg Ferro

    Aaaahhhhhh, let me guess: Jason is from Cisco and has a marketing / presales position. How can I tell? Because every time Cisco people get to this point in the debate, they lose their commitment to FCoE and start equivocating.

    The way I see it, Cisco dropped $250 million on Nuova and wants to get some investment back. It’s a tedious money grab, and customers should be very wary of the technology.

    How many marketing people does Cisco have on this technology? Sheesh.

  • Pingback: Brocade buys Foundry ? No great loss. Storage switches ahoy! : My Etherealmind

  • Jason

    I wish that was the case. Unfortunately I’m not in marketing, nor do I work for Cisco. But good try. Funny, I thought you worked for Brocade. Or were just a puppet for Brocade, as you seem to regurgitate all of their FUD and marketing.

    So, why do I keep adding extra points besides FCoE? Well, for one, you continue to make false statements and you need to be corrected. Why have a site where you post information that is highly inaccurate? Two, FCoE is a protocol, and other protocols will ride on top of DCE/CEE/DCB as well. It’s the lanes that are important, not the cars. FCoE, HTTP, CIFS, NFS, SIP, etc.: they all run on top of the lanes within the DC. Am I going to harp on FCoE while they decide the final stages of FIP, or talk about something important, like how to make sure I can provide a loss-less fabric for FCoE and other mission-critical applications?

    • Greg Ferro

      Surprising. All of your posts come from a Cisco registered IP address. Must be a coincidence.

  • Perry Young

    Interesting that someone says “when Brocade finally ship an Ethernet switch”, as they own what used to be Foundry Networks, the long-time Ethernet switch manufacturer. It’s funny how some people perceive things.
    On the relevance of FCoE versus FCIP: the former requires flow control, i.e. from DCB. DCB is not prevalent in the wide area, i.e. the VPLS/L2VPN space, where flow control is instead triggered by packet loss. Fine for TCP, but what about FCoE?
    This is where FCIP becomes relevant in my mind. The alternative being native FC over WDM on dark fibre.
    BTW, you might want to take a look at Juniper Networks QFabric and the QFX3500 if you haven’t already. If you’re going to buy a converged Ethernet switch, then the QFX is the one. It includes FC-to-FCoE gateway capabilities or FC transit, near-zero jitter (packet delay variation in the sub-µs range) across packet sizes, and up to 63 10GbE ports operating at full line rate with no loss in either cut-through or S&F mode. It is also due for 4x 40GbE QSFP uplinks shortly.