Don’t tell me iSCSI is complicated if Fibrechannel looks like this

Put differently, if you unplug one uplink, the overall bandwidth does not drop to 12Gb/s or similar; instead it disconnects a single HBA port on a number of servers and forces them to fail over to the other path and FC-VC module.

It does not do any dynamic load balancing or anything like that – it is literally a physical port concentrator, which is why it needs NPIV to pass through the WWNs from the physical blade HBAs.

What does that mean ?

Unplugging an FC cable from bay 3, port 4 will disconnect one of the HBA connections to all of the blades in bays 4, 8, 10 and 14 and force the blade's host OS to handle a failover to its secondary path via the FC-VC module in bay 4.

A key takeaway from this is that your blade hosts still need to run some kind of multi-pathing software, like MPIO or EMC PowerPath, to handle the failover between paths – the FC-VC modules don't handle this for you.
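On a Linux blade host, for example, that failover would typically be handled by dm-multipath. A minimal sketch of an /etc/multipath.conf is below – the WWID and alias are made-up illustrations, not anything from HP's documentation:

```
# /etc/multipath.conf — minimal active/passive sketch (illustrative values only)
defaults {
    user_friendly_names yes
    path_grouping_policy failover   # use one path, fail over to the other on loss
}

multipaths {
    multipath {
        wwid  360000000000000000e00000000010001   # example LUN WWID, not real
        alias blade_lun0
    }
}
```

With something like this in place, pulling the cable on one FC-VC module should leave the LUN reachable via the surviving path; `multipath -ll` shows which path is currently active.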

See, easy isn’t it? Give me an IP connection any day, it’s much easier than this technical masterpiece – NOT.

About Greg Ferro

Greg Ferro is a Network Engineer/Architect, mostly focussed on Data Centre, Security Infrastructure, and recently Virtualization. He has over 20 years in IT, working as a freelance consultant for a wide range of employers including Finance, Service Providers and Online Companies. He is CCIE#6920 and has a few ideas about the world, but not enough to really count.

He is a host on the Packet Pushers Podcast, blogger at EtherealMind.com and on Twitter @etherealmind and Google Plus

You can contact Greg via the site contact page.

  • http://pl.atyp.us Jeff Darcy

    Cherry-pick much? The fact that one specialized HP brain-fart layered on top of FC seems excessively complex in no way serves as an indictment of FC in general compared to iSCSI. Why don’t you try comparing apples to apples? Find some proprietary redundant switch-trunking solution at the same bandwidth level, and see if it is free of such complexity. I guarantee it won’t be. If iSCSI is so dead simple, why have you written four articles as a primer on issues such as TCP window sizes and LACP? It’s just complexity you’ve grown comfortable with, so you pay it no mind, while complexity in something you’ve never bothered to learn seems much scarier.

If you want to make an honest argument about the complexity of iSCSI vs. FC (or anything else), don’t just pick one example of one failure mode from one vendor. Get some data on the whole set of failure modes experienced by real administrators in a statistically valid sample of real data centers (not just those you personally entered because they had already decided on the one technology you know). Get figures for time and cost of those failures, and only then can you do a real comparison.

    BTW, I did some comparisons the other week of actual gigabits per second per dollar for various interconnect technologies, and the winner was neither 10GE (which brought up the rear) nor FC but IB. Kinda blows the whole “economically compelling argument” for iSCSI out of the water, doesn’t it? It’s no less complex, it’s no more cost-effective, it’s just familiar to a different cabal interested in maximizing the dollar value of their current skills instead of developing new ones.

    • http://etherealmind.com Greg Ferro

      Well of course I cherry pick. I can (a) only speak to what I know, and (b) the iSCSI posts are a ‘stream of consciousness’ and I am learning as I write them (as stated in the first post).

      In a sense, I have stated the hypothesis that iSCSI is better than Fibrechannel and I am attempting to prove that by moving through a design process.

WRT IB, I agree that IB is better than Fibrechannel. I haven’t said so here, but from what I know now, high intensity systems should have IB and not Fibrechannel.

      I believe this supports the anti-FCOE/FC stance, since Fibrechannel is not good for low end, and not good for high end.

      I will attempt to work through the failures in future posts. I look forward to you telling me if I am covering the bases.

  • Vinf

    Hi,

    thanks for dropping by my blog..

    as I think someone has already pointed out; my article was more about how HP implement FC within their virtual connect modules. I’ll freely agree that it’s not simple and I’m not advocating it as a fantastic solution, merely documenting my findings (which seem undocumented by HP themselves!) – the HP VC module is essentially a port aggregator (not a switch) so I don’t think it will ever be simple – and I’m pretty sure there is an equivalent device in the Ethernet/iSCSI world should you choose to implement one.

    But it’s not really valid to say that FC is rubbish because of this particular vendor’s implementation of an aggregator/concentrator product.

A normal (non HP VC) FC implementation into a server (or indeed an HP blade using a pass through module, or integrated switch) isn’t any more complicated than an equivalent iSCSI implementation.. you still need redundant HBAs (NICs) connected to a pair of redundant FC switches (Ethernet switches) and you can failover/load balance down paths.

    Horses for courses but from a management (PHB) point of view using FC for storage and Ethernet for “network” keeps things isolated in terms of bandwidth to a host/service, and thus supportability – which is why FC keeps on trucking IMHO.

FWIW I agree with you; I like the iSCSI architecture and that I can use cheap, commodity GigE (or 10GigE) trunked together to scale my storage fabric horizontally rather than go cap in hand to the EMC or HP gods and pay through the nose for a SAN controller/fabric upgrade every time I need more throughput to my SAN(s).

In terms of the section with “what does that mean..?” hopefully it comes across in my post that the HP VC module is not a way to trunk FC ports together and get more overall bandwidth (like you could do with iSCSI), it merely maps physical on-blade HBAs’ ports over a backplane to an exit socket on the back of the chassis interconnect module…. so basically, it’s one level above a pass-thru connector in that you can adjust the over-subscription rate, but other than that it’s a bit dumb!

    Thanks

    • http://etherealmind.com Greg Ferro

The thing that leapt out of your post is that configuring Fibrechannel can be tortuous. Yet many discussions with Storage people include statements along the lines of ‘but Fibrechannel is so easy to use, you just plug it in and it works’.

      As I have been researching iSCSI I understand how complicated getting the Network designed and configured is.

      So if Fibrechannel is as complex as your article shows, then I think that iSCSI is about the same.

I also agree with you on using a separate network for storage, and I have suggested that in my series of articles starting here: http://etherealmind.com/2008/04/29/iscsi-network-designs-part-1-some-basics/.

      Hopefully this might help people who really want to use proper networking, instead of Fibrechannel.
