Blessay: FCoE is JUST a Transition Technology

It’s a common fallacy among Storage Experts that FCoE will “rule the world” and that Storage equipment has special magical properties that make it different from any other technology. Chris Evans covered the most common fallacies in his post “iSCSI is the home protocol”.

Let’s look at the most common mistakes that Storage people make.

The Four Common Falsehoods of Storage

Chris’s post outlines them, and they are quite common. Here is my list:

  • The Networking people don’t know what they are doing and aren’t reliable
  • FibreChannel networks are a controlled environment
  • Networks use cheap, non-certified, open-standard equipment; FibreChannel is expensive and certified, and therefore inherently better
  • FCoE harmonises Storage and Networking at the Ethernet layer, and this is the ONLY possible solution that can work

OK. Let’s take these on. Let’s start with an understanding of Good Enough versus Overly Perfect.

iSCSI is good enough, and customers buy good enough

The bulk of the Storage industry believes that iSCSI isn’t capable enough. Having been force-fed this marketing hype for the last ten years, the industry has this mindset pretty well entrenched.

And yet, entire swathes of the Storage market are using iSCSI. LeftHand and EqualLogic, for example, have demonstrated that customers don’t always want FibreChannel, that they don’t need standards, and that good enough works.

Now, if you don’t believe that people will buy good enough, then perhaps the most powerful example of Good Enough in IT is Microsoft Windows. Even though it isn’t particularly good, fast, or cheap, customers continue to buy and implement it because it’s good enough.

Networking is reliable enough

It’s absolutely true in my opinion that today’s Networks are not ready, as a general concept, to carry storage data. There are good reasons for this.

  1. Data Network protocols are designed to be robust, so their tolerance of low-cost networking is very high. Storage protocols are not designed to be robust, so their tolerance of the network is very low. The fault here is that the FibreChannel standards are quite poorly designed, or perhaps narrowly designed with limited flexibility.
  2. Data Network spending is about 30% of the cost of an equivalent Storage network. If I spent the same money, the Data Network would have the same performance capabilities: in a Cisco environment, for example, I would implement Nexus 7000 instead of Catalyst 6500, with all the fancy go-fast bits, at a price premium of 300%. Pointing out that Data Networks are not as reliable as Storage networks is like pointing out the nose on your face.
  3. Storage networks are really small: a few hundred ports, maybe a thousand or so in some cases (and more in a few exceptional ones). Data networks have several orders of magnitude more ports, and vastly more complex performance requirements. Scaling FC could be solved, but why bother?
  4. Data Networks have vastly more complex requirements than a Storage network. An FC storage network is focussed on a single task, whereas a Data Network is sophisticated in its ability to handle a wide range of loads, tasks, applications and user needs.
  5. Data Networking people haven’t needed to deliver the capabilities and service levels that FC forced on Storage people. But we are already training for, and moving to, new designs that deliver the same outcome. Look at the Nexus 5000, which handles iSCSI, FC and FCoE just as well as VoIP, SAP, Oracle and other mission-critical services.

My point here is that Data Networks could be as reliable as, and perform to the same level as, legacy FC Storage, but historically people have taken the “Good Enough” choice.

The good news is that Data Centre networks will be designed and engineered to the same level as Storage networks have been, and that will deliver the results that are needed.

Standards, Certification and Unicorn Tears

Let’s remember back to IBM mainframes and the aftermarket add-ons that required IBM to certify each and every product as compliant with their hardware and software. It worked for a while, but customers rapidly left mainframes for new technologies once they became viable.

Claims that Fibre Channel networks are great because “they are expensive and approved” do not match the reality of the marketplace or of history. Customers will rapidly move to other technologies once they are shown to work well enough. Because the Storage market is so early in the technology cycle, the idea of ‘blessing’ products with validation, or certifying that a product has been bathed in unicorn tears, is just a passing phase. Eventually we will arrive at truly open standards that interoperate properly and correctly without requiring special techniques.

How do I know this? Because that’s what Data Networking did in the late 1980s. Hark back to events such as ‘Interop’.

Note: IBM Mainframes still exist today for those use cases where they are excellent, and the future of storage will still have use cases where FC & FCoE make sense, but in the long run people will spend the bulk of their money elsewhere.

FCoE is not the ONLY choice

Read it and weep: IT ISN’T. There are several choices, including iSCSI, FibreChannel, Infiniband and new protocols that haven’t yet been developed. FCoE is merely the current choice; previously the Storage industry had declared several other protocols to be the future.

Think of FICON, ESCON, iSCSI (in the early 2000s this was THE storage protocol) and Infiniband; today it’s FibreChannel. Tomorrow’s choice is FCoE, and after that?

After that it’s probably iSCSI, because the 802.1 DCB standards solve the networking problems that stopped iSCSI from large-scale acceptance. Or possibly a new protocol that doesn’t use TCP and just encapsulates the data in an IP packet as a block, thus removing the processing load of the TCP header.
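
To make that overhead argument concrete, here is a minimal Python sketch (illustrative only, not any real protocol) comparing the per-packet header bytes of classic iSCSI over TCP with a hypothetical IP-native block protocol. The header sizes are the standard minimums; the block header layout (opcode, flags, LUN, LBA, length) is invented for illustration.

    import struct

    IP_HDR = 20      # minimum IPv4 header
    TCP_HDR = 20     # minimum TCP header (no options)
    ISCSI_BHS = 48   # iSCSI Basic Header Segment

    def iscsi_over_tcp_overhead() -> int:
        """Header bytes per packet for classic iSCSI over TCP/IP."""
        return IP_HDR + TCP_HDR + ISCSI_BHS

    def ip_native_block_overhead() -> int:
        """Header bytes for a hypothetical IP-native block protocol:
        opcode (2), flags (2), LUN (4), LBA (8), length (4) = 20 bytes."""
        block_hdr = struct.pack("!HHIQI", 1, 0, 0, 4096, 4096)
        return IP_HDR + len(block_hdr)

    if __name__ == "__main__":
        block = 4096  # one 4 KB block per packet (assumes jumbo frames)
        for name, ovh in [("iSCSI/TCP", iscsi_over_tcp_overhead()),
                          ("IP-native", ip_native_block_overhead())]:
            print(f"{name}: {ovh} header bytes, "
                  f"{ovh / (ovh + block):.1%} of wire bytes")

Run it and the byte saving turns out to be modest (88 versus 40 header bytes per 4 KB block); the real saving being argued for is the per-packet TCP processing, such as ACKs, reordering and congestion state, which a header count doesn’t capture.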

FCoE is a migration technology

Because FCoE takes the existing FC installed base, and neatly removes all of the storage industry’s objections to connecting to an Ethernet network, it sets up the Data Centre for a migration away from FibreChannel. That, ultimately, is the primary purpose of FCoE. FCoE is not the end point, or the final protocol for storage; it’s just the next step in a rapidly maturing industry.

The Storage industry appears to be stuck in the past. It failed to adopt iSCSI in 2002/2003, and it failed to adopt Infiniband in 2005/2006. It kept banging away at making FibreChannel work, until it actually did.

Networking isn’t exactly a saint here

Let’s remember that Networking has had the same pains; we’ve been through all this before. Token Ring and FDDI were also technologies superior to Ethernet. I wouldn’t argue that Ethernet is perfect, far from it, but many years of banging away at making it work have produced a protocol that works. Its flaws are many: it cannot readily scale, and it has a number of inherent limitations that will take years for silicon and software to work around (RDMA over Ethernet is a case in point).

iSCSI works when correctly designed

From a networking perspective, iSCSI has some quite specific design factors that make it difficult to perform at high speed. However, it is much more robust and capable than FCoE and will work in a much wider variety of Data Centre backbones than FCoE ever will. This is what makes it especially attractive for SME deployments.

iSCSI needs to be updated too

iSCSI will always be performance limited while it uses TCP as its transport mechanism. At some point, someone is likely to develop a new version of iSCSI that runs either over UDP (as NFS does) or directly encapsulated in IP as an IP-native protocol.
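
Dropping TCP isn’t free, though: a UDP transport pushes retransmission and loss detection up into the storage protocol itself, which is how NFS over UDP handles it (the RPC layer retries until it sees a reply). Below is a toy Python sketch, under those assumptions, of the retry loop a hypothetical UDP-based iSCSI initiator would need; the target address, port and payload are invented for illustration.

    import socket

    def send_block_with_retry(sock: socket.socket, addr, request: bytes,
                              retries: int = 5, timeout: float = 0.2) -> bytes:
        """Send one block request over UDP, retransmitting until a reply arrives."""
        sock.settimeout(timeout)
        for _ in range(retries):
            sock.sendto(request, addr)
            try:
                reply, _ = sock.recvfrom(65535)  # status/data from the target
                return reply
            except socket.timeout:
                continue  # the request or the reply was lost: just resend
        raise TimeoutError(f"no reply after {retries} attempts")

    # Hypothetical usage against an imaginary target on UDP port 3260:
    # sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    # status = send_block_with_retry(sock, ("192.0.2.10", 3260), b"WRITE ...")

A naive resend like this also re-opens questions that TCP already answers, such as duplicate detection, ordering and congestion response, which is why this would be a protocol redesign rather than a simple transport swap.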

This type of change would improve iSCSI performance to match FCoE. However, I think that development of this new protocol won’t happen until the Cisco FCoE marketing effort finally drives DCB into the data centre and actual deployment. That is, any new developments in storage will stall until the current changes around FCoE either fail, or reach some level of acceptance.

FCoE is the only solution

It isn’t. Infiniband is significantly cheaper, and is much faster. Whether it can impact the market without a billion-dollar marketing program à la Cisco remains to be seen.

And Storage over IP still hasn’t been updated with modern concepts. Choosing to move block storage data over TCP isn’t a great idea.

FCoE suits Cisco and Brocade

FCoE as a solution is a marketing move by Cisco that is now being followed by Brocade. The networking industry has been marginalised into dumb switches and dumb networking for a number of years. Cisco has constantly attempted to develop products that promote Smart Networks and failed, thus leaving it in a commodity market. This is marked by the rise of HP ProCurve products and other merchant-silicon plays such as Arista, and equally by the failure of Nortel, which was unable to compete.

Both Cisco and Brocade need a ‘smart network’ so that they can add value to their products and not be marginalised into a commodity market such as the Intel server market. Because FCoE requires the switches to participate significantly in the storage layer, it’s a way of building another ‘Intelligent Network’.

What Networking still doesn’t have

The biggest missing element of a modern data network is decent, workable management tools. I constantly battle with the software that manages my network and no product today is up to the task of automation, service management and operational control of a data network. But the focus on the Data Centre appears to be changing this. FCoE mandates the use of management platforms that provide effective tools. These tools are going to have a large impact on the networking industry.

Wrapping It Up

So FCoE is a transition technology: it merges storage onto the Ethernet network and sets up the eventual move to Storage over IP. It may take a few years, but Storage over IP will eventually dominate. The data networks of today are capable, but we need to develop new management platforms and just a few new skills. The addition of serious cash to upscale our network products will see Data Networking deliver a service far better than a legacy FibreChannel system: one that is flexible, multifunctional and better value for money.

So don’t pre-judge technologies like iSCSI. Whether you like it or not, converged Storage and Data Networks will happen. I don’t really mind that another service is coming onto my network, it’s not much different from the ones that we already have in terms of reliability and performance. A few changes here and there, and Data Networking will be up to the job.

About Greg Ferro

Greg Ferro is a Network Engineer/Architect, mostly focussed on Data Centres, Security Infrastructure, and recently Virtualization. He has over 20 years in IT, working as a freelance consultant for a wide range of employers, including Finance, Service Providers and Online Companies. He is CCIE #6920 and has a few ideas about the world, but not enough to really count.

He is a host on the Packet Pushers Podcast, a blogger at EtherealMind.com, and is on Twitter as @etherealmind and on Google Plus.

You can contact Greg via the site contact page.

  • John Dias

    I agree with much of what you have stated here, all cogent points but what’s the underlying premise? Are you trying to argue that iSCSI will ultimately be the de facto storage protocol? I’ve never considered that my network peers didn’t know what they were doing, but rather that they’re solving/designing for different solutions – this is why storage is, well, special.

    The problem is serializing SCSI, a very intolerant protocol. Taking something that wasn’t really meant to be strung out about the data center floor and trying to make it perform as if it’s still local to the host. Nasty. A kludge, really. But necessary due to backwards compatibility requirements.

    FCoE, in my opinion, doesn’t move anyone closer to storage over IP. There’s no IP involved at all from what I can tell – simply a replacement of the layer 2 transport. Actually an encapsulation of one layer 2 transport within another.

    In the end, iSCSI and FCoE (and FC for that matter) are use-case solutions to the point of centralizing storage for better utilization and overcoming the limitations of a protocol designed for local attach. If one becomes predominant it will be because its use case is more applicable in the field, not because it is inherently better.

    • http://etherealmind.com Greg Ferro

      What bothers me about this debate is the perception that iSCSI is somehow inferior. The Storage industry is quite inbred in its thinking. The fact that Storage continues with the IDEA of using SCSI at all is astonishing. Is there no innovation to move beyond SCSI for I/O signalling? Why not?

      Why not use RDMA for storage, or a streaming protocol such as SCTP?

      My guess? Probably because the Storage business is still early phase and nothing is very mature.

      • http://www.tttech.com M Jakovljevic

        The data storage industry is talking about FC, iSCSI, and XY-variants over Ethernet. How would the industry react to Ethernet with an add-on service that allows TDM-style communication and exact bandwidth partitioning using standard IEEE 802.3, so that synchronous and asynchronous messages can be exchanged via Ethernet? This would allow low-latency applications to work in a complex network together with standard LAN applications …

        • http://etherealmind.com Greg Ferro

          That was done in the late 1990s. Twice. One was called VG-AnyLAN and was invented by HP, and the other was called Isochronous Ethernet, created by the IEEE.

          Didn’t get anywhere though.

          • http://www.tttech.com M Jakovljevic

            My impression is that VG-AnyLAN didn’t make it because it was too far from IEEE 802.3 switched Ethernet and was designed to cover both Token Ring and Fast Ethernet. The principle it used is not really different from the priority-based (e.g. round-robin) schemes added to switched Ethernet a few years later. This means it is still best-effort Ethernet, and cannot accommodate isochronous or low-latency data streams.

            Isochronous Ethernet was not successful because it relied on ISDN and 10 Mbit Ethernet, so it was almost obsolete before the standard was completed.

            Existing congestion management (CM) approaches do not solve the needs of lossless and low-latency data transfers over Ethernet, and it is questionable whether CM can do the job any time soon.

            The solution could be in SAE AS6802: this set of Layer 2 services on top of IEEE 802.3 allows lossless and low-latency data transfers in open or mixed-application Ethernet networks. And it works with 1GbE, 10GbE or 100GbE. This seems interesting for storage applications, but it is just a guess …

  • Interesting Perspective

    In my experience I have never seen iSCSI work well for a large number of servers pushing tons of data; it’s a corner case for SMB. Speeds over a gig are not the right thing for iSCSI, from what I have seen.

    FC is the right solution; scale, resiliency, futures, etc. are best served here.

    What FCoE offers is the ability to ride the same wire and save space/power/cooling/etc; that is the major hurdle today. Additionally, imagine an environment with 40G/100G or higher networks and having storage just ride the wire we use.

    The future is brighter with FCoE than iSCSI. Just have to accept it and move on. Juniper/Brocade/Cisco/HP will have FCoE within the year, so grab hold and hang on.

    • http://etherealmind.com Greg Ferro

      Your experience must be limited. In my experience, I have seen entire data centres of hundreds of servers (more than a thousand counting VMs) using iSCSI as the primary storage protocol, with some FC used for certain corner cases. I have also seen the more traditional FC convention.

      The iSCSI price was about 40% that of the FC solution and required a lot less maintenance. Storage admins didn’t need to keep fiddling with it.

      Now that iSCSI is becoming accepted, performance of iSCSI drivers is becoming important. vSphere (VMware) has increased iSCSI performance significantly, with some tests showing a three-times increase in throughput and speed.

  • http://storageioblog.com Greg Schulz

    Saying that storage folks are inbred around SCSI would be akin to someone foolishly saying networking people are inbred around IP; both are absurd assertions or perceptions that IMHO are off base.

    iSCSI has its place, as do FCP and SAS, as well as SCSI on InfiniBand, e.g. SRP (native) or iSCSI (mapped onto IP), either with or without RDMA for moving blocks of storage, not to mention co-existing complementary things such as FCIP for spanning distances when xWDM/SONET/SDH is not viable.

    However, keep in mind: if you don’t like SCSI, why iSCSI? It maps the SCSI command set onto IP, similar to how FCP maps the SCSI command set onto Fibre Channel, SAS uses the SCSI command set on serial cabling, traditional legacy parallel SCSI (what many people think of as SCSI) uses it on a parallel bus, and SRP maps the SCSI command set natively, without IP, onto InfiniBand. Of course you can also map iSCSI onto IP onto InfiniBand, however that gets into a different discussion.

    I’m not clear whether it’s SCSI or FC or block protocols that you are not a fan of, which is fine; to each his own, and that’s the cool thing about technology options: you can use what you like to address different issues and challenges.

    However, if you don’t like moving blocks and want IP, then drop iSCSI and go straight to NFS/CIFS or some other protocol and transport. If, on the other hand, you like iSCSI, then in some shape or form you like, or at least utilize, the SCSI command set.

    Show me something that is not transitory over some period of time and I will show you a truly revolutionary technology. ;)

    Now to be fair, the key is what is the time span or timeline of the transition?
    Is it months, years or decades?

    FC is a transition technology that started at quarter speed in the early 90s and will go into the next decades with 16G, perhaps even 32G, as investment protection for those on that path until they transition to other technologies. That makes for a multi-decade transition.

    Look at how base Ethernet has transitioned from 10/100 to 10/100/1000 to 1000/10000 to 100GbE and beyond, not to mention different media and functionality.

    iSCSI has its place, as do SAS, FC, FCoE, IP, PCoIP, NFS/CIFS, pNFS and so forth; they all play to different value propositions, price points, feature/functionality and personal preferences. With the exception of SAS, the common-denominator transition is towards Ethernet, with the ability for different upper-level protocols and transports to co-exist.

    Sure, things would be great if everyone could get to IP as the common denominator; however, that’s not going to happen overnight, and probably not over the course of a decade. Further out, perhaps.

    IMHO the trick is to have multiple tools in your technology toolbox and use what makes sense, instead of only having a hammer so that everything looks like a nail, or needs to be addressed with the claw end.

    Cheers gs

  • Jason Gurtz

    What about ATAoE? I’ve never understood why no one talks about it.

    • http://etherealmind.com Greg Ferro

      I understand (but have not researched) that ATA is not a very good multiuser technology. SCSI is much better at parallel actions from multiple sources. ATA was used in desktops with a single command pipeline from a single OS, whereas SCSI was always designed to handle multiple CPUs (it’s derived from mainframes with multiple processors). Thus ATAoE hasn’t been used for servers or for large storage arrays.

      I could be wrong though.

  • Sam

    In my opinion, one of the main issues with storage over an Ethernet and/or IP network is, and always has been, that Ethernet with Spanning Tree Protocol cannot effectively mitigate intermediate link congestion using traffic engineering techniques; STP’s purpose is to block looped topologies. To deliver Storage over Ethernet, something like TRILL or 802.1aq (I think), which allows multiple paths to be used, is needed. Even TRILL won’t solve signalling a topology change to a new path for a storage flow to reduce or eliminate intermediate link congestion, at least as far as I know.

    IP networks have similar problems and lack a traffic engineering mechanism to move traffic to under-utilized links to reduce congestion. This was resolved in the telco world by developing MPLS with first manual TE, then automated TE capabilities, so that IP flows could be encapsulated in MPLS labels and then have traffic engineering applied to them.

    Perhaps something similar would help storage over Ethernet and IP, solving the problems with FC’s credit method of congestion management, and with the performance of extended FC topologies over long distances?

    • http://etherealmind.com Greg Ferro

      Your understanding seems confused. TRILL/RBridges will provide more bandwidth across the network by using L2 multipath, and that will help carry the large volumes of storage traffic in the network core.

      IP networks have always had traffic engineering, but Ethernet and FC have only really simple mechanisms using CoS.

      New standards from the IEEE DCB working group are addressing these problems: congestion management, QoS, credit-style flow control, and so on. You can find more information in other blog posts (search on DCB) or at the IEEE website.

  • Dan

    AFAIK, iSCSI can’t run over UDP, as SCSI can’t tolerate any packet loss. There is no packet recovery in SCSI, and a single packet loss will trigger a long channel recovery and reset, which can take more than a few seconds.

    That’s why FCoE is not just FC over Ethernet in the way VoIP is voice over IP. That’s why they needed to use Priority-based Pause signalling in Ethernet to make sure there are enough buffers to avoid dropping FC traffic.

    My 2 cents: you need to change SCSI too, to fully adjust it to data networks.

    • http://etherealmind.com Greg Ferro

      iSCSI uses TCP, which ensures that data is not lost (the TCP stack detects drops and resends as necessary).

      SCSI can tolerate packet loss, and always has been able to, but for practical use the loss rate must be very low: FC specifies a loss rate of one in 10^8. Once Priority-based Pause guarantees the delivery of iSCSI TCP packets, the performance of iSCSI will be very close to FCoE, while being much less expensive and easier to support.

      iSCSI has been shown to perform at up to a million IOPS by Microsoft and Intel. That’s why FC is dying.
