Holy schmoly, someone in the storage industry did something new. Geoff Arnold from Speaking in Clouds writes: Yesterday Seagate introduced its Kinetic Open Storage Platform, and I’m simply blown away by it. It’s a truly elegant design, “as simple as possible, but no simpler”. The physical interconnect to the disk drive is now Ethernet. The interface […]
Continuing the series from the Brocade Virtual Symposium. In a special video session that was sponsored by Brocade, we got Chip Copper in the room with Stephen Foskett to talk about storage convergence.
Over the last few years, I've been very critical of Ethernet storage protocols like [FCoE](http://etherealmind.com/tag/fcoe/) and sceptical that storage protocols will work well over Ethernet. There were a few times here where Chip gave me answers and a different viewpoint that changed my take on the solutions.
Vitaly left a comment on a blog post with a clever IOS CLI Regex tip. I thought I would pick that apart as an exercise.
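The tip itself is in the post, but as a hypothetical illustration of what "picking apart" an IOS regex involves: IOS BGP regexes give the underscore a special meaning (start or end of string, space, comma, braces, parentheses), which can be approximated in standard regex syntax to see what a pattern actually matches.

```python
import re

# Hypothetical illustration only, not the tip from the post: translate
# the IOS-style BGP pattern "_65000$" into a standard regex. The IOS
# underscore matches any AS-path delimiter or string boundary.
IOS_UNDERSCORE = r"(?:^|$|[ ,{}()])"
pattern = re.compile(IOS_UNDERSCORE + r"65000$")

as_paths = ["65000", "100 65000", "100 65001", "165000"]
matches = [p for p in as_paths if pattern.search(p)]
print(matches)  # → ['65000', '100 65000']
```

Note that "165000" does not match: the delimiter requirement stops 65000 from matching as a substring of a longer AS number, which is exactly why the underscore exists in IOS regex syntax.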
I’ve been doing some research into Ethernet and the use of Jumbo frames for some content I’ve been writing, and came across something interesting. The documents state that Jumbo frames can only be used on Full Duplex Ethernet connections.
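As a back-of-the-envelope sketch of why jumbo frames are worth caring about in the first place, compare payload efficiency at the standard 1500-byte MTU against a common 9000-byte jumbo MTU. The header sizes below are assumptions: IPv4 and TCP without options, plus Ethernet framing.

```python
# Rough efficiency comparison; header sizes are assumed (no options).
ETH_OVERHEAD = 18    # 14-byte Ethernet header + 4-byte FCS
IP_TCP_HEADERS = 40  # 20 bytes IPv4 + 20 bytes TCP

def payload_efficiency(mtu):
    """Fraction of on-the-wire bytes that is actual payload."""
    payload = mtu - IP_TCP_HEADERS
    return payload / (mtu + ETH_OVERHEAD)

print(f"1500 MTU: {payload_efficiency(1500):.1%}")  # → 96.2%
print(f"9000 MTU: {payload_efficiency(9000):.1%}")  # → 99.4%
```

A couple of percent of wire efficiency sounds small, but the bigger win is usually fewer frames, which means fewer interrupts and less per-packet CPU work on the hosts.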
In recent weeks, J. Michael Metz from Cisco entered a ****ing contest with my friend Greg and decided to prove that the FCoE standards are done.
J. Michael Metz attempted a sleight-of-hand magic trick to astroturf over the lack of progress on the Ethernet standards. Unpleasant.
This week has seen a lot of talk in the industry around the need for FibreChannel over Token Ring. Let's take a look at the myths and magic of this amazing opportunity.
So a while back, Cisco, VMware and EMC announced that they are forming a partnership to co-operatively sell and support products in a joint venture named Acadia. Selected engineers and sales grunts, USD $200 million, and a “no large customer left untouched” door-to-door marketing campaign. Is there anything to it?
Chris Evans believes that “FCoE” will rule the world. Hah. I believe it’s just a transition technology that drags Storage out of its closed, proprietary mindset into the open, collaborative world. Data Networks can easily adapt.
There is a misconception among many in the storage industry that FCoE is some type of replacement for InfiniBand. My view is that FCoE is a cheaper, dumber, but MARKETABLE alternative.
The iSCSI protocol carries block data from the storage array to the server. Typically, that happens inside a data centre where loss is not a problem. So why use TCP and take on all that overhead?
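To put a rough number on "all that overhead", here is a hedged sizing sketch for a single 4 KB iSCSI read over TCP/IPv4/Ethernet at a 1500-byte MTU. The header sizes are assumptions (no TCP options, one 48-byte iSCSI basic header segment per PDU); real traffic varies.

```python
import math

# Hypothetical sizing sketch; all header sizes below are assumptions.
BLOCK = 4096               # bytes of SCSI data in the read
ISCSI_BHS = 48             # iSCSI basic header segment, one per PDU
MSS = 1460                 # TCP payload per 1500-byte MTU frame
PER_FRAME = 18 + 20 + 20   # Ethernet + IPv4 + TCP headers per segment

payload = BLOCK + ISCSI_BHS           # bytes pushed into the TCP stream
frames = math.ceil(payload / MSS)     # segments needed on the wire
wire_bytes = payload + frames * PER_FRAME
overhead = 1 - BLOCK / wire_bytes
print(frames, wire_bytes, f"{overhead:.1%}")  # → 3 4318 5.1%
```

Around five percent of wire bytes is protocol framing, and that ignores ACKs and the CPU cost of running TCP itself, which is the real argument behind the question.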
FibreChannel over Ethernet (FCoE) is coming to your data centre. The problem is that FCoE doesn’t work properly on ordinary Ethernet switches. What do I mean by “properly”? We need new standards to let it reach its potential, but there are too many acronyms.
Buried in this article from The Register is a paragraph detailing Emulex’s Gen2 chip that does all storage virtualisation in a single chip. The FC boosters will be unhappy; they will have to compete.
A number of storage bloggers have been questioning the relevance of FCoE and why you would bother with it. Given the size of the marketing budgets pushing FCoE and the reality distortion this generates, let’s look at the case AGAINST FCoE.
TRILL is a key network technology for enabling Cloud Computing, allowing for better migration of VMs, better utilisation of the network switching fabric, and much improved stability of the Data Centre Server Fabric.
One of the most amusing parts of Fibre Channel over Ethernet (FCoE) is that Spanning Tree is making a triumphant comeback. And I am talking a Roman-style parade after the gates to the city have been built and the streets lined with gold.
I was thinking over the integration of an IT Engineering team to provide cloud computing services. While discussing team responsibilities and operational “edges”, I realised how divisive and dysfunctional FCoE will be for a team.
I note this quote from a recent article on The Register where Brocade talks about their future strategy.
iSCSI Network Designs: Part 5 – iSCSI Multipathing, Host Bus Adapters, High Availability and Redundancy
In iSCSI Part 3 – Server Side – iSCSI Host Bus Adapters and IP Performance, I looked at how server-side issues would affect the traffic generated on a per-server basis. I recommended that you use iSCSI HBAs for high-intensity servers to meet the required levels of performance.
The next step is to evaluate how the server should connect to the network: specifically, how many Ethernet ports you need, and what configuration is needed to support them to deliver high availability / redundancy and increased bandwidth.
I got linked from Dante Malagrino at the Cisco Data Center blog yesterday. He writes a good post on why FCoE might be a good idea. Let me just say I am not only anti-FCoE, I am anti-FibreChannel.
My rebuttal after the jump…