No Such Thing as an End-to-End FCoE Solution

Update: See this follow-up post for more information.

I mentioned yesterday on Twitter that I’d had something of a revelation with regard to Fibre Channel over Ethernet (FCoE). This is probably nothing new to the experienced storage intelligentsia, but I’m just a simple guy so this was a big deal. After a spirited discussion in the Cisco UCS class about how to best leverage “FCoE-capable” storage, I have come to this realization: there is no such thing as an end-to-end FCoE solution.

If you’re impatient and want the short story, here it is: even if you have an FCoE-capable storage array and FCoE converged network adapters (CNAs), you still can’t build an end-to-end FCoE solution. Why? Because you must put a standard Fibre Channel switch into the mix to provide fabric services like zoning; equipment like the UCS 6100 fabric interconnects and the Nexus 5000 doesn’t provide those services.
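
Just to make “fabric services” concrete: think of things like the fabric name server and zoning. Below is a minimal sketch of what zoning looks like on a traditional Fibre Channel switch such as an MDS 9000; the VSAN number, zone name, and WWPNs are made up purely for illustration, so check the MDS configuration guide for the authoritative syntax.

    ! Zone a host HBA together with an array port (illustrative WWPNs)
    zone name esx01_to_array1 vsan 10
      member pwwn 20:00:00:25:b5:00:00:01
      member pwwn 50:0a:09:81:00:00:00:01
    ! Collect zones into a zoneset and activate it on the fabric
    zoneset name fabric_a vsan 10
      member esx01_to_array1
    zoneset activate name fabric_a vsan 10

This is exactly the sort of configuration that, per the discussion below, the UCS 6100 fabric interconnect doesn’t carry.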

Here’s the longer version. We were having a discussion in the Cisco UCS training class revisiting the northbound FCoE connectivity issue that I discussed here. It turns out that the UCS 6100 fabric interconnect runs in NPV (or end-host) mode, so you can’t hook up any sort of storage target, FC or FCoE, directly to the UCS 6100 fabric interconnect. Even if you could run the UCS 6100 fabric interconnect in switch mode—something that’s not possible today—you still couldn’t hook a storage target, FC or FCoE, to the fabric interconnect, because the fabric interconnect doesn’t provide any fabric services. Further, even if you were to leave the UCS 6100 fabric interconnect in NPV mode and add a Nexus 5000 switch to the mix, you can’t hook the UCS 6100 and the Nexus 5000 together because FCoE isn’t multi-hop capable (yet). If I understand correctly, the FC-BB-5 standard includes FIP, which will address this limitation. However, according to the information I’m getting here—and I’m fully open to more information from others who are “in the know”—even that won’t fully address the problem, because neither the UCS 6100 nor the Nexus 5000 will offer fabric services. So you will still need a traditional Fibre Channel switch, like a Cisco MDS 9000 series, to provide fabric services.
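
To illustrate the NPV relationship in rough terms: a device in NPV (end-host) mode just proxies fabric logins to an upstream switch, and that upstream switch needs NPIV enabled so multiple logins can share a single uplink port. In generic NX-OS terms it looks something like the sketch below; the UCS 6100 itself is configured through UCS Manager rather than this CLI, so treat this strictly as a conceptual sketch.

    ! On the edge device running in NPV (end-host) mode; it proxies host
    ! logins upstream instead of running its own fabric services
    feature npv

    ! On the upstream Fibre Channel switch that does provide fabric services
    feature npiv

The point is that the NPV device is deliberately “dumb”: zoning, the name server, and the rest of the fabric services live on whatever it plugs into upstream.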

The end result is that, today, it’s impossible to build an end-to-end FCoE solution. You will still need a traditional Fibre Channel switch somewhere in the mix, whether to connect the FCoE equipment together (for example, to link a UCS 6100 fabric interconnect to a Nexus 5000), to provide fabric services, or both.

As an aside, there seems to be some confusion within Cisco: the UCS resources I’ve been speaking with confirm my conclusions, but others (consider this tweet by Brad Hedlund) say it’s not true. I don’t know who’s correct; I can only go on the information I’m being given.

As a result, it seems futile for storage vendors to offer FCoE support on their storage arrays until these issues are addressed. In my mind, this further cements FCoE as an “edge-only” solution. Adding fabric services to the Nexus 5000 and/or the UCS 6100 fabric interconnects would address this problem, and perhaps that’s something the FC-BB-5 standard and FIP now make possible. If so, I have yet to hear a timeline for when these limitations will be addressed.

Either way, if you’re thinking of deploying FCoE today, be sure to keep this in mind or you could find yourself in for a surprise.

Courteous comments and clarifications are welcome!

  1. Dave Graham

    Scott,

    This is precisely why I discuss FCoE as a future storage technology (or Future Fabric) with my customers, versus a “here and now” type of thing. FIP will assuage SOME of these issues, but again, the support has to come from the fabric infrastructure itself (which the storage has little or no influence on). So I definitely see your sentiments as spot on for the near term, though I’d argue the longitudinal view of FCoE is much more positive. ;)

    cheers,

    Dave

  2. Brad Hedlund

    Scott,
    The statement that the Nexus 5000 does not support FC fabric services (such as zoning) is incorrect. The Nexus 5000 has full FCF (Fibre Channel Forwarder) functionality.

    Here is a link to the Nexus 5000 configuration guide on zoning, for example.

    http://tr.im/uCgF

    Cheers,
    Brad

  3. slowe

    Brad, I’ll defer to your product knowledge. If these services are supported, then why do we need an FC switch in the mix in order to make all this work?

  4. Dennis Martin

    When I tested an end-to-end FCoE solution last year, there was no FC switch, only the Cisco Nexus 5020 switch providing all the switch services, connecting the host servers to the FCoE target storage. Please check my “First Look” report at http://www.demartek.com/FCoE.html.

  5. Michael Hancock

    Scott,

    Another thing to note is that current FCoE targets are Pre-FIP. mH

  6. Brad Hedlund

    Scott,
    The Nexus 5000 is an “FC switch”. It has a full FCF stack providing full fabric services to both FC and FCoE ports.

    Example: Today you can have FCoE hosts and FCoE arrays all connected to one Nexus 5000, no other switch needed.

    For multi-hop FCoE you need FIP to provide one of two things:

    1) Have the host CNA find its FCoE switch that is more than one hop away (where, ideally, the intermediate switch is FIP snooping capable). This is likely the approach UCS will take.

    2) Have one FCoE switch with hosts attached negotiate a VE_port to another upstream FCoE switch with FCoE arrays attached. This is likely the approach a non-UCS Nexus 5000 implementation will take.

    FCoE has been ratified along with specifications for FIP, so the two scenarios above will be forthcoming in time.

    If what you mean by FCoE not being “end-to-end” is that multi-hop is not here yet, then you are correct, for now, but that will not be the case for very long.

    I agree that today FCoE is an “Edge” solution primarily at the server access layer, which is not a bad thing because the server access layer is where the economies of scale in favor of FCoE exist.

    Cheers,
    Brad
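
For reference, here is a minimal sketch of the single-switch scenario Brad describes, in Nexus 5000 NX-OS terms; the VLAN/VSAN numbers and the interface are made up, and the configuration guide linked above has the authoritative syntax.

    ! Enable FCoE and map a dedicated FCoE VLAN to a VSAN
    feature fcoe
    vsan database
      vsan 10
    vlan 100
      fcoe vsan 10

    ! Trunk the FCoE VLAN on the 10GbE port facing the CNA (or FCoE array)
    interface Ethernet1/1
      switchport mode trunk
      switchport trunk allowed vlan 1,100

    ! Bind a virtual Fibre Channel interface to that port and place it in
    ! the VSAN; zoning then works just as on any other FC switch
    interface vfc11
      bind interface Ethernet1/1
      no shutdown
    vsan database
      vsan 10 interface vfc11

Repeat the vfc binding for the array-facing port and add zoning, and you have FCoE hosts and an FCoE target on a single Nexus 5000, which is the single-switch case Brad and Dennis describe.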

  7. slowe

    Like I said, I’ll defer to your product knowledge. I’ll tell you what I need: a deep dive with some Nexus and UCS gurus to work all this stuff out! Can you arrange?

  8. Russell

    I would love to attend such a deep dive.

  9. harm

    Mmmm, I’m also very curious. We have a Nexus 5020, NetApp MetroCluster, and vSphere. We have yet to configure FCoE (the cards are already built in), but from your article it sounds like it won’t be as simple as connecting the NetApp and the ESX servers to the Nexus and doing a little configuration?!? ;-)

    I hope Brad can tell us different.

    Regards

    Harm

  10. harm

    Mmmm, I was too late :-)
    Thanks, guys!

  11. Justin

    I would also be interested in a deep dive on the Cisco Nexus and UCS, as we’re looking to implement the Cisco Nexus 5K next year.

  12. Louis Gray (Emulex)

    As mentioned on Twitter, we are in Phase 3 of the FCoE “inflection process” (per Jim McCluney’s blog – http://www.emulex.com/blog/?p=72).

    The fact that today you can’t deliver a true end-to-end FCoE Solution has to do with all the ecosystem players getting their FCoE-enabled products ready for market. By mid-2010 we should be well on the way to having end-to-end FCoE solutions available. This does not mean that data centers should not start evaluating and planning for deployment.

  13. Vikram Desai

    Folks, no matter what you read from any vendor, FCoE is not a standard as yet, and you shouldn’t expect to see pervasive end-to-end functionality. The fact is that a draft has been released by the T11 committee to INCITS for public review; that’s all. Therefore, you should not invest in this area expecting a choice of vendors to be available to complete your solution for some time. Also bear in mind that if and when the FCoE standard is approved, it won’t actually WORK until we have lossless Ethernet. To that end, the very first of the three key CEE (Converged Enhanced Ethernet) standards won’t go to the IEEE review committee until March 2010 at the earliest, and the other two aren’t scheduled yet at all.

    Conclusion: don’t expect a complete FCoE solution (end-to-end or any other style) for some time – unless you are willing to accept a vendor’s bet on what the standards (plural) will be that enable that capability across the board. Solutions developed in advance of this time are by definition proprietary. Don’t fall for the marketing hype!

  14. Calvin Zito

    Vikram –

    Your comments about the FCoE standard being a “draft” are not accurate. Quoting from a post that just went live today on our HP StorageWorks blog: “This past June the Fibre Channel Standards T11.3 BB-5 (back bone) working group finalized defining the spec (or ratified) and voted to forward it to INCITS for public review and eventually publication next year. Is the BB-5 spec good enough to develop product? Absolutely!”

    Louis’ comments are spot on – the reason there aren’t end-to-end solutions isn’t the FCoE spec and its current state. Most vendors are in full swing developing to the ratified, finalized spec. And believe me, HP isn’t pushing FCoE nearly as hard as a few other vendors are.

    You can read the post on FCoE here: http://bit.ly/17aDYM.

  15. Ryan

    The question I have is do we need FCoE to be end-to-end? I think of FCoE as a transitional technology to bring costs down at the edge without throwing out existing FC investments. From a technology perspective, the iSCSI vs. FC debate came down to FC being lossless, whereas traditional Ethernet was lossy. The key to making FCoE possible is DCE, and DCE is not exclusive to FCoE. Wouldn’t/couldn’t the DCE lossless property also apply to iSCSI? If so, why would we care about end-to-end FCoE?

  16. Greg

    The idea that it must immediately be FCoE end to end right out of the gate is interesting in light of so many storage vendors just recently porting to iSCSI. The fault lies with our storage vendors, most of whom don’t provide PCIe slot-based options but rather build in proprietary modules, seemingly always evolving if you buy the next controller upgrade.
    My storage vendor for 4+ years has always had this PCIe flexibility, locking down ports for neither the front end nor the back end. And with a manageable 4+ year maintenance cost, I can stick with it for the foreseeable future. My second-gen controllers will take FCoE just fine next year, and UCS lets me position now with no technology lock-in.

  17. slowe

    Greg, southbound FCoE support is built into UCS, but remember there is no northbound FCoE support. Thanks for reading and commenting!

  18. Cam

    Hi Scott,

    You bring up a very interesting point. Since the first-gen FCoE switches are only 24 or 40 ports max, and most of those ports are intended for the server side, what is the likely path to scaling out large, CEE-based SANs?

    I think the answer is: we just don’t know. As several people mentioned, CEE is single-hop only, until yet another standard is defined. What about multipathing? Today, best practice is to turn off Spanning Tree, so how do you prevent loops? Will FIP handle this, or is this TRILL? Will TRILL work with FCoE and CEE?

    I came from the FC world, where I can tell you scalability and interoperability were no simple task. It appears once again that the hype is way ahead of the technology.
