Planning for FCoE in Your Data Center Today

By Aaron Delp
Twitter: aarondelp
FriendFeed (Delicious, Twitter, & all my blogs in one spot): aarondelp

This week I’ve had the privilege of attending a Cisco Nexus 5000/7000 class. I have learned a tremendous amount about FCoE this week and after some conversations with Scott about the topic, I wanted to tackle it one more time from a different point of view. I have included a list of some of Scott’s FCoE articles at the bottom for those interested in a more in-depth analysis.

Disclaimer: I am by no means an FCoE expert! My working knowledge of FCoE is about four days old at this point. If I am incorrect in some of my situations below, please let me know (keep it nice and professional, people!) and I will be happy to make adjustments.

If you are an existing VMware customer today with FC as your storage transport layer, should you be thinking about FCoE? How would you get started? What investments can you make in the near future to prepare you for the next generation?

These are questions I am starting to get from my customers in planning/white board sessions around VMware vSphere and the next generation of virtualization. The upgrade to vSphere is starting to prompt planning and discussions around the storage infrastructure.

Before I tackle the design aspect, let me start out with some hardware and definitions.

Cisco Nexus 5000 series switch: The Nexus 5K is a Layer 2 switch that is capable of FCoE and can provide both Ethernet and FC ports (with an expansion module). In addition to Ethernet switching, the switch also operates as an FC fabric switch providing full fabric services, or it can be set for N_Port Virtualization (NPV) mode. The Nexus 5K can’t be put in FC switch mode and NPV mode at the same time. You must pick one or the other.

N_Port Virtualization (NPV) mode: NPV allows the Nexus 5K to act as an FC “pass thru” or proxy. NPV is great for environments where the existing fabric is not Cisco and merging the fabrics could be ugly. There is a downside to this. In NPV mode, no targets (storage arrays) can be hung off the Nexus 5K. This is true for both FC and FCoE targets.
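To make that a little more concrete, here is a rough sketch of what flipping a Nexus 5K into NPV mode looks like in NX-OS. The interface number is a placeholder, and note that (at least in the NX-OS releases covered in the class) enabling NPV erases the running configuration and reloads the switch, so plan for that. The upstream fabric switch also needs NPIV enabled for the proxied logins to work.

    ! Put the 5K into NPV mode (this wipes the config and reloads the switch)
    feature npv

    ! FC uplink toward the existing fabric; the port number is made up
    interface fc2/1
      switchport mode NP
      no shutdown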

Converged Network Adapter (CNA): A CNA is a single PCI card that contains both FC and Ethernet logic, negating the need for separate cards, separate switches, etc.
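On the host side, the practical effect is that one card shows up to ESX as both a NIC and an HBA. If memory serves, something like the following from the vSphere/ESX service console will list the same CNA twice, once as a vmnic and once as a vmhba (treat the exact commands as a sketch, not gospel):

    esxcfg-nics -l        # the CNA's 10GbE side shows up as a vmnic
    esxcfg-scsidevs -a    # its FC personality shows up as a storage adapter (vmhba)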

Now that the definitions and terminology are out of the way, I see four possible paths if you have FC in your environment today.

1. FCoE with a Nexus 5000 in a non-Cisco MDS environment (merging)

In this scenario, the easiest way to get the Nexus on the existing non-Cisco FC fabric is to put the switch in NPV mode. You could put the switch (and all the existing FC switches) in interop mode, but it is a nightmare to get them all talking, and you often lose vendor-specific features in interop mode. Plus, to configure interop mode, the entire fabric has to be brought down. (You do have redundant fabrics, right?)

With the Nexus in NPV mode, what will it do? Not much. You can’t hang storage off of it. You aren’t taking advantage of Cisco VSANs or any other features that Cisco can provide. The Nexus is merely a pass-thru. The zoning is handled by your existing switches; your storage is off the existing switches, etc.

Why would you do this? By doing this, you could put CNAs in new servers (leaving the existing servers alone) to talk to the Nexus. This will simplify the server-side infrastructure because you will have fewer cables, cards, switch ports, etc. Does the cost of the CNAs and new infrastructure offset the cost of just continuing the old environment? That is for you to decide.
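For what it’s worth, the server-facing FCoE plumbing looks roughly the same whether the 5K is a pass-thru (NPV) or a full fabric switch: enable FCoE, map an FCoE VLAN to a VSAN, and bind a virtual FC interface to the CNA’s 10GbE port. All of the VLAN/VSAN numbers and interface names below are invented for the sketch.

    ! FCoE has to be enabled regardless of NPV or switch mode
    feature fcoe

    ! Map an FCoE VLAN to the VSAN the server traffic belongs to
    vlan 100
      fcoe vsan 100

    ! The 10GbE port the CNA plugs into carries both the data and FCoE VLANs
    interface Ethernet1/4
      switchport mode trunk
      switchport trunk allowed vlan 1,100

    ! Virtual FC interface for that CNA, placed into the VSAN
    interface vfc4
      bind interface Ethernet1/4
      no shutdown

    vsan database
      vsan 100 interface vfc4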

2. FCoE with a Nexus 5000 in a non-Cisco MDS environment (non-merging)

Who says you have to put the Nexus into the existing FC fabric? We have many customers who purchase “data centers in a box”. By that I mean a few servers, FC and/or Ethernet switches, storage, and VMware all in one solution. This “box” sits in the data center and the network is merged with the legacy network, but we stand up a Cisco SAN next to the existing non-Cisco SAN and simply don’t let the two talk to each other. In this instance, we would use CNAs in the servers, the Nexus as the switch, and you pick a storage vendor. This will work just like option 3.

3. FCoE with a Nexus 5000 in a Cisco MDS environment

Now we’re talking. Install the Nexus in FC switch mode, merge it with the MDS fabric, put CNAs in all the servers and install the storage off the Nexus as either FC or FCoE. You’re off to the races!

You potentially gain the same server-side savings by replacing FC and Ethernet in new servers with CNAs. You are able to use all of Cisco’s sexy FCoE features. Nice solution if the cost is justified in your environment.
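As a rough illustration of what option 3 adds on the 5K itself, the sketch below (full FC switch mode, not NPV) creates a VSAN, brings up an FC-attached array port on the expansion module, and zones a server’s CNA to it. Every interface number, VSAN, and WWN here is made up; the point is simply that fabric services and zoning now live on the Nexus.

    feature fcoe

    ! Create the VSAN and put the array's FC port (expansion module) in it
    vsan database
      vsan 100
      vsan 100 interface fc2/2

    interface fc2/2
      switchport mode F
      no shutdown

    ! Zone the server's CNA to the array target (both WWNs are placeholders)
    zone name esx01-array vsan 100
      member pwwn 20:00:00:25:b5:aa:00:01
      member pwwn 50:0a:09:83:00:00:be:ef

    zoneset name fabric-a vsan 100
      member esx01-array

    zoneset activate name fabric-a vsan 100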

4. Keep the existing environment and use NFS to new servers

What did I just say? Why would I even consider that option?

OK, this last one is a little tongue-in-cheek for customers who are already using FC. NFS vs. traditional block storage for VMware is a bit of a religious debate. I know you aren’t going to sway me and I know I’m not going to sway you.

I admit I’m thinking NetApp here in a VMware environment; I’m a big fan, so this is a biased opinion. NetApp is my background, but other vendors play in this space as well. I bet Chad will be happy to leave a comment to tell us why (and I hope he does!).

Think of it this way. You’re already moving from FC cards to CNAs. Why not buy regular 10Gb Ethernet cards instead? Why not just use the Nexus 5K as a line-rate, non-blocking 10Gb Ethernet switch? This configuration is very simple compared to FCoE at the Nexus level, and management of the NetApp is very easy! Besides, you could always turn up FCoE on the Nexus (and the NetApp) at a future date.
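For completeness, here is roughly what option 4 looks like end to end: the Nexus port is just an ordinary 10GbE edge trunk, and the ESX host mounts the NFS export with the standard esxcfg tools. The VLANs, IP addresses, export path, port group, and vSwitch names are all invented for the example.

    ! Nexus side: nothing FCoE-specific, just a 10GbE trunk to the server
    interface Ethernet1/10
      switchport mode trunk
      switchport trunk allowed vlan 100,200
      spanning-tree port type edge trunk

    # ESX side: port group and VMkernel interface for NFS, then mount the export
    esxcfg-vswitch -A NFS-PG vSwitch1
    esxcfg-vmknic -a -i 192.168.100.21 -n 255.255.255.0 NFS-PG
    esxcfg-nas -a -o 192.168.100.10 -s /vol/vmware_datastore netapp_nfs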

In closing, I really like FCoE, but as you can see it isn’t a perfect fit for all environments today. I see this taking off in the next 1-2 years and I can’t wait. Until then, use caution and ask all the right questions!

If you are interested in some more in-depth discussion, here are links to some of Scott’s articles on FCoE:

Continuing the FCoE Discussion
Why No Multi-Hop FCoE?
There Might Be an FCoE End to End Solution


6 comments

  1. Justin

    Aaron,
    By the same token, you could argue for 10GbE iSCSI over FCoE. Both NFS and iSCSI for VMware environments would have similar benefits.
    10GbE iSCSI is available from several vendors today, and, as you said, doesn’t require special adapters or special switches.

  2. Vaughn

    Scott,

    Nice write-up. Cisco is demonstrably ahead of the pack in delivering ‘multi-protocol’ server connectivity. Increased bandwidth, enhanced automation, and port count reductions: what’s not to like?

    With that said (as you know), there’s more to the conversation than just protocols and connectivity. VMware, and the concept of large shared storage pools, has a unique way of extracting the benefits of storage protocols and storage array architectures while exposing their limits.

    As an example… the most well-known issue is around FC/FCoE/iSCSI LUN queue depths. Deploying FCoE over 10GbE won’t improve the ability to scale a datastore to house more VMs (unless you purchase arrays architected without LUN queue limits).

    Conversely, customers who run very VM-dense datastores over NFS are challenged to leverage a 1GbE infrastructure to serve their most I/O-demanding systems (VMs running apps requiring more than ~160 MB/s of throughput).

    10GbE solves these issues, as it provides bandwidth for the most I/O-demanding apps, and the ability to support FCoE, iSCSI, & NFS adds support for virtually every app and tool set under the sun.

    Nice work Cisco and nice coverage Scott.

  3. slowe

    Vaughn,

    Thanks for the feedback. I’ll pass your positive comments on to Aaron, who wrote this post. :-)

  4. adelp

    @Vaughn – Thank you for your comments on the article.

    @Justin – Good point and something I will address in another post I am writing right now. The problem I see with iSCSI (in a VMware environment) is that it just isn’t “different enough” from FCoE.

    Again, I’m speaking from a NetApp perspective, but I see FCoE as the future transition path for existing FC shops and I see NFS working in non-FC shops. NFS on NetApp is so much easier than iSCSI (on the NetApp side, not the network side) and has some great design benefits that I will get into in the next article.

    Thank you for your comments!

  5. David

    This is an interesting writeup mostly because I sat through a similar class on FCoE from Cisco some time ago and was persuaded to take our clients to your option 4 – NFS over 10GbE using Nexus gear.
    For virtualisation using NetApp and VMware, there’s even a great transition between FC and NFS, namely iSCSI running on the 10GbE equipment.
    David

  6. adelp

    David – You have a very good point, and that is actually the point of my next article. I am working on it now, but it probably won’t be out until after all the VMworld dust settles. No point in publishing it now because it will just get lost in the flurry of activity next week.

    iSCSI vs. NFS mainly comes down to NFS vs. LUNs on NetApp. In my opinion, NFS is WAY easier to manage than LUN-based storage if you are using all the NetApp sexy stuff (dedupe, SMVI, etc.). More on that in a bit. Thank you for your comment!
