FCoE through a Nexus to an MDS-Attached Storage Array

In this post, I want to pull together all the steps necessary to take a Converged Network Adapter (CNA)-equipped server and connect it, using FCoE, to a Fibre Channel-attached storage array. There isn’t a whole lot of “net new” information in this post, but rather I’m summarizing previous posts, organizing the information, and showing how these steps relate to each other. I hope that this helps someone understand the “big picture” of how FCoE and Fibre Channel relate to each other and how they interoperate (which, quite frankly, is one of the key factors for the adoption of FCoE).

The steps involved come from an environment with the following components:

  • A Dell PowerEdge R610 server running VMware ESXi and containing an Emulex CNA.
  • A Cisco Nexus 5010 switch running NX-OS 4.2(1)N1(1).
  • A Cisco MDS 9134 Fibre Channel switch running NX-OS 5.0(1a).
  • An older EMC CX3-based array with Fibre Channel ports in the storage processors.

We’ll start at the edge (the host) and work our way to the storage. All these steps assume that you’ve already taken care of the physical cabling.

  1. Depending upon how old the software is on your hosts, you might need to install updated drivers for your CNA, as I described here. If you’re using newer versions of software, the drivers will most likely work just fine out of the box.
  2. The closest piece to the edge is the FCoE configuration on the Nexus 5010 switch. Here’s how to set up FCoE on a Nexus 5000. Be sure that you map VLANs correctly to VSANs; for every VSAN that needs to be accessible from the FCoE-attached hosts, you’ll need a matching VLAN. Further, as pointed out here, the VLAN and VLAN trunking configuration is critical to making FCoE work properly in any case. (A minimal example of this configuration is sketched just after this list.)
  3. The next step is connecting the Nexus 5010 to the MDS 9134 Fibre Channel switch. Read this to see how to configure NPV on a Nexus 5000 if you are going to use NPV mode (and read this for more information on NPV and NPIV). Whether or not you use NPV, you’ll also need to set up connections between the Nexus and the MDS; here’s how to set up SAN port channels between a Cisco Nexus and a Cisco MDS. (The Nexus side of this step appears in the second sketch after this list.)
  4. Once the Nexus and the MDS are connected, you’ll need to perform the necessary zoning so that the hosts can see the storage. Before starting the zoning, you might find it helpful to set up device aliases. After your device aliases are defined, you can create the zones and zonesets. This post has information on how to create zones via the CLI; this post has information on how to manage zones via the CLI. (Sample device alias and zoning commands are included in the second sketch after this list.)
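
For step 2, here’s a minimal sketch of what the Nexus 5000 side might look like, assuming VSAN 100 is carried in VLAN 1000, the data traffic rides VLAN 10, and the CNA is cabled to Ethernet1/5 (all of those numbers are placeholders; substitute your own):

    feature fcoe

    ! Map the FCoE VLAN to its VSAN
    vsan database
      vsan 100
    vlan 1000
      fcoe vsan 100

    ! Trunk the FCoE VLAN (and the data VLANs) to the CNA-facing port;
    ! the FCoE VLAN must be tagged, so don't make it the native VLAN
    interface Ethernet1/5
      switchport mode trunk
      switchport trunk allowed vlan 10,1000
      spanning-tree port type edge trunk

    ! Virtual Fibre Channel interface bound to the physical port
    interface vfc5
      bind interface Ethernet1/5
      no shutdown

    vsan database
      vsan 100 interface vfc5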

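For steps 3 and 4, a similarly hedged sketch (again, all names and numbers are placeholders, and I’m assuming an FC expansion module provides fc2/1 and fc2/2). On the Nexus 5000, running NPV and bundling the FC uplinks to the MDS into a SAN port channel looks roughly like this; be aware that enabling NPV erases the existing configuration and reloads the switch, and the MDS end needs NPIV enabled plus a matching port channel configuration:

    feature npv

    ! Bundle the FC uplinks toward the MDS into one logical link
    interface san-port-channel 1
      channel mode active

    interface fc2/1
      channel-group 1 force
      no shutdown
    interface fc2/2
      channel-group 1 force
      no shutdown

    vsan database
      vsan 100 interface san-port-channel 1

On the MDS, the device aliases and zoning from step 4 might look something like this (the WWPNs shown are made up, and "device-alias commit" applies only if enhanced device-alias mode is in use):

    device-alias database
      device-alias name esx01-cna0 pwwn 10:00:00:00:c9:11:22:33
      device-alias name cx3-spa0 pwwn 50:06:01:60:41:e0:aa:bb
    device-alias commit

    zone name esx01-cx3-spa0 vsan 100
      member device-alias esx01-cna0
      member device-alias cx3-spa0

    zoneset name fabric-a vsan 100
      member esx01-cx3-spa0

    zoneset activate name fabric-a vsan 100
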
At this point, if everything is working correctly, you’re done and should be ready to present storage to the end hosts.

I hope this helps put the steps involved (all of which I’ve already written about) in the right order and in the right relationship to each other. If there are any questions, clarifications, or suggestions, please feel free to speak up in the comments.

  1. Brian

    Thanks Scott!

    We have almost the same setup. We had a LOT of issues with Pause Frames @ the Switch (flapping / shutting ports). We were using QLogic QME8142 mezz card CNAs. Any issues with the Emulex?

  2. slowe

    Brian, I haven’t yet seen any issues with the Emulex CNA cards we’re using.

  3. Louis Gray

    Nice writeup, Scott. Hope the Emulex CNAs are doing well. Let us know not just if there are issues, but what you find! Would be curious about simplicity of setup, performance, etc. Help us inform others like yourself. :)

  4. Erik Smith

    In regard to Brian’s question about pause frames: there is a known issue with pause frames on the generation 1 (Menlo-based) CNAs from both Emulex and QLogic. Essentially, the CNAs continually transmit “unpause” frames.

    With the gen 2 CNAs (what you’re using), I haven’t noticed any problems, but I did run into the same problem you describe, where the interface on the switch goes error-disabled. I suspect this is due to continuous pauses being sent from the CNA. However, I cannot get the problem to reproduce at will, and as a result I haven’t been able to get a trace of the problem as it’s happening. I would be interested in any specifics you could provide regarding the topology, workload, etc. If I could reproduce it at will, it would make getting to the bottom of the problem possible.

  5. Prateek Sharma

    Nicely consolidated! Good reference writeup Scott. Thanks!

  6. John Gill

    Hi Erik,
    Regarding the errdisable scenario, the trigger is pause frames received on the switch port while the switch is transmitting very little traffic to the server. The logic says that if the rate to the server is between 100 Kbps and 5 Mbps, and the switch receives a pause in each of 20 consecutive 0.5-second periods (10 seconds total), then the port is presumed broken and is error-disabled so it doesn’t block other traffic in the fabric. You shouldn’t be sending flow control when you aren’t being overwhelmed, and it is logical to think a 10GE interface is not being overwhelmed when its throughput is less than 5 Mbps. The 100 Kbps floor is there as a check that a flow is actually being received and not just baseline traffic.

    My question for you, and for whoever else is seeing this, is whether you are sending a lot of traffic to that host at the time and which CNA it is. As you mentioned, the gen 1 CNAs sent something like 300 XON (quanta 0) frames per second under no load.
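
    For anyone who hits this errdisable condition: assuming your NX-OS release supports the pause-rate-limit cause (I’m going from memory here, so check your documentation), something like the following should let the port recover automatically instead of staying down until someone notices:

      errdisable recovery cause pause-rate-limit
      errdisable recovery interval 30

    ("show interface status err-disabled" should list any ports currently in that state.)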

  7. Azada

    Dear Scott
    If we have EMC FCoE SAN storage, Nexus 5010 switches, and IBM blade servers, and we want to implement cross-connected vPC with a direct link for FCoE from the Nexus 4001I to the 5000, then we end up with an isolated environment for the FCoE protocol. In that case, do we still need to map any VSAN to a VLAN?
    Another question: we have an EMC VNX (with FCoE modules); is there still a need to implement vfc interfaces? How can I configure a pure FCoE environment?
    And if we are going to connect this pure FCoE environment to an FC environment, is it enough to implement an ISL between the Nexus 5000 and the fabric switches and put the vfc interfaces in the appropriate VSAN?

  8. slowe

    Azada, let me see if I can answer your questions.

    1. You still need VSAN-to-VLAN mapping. It’s just a requirement of how it’s going to work, regardless of the topology.

    2. Even with end-to-end FCoE (CNA in the host to an FCoE target in the array), you must still use vfc interfaces on the Nexus. It’s required. (There’s a minimal sketch of the storage-facing configuration at the end of this comment.)

    3. Yes, you can put an ISL between the Nexus 5000s and the fabric switches and make sure that the appropriate VSAN traffic is allowed on the ISL. You didn’t specify which fabric switches you’re using, but if you’re using MDS then it’s pretty straightforward and is—for the most part—covered in this post.

    Good luck!
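
    For what it’s worth, the storage-facing configuration looks very much like the host-facing configuration. Here’s a minimal sketch, assuming the array’s FCoE port is cabled to Ethernet1/10 and the fabric uses VSAN 100 mapped to VLAN 1000 (all of those numbers are placeholders):

      vlan 1000
        fcoe vsan 100

      interface Ethernet1/10
        switchport mode trunk
        switchport trunk allowed vlan 1000
        spanning-tree port type edge trunk

      interface vfc10
        bind interface Ethernet1/10
        no shutdown

      vsan database
        vsan 100 interface vfc10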

  9. dynamox

    Scott,

    we have VNX 5700 FCoE ports connected to a Nexus 5K but can’t figure out why the array ports will not log in to the switch. We configured vfc interfaces, mapped them to Ethernet ports, and put the vfc interfaces into VSANs. Yet when I run “sh fcoe database” it shows nothing. Did you have to do anything specific on the VNX? Is it possible that you could email me your Nexus config?

    Thank you

  10. dynamox

    I’ll reply to my own question. Apparently on the 5548 you have to enable these parameters, and then the VNX logs in just fine.

    system qos
      service-policy type qos input fcoe-default-in-policy
      service-policy type queuing input fcoe-default-in-policy
      service-policy type queuing output fcoe-default-out-policy
      service-policy type network-qos fcoe-default-nq-policy

  11. slowe

    Dynamox,

    Ah, you got bit by the fact that the 5500 series doesn’t (by default) have the QoS parameters for FCoE. Brad Hedlund mentioned that to me at one point, but I had forgotten all about it. Thanks for posting your solution!
