Setting Up FCoE on a Nexus 5000

Fibre Channel over Ethernet (FCoE) is receiving a great deal of attention in the media these days. Fortunately, setting up FCoE on a Cisco Nexus 5000 series switch isn't terribly complicated, so don't be overly concerned about deploying FCoE in your datacenter (assuming it makes sense for your organization). Configuring FCoE consists of three major steps:

  1. Enable FCoE on the switch.
  2. Map a VSAN for FCoE traffic onto a VLAN.
  3. Create virtual Fibre Channel interfaces to carry the FCoE traffic.

The first step is incredibly easy. To enable FCoE on the switch, just use this command:

switch(config)# feature fcoe

The next part of the FCoE configuration is mapping a VSAN to a VLAN. What VSAN should you use? Well, if you are connecting to an existing Fibre Channel fabric, perhaps on a Cisco MDS switch, you'll need to make sure that the VSAN numbering on the Nexus matches the VSAN numbering on the MDS. Otherwise, traffic in a VSAN on the Nexus won't be able to reach devices in a different VSAN on the MDS. If there's enough demand, I'll post a quick piece on this step as well.
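
If you need to define the VSAN yourself, the commands look something like this on both the Nexus and the MDS (the VSAN number and name here are purely placeholders; use whatever matches your fabric):

switch(config)# vsan database
switch(config-vsan-db)# vsan 100
switch(config-vsan-db)# vsan 100 name FCoE-Fabric-A
switch(config-vsan-db)# exit

A quick show vsan on each switch will confirm that the VSAN numbers line up on both sides.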

Note that this FCoE VSAN-to-VLAN mapping is a required step; if you don’t do this, the FCoE side of the interfaces won’t come up (as you’ll see later in this post). Assuming the VSAN is already defined, perform these steps to map the VSAN to a VLAN:

switch(config)# vlan XXX
switch(config-vlan)# fcoe vsan YYY
switch(config-vlan)# exit

Obviously, you’ll want to substitute XXX and YYY for the correct VLAN and VSAN numbers, respectively.
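
If you want to double-check the mapping, I believe the show vlan fcoe command will list the VLAN-to-VSAN association you just created (show vlan id XXX reflects it as well):

switch# show vlan fcoe
switch# show vlan id XXX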

After you’ve enabled FCoE and mapped FCoE VSANs onto VLANs, then you are ready to create virtual Fibre Channel (vfc) interfaces. Each physical Nexus port that will carry FCoE traffic must have a corresponding vfc interface. Generally, you will want to create the vfc interface with the same number as the physical interface, although as far as I know you are not required to do so. It just makes management of the interfaces easier. The commands to create a vfc interface look like this:

switch(config)# interface vfc ZZ
switch(config-if)# bind interface ethernet 1/ZZ
switch(config-if)# no shutdown
switch(config-if)# exit

At this point the vfc interface is created, but it won't work yet; you'll need to place it into a VSAN that is mapped to an FCoE-enabled VLAN. If you don't, the show interface vfc <number> command will report this:

vfc13 is down (VSAN not mapped to an FCoE enabled VLAN)

As I mentioned earlier, if you haven’t mapped the FCoE VSAN onto a VLAN, you won’t be able to fix this problem. If you have mapped the FCoE VSAN onto a VLAN, then you only need to assign the vfc interface to the appropriate VSAN with these commands:

switch(config)# vsan database
switch(config-vsan-db)# vsan <number> interface vfc <number>
switch(config-vsan-db)# exit

At this point, the vfc interface will report up, and you should be able to see the host’s connection information with the show flogi database command.
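
For reference, here are the show commands I find most useful for verification at this stage (none of this is specific to any particular environment):

switch# show interface vfc ZZ
switch# show vsan membership
switch# show flogi database
switch# show fcns database

The first two confirm that the vfc is up and sitting in the correct VSAN; the last two confirm that the host has completed its fabric login and registered with the name server.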

From this point—assuming that your storage is attached to a traditional Fibre Channel fabric, which is likely to be the case for most deployments in the near term—you only need to create zones containing the WWNs of the FCoE-attached hosts in order to grant them access to the storage. Refer to my posts on creating zones and managing zones on a Cisco MDS for more information on this task.
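
As a rough sketch, a basic single-initiator zone on the MDS looks something like this (the zone name, zoneset name, VSAN number, and WWNs below are all made-up placeholders; substitute your own values):

switch(config)# zone name esx01-hba0_array-spa vsan 100
switch(config-zone)# member pwwn 20:00:00:25:b5:aa:bb:01
switch(config-zone)# member pwwn 50:06:01:60:41:e0:aa:bb
switch(config-zone)# exit
switch(config)# zoneset name fabric-a-zoneset vsan 100
switch(config-zoneset)# member esx01-hba0_array-spa
switch(config-zoneset)# exit
switch(config)# zoneset activate name fabric-a-zoneset vsan 100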

In my own experience, once FCoE is properly configured on the Nexus 5000 switch, creating zones and zonesets on the Cisco MDS Fibre Channel switch and creating and masking LUNs on the Fibre Channel-attached storage are very straightforward. This, as I've said on several previous occasions, is one of the strengths of FCoE: its compatibility with existing Fibre Channel installations is outstanding.

Feel free to submit any questions or clarifications in the comments below.


Comments

  1. Brad Hedlund

    Awesome post Scott!! Perhaps a nice follow up post would discuss a configuration using NPV mode on the Nexus 5000.

    Cheers,
    Brad

  2. Kevin Houston

    Good article, Scott. Thanks. I’d be curious to find out how the performance is and if you are getting 80% better efficiency using 10Gb DCE over multiple 1Gb links.

  3. Erik Smith

    In regards to Brad’s comment about NPV mode. There is a case study in the EMC FCoE Tech book that shows how to configure the Nexus 5000 for NPV mode. We use it to connect the Nexus to non-Cisco FC switches. For more info see http://www.emc.com/collateral/hardware/technical-documentation/h6290-fibre-channel-over-ethernet-techbook.pdf . I’ve heard someone say (complain?) it goes into bleeding-eyeballs detail, so be warned. :-)

    Regards, Erik

  4. Russell

    Hey Scott, good post. Where were you last week when my lab gear showed up!

    I kid!

    One thing I had to do on my Nexus 5k to get my vfcs to link was to actually make the switchports into trunk ports. I assume you did this but left it out of your article? Either that or I missed something obvious, and I'm open to suggestions.

    Relevant configs:

    vlan 2
    fcoe vsan 1
    name FCoETransport

    interface vfc#
    bind interface Ethernet1/#
    no shutdown

    interface Ethernet1/#
    description ESX server lab-rp-esx1 host uplink to 1000v
    switchport mode trunk
    switchport trunk allowed vlan 1-2,201,1000-1001
    spanning-tree port type edge trunk

    In this case I’m still using native VLAN 1 for some ethernet traffic. I would imagine if this were a windows server I would probably just specify the native VLAN as whatever.

    Interesting thing of note: I actually didn't turn on NPV mode, and I'm successfully able to merge with a QLogic fabric in my lab, set up zoning on either side, and have things work. I thought I had some issues (I screwed up an igroup on my filers) so I tried it in NPV mode. Either way seems pretty straightforward.

    Neat thing I like about NXOS and the nexus 5k in general.

    I can do things like this:

    int eth1/1-10,e1/20-25,e1/30,e1/37

    Handy when you want to disable the second path of every ESX host for example to illustrate failover:

    int eth1/11,e1/13,e1/15
    shut

    no shut

    Also, I dunno if you could do this with IOS but I thought this was awesome:

    int po1-16

    This was actually more useful on the nexus 1000v where I needed to create a bunch of vPC-HM port channels to uplink my lab servers.

    One thing of note that might seem obvious to some is that you can’t use etherchannel on any ethernet interfaces with a vfc assigned.

  5. CiscoDCDocs

    Russell -

    According to the Cisco Nexus 5000 Series Switch CLI Software Configuration Guide, you’re correct – you need to put the physical interface into trunk mode:

    http://www.cisco.com/en/US/docs/switches/datacenter/nexus5000/sw/configuration/guide/cli_rel_4_1/Cisco_Nexus_5000_Series_Switch_CLI_Software_Configuration_Guide_chapter31.html#con_1288667

    I don’t find a note about Etherchannel limitation so I’ll pass that tidbit on to the book writer to investigate/add if appropriate.

  6. Vic

    Thanks Scott. Excellent article. I am curious though about how things would work with native FCoE targets, like NetApp’s target for example. In your example you are configuring connectivity for FCoE initiators attached to the Nexus switch to FC targets attached to a traditional FC fabric. My understanding is that NetApp supports FCoE natively by adding a Qlogic target mode CNA to their array controllers which would have a 10G FCoE capable port.

    I had assumed that in this configuration the 10G port on the NetApp controller would connect directly to a port on the Nexus switch.

    If that is the case, then would VLAN/VSAN mapping still be required? If so, would the VSAN exist on a Nexus switch? And along the same line of thinking, would FCoE initiators be zoned to native FCoE targets on a Nexus switch?

    Best Regards,
    Vic

  7. Marc

    In response to Vic’s questions, the answers are yes, yes, and yes.

    The whole beauty is that, when in switch mode (but not in NPV mode), the vfc port acts and behaves just like any F-port. Think of it this way, treat it this way, and you will always know how to configure it.

  8. Hector De Jesus

    I have recently started working with and learning about FCoE. I have an access port routing all of my network traffic on the FCoE switch to VSAN 2 on my Cisco Nexus 5010. My Windows clients only work when I specify the VLAN they're connected to; for my Linux client I must enable VLAN tagging as well, or I cannot ping anything in the subnet. Is this normal? Do we have to set the VLAN ID on the client?

  9. david

    This is what I configured on the Nexus 5548P, but the communication between the switch and the server is stuck at initializing. No trunk VSAN comes up, and I cannot see the host information with the sh flogi database command. What is wrong with it? Is there a license issue? Thanks.

    ============================
    feature fcoe

    vlan 20
    fcoe vsan 10

    interface vfc11
    bind interface Ethernet1/21
    switchport trunk allowed vsan 10
    no shutdown

    vsan database
    vsan 10 interface vfc11

    interface Ethernet1/21
    description To ESX11 VMHBA1
    switchport mode trunk
    switchport trunk allowed vlan 1-2,10,20
    spanning-tree port type edge trunk

    SW5548# sh int vfc 11
    vfc11 is trunking
    Bound interface is Ethernet1/21
    Hardware is Virtual Fibre Channel
    Port WWN is 20:0a:00:05:73:af:22:3f
    Admin port mode is F, trunk mode is on
    snmp link state traps are enabled
    Port mode is TF
    Port vsan is 10
    Trunk vsans (admin allowed and active) (10)
    Trunk vsans (up) ()
    Trunk vsans (isolated) ()
    Trunk vsans (initializing) (10)
    1 minute input rate 0 bits/sec, 0 bytes/sec, 0 frames/sec
    1 minute output rate 0 bits/sec, 0 bytes/sec, 0 frames/sec
    0 frames input, 0 bytes
    0 discards, 0 errors
    0 frames output, 0 bytes
    0 discards, 0 errors
    last clearing of “show interface” counters never
    Interface last changed at Thu Feb 24 16:31:26 2011
    ============================================

  10. slowe

    Hector, it’s my understanding the interfaces must be configured as trunk interfaces in order for FCoE to work. As a result, there might be some OS-specific configurations that are necessary in order for traffic to flow, based on native VLAN assignments and such.
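
    For example (the VLAN and interface numbers here are just placeholders), the physical interface bound to the vfc would typically be configured something like this:

    switch(config)# interface ethernet 1/ZZ
    switch(config-if)# switchport mode trunk
    switch(config-if)# switchport trunk allowed vlan 1,XXX
    switch(config-if)# spanning-tree port type edge trunk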

    David, one thing that has come up recently is that the Nexus 5500 doesn’t have the necessary CoS/QoS features enabled to support FCoE; the Nexus 5000 had them enabled automatically. I think this article might help you out:

    http://brasstacksblog.typepad.com/brass-tacks/2011/01/fcoe-login-failure-when-connecting-to-nexus-5548.html

    Good luck!

  11. Meryem

    Hi,

    I’m configuring UCS involving blade servers with Qlogic CNAs behind Nexus 4000 switches connected to Nexus 5000. My storage is attached to MDS 9148 switches.
    I followed the FCoE configuration guides and everything worked fine, and I configured zoning in my MDS 9148 switches using pWWNs.
    But my servers and storage still don't see each other, even though the show zoneset active command states that the interfaces are active:

    zone name TTSVCN1_P4_BCH1_BL1_ZONE vsan 20
    * fcid 0x1b0000 [pwwn 50:05:07:68:01:40:b7:2c]
    * fcid 0xa30000 [pwwn 21:00:00:c0:dd:16:92:f1]

    When I issue the show fcns database command I have the following result:

    VSAN 20:
    ————————————————————————–
    FCID TYPE PWWN (VENDOR) FC4-TYPE:FEATURE
    ————————————————————————–
    0x1b0000 N 50:05:07:68:01:40:b7:2c (IBM) scsi-fcp:target
    0x1b0100 N 50:05:07:68:01:10:b7:2c (IBM) scsi-fcp:target
    0x1b0200 N 50:05:07:68:01:40:ba:5d (IBM) scsi-fcp:target
    0x1b0300 N 50:05:07:68:01:10:ba:5d (IBM) scsi-fcp:target
    0x1b0400 N 20:34:00:80:e5:1b:f0:88 (Mylex) scsi-fcp:target
    0x1b0500 N 20:32:00:80:e5:1b:e6:a8 (Mylex) scsi-fcp:target
    0x1b0600 N 20:33:00:80:e5:1b:e6:a8 (Mylex) scsi-fcp:target
    0x1b0700 N 20:35:00:80:e5:1b:f0:88 (Mylex) scsi-fcp:target
    0xa30000 N 21:00:00:c0:dd:16:92:f1 (Qlogic)
    0xa30001 N 21:00:00:c0:dd:16:93:ad (Qlogic)
    0xa30002 N 21:00:00:c0:dd:16:94:95 (Qlogic)
    0xa30003 N 21:00:00:c0:dd:16:92:e1 (Qlogic)
    0xa30004 N 21:00:00:c0:dd:16:ae:19 (Qlogic)
    0xa30005 N 21:00:00:c0:dd:16:af:01 (Qlogic)
    0xa30006 N 21:00:00:c0:dd:16:af:2d (Qlogic)
    0xa30007 N 21:00:00:c0:dd:16:93:a5 (Qlogic)
    0xa30008 N 21:00:00:c0:dd:16:b5:3d (Qlogic)
    0xa30009 N 21:00:00:c0:dd:16:93:e9 (Qlogic)
    0xa3000a N 21:00:00:c0:dd:16:af:a1 (Qlogic)
    0xa3000b N 21:00:00:c0:dd:16:93:9d (Qlogic)
    0xa3000c N 21:00:00:c0:dd:16:ae:39 (Qlogic)
    0xa3000d N 21:00:00:c0:dd:16:90:4d (Qlogic)

    Total number of entries = 22

    I’m wondering why the FC4-TYPE:FEATURE of my Qlogic interfaces is blank and if it’s related to the issue!

    Do you have any troubleshooting suggestions?

    Best regards!

  12. slowe

    Meryem, I’m a bit confused about your setup, as the Nexus 4000 is an IBM-only part that is not used at all in the UCS. Perhaps you have your equipment names incorrect? Without a clearer picture of what sorts of equipment you have in your environment, I’m not so sure I can provide any real help.

  13. Meryem

    Hi,

    Actually, I have an IBM Bladecenter H chassis with 2 Nexus 4001l switches attached to 2 Nexus 5048 and 3 MDS 9148.

    I can send you the configs if needed.

    I used UCS by mistake!

    Thanks a lot!

  14. david

    Thanks Scott.
    After I enabled the CoS/QoS feature, the switch picked up the WWN of the server. Thanks.

  15. Casper42

    Do the FC ports (like the 8 dedicated FC only ports on the expansion module for a 5548P) require an external Fabric?

    I am just wondering if, in a very dense 3-4 rack buildout, I could have FCoE-enabled servers talking to the 5548 and then hang my 3PAR storage right off the dedicated FC ports on the 5548s as well?

    Do I NEED an external traditional FC Fabric if I am using Nexus?

    -Casper42

  16. slowe

    Casper42,

    You can indeed use the FC ports on a Nexus 5548 for connectivity to a storage array. The Nexus 5000 series switches have fabric services functionality available (name server, etc.) so that you don’t have to have an external Fibre Channel switch if you don’t need the additional port density.
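
    As a rough sketch (the interface and VSAN numbers are just placeholders), directly attaching an array port means configuring the FC interface as an F port and putting it into the same VSAN as your hosts:

    switch(config)# interface fc2/1
    switch(config-if)# switchport mode F
    switch(config-if)# no shutdown
    switch(config-if)# exit
    switch(config)# vsan database
    switch(config-vsan-db)# vsan 100 interface fc2/1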

    Hope this helps!

  17. Rob Claxton

    We are diving into FCoE with our 2 5010s, and in between our 5010s is a Cisco 6509 (Sup720, both sides) with full layer 2 connectivity between our sites. If I set up an FCoE interface w/ VLAN for FCoE, can I span that VLAN across our 6509s (at layer 2) and still communicate with the other FCoE interface w/ VLAN on the other side? Thanks.

  18. Casper42

    @Rob – I think you are asking if you can tunnel FCoE traffic across a Catalyst Switch. To my knowledge there is no FCoE support in the Catalyst family so the answer would be no.

    FCoE requires DCB (aka CEE and formerly called DCE by Cisco) in order to function, especially in a Multi Hop environment. I don’t think Catalyst supports that feature which would also make me lean towards No again.

    But perhaps someone smarter than I on Cisco gear can say for sure.

  19. jsb

    Great post! I will be setting up two Nexus 5020s very shortly. This will be new to me, so I had a question about the distributed fabric. I will be connecting the two 5020s to two existing MDS switches. Once I've created either the EISL or SAN port channel between the Nexus and the MDS, is there anything else that needs to be done for the MDS to see the pWWNs of anything connected to the Nexus? I guess what I want to know is: after the port channel is set up, when I type sh flogi database on the MDS, will it also show devices that are only connected to the Nexus 5020?

  20. Eric

    Just for the record, setting the switch to NPV DOES wipe out the entire configuration…..learning the hard way :)

  21. Mark Penzien

    Thanks for this post! SAN fabric management is new to me and I'm attempting to help deploy a pair of Nexus 5000s with NetApps. This has helped me get a better start!

    Because FCoE is new to me, I did a lot of googling. Thought you should know that this post was lifted by another blogger at http://www.druid.co.il/wordpress/?p=133

  22. Isaac Chehab

    when I put
    vlan 1
    fcoe vsan 10

    the connectivity between my UCS blade servers and the Nexus hangs, and the Nexus 5000 also stops responding.

    anyone??? some help??

    thanks in advance

  23. slowe

    Isaac, we’re going to need a bit more detail than just a couple of commands taken out of context. Can you shed a bit more light on the problem and what you’re trying to accomplish?

  24. Isaac Chehab

    OK… sorry, I'll explain it better:

    I am trying to connect some Cisco blades to a NetApp storage array through a Nexus 5000.

    I have 2 cisco UCS fabric 6120 connected via fcoe to a nexus 5000 in ports:

    Ethernet 1/1 = Fabric UCS 6120 1
    Ethernet 1/2 = Fabric UCS 6120 2

    And to the ucs 6120 i have connected a cisco blade system with 2 b200

    A fiber cable is connected to a port on the expansion module of the Nexus 5000, and on the other end is a NetApp:

    FC 2/2 = Netapp

    ………………………

    I'm following this guide, doing this:

    vsan database
    vsan 10

    vlan 2
    fcoe vsan 10
    no shutdown

    interface vfc4
    bind interface Ethernet 1/2
    switchport trunk allowed vsan 10
    no shutdown

    vsan database
    vsan 10 interface vfc4
    …………………………………………………

    the problem is that as soon as I put in the commands
    vlan 2
    fcoe vsan 10

    I lose connectivity to the blade servers and the connection to the management port on the Nexus 5000 starts flapping… I have enabled the fcoe and npiv features… I'm not talking about zoning because I can't even get to that point, since I lose the connection.

    thanks

  25. slowe

    Isaac, I’m traveling at the moment and I’d really need to sit down and take a closer, more focused look at what you’re trying to accomplish. However, at first glance, try not setting vfc4 to a trunk with a specific allowed VSAN. Also, you didn’t mention what VLAN your management port is using, and you didn’t mention how connectivity to the management port is managed. Any additional information you can provide might be useful.

  26. Isaac Chehab

    Hi…

    I just figured out my problem… it's just not supported to get the UCS 6120 (blade) vHBAs through FCoE on the Nexus 5000; the only way to do it is through the FC uplinks on the FC expansion module. The Nexus 5000 is not the problem (it supports FC traveling over an FCoE link), but in the case of the UCS 6120, the only way to send FC data from the UCS 6120 to the Nexus 5000 is through the FC expansion module of the UCS 6120.

    Thanks

  27. Milena

    VLAN 1 will be used for negotiation, so I would strongly suggest using some other VLAN for the VSAN mapping, and on the trunk allowing the FCoE VLAN, the data VLANs, and VLAN 1.

  28. slowe

    Milena, thanks for the recommendation. I also generally recommend staying away from the use of VLAN 1 and VSAN 1 and using other VLANs and VSANs in your configurations.

  29. Burak

    Hi Scott,

    I did the configuration between a Nexus 5548UP and a C-200 with a P81E VIC. But I discovered that after the VIC finds and boots the OS from my FC storage, once the operating system has loaded, the flogi database entry disappears from the N5548UP. Also, my C200 can run the ESXi OS but cannot see the datastores. There is no such problem with my FIs and blades, so I think I'm missing something on the N5548UP or on the VIC.

    Do you have any advice for me?

    Thank you.

  30. Marc

    Hi Scott,
    I think on the Nexus 5548 and 5596 platforms you also need to adjust QoS for FCoE, because when you enable the fcoe feature, the Nexus will create the new service policies but not activate them:

    n5548(config)# system qos
    n5548(config-sys-qos)# service-policy type qos input fcoe-default-in-policy
    n5548(config-sys-qos)# service-policy type network-qos fcoe-default-nq-policy
    n5548(config-sys-qos)# service-policy type queuing input fcoe-default-in-policy
    n5548(config-sys-qos)# service-policy type queuing output fcoe-default-out-policy

    Is that correct?
    Thanks a lot

  31. Kamesh

    Thanks for a very informative article. It is simple to understand.

  32. York

    Hi…
    I have a problem with the Nexus 5548UP connected to the 6506.
    I am following this guide.
    The Nexus 5548UP:
    vsan database
    vsan 10
    vlan 10
    fcoe vsan 10
    interface vfc1
    bind interface Ethernet 1/1
    no shutdown
    vsan database
    vsan 10 interface vfc 1
    interface ethernet 1/1 (this port is connected to the HP980 server, which supports FCoE)
    switchport mode trunk
    no shutdown
    interface ethernet 1/15 (this port is connected to the Cisco 6506, and this port type is RJ-45)
    switchport mode trunk
    speed 1000
    no shutdown
    The Cisco 6506:
    vlan 10
    interface vlan 10
    ip address 192.168.100.254 255.255.255.0
    interface g 1/0/1 (this port is connected to the Nexus 5548UP)
    switchport trunk encapsulation dot1q
    switchport mode trunk
    speed 1000
    The HP980 address is 192.168.100.210 255.255.255.0.
    The problem is that the HP980 cannot ping 192.168.100.254 successfully.
    Why? Thank you.

  33. Mikemcg

    I’ve a customer who bought a Blade System with Flex Fabric, and a Cisco Nexus 5000. I was under the impression that FF could connect directly into this. However the information I have since received is that there has to be an additional Cisco layer of Fabric Extender switches between the two, with a Cisco 2000 series Fabric Extender switch added to the blade chassis.

    I can see their value in effectively being repeaters, with the intelligence remaining in the Core switch.

    However if physical port connectivity is available I don’t understand why we cannot connect directly to the Cisco 5000?

  34. Paul C

    Mike, you can definitely connect Flex Fabric (assuming you're talking about the HP-branded 10Gb blade switches?) directly to the Nexus 5k.

    If you’re using FCoE in the Flex Fabric you’ll need to convert some of your ports to FC on the 5k for FC uplinks from the Flex Fabric Switches, but there are no real caveats to connecting the FlexFabric switches networking straight to the 5k.

  35. Dan

    Full Disclosure, I work for HP:

    FlexFabric Ethernet direct to Nexus 5000 works fine.
    I personally setup a PoC environment around Sept last year with exactly this design and there were no problems. Several other peers of mine have done similar as well.

    FlexFabric FC ports into Nexus 5000 FC ports, however, are currently unsupported. There are interoperability issues between the QLogic chipset inside FlexFabric and the Nexus FC ports. I know QLogic is working on it, but I haven't checked on the status in a while.

  36. Philip

    Mikemcg
    Just about anything you do with a Fabric Extender you can do locally on a 5k. Are you using Multiple 5k’s with multiple Flex Fabric?

    If I had to guess, you are trying to create a VPC between the two Flex Fabric and 2 5k’s. We are having some similar issues to what you are seeing. I only work on the network side, but the Blade deployment is very different here.

    If you use a FEX 2232, that would give you a bit more redundancy; however, that comes with a drawback. It will also give you one point of failure if that 2232 crashes (assuming you are using a single 2232).

    We will have an HP consultant come in here in the next week to help the Server team setup the blade chassis.

  37. Rid

    Hi All

    We have an IBM BladeCenter H with Brocade 8470 modules.
    We need to connect this to Nexus 5548 switches.
    As per my knowledge, to connect this FCoE switch to another FCoE switch we need VE ports, but on the Brocade side all FCoE ports are in VF state.

    Please help with this.

  38. Jamal

    Hi All ,

    I have a Nexus 5548 at the access layer and a 5596 at the aggregation layer. The host CNA is connected through a fabric extender to the 5548, and the CLARiiON storage is connected to the 5596.
    I have configured FCoE on both the 5548 and the 5596; the pWWN has been discovered for the host CNA on the 5548, and the pWWN has been discovered for the CLARiiON storage on the 5596. The FCoE VLAN is allowed on the trunk port between the 5548 and 5596, and the host ports are also configured as trunks.
    Correct zones have been created at both ends, with the host pWWN and storage pWWN as members, and the zoneset has been activated.
    The issue is that the host CNA can't connect to/see the storage.

    Help please …

  39. slowe

    Rid, I believe that Brocade supports a feature equivalent to NPV, which would allow you to connect the Brocades to the 5548 switches without the need for VE_Ports between the two switches. I don’t recall exactly what the feature is called; perhaps check the comments on this article:

    http://blog.scottlowe.org/2009/11/27/understanding-npiv-and-npv/

    Jamal, I'm not entirely sure that multi-hop FCoE is supported on the 5548 and 5596 switches, and if it is you might need a specific revision of NX-OS. I'd start there. If you have the right revision and it is indeed supported, then you'll need to drop back to basic troubleshooting steps—is the vfc up, is FLOGI occurring, etc. As a generic starting point (adjust the VSAN numbers to your environment), I'd work through commands like these on each switch:
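
    switch# show interface vfc <number>
    switch# show vsan membership
    switch# show flogi database vsan <number>
    switch# show fcns database vsan <number>
    switch# show zoneset active vsan <number>

    Good luck!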

  40. Dan

    Brocade calls NPV “Access Gateway mode”
    http://www.brocade.com/solutions-technology/technology/platforms/fabric-os/access_gateway.page

    As for Multi Hop FCoE, did they get the whole FCF thing figured out?
    http://blog.scottlowe.org/2009/08/11/why-no-multi-hop-fcoe/

  41. slowe

    Dan, thanks for the clarification on Access Gateway mode. Regarding multi-hop FCoE, you can definitely do it with Nexus 7000 switches using a storage VDC and VE_Ports (these must be dedicated FCoE links, not converged links). I don't know that you can do it between multiple Nexus 5500 series switches. Note that multi-hop FCoE with UCS is, last time I checked, still an issue.

  42. Juan Tarrio

    Disclaimer: I work for Brocade.

    Rid, do you need to connect the Brocade 8470 to a Nexus 5548 with Ethernet uplinks and run FCoE over them? That is not supported. The Brocade 8470 is based on the Brocade 8000 and does not support multi-hop FCoE. It was designed as a first-hop FCoE solution for blade server I/O consolidation, and as such already provides native FC ports that you can connect to your existing FC SAN fabric. You can configure the 8470 in Access Gateway mode (same thing that Cisco calls NPV) if you want, if you need to connect to a non-Brocade SAN fabric. You *cannot* run NPV/AG mode over Ethernet links. Once you break out the FC traffic on the first hop, you can connect Ethernet uplinks between the 8470 and the 5548 just like you can to any regular Ethernet switch. There will be no FCoE traffic running on those links.

  43. slowe

    Thanks for the additional information, Juan!

  44. MelF

    Hi.

    Nice article. Was wondering if you have a Nexus 5K-only guide for setting up the Fabric. We won’t be having MDS switches, our servers will connect to the Nexus via FCOE and the storage devices via FC.

    Thanks.

    Mel

  45. Reg

    Great Post! I have a UCS 6100 running Vsphere 4/5 and I’m looking to integrate a FCoE SAN into the environment. Is Nexus the best solution to accomplish this?

  46. KenN

    I have QLogic QMI 8142 CNAs, Nexus 4001i switches, a Nexus 5548UP, and a Dell Compellent SAN.

    I have been working on this issue for a year now – I can’t get this environment stable. Even though Cisco says the issue was fixed in 4000 firmware H (and I’m on the latest, 4.1(2)E1(1i)) the problem still exists where the ports on the 4000s will shut down due to pauseRateLimitErrDisable whenever there is a significant FC load.

    Is no one else having show stopping issues with the Nexus 4000 switches???

    Ken

  47. Kamal

    Hi,

    I have a question; I'm kinda new to the Cisco Nexus environment, and my current job is to maintain a government data center. This data center will be a hosting center for multiple government agencies. Our current setup is like this: N7K-N5K-N2232/2248, with NetApp in place as the storage solution. But then one agency requested to hook up their Dell Force10 MXL to one of our Nexus 2232s as the interface for all their servers behind it, and at the same time they will require some storage space from the NetApp. So the planned path will be N7K-N5K-N2K-Dell Force10-Servers. Can this be done? Is it possible for the FCoE packets to travel to the NetApp? Thanks a lot.

  48. diqa

    Hi Scott, good day to you…

    I just finished setting up FCoE in my customer's environment. For your information, we are using a UCS C220 M2 with a P81E card (Adapter FEX) running ESXi 5.1, with twinax cables plugged into a Nexus 2232 as a fabric extender and a Nexus 5596UP as the parent switch. I also configured FCoE on the Nexus 5000 for data network communications and zoning from the P81E card in the UCS to an EMC VMAX 10K for VSAN communications.

    A couple of days ago my customer experienced bad performance when copying some files between datastores in the EMC boxes; they are just small files, but it takes several hours to accomplish the copy between datastores.

    my questions are:
    1. How does the traffic flow if I'm copying between datastores, triggered by the ESXi host? Does it involve FCoE on the N5K, or does the process stay entirely within the EMC boxes? **This is a standalone ESXi host; no vCenter is managing it.
    2. Can I set up QoS/CoS for the VSAN traffic in the FCoE protocol? I mean, I would like the VSAN traffic to have higher priority than the data network traffic.

    I’ll be glad if you could give some enlightenment.

    my Regards
    Diqa

