LACP with Cisco Switches and NetApp VIFs

In my previous article about using NetApp multi-mode VIFs with Cisco switches, I mentioned that you could—at that time—only use 802.3ad static link aggregation:

Be aware that Data ONTAP’s multi-mode VIFs are only compatible with static 802.3ad link aggregation; you can’t use PAgP (Cisco proprietary protocol). I would assume dynamic LACP is also incompatible. For this reason we used the “channel-group 1 mode on” statement instead of something like “channel-group 1 mode desirable”.

I recently got some feedback from a NetApp SE in my area; this SE informed me that Link Aggregation Control Protocol (LACP, part of the IEEE 802.3ad specification) is indeed supported with Data ONTAP version 7.2. This KB article on the NetApp NOW site (login required) indicates that ONTAP 7.2.1 is required in order to use a LACP VIF.

There are a couple of important requirements to note; these are laid out in the referenced KB article:

  1. Dynamic multimode VIFs should use IP address-based load balancing. This means that the Cisco switch or the channel group must also use IP address-based load balancing.
  2. Dynamic multimode VIFs must be first-level VIFs. This makes sense; LACP is a Layer 2 protocol, so layering a LACP VIF on top of other VIFs just doesn’t work.
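On the switch side, the first requirement is typically satisfied with a single global configuration command. As a sketch (the available load-balancing keywords vary by Catalyst platform and IOS version, so check yours):

s3(config)#port-channel load-balance src-dst-ip

The active method can be confirmed afterward with “show etherchannel load-balance”.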

To create the dynamic multimode VIF on the Data ONTAP side, the command is pretty simple:

vif create lacp <vif name> -b ip {interface list}
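For example, assuming two controller interfaces named e0a and e0b (hypothetical names; substitute your own), the VIF could be created and then checked with “vif status”, which shows the member links and whether they joined the aggregate:

vif create lacp vif1 -b ip e0a e0b
vif status vif1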

On the Cisco side, the commands are very similar:

s3(config)#int port-channel1
s3(config-if)#description LACP multimode VIF for netapp1
s3(config-if)#int gi0/23
s3(config-if)#channel-protocol lacp
s3(config-if)#channel-group 1 mode active

These commands would be repeated for all physical ports that should be included in the LACP bundle. Note the differences from the commands in the previous article: here we use “channel-group 1 mode active” instead of “channel-group 1 mode on”. We also added the “channel-protocol lacp” command.

Together, these commands will establish a LACP-based link aggregate between a NetApp storage system running Data ONTAP version 7.2.1 or higher and a Cisco IOS-based switch.
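To confirm from the switch side that LACP actually negotiated, the standard IOS show commands apply; bundled ports should appear with the P flag in the summary output, and the storage system’s ports should be listed as LACP partners:

s3#show etherchannel 1 summary
s3#show lacp neighbor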

Thanks to Jeff, our NetApp SE, for providing the updated information.


25 comments

  1. Wade Holmes

    Hi Scott,

    I just went through an exercise figuring out NetApp load-balancing options late last year, and discovered the NetApp LACP support. One other dependency to note: for this configuration to supply both redundancy and load balancing across separate physical Cisco switches, you must use a Cisco StackWise stack.

  2. Carsten Kreß

    Never forget, on the Cisco switch or stack:
    conf t
    port-channel load-balance src-dst-mac
    end
    wr

  3. Carsten Kreß

    An example for a Cisco stack with VLAN:

    conf t
    int port-channel 1
    switchport access vlan 2
    switchport mode access
    exit
    int ra gi 7/0/21, gi 8/0/21
    switchport access vlan 2
    switchport mode access
    speed 1000
    duplex full
    no channel-group 1 mode on
    channel-protocol lacp
    channel-group 1 mode active
    end
    wr

  4. Dejan Ilic

    Two things you might want to add if you use VLANs.

    First, for any router/switch that supports more than just dot1q VLAN encapsulation (i.e., a Cisco 6500 in my case), you have to tell it which encapsulation you want.

    #switchport trunk encapsulation dot1q

    I just spent several hours figuring out why the channel “worked” for about 60 seconds without any traffic flowing, before the router disabled the channel. Tip: what was the default encapsulation on Cisco before dot1q became the standard?

    Second is more of an optimization if you use VLAN as we do:
    #spanning-tree portfast trunk

  5. Dejan Ilic

    My example:
    A 4-port channel, with VLANs 860, 861, and 862:

    interface GigabitEthernet3/13
    description NetApp link 1 of 4
    no ip address
    switchport
    switchport trunk encapsulation dot1q
    switchport trunk allowed vlan 860-862
    switchport mode trunk
    spanning-tree portfast trunk
    channel-protocol lacp
    channel-group 33 mode active
    end

    interface Port-channel33
    description NetAPP_EthernetChannel
    no ip address
    switchport
    switchport trunk encapsulation dot1q
    switchport trunk allowed vlan 860-862
    switchport mode trunk
    lacp max-bundle 4
    end

  6. Matt Brown

    I’ve got this setup on my Cisco Catalyst 3750G switch just like above, and for some reason both my NetApp and my switch will only show 1 port as active. The other ports in the LACP bundle show up as lag_inactive.

    Any ideas on why this is?

  7. Matt Brown

    This works great on a single switch.

    I’ve got (2) 3750G Cisco switches connected to a 6500 series Cisco switch. 2 ports from my NetApp go to each of the 3750 switches. I’d like to load balance between the two, but it doesn’t work reliably.

    I setup:
    vif create lacp Switch1vif -b ip e0c e4c
    vif create lacp Switch2vif -b ip e0d e4d
    vif create multi vifCombo -b ip Switch1vif Switch2vif

    This works to a degree. I can get data to come out of all 4 ports if I push it hard. But if I pull the cables from switch 1, it takes up to 30 seconds for switch 2 to take over the connections. This is too long.

  8. pushkin

    Is there a known performance impact when using “flowcontrol receive on” in the port channel configuration? Thx

  9. Drikse

    Hi Scott, is there also a solution for configuring an ONTAP vif combined with LACP on a Juniper EX4200 switch? When trying to bundle 2 x 1Gb on the Juniper to the NetApp, I see the Ethernet links go down now and then.

  10. Vandalix

    Hey Drikse,

    I had the same issue with some Juniper EX4200 switches, and had to revert back to basic Multimode to get the link to stay up. I have been meaning to upgrade the Junos to the latest and retry, but haven’t had a chance…

  11. tuan

    I have a solution for the EX4200. We bought an EX4200 a week ago, and after going through 1222 pages of documentation and lots of testing, we found the solution.

    On the NetApp: vif create lacp vifname -b ip ……
    This will work with Juniper LACP passive or active, periodic slow, with no autonegotiation on the physical port, link speed 1g:
    root@JS1# show interfaces ae0
    description “test”;
    aggregated-ether-options {
        link-speed 1g;
        lacp {
            active;
            periodic slow;
        }
    }
    unit 0 {
        family ethernet-switching;
    }

    On the NetApp: vif create multi vifname -b ip……
    This will work with Juniper aggregated-ether-options, port mode access (no LACP).

    It has been tested and it works pretty well. Good luck!

  12. Dost

    I need to set up dynamic multimode VIFs across 2 6500s. Is this supported? Do I need stackable Cisco switches such as 3560s or 3750s to do this? Does 802.3ad work across 2 different switches?

  13. slowe

    Dost,

    IIRC, it’s only supported on the 6500s when you have the Sup720 modules that enable virtual blade switching functionality. Otherwise, 802.3ad, EtherChannel, LACP, etc., all work with only a single switch. At least, that’s my understanding. Cisco gurus, feel free to correct me!

  14. Dost

    We’ve got a WS-SUP720-3B, which is not capable of virtual switching. I guess our options are to get 2 gig 3750s, or just connect each interface of the NetApp 3140 into both the active and backup 6500 switches for redundancy, using EtherChannels for link aggregation and network card failover.

  15. Shaun

    Hi,

    We are currently setting up a netapp storage box and configuring a cisco 3750 stack.

    I have created a port-channel and added two ports to this channel; config below. Now, if I set a continuous ping to the NetApp and shut one port down, it times out once and then starts pinging again. Am I correct in thinking this shouldn’t happen at all and that the ping should be continuous?

    interface Port-channel3
    description San 1
    switchport access vlan 20
    switchport trunk encapsulation dot1q
    speed 100
    duplex full

    interface FastEthernet1/0/7
    description 48a San Nic 3
    switchport access vlan 20
    switchport trunk encapsulation dot1q
    speed 100
    duplex full
    channel-protocol lacp
    channel-group 3 mode active
    spanning-tree portfast
    spanning-tree guard root
    !
    interface FastEthernet1/0/8
    description 48a San Nic 4
    switchport access vlan 20
    switchport trunk encapsulation dot1q
    speed 100
    duplex full
    channel-protocol lacp
    channel-group 3 mode active
    spanning-tree portfast
    spanning-tree guard root

  16. Amjad

    How about HP blade server switches? I am not able to get LACP on the vif, only multi. Anyone have this working?

  17. Ashok

    Shaun – about losing pings:

    One thing that I noticed in your config – which may not make a difference, but is worth checking:
    interface Port-channel3
    description San 1
    switchport access vlan 20 <<<
    switchport trunk encapsulation dot1q <<<

  18. Shaun

    I have tried various ways to get this to work; however, when I look on my switch I constantly get the below error:
    2w4d: %SW_MATM-4-MACFLAP_NOTIF:

    My colleague set up the NetApp device and says he used a multimode VIF in LACP using MAC-based load balancing.

    My ports are configured as below:

    interface Port-channel1
    description HOSAN Channel-Group
    switchport trunk encapsulation dot1q
    switchport mode trunk
    spanning-tree portfast
    duplex full
    !
    interface GigabitEthernet1/0/5
    description Hosan Nic 1
    switchport trunk encapsulation dot1q
    switchport mode trunk
    channel-protocol lacp
    channel-group 1 mode active
    spanning-tree portfast
    spanning-tree bpduguard enable
    duplex full
    speed 1000
    !
    interface GigabitEthernet1/0/7
    description Hosan Nic 2
    switchport trunk encapsulation dot1q
    switchport mode trunk
    channel-protocol lacp
    channel-group 1 mode active
    spanning-tree portfast
    spanning-tree bpduguard enable
    duplex full
    speed 1000

    Can anyone suggest what I need to change, whether it be on the Cisco switch or the NetApp device?

    Kind Regards
    Shaun

  19. sty

    If using Cisco, there’s no advantage to using LACP over standard EtherChannel.

    If you absolutely have to use LACP, then use it in the ON mode, not active. Active mode starts negotiation, which wastes precious seconds if you have a link failure. I can’t honestly call anything that doesn’t have sub-second failover an ‘HA’ system…

  20. Pierre

    We are using source MAC on the Cisco side and round robin on the NetApp side. It seems to work fine for us, and the balancing is done better, at least for the traffic leaving the controllers. We end up having a “non-symmetric” type of traffic, but it doesn’t really seem to matter.

    Anybody else doing the same thing?

  21. geertn

    @sty: I don’t really agree. When running port-channels between switches, I have sometimes found “active” to be better than “on”. True, active negotiation adds some delay. However, the goal of a port-channel is to keep the channel up and running at all times as individual links fail and recover.

    I have seen extra packet loss when both sides are “on”: if one side is faster than the other (i.e., 6500 vs. 3750), with “on”, one side might start forwarding before the other side is ready, resulting in 0.5 sec of packet loss. With negotiation, they will only start forwarding when both sides are ready and negotiation is successful; result: 0 packet loss when a link bounces.

    Also, “on” is dangerous if the other side is swapped or changed and someone “forgets” to configure the interface. It will come up, and the other side will start forwarding, often resulting in 50% packet loss.

  22. Mark

    Here’s one more working config – thanks everyone

    NetApp vif config:

    netapparray1> vif create lacp vif0 -b ip e0a e0b e0c e0d
    netapparray1> ifconfig vif0 172.27.1.150 netmask 255.255.0.0 partner vif0 nfo
    netapparray1> route add default 172.27.0.1 1

    netapparray2> vif create lacp vif0 -b ip e0a e0b e0c e0d
    netapparray2> ifconfig vif0 172.27.1.160 netmask 255.255.0.0 partner vif0 nfo
    netapparray2> route add default 172.27.0.1 1

    Cisco PortChannel config:

    interface Port-channel 50
    description NetApp EtherChannel 1
    switchport access vlan 1
    switchport mode access

    interface Port-channel 60
    description NetApp EtherChannel 2
    switchport access vlan 1
    switchport mode access

    interface range Gi3/29 - 32
    description netapparray1
    switchport mode access
    speed 1000
    duplex full
    channel-protocol lacp
    channel-group 50 mode active

    interface range Gi3/33 - 36
    description netapparray2
    switchport mode access
    speed 1000
    duplex full
    channel-protocol lacp
    channel-group 60 mode active

  23. Matthew

    Hello Everyone,

    2 Questions in Regard:

    1. In the vif0 configuration how do you enable jumbo frames on the Netapp?

    2. Management:

    E0A MANAGEMENT:
    Originally I set up the N3300 with a separate non-routable data network on E0a and E0b; this is how I have done other installations. During one of our support sessions, Charlie did some research and found out that in order to use the IBM System Manager and FilerView you must assign a routable IP to E0a or E0b.

    My expectation is, like that of other products, that ALL management traffic travel over a separate management network. In the case of the N3300 this is the BMC. Charlie informed me that only SSH works through the BMC, my experience was exactly that.

    THE CONTRADICTION:
    This is against NetApp’s published best practice of “separating IP based storage traffic from public IP network traffic by implementing separate physical network segments or VLAN segments”. Why is it that using the IBM/NetApp graphical management tools requires a contradiction of best practices? The separated data network is also meant to be a non-routable network; again, this best practice contradicts the requirement of the management tools provided with the N3300.

    QUESTION:
    How can one manage the N3300 while separating the data network?

    ANSWER:
    The only answer I can come up with is separate VLANs on the same physical ports. However, I do not agree with this approach.

    REFERENCE:
    NETAPP TECHNICAL REPORT
    NetApp and VMware Virtual Infrastructure 3
    Storage Best Practices
    http://www.netapp.com/us/library/technical-reports/tr-3428.html
    page 18 for iSCSI, section 4.6
    page 30 for NFS, section 5.5

    Thank you,
    Matthew

  24. syed

    Hi,
    Regarding a NetApp FAS2240 and Cisco 3560 switches: I don’t have the switches stacked. CS31 & CS32 are connected through EtherChannel – a small network. I’m thinking about connecting two ports from each controller to CS31 and CS32 through EtherChannel.
    Here is the config going to CS31; please advise if something is wrong.
    Config:
    Interface Port-channel4
    description FAS2240 etherchannel to CS31
    switchport access vlan 10
    switchport mode access

    interface GigabitEthernet0/8
    description FAS2240-Contr-ONE->CS31
    switchport access vlan 10
    switchport mode access
    channel-protocol lacp
    channel-group 1 mode on

    interface GigabitEthernet0/9
    description FAS2240-Contr-ONE->CS31
    switchport access vlan 10
    switchport mode access
    channel-protocol lacp
    channel-group 1 mode on

  25. Leon

    Hmmm, channel-group 1 but interface Port-channel 4?

    I think it should be 1 in both instances, or of course 4 in both.
