Cisco Link Aggregation and NetApp VIFs

Network Appliance storage systems support the use of virtual interfaces (VIFs) to provide link redundancy and improved network throughput.  Two types of VIFs are available:

  • Single-mode VIFs act like a fault tolerant team and will fail traffic over to a standby link when the active link goes down.
  • Multi-mode VIFs act like a group of links providing aggregate bandwidth as well as link redundancy.

Single-mode VIFs are great for fault tolerance, but the storage system isn’t leveraging all the links.  It’s an “active-passive” arrangement in which only one of the links passes traffic while the other link sits idle.  No switch support is required for this configuration.

Multi-mode VIFs, on the other hand, provide both greater bandwidth utilization and fault tolerance.  Traffic is distributed across all the links in the VIF (typically based on IP address), and if one link fails the traffic is redistributed across the remaining links.  However, this configuration requires support on the switch.  In this article, we’re going to look at configuring a Cisco Catalyst 3560 switch to do link aggregation with a NetApp storage system running Data ONTAP.

To configure the switch, we’ll use the following commands (these are entered in global configuration mode on the switch):

s3(config)#int port-channel1
s3(config-if)#description Multi-mode VIF for netapp1
s3(config-if)#int gi0/23
s3(config-if)#channel-group 1 mode on
s3(config-if)#int gi0/24
s3(config-if)#channel-group 1 mode on

This creates the port-channel1 interface (you may need to increment that number, i.e., use port-channel2 or port-channel3, if you already have existing link aggregates configured) and adds interfaces GigabitEthernet0/23 and GigabitEthernet0/24 to the link aggregate.  If you do have to use a different link aggregate interface, be sure the number of the interface (“int port-channel4”) matches the number of the channel-group specified on the member interfaces (“channel-group 4 mode on”).  This seems obvious, but it’s worth mentioning nevertheless.
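If you want to confirm that both ports actually joined the aggregate, you can check the EtherChannel status from privileged mode (the exact output format varies by IOS version):

s3#show etherchannel 1 summary

In the summary output, the port-channel should be flagged as in use (“SU”) and each member port as bundled (“P”); a member flagged standalone (“I”) or down (“D”) did not join the aggregate.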

Be aware that Data ONTAP’s multi-mode VIFs are only compatible with static 802.3ad link aggregation; you can’t use PAgP (Cisco proprietary protocol).  I would assume dynamic LACP is also incompatible.  For this reason we used the “channel-group 1 mode on” statement instead of something like “channel-group 1 mode desirable”.

Many Cisco switches default to MAC address-based load balancing across the links, whereas NetApp defaults to IP address-based load balancing.  To see the switch’s current load balancing configuration, use this command in privileged mode:

s3#show etherchannel load-balance

To change the switch’s load balancing algorithm to a mode compatible with NetApp’s, use the following command in global configuration mode (note that changing it affects the entire switch; you can’t change it for a single port-channel individually):

s3(config)#port-channel load-balance src-dst-ip
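After making the change, you can re-run the earlier command to verify it took effect (the output wording varies by platform and IOS version, but it should now indicate IP-based balancing):

s3#show etherchannel load-balance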

Once the switch is configured, we can proceed with configuring the NetApp storage system.  The following commands will create the multi-mode VIF (this can also be done via the FilerView GUI):

netapp1>vif create multi vif0 e6d e7d
netapp1>ifconfig vif0 <ip-address> netmask <netmask>
netapp1>ifconfig vif0 up

This creates the VIF with interfaces e6d and e7d as members, plumbs it with an IP address, and brings it up.  Running the command “vif status vif0” now will return results similar to the following:

default: transmit 'IP Load balancing', VIF Type 'multi_mode', fail 'log'
vif0: 2 links, transmit 'IP Load balancing', VIF Type 'multi-mode' fail 'default'
VIF Status Up Addr_set
e6d: state up, since 05Oct2001 17:17:15 (05:23:05)
mediatype: auto-1000t-fd-up
flags: enabled
input packets 2000, input bytes 12800
output packets 173, output bytes 1345
up indications 1, broken indications 0
drops (if) 0, drops (link) 0
indication: up at boot
consecutive 3, transitions 1
e7d: state up, since 05Oct2001 17:18:03 (00:10:03)
mediatype: auto-1000t-fd-up
flags: enabled
input packets 134, input bytes 987
output packets 20, output bytes 156
up indications 1, broken indications 0
drops (if) 0, drops (link) 0
indication: broken

Note the ‘IP Load balancing’ algorithm stated in the output; this is why the switch’s load-balancing mechanism should be changed to match.
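As an aside, Data ONTAP lets you state the VIF’s load-balancing method explicitly when creating the VIF via the -b option to “vif create” (valid choices are ip, mac, and rr for round-robin).  The following is equivalent to the earlier creation command, just with the IP-based balancing spelled out:

netapp1>vif create multi -b ip vif0 e6d e7d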

At this point, the links should be up between the Cisco switch and the NetApp storage system, and traffic should be passing to and from the storage system without any problems.  To test the fault tolerance, we can pull one of the links in the VIF; traffic should continue to flow with very little, if any, interruption.  And while traffic from a single client to the NetApp won’t see a significant increase in throughput, the overall throughput of multiple separate clients to the NetApp should improve with multiple links in the VIF.
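One simple way to simulate a link failure from the switch side, without physically pulling a cable, is to administratively shut down one of the member ports and then watch the VIF’s status on the filer (substitute your own interface names):

s3(config)#int gi0/23
s3(config-if)#shutdown

netapp1>vif status vif0

The output should show that member link as down while the VIF itself remains up; a “no shutdown” on the switch port returns the link to the aggregate.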

More information, including additional Cisco configs, is available here.



  1. james’s avatar

    “By default, many Cisco switches default to MAC address-based load balancing across the links, whereas NetApp defaults to IP address-based load balancing. To see the switch’s current load balancing configuration, use this command in privileged mode”

    Is this required?  Without it, surely they would both send frames to each other correctly but just choose different algorithms to decide which interface to use.

  2. Eric Grancher’s avatar

    good morning,
    thank you for your blog, always very interesting.
    For once, I can contribute: LACP is actually supported by Data Ontap as of 7.2.1
    see page 23 (following the document page count) or 31 (following the number of pages in the document)
    PS: we do not use it for now, not tested

  3. slowe’s avatar


    I have seen VIFs working when the load balancing algorithms don’t match between the storage system and the switch, but for optimal performance they should match.


    Good information! I’ll have a look at that and will likely update this posting based on that information.

  4. Raj’s avatar

    Does load-balancing have to match on both ends when you configure EtherChannel?  The reason I ask is we have an L3 switch with a trunk to the NetApp, and the same L3 switch trunks to an L2 switch which does load-balance src-dst-mac.

    So I am wondering if the trunk and EtherChannel to the Layer 2 switch will work as expected?


  5. slowe’s avatar


    As far as I am aware, the load-balancing option does need to match and the load balancing option is set for the entire switch. I don’t think it can be set on a per-port channel basis. However, I’ll leave that to any Cisco expert readers to answer definitively.

  6. Bziel’s avatar

    There is a bug in Data ONTAP 7.2.2 (Bug ID 221335):
    LACP does not work over VLANs:

    Links on the filer can be aggregated using vifs. Multimode vifs can be enabled
    to run LACP to dynamically monitor the vif. Enabling LACP on a vif that has
    VLANs running GVRP enabled caused the links to drop all LACP frames which
    results in the vif being down.

    If no trunk is necessary, use non-trunking mode:
    vif create lacp cisco -b ip e0b e3b
    ifconfig cisco netmask –wins

    On the cisco site:
    interface Port-channel20
    switchport access vlan 13
    switchport mode access
    flowcontrol receive desired
    spanning-tree portfast

    interface GigabitEthernet1/0/48
    switchport access vlan 13
    switchport mode access
    flowcontrol receive desired
    no cdp enable
    channel-protocol lacp
    channel-group 20 mode active
    spanning-tree portfast

    interface GigabitEthernet2/0/48
    switchport access vlan 13
    switchport mode access
    flowcontrol receive desired
    no cdp enable
    channel-protocol lacp
    channel-group 20 mode active
    spanning-tree portfast

  7. Ross’s avatar

    There are a few questions regarding load-balancing with MAC-address vs IP, so hopefully the following is useful.

    In a simple, flat (single subnet) network MAC vs IP load-balancing should produce almost identical results as there will be one MAC address per IP address.

    If multiple subnets/VLANs are used, traffic for non-local subnets will all map to one (or a small number of) MAC addresses used by the default gateway for the local subnet, which results in the poor performance mentioned by some posters.

    So, in summary, use IP load balancing at both ends to avoid any unexpected issues, either during the initial deployment or as your network grows and additional subnets are added.

  8. slowe’s avatar

    Good information, Ross, thanks for sharing that. I was already aware of that, but I’m sure there are other readers that may not have realized this particular interaction.

  9. Sriram’s avatar

    Hi All,

    Most of Cisco’s older-generation switches, including the 3550, support MAC (source/destination) based load balancing. However, newer models with the latest IOS support both MAC and IP based load balancing. Please also note that on some models, EtherChannel load balancing needs to be enabled explicitly.

    Also note that the load balancing method can’t be set per channel. It’s common for all the channels.

    I have configured multiple EtherChannels for numerous NetApp filers. But the network architect who designed the solution did not request any specific load balancing method, essentially making the balancing method mismatched on both sides. But still, the filers are working. (I foresee some issues when the traffic increases in the future; we will have some packet drops.) My suggestion to initiate a discussion on this was not given much weight.

    To conclude, the load balancing method on both sides need not match to make the channel work.

    Best Regards

  10. sty’s avatar

    The load-balancing algorithms don’t need to match, and if you’re running a layered approach on your network (access-distribution-core) in the DC, then you’re supposed to vary the algorithms to avoid polarizing the traffic paths.

    Also, on good Cisco switches, you can use Layer 4 (src-dst-port) balancing for even better granularity than Layer 3 (src-dst-ip). L3 usually gets you around a 20/80 balance; L4 has the possibility to give 40/60.

    On a layered network structure, you use L3 in the core, L4 in distribution, and L3 again in access.

    (config)#mls ip cef load-sharing full
    full load balancing algorithm to include L4 ports

    (config)#port-channel load-balance src-dst-port
    src-dst-port Src XOR Dst TCP/UDP Port

  11. zakaria mehkri’s avatar

    What is the hash algorithm on NetApp filers when using load balancing?

    Do we have to use XOR to calculate which IP addresses will cause the traffic to use different ports?

  12. Chris Waltham’s avatar

    Hi Scott,

    Am I missing something? You note that you’re using the command “channel-group 1 mode on” in order to force static 802.3ad, but in the Cisco documentation I’m reading, it says that “mode on” means “Enable EtherChannel only.”

    This is my portgroup; I’m wondering if it’s LACP or PAgP? :( It goes to a NetApp filer, after all:

    server-1#show int Po8
    Port-channel8 is up, line protocol is up (connected)
    Hardware is EtherChannel, address is 000e.836c.5460 (bia 000e.836c.5460)


  13. mon’s avatar

    this suckzzz….

  14. slowe’s avatar

    Chris Waltham, “mode on” means turn on link aggregation and don’t negotiate. There’s a separate command (“channel-protocol lacp”) that specifies if it is LACP or Cisco’s EtherChannel.

    Mon, can you elaborate further? That’s a bit of a vague statement…

  15. Mark’s avatar

    I see this is over a year old, but in case anyone else comes across this like I did:
    slowe is incorrect in his configuration.
    The command “channel-group x mode on” initiates EtherChannel only.
    The command “channel-protocol lacp” is an optional command, and not required for this operation.

    Minimal configuration in order to set up a LACP bundle is:
    interface Port-channelx
    description [description]
    no shut

    (on each pair of netapp connected interfaces for that vif)
    channel-group x mode active

  16. Renato’s avatar

    Hi Scott,

    It is a great post, thank you. I am using a Cisco 3560 on my network. I see you set up the VIFs across a single switch. If I want to set up a redundant switch, do I need to link both switches via a crossover cable, or set up ports on both switches to interconnect them?

    Many thanks
