Interoperability


In this post, I’m going to show you how I combined Linux network namespaces, VLANs, Open vSwitch (OVS), and GRE tunnels to do something interesting. Well, I found it interesting, even if no one else does. However, I will provide this disclaimer up front: while I think this is technically interesting, I don’t think it has any real, practical value in a production environment. (I’m happy to be proven wrong, BTW.)

This post builds on information I’ve provided in previous posts:

It may pull pieces from a few other posts, but the bulk of the info is found in these. If you haven’t already read these, you might want to take a few minutes and go do that—it will probably help make this post a bit more digestible.

After working a bit with network namespaces—and knowing that OpenStack Neutron uses network namespaces in certain configurations, especially to support overlapping IP address spaces—I wondered how one might go about integrating multiple network namespaces into a broader configuration using OVS and GRE tunnels. Could I use VLANs to multiplex traffic from multiple namespaces across a single GRE tunnel?

To test my ideas, I came up with the following design:

As you can see in the diagram, my test environment has two KVM hosts. Each KVM host has a network namespace and a running guest domain. Both the network namespace and the guest domain are connected to an OVS bridge; the network namespace via a veth pair and the guest domain via a vnet port. A GRE tunnel between the OVS bridges connects the two hosts.

The idea behind the test environment was that the VM on one host would communicate with the veth interface in the network namespace on the other host, using VLAN-tagged traffic over a GRE tunnel between them.

Let’s walk through how I built this environment to do the testing.

I built KVM Host 1 using Ubuntu 12.04.2, and installed KVM, libvirt, and OVS. On KVM Host 1, I built a guest domain, attached it to OVS via a libvirt network, and configured the VLAN tag for its OVS port with this command:

ovs-vsctl set port vnet0 tag=10

In the guest domain, I configured the OS (also Ubuntu 12.04.2) to use the IP address 10.1.1.2/24.

Also on KVM Host 1, I created the network namespace, created the veth pair, moved one of the veth interfaces, and attached the other to the OVS bridge. This set of commands is what I used:

ip netns add red
ip link add veth0 type veth peer name veth1
ip link set veth1 netns red
ip netns exec red ip addr add 10.1.2.1/24 dev veth1
ip netns exec red ip link set veth1 up
ovs-vsctl add-port br-int veth0
ovs-vsctl set port veth0 tag=20

Most of the commands listed above are taken straight from the network namespaces article I wrote, but let’s break them down anyway just for the sake of full understanding:

  • The first command adds the “red” namespace.
  • The second command creates the veth pair, creatively named veth0 and veth1.
  • The third command moves veth1 into the red namespace.
  • The next two commands add an IP address to veth1 and set the interface to up.
  • The last two commands add the veth0 interface to an OVS bridge named br-int, and then set the VLAN tag for that port to 20.
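If you want to sanity-check the result at this point, a few read-only commands (just a sketch; the names match those used above) will confirm both sides of the veth pair:

ip netns exec red ip addr show veth1
ovs-vsctl show
ovs-vsctl list port veth0

The first command should show veth1 inside the red namespace with its 10.1.2.1/24 address; the last should show tag set to 20 on the veth0 port.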

When I’m done, I’m left with KVM Host 1 running a guest domain on VLAN 10 and a network namespace on VLAN 20. (Do you see how I got there?)

I repeated the process on KVM Host 2, installing Ubuntu 12.04.2 with KVM, libvirt, and OVS. Again, I built a guest domain (also running Ubuntu 12.04.2), configured the operating system to use the IP address 10.1.2.2/24, attached it to OVS via a libvirt network, and configured its OVS port:

ovs-vsctl set port vnet0 tag=20

Similarly, I also created a new network namespace and pair of veth interfaces, but I configured them as a “mirror image” of KVM Host 1, reversing the VLAN assignments for the guest domain (as shown above) and the network namespace:

ip netns add blue
ip link add veth0 type veth peer name veth1
ip link set veth1 netns blue
ip netns exec blue ip addr add 10.1.1.1/24 dev veth1
ip netns exec blue ip link set veth1 up
ovs-vsctl add-port br-int veth0
ovs-vsctl set port veth0 tag=10

That leaves me with KVM Host 2 running a guest domain on VLAN 20 and a network namespace on VLAN 10.

The final step was to create the GRE tunnel between the OVS bridges. Once the GRE tunnel was established, I configured the GRE port to be a VLAN trunk using this command (the command was necessary on both KVM hosts):

ovs-vsctl set port gre0 trunks=10,20,30
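For reference, the GRE tunnel itself was built following the earlier GRE tunnel post; a minimal sketch of that step (the remote IP address is purely an assumption for this lab) would look something like this on each host, pointed at the other host’s address:

ovs-vsctl add-port br-int gre0 -- set interface gre0 type=gre options:remote_ip=192.168.1.12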

So I now had the environment I’d envisioned for my testing. VLAN 10 had a guest domain on one host and a veth interface on the other; VLAN 20 had a veth interface on one host and a guest domain on the other. Between the two hosts was a GRE tunnel configured to act as a VLAN trunk.

Now came the critical test—would the guest domain be able to ping the veth interface? This screen shot shows the results of my testing; this is the guest domain on KVM Host 1 communicating with the veth1 interface in the separate network namespace on KVM Host 2:

Success! Although not shown here, I tested all the other combinations as well, and they worked. (Note that you’d have to use ip netns exec <namespace> ping … to ping from the veth1 interface in the network namespace.) I now had a configuration where I could integrate multiple network namespaces with GRE tunnels and OVS. Unfortunately—and this is where the whole “technically interesting but practically useless” statement comes from—this isn’t really a usable configuration:

  • The VLAN configurations were manually applied to the OVS ports; this means they disappear if the guest domains are power-cycled. (This could be fixed using libvirt portgroups, but I hadn’t bothered with building them in this environment.)
  • The GRE tunnel had to be manually established and configured.
  • Because this solution uses VLAN tags inside the GRE tunnel, you’re still limited to roughly 4,096 separate networks/network namespaces (the ceiling of the 12-bit VLAN ID space).
  • The entire process was manual. If I needed to add another VLAN, I’d have to manually create the network namespace and veth pair, manually move one of the veth interfaces into the namespace, manually add the other veth interface to the OVS bridge, and manually update the GRE tunnel to trunk that VLAN. Not very scalable, IMHO.

However, the experiment was not a total loss. In figuring out how to tie together network namespaces and tunnels, I’ve gotten a better understanding of how all the pieces work. In addition, I have a lead on an even better way of accomplishing the same task: using OpenFlow rules and tunnel keys. This is the next area of exploration, and I’ll be sure to post something when I have more information to share.

In the meantime, feel free to share your thoughts and feedback on this post. What do you think—technically interesting or not? Useful in a real-world scenario or not? All courteous comments (with vendor disclosure, where applicable) are welcome.


I like to spend time examining the areas where different groups of technologies intersect. Personally, I find this activity fascinating, and perhaps that’s the reason that I find myself pursuing knowledge and experience in virtualization, networking, storage, and other areas simultaneously—it’s an effort to spend more time “on the border” between various technologies.

One border, in particular, is very interesting to me: the border between virtualization and networking. Time spent thinking about the border between networking and virtualization is what has generated posts like this one, this one, or this one. Because I’m not a networking expert (yet), most of the stuff I generate is junk, but at least it keeps me entertained—and it occasionally prods the Really Smart Guys (RSGs) to post something far more intelligent than anything I can create.

Anyway, I’ve been thinking more about some of these networking-virtualization chimeras, and I thought it might be interesting to talk about them, if for no other reason than to encourage the RSGs to correct me and help everyone understand a little better.

As an aside, a chimera was a mythological fire-breathing creature that was part lion, part goat, and part serpent; more generically, the word refers to any sort of organism that has two groups of genetically distinct cells. In layman’s terms, it’s something that is a mix of two other things.

Here are some of the networking-virtualization chimeras I’ve concocted:

  • FabricPath/TRILL on the hypervisor: See this blog post for more details. It turns out, at least at first glance, that this particular combination doesn’t seem to buy us much. The push for large L2 domains that seemed to fuel FabricPath and TRILL now seems to be abating in favor of network overlays and L3 routing.

  • MPLS-in-IP on the hypervisor: I also wrote about this strange concoction here. At first, I thought I was being clever and sidestepping some issues by bringing MPLS support into the hypervisor, but in thinking more about this I realize I’m wrong. Sure, we could encapsulate VM-to-VM traffic into MPLS, then encapsulate MPLS in UDP, but how is that any better than just encapsulating VM-to-VM traffic in VXLAN? It isn’t. (Not to mention that Ivan Pepelnjak set the record straight.)

  • LISP on the hypervisor: I thought this was a really good idea; by enabling LISP on the hypervisor and essentially making the hypervisor an ITR/ETR (see here for more LISP info), inter-DC vMotion becomes a snap. Want to use a completely routed access layer? No problem. Of course, that assumes all your WAN and data center equipment are LISP-capable and enabled/configured for LISP. I’m not the only one who thought this idea was cool, either. I’m sure there are additional problems/considerations of which I’m not aware, though—networking gurus, want to chime in and educate me on what I’m missing?

  • OTV on the hypervisor: This one isn’t really very interesting, as it bears great similarity to VXLAN (both OTV and VXLAN, to my knowledge, use very similar frame formats and encapsulation schemes). Is there something else here I’m missing?

  • VXLAN on physical switches: This one is interesting, even necessary according to some experts. Enabling VXLAN VTEP (VXLAN Tunnel End Point) termination on physical switches might also address some of the odd traffic patterns that would result from the use of VXLAN (see here for a simple example). Arista Networks demonstrated this functionality at VMworld 2012 in San Francisco, so this particular networking-virtualization mashup is probably closer to reality than any of the others.

  • OpenFlow on the hypervisor: Open vSwitch (OVS) already supports OpenFlow, so you might say that this mashup already exists. It’s not unreasonable to think Nicira might port OVS to VMware vSphere, which would bring an OpenFlow-compatible virtual switch to a much larger installed base. The missing piece is, of course, an OpenFlow controller. While an interesting mental exercise, I’m keenly interested to know what sort of real-world problems this might help solve, and would love to hear from any OpenFlow experts out there what they think.

  • Virtualizing physical switches: No, I’m not talking about running switch software on the hypervisor (think Nexus 1000V). Instead, I’m thinking more along the lines of FlowVisor, which in effect virtualizes a switch’s control plane so that multiple “slices” of a switch can be independently controlled by an external OpenFlow controller. If you’re familiar with NetApp, think of their “vfiler” construct, or think of the Virtual Device Contexts (VDCs) in a Nexus 7000. However, I’m thinking of something more device-independent than Nexus 7000 VDCs. As more and more switches move to x86 hardware, this seems like it might be something that could really take off. Multi-tenancy support (each “virtual switch instance” being independently managed), traffic isolation, QoS, VLAN isolation…lots of possibilities exist here.

Are there any other groupings that are worth exploring or discussing? Any other “you got your virtualization peanut butter in my networking chocolate” combinations that might help address some of the issues in data centers today? Feel free to speak up in the comments below. Courteous comments are invited and encouraged.


A short while ago, I talked about how to add client-side encryption to Dropbox using EncFS. In that post, I suggested using BoxCryptor to access your encrypted files. A short time later, though, I uncovered a potential issue with (what I thought to be) BoxCryptor. I have an update on that issue.

In case you haven’t read the comments to the original BoxCryptor-Markdown article, it turns out that the problem with using Markdown files with BoxCryptor doesn’t lie with BoxCryptor—it lies with Byword, the Markdown editor I was using on iOS. Robert, founder of BoxCryptor, suggested that Byword doesn’t properly register the necessary handlers for Markdown files, and that’s why BoxCryptor can’t preview the files or use “Open In…” functionality. On his suggestion, I tried Textastic.

It works flawlessly. I can preview Markdown files in the iOS BoxCryptor client, then use “Open In…” to send the Markdown files to Textastic for editing. I can even create new Markdown files in Textastic and then send them to BoxCryptor for encrypted upload to Dropbox (where I can, quite naturally, open them using my EncFS filesystem on my Mac systems). Very nice!

If you are thinking about using EncFS with Dropbox and using BoxCryptor to access those files from iOS, and those files are text-based files (like Markdown, plain text, HTML, and similar file formats), I highly recommend Textastic.


About a week ago, I published an article showing you how to use EncFS and BoxCryptor to provide client-side encryption of Dropbox data. After working with this configuration for a while, I’ve run across a problem (at least, a problem for me—it might not be a problem for you). The problem lies on the iPad end of things.

If you haven’t read the earlier post, the basic gist of the idea is to use EncFS—an open source encrypting file system—and OSXFUSE to provide file-level encryption of Dropbox data on your OS X system. This is client-side encryption where you are in the control of the encryption keys. To access these encrypted files from your iPad, you’ll use the BoxCryptor iOS client, which is compatible with EncFS and decrypts the files.

Sounds great, right? Well, it is…mostly. The problem arises from the way that the iPad handles files. BoxCryptor uses the built-in document preview functionality of iOS, which in turn allows you to access the iPad’s “Open In…” functionality. The only way to get to the “Open In…” menu is to first preview the document using the iOS document preview feature. Unfortunately, the iOS document preview functionality doesn’t recognize a number of file types. Most notably for me, it doesn’t recognize Markdown files (I’ve tried several different file extensions and none of them seem to work). Since the preview feature doesn’t recognize Markdown, I can’t get to “Open In…” to open the documents in Byword (an iOS Markdown editor), and so I’m essentially unable to access my content.

To see if this was an iOS-wide problem or a problem limited to BoxCryptor, I tested accessing some non-encrypted files using the Dropbox iOS client. The Dropbox client will, at least, render Markdown and OPML files as plain text. The Dropbox iOS client still does not, unfortunately, know how to get the Markdown files into Byword. I even tried a MindManager mind map; the Dropbox client couldn’t preview it (not surprisingly), but it did give me the option to open it in the iOS version of MindManager. The BoxCryptor client also worked with a mind map, but refused to work with plain text-based files like Markdown and OPML.

Given that I create the vast majority of my content in Markdown, this is a problem. If anyone has any suggestions, I’d love to hear them in the comments. Otherwise, I’ll post more here as soon as I learn more or find a workaround.


A few days ago, I posted an article about using VLANs with Open vSwitch (OVS) and libvirt. In that article, I stated that libvirt 1.0.0 was needed. Unfortunately (or maybe fortunately?), I was wrong. The libvirt-OVS integration works in earlier versions, too.

In my earlier OVS-libvirt post, I supplied this snippet of XML to define a libvirt network that you could use with OVS:

You would use that libvirt network definition in conjunction with a domain configuration like this:

I just tested this on Ubuntu 12.04.1 with libvirt 0.10.2, and it worked just fine. I think the problems I experienced in my earlier testing were all related to incorrect XML configurations. Some other write-ups of the libvirt-OVS integration seem to imply that you need to use the profileid parameter to link the domain XML configuration and the network XML configuration, but my testing seems to show that the real link is the portgroup name.
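To make that linkage concrete, here’s a minimal sketch (the network, bridge, and portgroup names are placeholders, not the original listings): the portgroup defined in the network XML is referenced by name from the domain’s <source> element.

<!-- network XML: a portgroup named "vlan-10" -->
<portgroup name='vlan-10'>
  <vlan>
    <tag id='10'/>
  </vlan>
</portgroup>

<!-- domain XML: the portgroup attribute matches that name -->
<interface type='network'>
  <source network='ovs-network' portgroup='vlan-10'/>
  <model type='virtio'/>
</interface>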

I haven’t yet tested this on earlier versions of libvirt, but I can confirm that it works with Ubuntu 12.04 using manually compiled versions of libvirt 0.10.2 and libvirt 1.0.0.

If anyone has any additional information to share, please speak up in the comments.


In previous posts, I’ve shown you how to use Open vSwitch (OVS) with VLANs through fake bridges, as well as how to wrap libvirt virtual networks around OVS fake bridges. Both of these techniques are acceptable for configuring VLANs with OVS, but in this post I want to talk about using VLANs with OVS via a greater level of libvirt integration. This has been talked about elsewhere, but I wasn’t able to make it work until libvirt 1.0.0 was released. (Update: I was able to make it work with an earlier version. See here.)

First, let’s recap what we know so far. If you know the port to which a particular domain (guest VM) is connected, you can configure that particular port as a VLAN trunk like this:

ovs-vsctl set port <port name> trunks=10,11,12

This configuration would pass the VLAN tags for VLANs 10, 11, and 12 all the way up to the domain, where—assuming the OS installed in the domain has VLAN support—you could configure network connectivity appropriately. (I hope to have a blog post up on this soon.)
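As a hedged aside, inside a Linux guest with 802.1Q support, consuming that trunk generally means creating VLAN subinterfaces; the interface name and address below are purely assumptions for illustration (and the 8021q module may need to be loaded first):

ip link add link eth0 name eth0.10 type vlan id 10
ip addr add 192.168.10.10/24 dev eth0.10
ip link set eth0.10 up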

Along the same lines, if you know the port to which a particular domain is connected, you could configure that port as a VLAN access port with a command like this:

ovs-vsctl set port <port name> tag=15

This command makes the domain a member of VLAN 15, much like the use of the switchport access vlan 15 command on a Cisco switch. (I probably don’t need to state that this isn’t the only way—see the other OVS/VLAN related posts above for more techniques to put a domain into a particular VLAN.)

These commands work perfectly fine and are all well and good, but there’s a problem here—the VLAN information isn’t contained in the domain configuration. Instead, it’s in OVS, attached to an ephemeral port—meaning that when the domain is shut down, the port and the associated configuration disappear. What I’m going to show you in this post is how to use VLANs with OVS in conjunction with libvirt for persistent VLAN configurations.

This document was written using Ubuntu 12.04.1 LTS and Open vSwitch 1.4.0 (installed straight from the Precise Pangolin repositories using apt-get). Libvirt was compiled manually (see instructions here). Due to some bugs, it appears you need at least version 1.0.0 of libvirt. Although the Silicon Loons article I referenced earlier mentions an earlier version of libvirt, I was not able to make it work until the 1.0.0 release. Your mileage may vary, of course—I freely admit that I might have been doing something wrong in my earlier testing.

To make VLANs work with OVS and libvirt, two things are necessary:

  1. First, you must define a libvirt virtual network that contains the necessary portgroup definitions.
  2. Second, you must include the portgroup reference to the virtual network in the domain (guest VM) configuration.

Let’s look at each of these steps.

Creating the Virtual Network

The easiest way I’ve found to create the virtual network is to craft the network XML definition manually, then import it into libvirt using virsh net-define.

Here’s some sample XML code (I’ll break down the relevant parts after the code):
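A representative sketch along these lines (the network name here is a placeholder; the bridge and portgroup names match the breakdown below) shows the general shape of the definition:

<network>
  <name>ovs-network</name>
  <forward mode='bridge'/>
  <bridge name='ovsbr0'/>
  <virtualport type='openvswitch'/>
  <portgroup name='vlan-01'/>
  <portgroup name='vlan-10'>
    <vlan>
      <tag id='10'/>
    </vlan>
  </portgroup>
  <portgroup name='vlan-20'>
    <vlan>
      <tag id='20'/>
    </vlan>
  </portgroup>
  <portgroup name='vlan-all'>
    <vlan trunk='yes'>
      <tag id='10'/>
      <tag id='20'/>
    </vlan>
  </portgroup>
</network>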

The key takeaways from this snippet of XML are:

  1. First, note that the OVS bridge is specified as the target bridge in the <bridge name=...> element. You’ll need to edit this as necessary to match your specific OVS configuration. For example, in my configuration, ovsbr0 refers to a bridge that handles only host management traffic.
  2. Second, note the <portgroup name=...> element. This is where the “magic” happens. Note that you can have no VLAN element (as in the vlan-01 portgroup), a VLAN tag (as in the vlan-10 or vlan-20 portgroups), or a set of VLAN tags to pass as a trunk (as in the vlan-all portgroup).

Once you’ve got the network definition in the libvirt XML format, you can import that configuration with virsh net-define <XML filename>. (Prepend this command with sudo if necessary.)

After it is imported, use virsh net-start <network name> to start the libvirt virtual network. If you make changes to the virtual network, such as adding or removing portgroups, be sure to restart the virtual network using virsh net-destroy <network name> followed by virsh net-start <network name>.
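Assuming the sketch above were saved as ovs-network.xml (the filename and network name are placeholders), the full sequence would look something like this:

virsh net-define ovs-network.xml
virsh net-start ovs-network
# after changing the network definition (e.g., adding a portgroup):
virsh net-destroy ovs-network
virsh net-start ovs-network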

Now that the virtual network is defined, we can move on to creating the domain configuration.

Configuring the Domain Networking

As far as I’m aware, to include the appropriate network definitions in the domain XML configuration, you’ll have to edit the domain XML manually.

Here’s the relevant snippet of domain XML configuration:
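Again, this is a representative sketch rather than a definitive listing; the network and portgroup names are assumptions carried over from the network example above:

<interface type='network'>
  <source network='ovs-network' portgroup='vlan-10'/>
  <model type='virtio'/>
</interface>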

You’ll likely have more configuration entries in your domain configuration, but the important one is the <source network=...> element, where you’ll specify both the name of the network you created as well as the name of the portgroup to which this domain should be attached.

With this configuration in place, when you start the domain, it will pass the necessary parameters to OVS to apply the desired VLAN configuration automatically. In other words, once you define the desired configuration in the domain XML, it’s maintained persistently inside the domain XML (instead of on the ephemeral port in OVS), re-applied anytime the domain is started.

Verifying the Configuration

Once the appropriate configuration is in place, you can see the OVS configuration created by libvirt when a domain is started by simply using ovs-vsctl show or—for more detailed information—ovs-vsctl list port <port name>. Of particular interest when using ovs-vsctl list port <port name> are the tag and/or trunks values; these are where VLAN configurations are applied.

Summary

In this post, I’ve shown you how to create libvirt virtual networks that integrate with OVS to provide persistent VLAN configurations for domains connected to an OVS bridge. The key benefit that arises from this configuration is that you no longer need to know to which OVS port a given domain is connected. Because the VLAN configuration is stored with the domain and applied to OVS automatically when the domain is started, you can be assured that a domain will always be attached to the correct VLAN when it starts.

As usual, I encourage your feedback on this article. If you have questions, thoughts, corrections, or clarifications, you are invited to speak up in the comments below.


If you’ve been reading my site for any length of time, you know that I’ve been working with libvirt, the open source virtualization API, for a while. As part of my testing with libvirt, I’ve needed to stay on the “cutting edge” of libvirt’s development (so to speak), so I’ve been compiling my own version of libvirt from the source packages available from the libvirt HTTP server.

You can see some of the posts I’ve written about compiling libvirt:

Compiling libvirt 1.0.0 on Ubuntu 12.04 and 12.10
Compiling libvirt 0.10.1 on Ubuntu 12.04
Compiling libvirt 0.10.1 on CentOS 6.3

While these instructions work, there is an important consideration to keep in mind here: updating the system. When you compile and install your own binaries, the system’s package-management tools have no record of them and no way of applying updates to them. For example, on Ubuntu, this means that apt-get, aptitude, and Update Manager are not aware that a newer version of libvirt is present on the system. Thus, when you go to update the system—you do keep your system updated, right?—things can break. (Note that there are workarounds for apt-get and aptitude, but it does not appear that these workarounds also work with Update Manager.)

If you are considering following my instructions to compile your own binaries for libvirt, please be sure to keep this consideration in mind. This sort of “gotcha” might be fine for a small home-based lab like mine, but it probably isn’t the sort of issue you’d like to introduce into any sort of production (or quasi-production) environment.

Thanks to Theron Conrey for highlighting the significance of this concern. Be sure to check out Theron’s comment on this article for more information on a potential Ubuntu PPA that might help with this issue.


In case you hadn’t noticed late last week, the open source virtualization API project libvirt released version 1.0.0—a major milestone, obviously. To celebrate the 1.0.0 release, here are instructions for compiling the libvirt 1.0.0 release on Ubuntu 12.04 and 12.10.

(If you need instructions on compiling an earlier release of libvirt on Ubuntu, see this post. It works for libvirt 0.10.1 and 0.10.2 on Ubuntu 12.04 LTS; I haven’t yet tested it on Ubuntu 12.10.)

These instructions assume that you have built a plain-jane Ubuntu system (either 12.04 LTS or 12.10). I tested these instructions on an Ubuntu 12.04.1 LTS VM, freshly built and updated using apt-get update && apt-get upgrade; on a freshly built Ubuntu 12.10 VM (also up to date); and on a physical system running an up-to-date instance of Ubuntu 12.04.1 that also had KVM and an earlier version of libvirt installed. These instructions worked in all three cases.

Ready? Let’s get started!

  1. Download the libvirt-1.0.0.tar.gz tarball from the libvirt.org HTTP server.

  2. Extract the tarball using tar -xvzf libvirt-1.0.0.tar.gz.

  3. Using apt-get, ensure that the following packages are installed: gcc, make, pkg-config (already installed in my testing), libxml2-dev, libgnutls-dev, libdevmapper-dev, libcurl4-gnutls-dev, and python-dev. (Interestingly enough, earlier versions also required libyajl-dev, but this version didn’t seem to need it.) You can probably omit the libcurl4-gnutls-dev package if you don’t want ESX support; I did want ESX support, so I included it. (A single apt-get command covering these packages is sketched just after this list.)

  4. From within the extracted libvirt-1.0.0 directory, run the configure command. I specified particular directories because that’s where the prebuilt binaries from Ubuntu are normally placed. I’ve line-wrapped it here for readability:

     ./configure --prefix=/usr --localstatedir=/var \
       --sysconfdir=/etc --with-esx=yes

     The above command does not include Xen support; if you need Xen support, you’ll need to add --with-xen=yes to the command, and your list of prerequisite packages will change (there are additional packages you’ll need to install).

  5. Once the configure command completes, finish the process with make and make install. Note that I used sudo make install here because I’m installing files into privileged system locations:

     make
     sudo make install

  6. To verify that everything seems to be working as expected, restart libvirt with initctl stop libvirt-bin and initctl start libvirt-bin. (Yes, you could use initctl restart, but this allows you to see the clean stop and start of the libvirt daemon.) As an aside, note that this step only works if you either a) had a previous version of libvirt installed, or b) create the initctl job for libvirt yourself. I chose the first approach.

  7. As a final verification step, run virsh --version or libvirtd --version to verify that you’re running libvirt 1.0.0.
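For step 3, a single apt-get invocation along these lines should cover the prerequisites (assuming the package names listed above are available in your release’s repositories):

sudo apt-get install gcc make pkg-config libxml2-dev libgnutls-dev \
  libdevmapper-dev libcurl4-gnutls-dev python-dev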

That’s it! Now, as others have pointed out, this will create some potential system administration challenges, in that apt-get will still suggest new libvirt packages to install when you try to update the system. I think that there are some commands you can run to keep the manually compiled version instead of the version apt-get is suggesting, but I haven’t yet determined exactly which commands to use. (If you have more information, please speak up in the comments!)
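One possibility (offered as an untested sketch, not a verified fix) is to place a hold on the relevant package so the package manager stops trying to replace the hand-built files:

echo "libvirt-bin hold" | sudo dpkg --set-selections
# or, equivalently:
sudo apt-mark hold libvirt-bin

The libvirt-bin package name here is an assumption based on the stock Ubuntu packaging; adjust it to whatever packages your system actually has installed.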

I have a series of new libvirt-related blog posts in the works, so stay tuned for those. In the meantime, feel free to post any questions, corrections, or clarifications in the comments below. Courteous comments are always welcome.


In this post, I’ll be sharing with you information on how to do link aggregation (with LACP) and VLAN trunking on a Brocade FastIron switch with both VMware vSphere as well as Open vSwitch (OVS).

Throughout the majority of my career, my networking experience has been centered around Cisco’s products. You can easily tell that from looking at the articles I’ve published here. However, I’ve recently had the opportunity to spend some time working with a Brocade FastIron switch (a 48-port FastIron Edge X, specifically, running software version 7.2), and I wanted to write up what I’ve learned about how to do link aggregation and VLAN trunking in conjunction with both VMware vSphere as well as OVS.

Configuring Link Aggregation with LACP

When researching how to do link aggregation on a Brocade FastIron, I came across a number of different articles suggesting two different ways to configure link aggregation (ultimately I followed the information provided in this article and this article). I think that the difference in the configuration comes down to whether or not you want to use LACP, but I’m not completely sure. (If you’re a Brocade/Foundry expert, feel free to weigh in.)

To configure a link aggregate using LACP, use these commands:

  • You’ll use the link-aggregate configure key <unique key> command to identify which interfaces may participate in a given link aggregate. The key must range from 10000 to 65535, and has to be unique for each group of interfaces in a link aggregate bundle. The switch uses the key to identify which ports may be a part of a link aggregate.
  • You’ll use the link-aggregate active command to indicate the use of LACP for link aggregation configuration and negotiation.

For example, if you wanted to configure port 10 on a switch for link aggregation, the commands would look something like this:

switch(config)# interface ethernet 10
switch(config-if-e1000-10)# link-aggregate configure key 10000
switch(config-if-e1000-10)# link-aggregate active

For each additional port that should also belong in this same link aggregate bundle, you would repeat these commands and use the same key value. As I mentioned earlier, the identical key value is what tells the switch which interfaces belong in the same bundle.

Configuring the virtualization host is pretty straightforward from here:

  • If you are using vSphere, note that you’ll need vSphere 5.1 and a vSphere Distributed Switch (VDS) in order to use LACP. You’ll also need to set your teaming policy to “Route based on IP hash,” and then enable LACP in the settings for the uplink group. Chris Wahl has a nice write-up here, including a list of the caveats of using LACP with vSphere. VMware also has a KB article on the topic.
  • If you are using OVS, you can follow the instructions I provided in this post on link aggregation and LACP with Open vSwitch.
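For the OVS case, a minimal bond configuration looks something like the following (the bridge, bond, and NIC names are placeholders for illustration):

ovs-vsctl add-bond br0 bond0 eth1 eth2
ovs-vsctl set port bond0 lacp=active
ovs-vsctl set port bond0 bond_mode=balance-tcp
ovs-appctl bond/show bond0

The last command is purely informational; it reports the bond status so you can confirm that LACP negotiated successfully with the switch.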

Configuring VLANs

Although VLANs are (generally) interoperable between different switch vendors due to the broad adoption of the 802.1Q standard, the details of each vendor’s implementation of VLANs are just different enough to make life difficult. In this particular case, since I learned Cisco’s VLAN implementation first, Brocade’s VLAN implementation on the FastIron Edge X series switches seemed rather odd. I’m sure that had I learned Brocade’s implementation first, then Cisco’s version would seem odd.

In any case, the commands you use for VLANs are as follows:

  • To create a VLAN, use the vlan <VLAN identifier> command.
  • To add a port to that VLAN, so that traffic across that port is tagged for the specified VLAN, use the tagged ethernet <interface> command.
  • To add a range of ports to a VLAN, use the tagged ethernet <start interface> to <end interface> command.
  • To allow a port to carry both untagged (native, or default VLAN) and tagged traffic, you must use the dual-mode command. Otherwise, a port carries only untagged or tagged traffic. (This was a key difference in Brocade’s VLAN implementation that threw me off at first.)

So, if you wanted to create VLAN 30, add Ethernet interface 24 to that VLAN, and configure the interface to carry both tagged and untagged traffic, the commands would look something like this:

switch(config)# vlan 30 name server-subnet
switch(config-vlan-30)# tagged ethernet 24
switch(config-vlan-30)# interface ethernet 24
switch(config-if-e1000-24)# dual-mode

Once the VLANs are created and the interfaces are added to the VLANs, configuring the virtualization hosts is—once again—pretty straightforward.
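As a quick host-side sketch (the vnet0 port name and VLAN 30 are assumptions matching the switch example above), the OVS equivalent is the same tagging command used in my earlier OVS posts:

ovs-vsctl set port vnet0 tag=30

On vSphere, the corresponding step is simply setting the VLAN ID of the appropriate port group to 30.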

I hope this information is useful to someone. If anyone has any corrections or clarifications, I encourage you to add your information to the comments on this post.


You might have noticed that I’ve been writing a fair number of articles around libvirt, the open source virtualization API project. Here are some of the libvirt-related articles that I’ve posted over the last couple of months (and there are more in the works):

Working with KVM Guests
Compiling libvirt 0.10.1 on CentOS 6.3
Compiling libvirt 0.10.1 on Ubuntu 12.04
Wrapping libvirt Virtual Networks Around Open vSwitch Fake Bridges

While libvirt is pretty cool in and of itself (I really like the abstraction layer that libvirt provides between multiple hypervisors and management tools farther up the stack), one of the primary reasons that I’m spending some time working with and attempting to understand libvirt is because libvirt is a key component in the communication that occurs between the OpenStack Compute (Nova) software and the underlying hypervisors that are supported with OpenStack (especially KVM). In attempting to educate myself about OpenStack, I’ve decided to first start by making myself very familiar with a few key components, including libvirt, Open vSwitch, and KVM. This provides a foundation upon which I can build additional knowledge.

At least, that’s the general idea. As always, I’m open to constructive feedback—is it worth the time, or not? I’d love to hear from others who might be on a similar path of discovery and education. Feel free to share your thoughts in the comments below.

