KVM


This post describes a fix I found for an issue I had when booting KVM guest domains on the Ubuntu/KVM hypervisors in my home lab. I’d been struggling with this issue for quite some time, but only recently found what I believe to be the final fix for the problem.

First, allow me to provide a bit of background. Some time ago—I’d say around August 2012, when I left the vSpecialist team at EMC to join an OpenStack-focused team in another part of EMC—I moved my home lab over completely to Ubuntu 12.04 LTS with the KVM hypervisor. This was an important step in educating myself on Linux, KVM, libvirt, and Open vSwitch (OVS), all of which are critical core components in most installations of OpenStack.

Ever since making that change—particularly after adding some new hardware, a pair of Dell C6100 servers, to my home lab—I would experience intermittent problems booting a KVM guest. The guest would appear to boot properly, but then hang shortly after a message about activating swap space and fsck reporting that the file system was clean. Sometimes, rebooting the guest would work; many times, rebooting the guest didn’t work. Re-installing the guest sometimes worked, but sometimes it didn’t. There didn’t appear to be any consistency with regard to the host (the issue occurred on all hosts) or guest configuration. The only consistency appeared to be with Ubuntu, as virtually (no pun intended) all my KVM guests were running Ubuntu.

Needless to say, this was quite frustrating. I tried all the troubleshooting I could imagine—deleting and recreating swap space, manually checking the file system(s), various different installation routines—and nothing seemed to make any difference.

Finally, just in the last few weeks, I stumbled across this page, which indicated that adding “nomodeset” to the grub command line fixed the problem. This was a standard part of my build (it kept the console from getting too large when using VNC to connect to the guest), but it required that I was able to successfully boot the VM first. I’d noted that once I had been able to successfully boot a guest and add “nomodeset” to the grub configuration, I didn’t have any further issues with that particular guest; however, I explained that away by saying that the intermittent boot issue must have been some sort of first-time boot issue.

In any case, that page linked to this ServerFault entry, which also indicated that the use of “nomodeset” helped fix some (seemingly) random boot problems. The symptoms described there—recovery mode worked fine, booting normally after booting into recovery mode resulted in an “initctl: event failed” error—were consistent with what I’d been seeing as well.

So, I took one of the VMs that was experiencing this problem, booted it into recovery mode, edited the /etc/default/grub file to include “nomodeset” on the GRUB_CMDLINE_LINUX_DEFAULT line, and rebooted. The KVM guest booted without any issues. Problem fixed (apparently).
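
In case it’s helpful, here’s roughly what that change looks like; the other options on the line (“quiet splash” here) are just what a stock Ubuntu install typically has, so yours may differ:

# /etc/default/grub
GRUB_CMDLINE_LINUX_DEFAULT="quiet splash nomodeset"

# Regenerate the GRUB configuration so the change takes effect on the next boot:
sudo update-grub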

Thus far, this has fixed the intermittent boot issue on every KVM guest I’ve tried, so I’m relatively comfortable recommending it as a potential change you should explore if you experience the same problem/symptoms. I can’t guarantee it will work, but it has worked for me so far.

Good luck!


In this post, I’ll discuss how you could use Open vSwitch (OVS) and GRE tunnels to connect bare metal workloads. While OVS is typically used in conjunction with a hypervisor such as KVM or Xen, you’re certainly not restricted to only using it on hypervisors. Similarly, while GRE tunnels are commonly used to connect VMs or containers, you’re definitely not restricted from using them with bare metal workloads as well. In this post, I’ll explore how you would go about connecting bare metal workloads over GRE tunnels managed by OVS.

This post, by the way, was sparked in part by a comment on my article on using GRE tunnels with OVS, in which the reader asked: “Is there a way to configure bare Linux (Ubuntu)…with OVS installed…to serve as a tunnel endpoint…?” Hopefully this post helps answer that question. (By the way, the key to understanding how this works is in understanding OVS traffic patterns. If you haven’t yet read my post on examining OVS traffic patterns, I highly recommend you go have a look right now. Seriously.)

Once you have OVS installed (maybe this is helpful?), then you need to create the right OVS configuration. That configuration can be described, at a high level, like this:

  • Assign an IP address to a physical interface. This interface will be considered the “tunnel endpoint,” and therefore should have an IP address that is correct for use on the transport network.
  • Create an OVS bridge that has no physical interfaces assigned.
  • Create an OVS internal interface on this OVS bridge, and assign it an IP address for use inside the GRE tunnel(s). This interface will be considered the primary interface for the OS instance.
  • Create the GRE tunnel for connecting to other tunnel endpoints.

Each of these areas is described in a bit more detail in the following sections.

Setting Up the Transport Interface

When setting up the physical interface—which I’ll refer to as the transport interface moving forward, since it is responsible for transporting the GRE tunnel across to the other endpoints—you’ll just need to use an IP address and routing entries that enable it to communicate with other tunnel endpoints.

Let’s assume that we are going to have tunnel endpoints on the 192.168.1.0/24 subnet. On the bare metal OS instance, you’d configure a physical interface (I’ll assume eth0, but it could be any physical interface) to have an IP address on the 192.168.1.0/24 subnet. You could do this automatically via DHCP or manually; the choice is yours. Other than ensuring that the bare metal OS instance can communicate with other tunnel endpoints, no additional configuration is required. (I’m using “required” as in “necessary to make it work.” You may want to increase the MTU on your physical interface and network equipment to accommodate the GRE headers and optimize performance, but that isn’t required to make it work.)
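
For example, on an Ubuntu system using a static assignment, the /etc/network/interfaces stanza for the transport interface might look something like this (the address shown is just a placeholder for whatever is appropriate on your transport network):

auto eth0
iface eth0 inet static
    address 192.168.1.10
    netmask 255.255.255.0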

Once you have the transport interface configured and operational, you can move on to configuring OVS.

Configuring OVS

If you’ve been following along at home with all of my OVS-related posts (you can browse all posts using the OVS tag), you can probably guess what this will look like (hint: it will look a little bit like the configuration I described in my post on running host management through OVS). Nevertheless, I’ll walk through the configuration for the benefit of those who are new to OVS.

First, you’ll need to create an OVS bridge that has no physical interfaces—the so-called “isolated bridge” because it is isolated from the physical network. You can call this bridge whatever you want. I’ll use the name br-int (the “integration bridge”) because it’s commonly used in other environments like OpenStack and NVP/NSX.

To create the isolated bridge, use ovs-vsctl:

ovs-vsctl add-br br-int

Naturally, you would substitute whatever name you’d like to use in the above command. Once you’ve created the bridge, then add an OVS internal interface; this internal interface will become the bare metal workload’s primary network interface:

ovs-vsctl add-port br-int mgmt0 -- set interface mgmt0 type=internal

You can use a name other than mgmt0 if you so desire. Next, configure this new OVS internal interface at the operating system level, assigning it an IP address. This IP address should be taken from a subnet “inside” the GRE tunnel, because it is only via the GRE tunnel that you’ll want the workload to communicate.

The following commands will take care of this part for you:

ip addr add 10.10.10.30/24 dev mgmt0
ip link set mgmt0 up

The process of ensuring that the mgmt0 interface comes up automatically when the system boots is left as an exercise for the reader (hint: use /etc/network/interfaces).
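
As a nudge in that direction, a minimal /etc/network/interfaces stanza for mgmt0 might look something like this (using the same addressing as above):

auto mgmt0
iface mgmt0 inet static
    address 10.10.10.30
    netmask 255.255.255.0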

At this point, the bare metal OS instance will have two network interfaces:

  • A physical interface (we’re assuming eth0) that is configured for use on the transport network. In other words, it has an IP address and routes necessary for communication with other tunnel endpoints.
  • An OVS internal interface (I’m using mgmt0) that is configured for use inside the GRE tunnel. In other words, it has an IP address and routes necessary to communicate with other workloads (bare metal, containers, VMs) via the OVS-hosted GRE tunnel(s).

Because the bare metal OS instance sees two interfaces (and therefore has visibility into the routes both “inside” and “outside” the tunnel), you may need to apply some policy routing configuration. See my introductory post on Linux policy routing if you need more information.

The final step is establishing the GRE tunnel.

Establishing the GRE Tunnel

The commands for establishing the GRE tunnel have been described numerous times, but once again I’ll walk through the process just for the sake of completeness. I’m assuming that you’ve already completed the steps in the previous section, and that you are using an OVS bridge named br-int.

First, add the GRE port to the bridge:

ovs-vsctl add-port br-int gre0

Next, configure the GRE interface on that port:

ovs-vsctl set interface gre0 type=gre options:remote_ip=<IP address of remote tunnel endpoint>

Let’s say that you’ve assigned 192.168.1.10 to the transport interface on this system (the bare metal OS instance), and that the remote tunnel endpoint (which could be a host with multiple containers, or a hypervisor running VMs) has an IP address of 192.168.1.15. On the bare metal system, you’d configure the GRE interface like this:

ovs-vsctl set interface gre0 type=gre options:remote_ip=192.168.1.15

On the remote tunnel endpoint, you’d configure the GRE interface like this:

ovs-vsctl set interface gre0 type=gre options:remote_ip=192.168.1.10

In other words, each GRE interface points to the transport IP address on the opposite end of the tunnel.
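
As with the internal interface earlier, you should also be able to collapse the two steps—adding the port and configuring the GRE interface—into a single command, like this (using the bare metal system’s side as an example):

ovs-vsctl add-port br-int gre0 -- set interface gre0 type=gre options:remote_ip=192.168.1.15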

Once the configuration on both ends is done, then you should be able to go into the bare metal OS instance and ping an IP address inside the GRE tunnel. For example, I used this configuration to connect a bare metal Ubuntu 12.04 instance, a container running on an Ubuntu host, and a KVM VM running on an Ubuntu host (I had a full mesh topology with STP enabled, as described here). I was able to successfully ping between the bare metal OS instance, the container, and the VM, all inside the GRE tunnel.

Summary, Caveats, and Other Thoughts

While this configuration is interesting as a “proof of concept” that OVS and GRE tunnels can be used to connect bare metal OS instances and workloads, there are a number of considerations and/or caveats that you’ll want to think about before trying something like this in a production environment:

  • The bare metal OS instance has visibility both “inside” and “outside” the tunnel, so there isn’t an easy way to prevent the bare metal OS instance from communicating outside the tunnel to other entities. This might be OK—or it might not. It all depends on your requirements, and what you are trying to achieve. (In theory, you might be able to provide some isolation using network namespaces, but I haven’t tested this at all.)
  • If you want to create a full mesh topology of GRE tunnels, you’ll need to enable STP on OVS.
  • There’s nothing preventing you from attaching an OpenFlow controller to the OVS instances (including the OVS instance on the bare metal OS) and pushing flow rules down. This would eliminate the need for STP, since OVS won’t be in MAC learning mode. This means you could easily incorporate bare metal OS instances into a network virtualization-type environment. However…
  • There’s no easy way to provide a separation of OVS and the bare metal OS instance. This means that users who are legitimately allowed to make administrative changes to the bare metal OS instance could also make changes to OVS, which could easily “break” the configuration and cause problems. My personal view is that this is why you rarely see this sort of OVS configuration used in conjunction with bare metal workloads.

I still see value in explaining how this works because it provides yet another example of how to configure OVS and how to use OVS to help provide advanced networking capabilities in a variety of environments and situations.

If you have any questions, I encourage you to add them in the comments below. Likewise, if I have overlooked something, made any mistakes, or if I’m just plain wrong, please speak up below (courteously, of course!). I welcome all useful/pertinent feedback and interaction.


It’s been a little while since I talked about libvirt directly, but in this post I’d like to go back to libvirt to talk about how you can edit a guest domain’s libvirt XML definition to adjust access to the VNC console for that guest domain.

This is something of a follow-up to my post on using SSH to access the VNC consoles of KVM guests, in which I showed how to use SSH tunneling (including multi-hop tunnels) to access the VNC console of a KVM guest domain. That post was necessary because, by default, libvirt binds the VNC server for a guest domain to the KVM host’s loopback address. Thus, the only way to access the VNC console of a KVM guest domain is via SSH, and then tunneling VNC traffic to the host’s loopback address.

However, it’s possible to change this behavior. Consider the following snippet of libvirt XML code; this is responsible for setting up the VNC access to the guest domain’s console:

<graphics type='vnc' port='-1' autoport='yes'/>

This is the default configuration; it binds a VNC server for this guest domain to the host’s loopback address and auto-allocates a port for VNC (the first guest gets screen 0/port 5900, the second guest gets screen 1/port 5901, etc.). With this configuration, you must use SSH and tunneling in order to gain access to the guest domain’s VNC console.

Now consider this libvirt XML configuration:

<graphics type='vnc' port='-1' autoport='yes' listen='0.0.0.0'/>

With this configuration in place, the VNC console for that guest domain will be available on any interface on the host, but the port will still be auto-allocated (first guest domain powered on gets port 5900, second guest domain powered on gets port 5901, etc.). With this configuration, anyone with a VNC viewer on your local network can gain access to the console of the guest domain. As you might expect, you could change the ‘0.0.0.0’ to a specific IP address assigned to the KVM host, and you could limit access to the VNC port via IPTables (if you so desired).
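
For example, if you only wanted to allow VNC connections from a particular management subnet, a rough IPTables sketch (adjust the subnet and port range for your environment, and note that rules added this way won’t survive a reboot unless you save them) might look like this:

# Allow VNC (ports 5900-5999) only from 192.168.1.0/24, drop everything else
iptables -A INPUT -p tcp -s 192.168.1.0/24 --dport 5900:5999 -j ACCEPT
iptables -A INPUT -p tcp --dport 5900:5999 -j DROP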

You can also password-protect the VNC console for the guest domain using this snippet of libvirt XML configuration:

<graphics type='vnc' port='-1' autoport='yes' listen='0.0.0.0' passwd='protectme'/>

Now, the user attempting to access the guest domain’s VNC console must know the password specified by the passwd parameter in the configuration. Otherwise, this configuration is the same as the previous configuration, and can be limited/protected in a variety of ways (limited to specific interfaces and/or controlled via IPTables).
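
In case you’re wondering how to actually make these changes, the usual approach is to edit the guest domain’s XML definition with virsh and then restart the guest so the new graphics configuration takes effect; something like this should do it:

virsh edit <guest domain name>
virsh shutdown <guest domain name>
virsh start <guest domain name>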

For more details, you can refer to the full reference on the libvirt guest domain XML configuration.


In this post, I’m going to show you how I combined Linux network namespaces, VLANs, Open vSwitch (OVS), and GRE tunnels to do something interesting. Well, I found it interesting, even if no one else does. However, I will provide this disclaimer up front: while I think this is technically interesting, I don’t think it has any real, practical value in a production environment. (I’m happy to be proven wrong, BTW.)

This post builds on information I’ve provided in previous posts:

It may pull pieces from a few other posts, but the bulk of the info is found in these. If you haven’t already read these, you might want to take a few minutes and go do that—it will probably help make this post a bit more digestible.

After working a bit with network namespaces—and knowing that OpenStack Neutron uses network namespaces in certain configurations, especially to support overlapping IP address spaces—I wondered how one might go about integrating multiple network namespaces into a broader configuration using OVS and GRE tunnels. Could I use VLANs to multiplex traffic from multiple namespaces across a single GRE tunnel?

To test my ideas, I came up with the following design:

As you can see in the diagram, my test environment has two KVM hosts. Each KVM host has a network namespace and a running guest domain. Both the network namespace and the guest domain are connected to an OVS bridge; the network namespace via a veth pair and the guest domain via a vnet port. A GRE tunnel between the OVS bridges connects the two hosts.

The idea behind the test environment was that the VM on one host would communicate with the veth interface in the network namespace on the other host, using VLAN-tagged traffic over a GRE tunnel between them.

Let’s walk through how I built this environment to do the testing.

I built KVM Host 1 using Ubuntu 12.04.2, and installed KVM, libvirt, and OVS. On KVM Host 1, I built a guest domain, attached it to OVS via a libvirt network, and configured the VLAN tag for its OVS port with this command:

ovs-vsctl set port vnet0 tag=10

In the guest domain, I configured the OS (also Ubuntu 12.04.2) to use the IP address 10.1.1.2/24.

Also on KVM Host 1, I created the network namespace, created the veth pair, moved one of the veth interfaces, and attached the other to the OVS bridge. This set of commands is what I used:

ip netns add red
ip link add veth0 type veth peer name veth1
ip link set veth1 netns red
ip netns exec red ip addr add 10.1.2.1/24 dev veth1
ip netns exec red ip link set veth1 up
ovs-vsctl add-port br-int veth0
ovs-vsctl set port veth0 tag=20

Most of the commands listed above are taken straight from the network namespaces article I wrote, but let’s break it down anyway just for the sake of full understanding:

  • The first command adds the “red” namespace.
  • The second command creates the veth pair, creatively named veth0 and veth1.
  • The third command moves veth1 into the red namespace.
  • The next two commands add an IP address to veth1 and set the interface to up.
  • The last two commands add the veth0 interface to an OVS bridge named br-int, and then set the VLAN tag for that port to 20.

When I’m done, I’m left with KVM Host 1 running a guest domain on VLAN 10 and a network namespace on VLAN 20. (Do you see how I got there?)

I repeated the process on KVM Host 2, installing Ubuntu 12.04.2 with KVM, libvirt, and OVS. Again, I built a guest domain (also running Ubuntu 12.04.2), configured the operating system to use the IP address 10.1.2.2/24, attached it to OVS via a libvirt network, and configured its OVS port:

ovs-vsctl set port vnet0 tag=20

Similarly, I also created a new network namespace and pair of veth interfaces, but I configured them as a “mirror image” of KVM Host 1, reversing the VLAN assignments for the guest domain (as shown above) and the network namespace:

ip netns add blue
ip link add veth0 type veth peer name veth1
ip link set veth1 netns blue
ip netns exec blue ip addr add 10.1.1.1/24 dev veth1
ip netns exec blue ip link set veth1 up
ovs-vsctl add-port br-int veth0
ovs-vsctl set port veth0 tag=10

That leaves me with KVM Host 2 running a guest domain on VLAN 20 and a network namespace on VLAN 10.

The final step was to create the GRE tunnel between the OVS bridges. After establishing the GRE tunnel, I configured the GRE port to be a VLAN trunk using this command (this command was necessary on both KVM hosts):

ovs-vsctl set port gre0 trunks=10,20,30
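
For reference, establishing the GRE tunnel itself looked much like the process I described in my earlier GRE tunnel post; something along these lines on each KVM host, pointing at the other host’s tunnel endpoint address, should be all that’s needed:

ovs-vsctl add-port br-int gre0 -- set interface gre0 type=gre options:remote_ip=<tunnel endpoint IP of the other KVM host>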

So I now had the environment I’d envisioned for my testing. VLAN 10 had a guest domain on one host and a veth interface on the other; VLAN 20 had a veth interface on one host and a guest domain on the other. Between the two hosts was a GRE tunnel configured to act as a VLAN trunk.

Now came the critical test—would the guest domain be able to ping the veth interface? This screen shot shows the results of my testing; this is the guest domain on KVM Host 1 communicating with the veth1 interface in the separate network namespace on KVM Host 2:

Success! Although not shown here, I also tested all other combinations as well, and they worked. (Note that you’d have to use ip netns exec <namespace> ping … to ping from the veth1 interface in the network namespace.) I now had a configuration where I could integrate multiple network namespaces with GRE tunnels and OVS. Unfortunately—and this is where the whole “technically interesting but practically useless” statement comes from—this isn’t really a usable configuration:

  • The VLAN configurations were manually applied to the OVS ports; this means they disappeared if the guest domains were power-cycled. (This could be fixed using libvirt portgroups, but I hadn’t bothered with building them in this environment.)
  • The GRE tunnel had to be manually established and configured.
  • Because this solution uses VLAN tags inside the GRE tunnel, you’re still limited to about 4,096 separate networks/network namespaces you could support.
  • The entire process was manual. If I needed to add another VLAN, I’d have to manually create the network namespace and veth pair, manually move one of the veth interfaces into the namespace, manually add the other veth interface to the OVS bridge, and manually update the GRE tunnel to trunk that VLAN. Not very scalable, IMHO.

However, the experiment was not a total loss. In figuring out how to tie together network namespaces and tunnels, I’ve gotten a better understanding of how all the pieces work. In addition, I have a lead on an even better way of accomplishing the same task: using OpenFlow rules and tunnel keys. This is the next area of exploration, and I’ll be sure to post something when I have more information to share.

In the meantime, feel free to share your thoughts and feedback on this post. What do you think—technically interesting or not? Useful in a real-world scenario or not? All courteous comments (with vendor disclosure, where applicable) are welcome.


This is part 4 of the Learning NVP blog series. Just to quickly recap what’s happened so far, in part 1 I provided the high-level architecture of NVP and discussed the role of the components in broad terms. In part 2, I focused on the NVP controllers, showing you how to build/configure the NVP controllers and establish a controller cluster. Part 3 focused on NVP Manager, which allowed us to perform a few additional NVP configuration tasks. In this post, I’ll show you how to add hypervisors to NVP so that you can turn up your first logical network.

Assumptions

In this post, I’m using Ubuntu 12.04.2 LTS with the KVM hypervisor. I’m assuming that you’ve already gone through the process of getting KVM installed on your Linux host; if you need help with that, a quick Google search should turn up plenty of “how to” articles (it’s basically a sudo apt-get install kvm operation). If you are using a different Linux distribution or a different hypervisor, the commands you’ll use as well as the names of the prerequisite packages you’ll need to install will vary slightly. Please keep that in mind.

Installing Prerequisites

To get a version of libvirt that supports Open vSwitch (OVS), you’ll want to enable the Ubuntu Cloud Archive. The Ubuntu Cloud Archive is technically intended to allow users of Ubuntu LTS releases to install newer versions of OpenStack and its dependent packages (including libvirt). Instructions for enabling and using the Ubuntu Cloud Archive are found here. However, I’ve found using the Ubuntu Cloud Archive to be an easy way to get a newer version of libvirt (version 1.0.2) on Ubuntu 12.04 LTS.
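
As a rough sketch (refer to the official instructions for the authoritative steps), enabling the Cloud Archive on 12.04 looked something like this at the time; the release pocket—grizzly in this example—depends on which OpenStack release you’re targeting:

sudo apt-get install ubuntu-cloud-keyring
echo "deb http://ubuntu-cloud.archive.canonical.com/ubuntu precise-updates/grizzly main" | sudo tee /etc/apt/sources.list.d/cloud-archive.list
sudo apt-get update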

Once you get the Ubuntu Cloud Archive working, go ahead and install libvirt:

sudo apt-get install libvirt-bin

Next, go ahead and install some prerequisite packages you’ll need to get OVS installed and working:

sudo apt-get install dkms make libc6-dev

Now you’re ready to install OVS.

Installing OVS

Once your hypervisor node has the appropriate prerequisites installed, you’ll need to install an NVP-specific build of OVS. This build of OVS is identical to the open source build in every way except that it includes the ability to create STT tunnels and includes some extra NVP-specific utilities for integrating OVS into NVP. For Ubuntu, this NVP-specific version of OVS is distributed as a compressed tar archive. First, you’ll need to extract the files out like this:

tar -xvzf <file name>

This will extract a set of Debian packages. For ease of use, I recommend moving these files into a separate directory. Once the files are in their own directory, you would install them like this:

cd <directory where the files are stored>
sudo dpkg -i *.deb

Note that if you don’t install the prerequisites listed above, the installation of the DKMS package (for the OVS kernel datapath) will fail. Trying to then run apt-get install <package list> at that point will also fail; you’ll need to run apt-get -f install. This will “fix” the broken packages and allow the DKMS installation to proceed (which it will do automatically).

These particular OVS installation packages do a couple of different things:

  • They install OVS (of course), including the kernel module.
  • They automatically create and configure something called the integration bridge, which I’ll describe in more detail in a few moments.
  • They automatically generate some self-signed certificates you’ll need later.

Now that you have OVS installed, you’re ready for the final step: to add the hypervisor to NVP.

Adding the Hypervisor to NVP

For this process, you’ll need access to NVP Manager as well as SSH access to the hypervisor. I strongly recommend using the same system for access to both NVP Manager and the hypervisor via SSH, as this will make things easier.

Adding the hypervisor to NVP is a two-step process. First, you’ll configure the hypervisor; second, you’ll configure NVP. Let’s start with the hypervisor.

Verifying the Hypervisor Configuration

Before you can add the hypervisor to NVP, you’ll need to be sure it’s configured correctly, so I’ll walk you through a couple of verification steps. NVP expects there to be an integration bridge, an OVS bridge that it will control. This integration bridge will be separate from any other bridges configured on the system, so if you have a bridge that you’re using (or will use) outside of NVP, it will remain separate. I’ll probably expand upon this in more detail in a future post, but for now let’s just verify that the integration bridge exists and is configured properly.

To verify the presence of the integration bridge, use the command ovs-vsctl show (you might need to use sudo to get the appropriate permissions). The output of the command should look something like this:

Note the bridge named “br-int”—this is the default name for the integration bridge. As you can see by this output, the integration bridge does indeed exist. However, you must also verify that the integration bridge is configured appropriately. For that, you’ll use the ovs-vsctl list bridge br-int command, which will produce output that looks something like this:

Now there’s a lot of goodness here (“Hey, look—NetFlow support! Mirrors! sFlow! STP support even!”), but try to stay focused. I want to draw your attention to the external_ids line, where the value “bridge-id” has been set to “br-int”. This is exactly what you want to see, and I’ll explain why in just a moment.
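
If, for some reason, your integration bridge is missing that bridge-id external ID, you should be able to set it manually with a command along these lines:

ovs-vsctl br-set-external-id br-int bridge-id br-int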

One final verification step is needed. NVP uses self-signed certificates to authenticate hypervisors, so you’ll need to be sure that the certificates have been generated (they should have been generated during the installation of OVS). You can verify by running ls -la /etc/openvswitch and looking for the file “ovsclient-cert.pem”. If it’s there, you should be good to go.

Next, you’ll need to do a couple of things in NVP Manager.

Create a Transport Zone

Before you can actually add the hypervisor to NVP, you first need to ensure that you have a transport zone defined. I’ll assume that you don’t and walk you through creating one.

  1. In NVP Manager (log in if you aren’t already logged in), select Network Components > Transport Zones.
  2. In the Network Components Query Results, you’ll probably see no transport zones listed. Click Add.
  3. In the Create Transport Zone dialog, specify a name for the transport zone, and optionally add tags (tags are used for searching and sorting information in NVP Manager).
  4. Click Save.

That’s it, but it’s an important bit. I’ll explain more in a moment.

Add the Hypervisor

Now that you’ve verified the hypervisor configuration and created a transport zone, you’re ready to add the hypervisor to NVP. Here’s how.

  1. Log into NVP Manager, if you aren’t already, and click on Dashboard in the upper left corner.
  2. In the Summary of Transport Components section, click the Add button on the line for Hypervisors.
  3. Ensure that Hypervisor is listed in the Transport Node Type drop-down list, then click Next.
  4. Specify a display name (I use the hostname of the hypervisor itself), and—optionally—add one or more tags. (Tags are used for searching/sorting data in NVP Manager.) Click Next.
  5. In the Integration Bridge ID text box, type “br-int”. (Do you know why? It’s not because that’s the name of the integration bridge. It’s because that’s the value set in the external_ids section of the integration bridge.) Be sure that Admin Status Enabled is checked, and optionally you can check Tunnel Keep-Alive Spray. Click Next to continue.
  6. Next, you need to authenticate the hypervisor to NVP. This is a multi-step process. First, switch over to the SSH session with the hypervisor (you still have it open, right?) and run cat /etc/openvswitch/ovsclient-cert.pem. This will output the contents of the OVS client certificate to the screen. Copy everything between the BEGIN CERTIFICATE and the END CERTIFICATE lines, including those lines.
  7. Flip back over to NVP Manager. Ensure that Security Certificate is listed, then paste the clipboard contents into the Security Certificate box. The red X on the left of the dialog box should change to a green check mark, and you can click Next to continue.
  8. The final step is creating a transport connector. Click Add Connector.
  9. In the Create Transport Connector dialog, select STT as the Transport Type.
  10. Select the transport zone you created earlier, if it isn’t already populated in the Transport Zone UUID drop-down.
  11. Specify the IP address of the interface on the hypervisor that will be used as the source for all tunnel traffic. This is generally not the management interface. Click OK.
  12. Click Save.

This should return you to the NVP Manager Dashboard, where you’ll see the Hypervisors line in the Summary of Transport Components go from 0 to 1 (both in the Registered and the Active columns). You can refresh the display of that section only by clicking the little circular arrow button. You should also see an entry appear in the Hypervisor Software Version Summary section. This screen shot shows you what the dashboard would look like after adding 3 hypervisors:

NVP Manager dashboard

(Side note: This dashboard shows a gateway and service node also added, though I haven’t discussed those yet. It also shows a logical switch and logical switch ports, though I haven’t discussed those either. Be patient—they’re coming. I can only write so fast.)

Congratulations—you’ve added your first hypervisor to NVP! In the next part, I’m going to take a moment to remind readers of a few concepts that I’ve covered in the past, and show how those concepts relate to NVP. From there, we’ll pick back up with adding our first logical network and establishing connectivity between VMs on different hypervisors.

Until that time, feel free to post any questions, thoughts, corrections, or clarifications in the comments below. Please disclose vendor affiliations, where appropriate, and—as always—I welcome all courteous comments.


About a year ago, I published an entry on working with KVM guests, in which I described how to perform some common operations with KVM-based guest VMs (or guest domains). While reading that article today (I needed to refer back to the section on using virt-install), I realized that I hadn’t discussed how to access the VNC console of a KVM guest. In this post, I’ll show you how to use SSH to access the VNC console of a KVM guest domain.

Here are the tools you’ll need to make this work:

  • An SSH client. On OS X or Linux, this installs with the OS; on Windows, you’ll need a client like PuTTY.
  • A VNC client. There are a variety of clients out there; find one that works for you and run with it. On OS X, I use Chicken (formerly Chicken of the VNC).

Basically, all we’re going to do is use SSH port forwarding (additional information here) to connect your local system to the VNC port of the guest domain on the KVM host. At its simplest level, the generic command to do local port forwarding with SSH looks something like this:

ssh <username>@<remote host IP address or DNS name> -L <local port>:<remote IP address>:<remote port>

The idea of local port forwarding via SSH is really well-known and well-documented all over the Internet, so I won’t go into a great level of detail here.

Let’s take the generic command above and turn it into a command that is specific to accessing the VNC console of a guest domain on a remote KVM host. Let’s assume in this example that the IP address of the remote KVM host is 192.168.1.4, that your username on this remote KVM host is admin, and that there is only one guest domain running on the remote KVM host. To access that guest domain’s VNC console, you’d use this command:

ssh admin@192.168.1.4 -L 5900:127.0.0.1:5900

Allow me to break this command down just a bit:

  • The first part (ssh admin@192.168.1.4) simply establishes the SSH session to the remote KVM host at that IP address.
  • The -L parameter tells SSH we are doing local port forwarding. In other words, SSH will take traffic directed to the specified local port and send it to the specified remote port on the specified remote IP address.
  • The last part (5900:127.0.0.1:5900) is probably the part that will confuse folks who haven’t done this before. The first port number indicates the local port number—that is, the port on your local system. The IP address indicates the remote IP address that will be contacted via the remote host, and the last port number indicates the destination port to which traffic will be directed. It’s really important to remember that the IP address specified in the port forwarding specification will be contacted through the SSH tunnel as if the traffic were originating from the remote host. So, in this case, specifying 127.0.0.1 (the loopback address) here means we’ll be accessing the remote host’s loopback address, not our own.

Once you have this connection established, you can point your VNC client at your own loopback address. If you select display 0 (the default on most clients), then traffic will be directed to port 5900, which will be redirected through the SSH tunnel to port 5900 on the loopback address on the remote system—which, incidentally, is where the VNC console of the guest domain is listening. Voila, you’re all set.
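
For example, if you’re using a command-line client such as TigerVNC’s vncviewer (rather than the Chicken client I mentioned above—the idea is the same either way), either of these should get you to the tunneled console:

vncviewer 127.0.0.1:0
vncviewer 127.0.0.1::5900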

Here are a few additional notes of which you’ll want to be aware:

  • The first guest domain launched (or started) on a host will listen on port 5900 of the KVM host’s loopback address. (This is why we used those values in our command.) The second guest domain will listen on 5901, the third on 5902, and so forth.
  • You can use the command sudo netstat -tunelp | grep LISTEN to show listening ports on the KVM host; this will include listening VNC consoles. The last column in the output will be the process ID; use ps ax | grep <process ID> to figure out which VM is listening to a particular port (the name of the VM will be in the command’s output).

When you’re accessing the guest domains on the remote KVM host directly, it’s pretty straightforward. But what if you need to send traffic through an intermediate jump host first? For example, what if you have to SSH to the jump host first, then SSH from the jump host to the remote hypervisors—can you still access guest domain consoles in this sort of situation?

Yes you can!

It’s a bit more complicated, but it’s doable. Let’s assume that our jump host is, quite imaginatively, called jumphost.domain.com, and that our hypervisor is called—you guessed it—hypervisor.domain.com. The following set of commands would allow you to gain access to the first guest domain on the remote hypervisor via the jump host.

First, run this command from your local system to the jump host:

ssh admin@jumphost.domain.com -L 5900:127.0.0.1:5900

Next, run this command from the jump host to the remote hypervisor:

ssh admin@hypervisor.domain.com -L 5900:127.0.0.1:5900 -g

The -g parameter is important here; I couldn’t make it work without adding this parameter. Your mileage may vary, of course. Once you have both sets of port forwarding established, then just point your VNC client to port 5900 of your local loopback address. Traffic gets redirected to port 5900 on the jump host’s loopback address, which is in turn redirected to port 5900 on the remote hypervisor’s loopback address—which is where the guest domain’s console is listening. Cool, huh? If you establish multiple sessions listening on different ports, you can access multiple VMs on multiple hypervisors. Very handy!

If you have any questions, corrections, or additional information to share, please feel free to speak up in the comments below.


In an earlier post, I provided an introduction to policy routing as implemented in recent versions of Ubuntu Linux (and possibly other distributions as well), and I promised that in a future post I would provide a practical application of its usage. This post looks at that practical application: how—and why—you would use Linux policy routing in an environment running OVS and a Linux hypervisor (I’ll assume KVM for the purposes of this post).

Before I get into the “how,” let’s first discuss the “why.” Let’s assume that you have a KVM+OVS environment and are leveraging tunnels (GRE or other) for some guest domain traffic. Recall from my post on traffic patterns with Open vSwitch that tunnel traffic is generated by the OVS process itself, and therefore is controlled by the Linux host’s IP routing table with regard to which interfaces that tunnel traffic will use. But what if you need the tunnel traffic to be handled differently than the host’s management traffic? What if you need a default route for tunnel traffic that uses one interface, but a different default route for your separate management network that uses its own interface? This is why you would use policy routing in this configuration. Using source routing (i.e., policy routing based on the source of the traffic), you could easily define a table for tunnel traffic that has its own default route while still allowing management traffic to use the host’s default routing table.

Let’s take a look at how it’s done. In this example, I’ll make the following assumptions:

  • I’ll assume that you’re running host management traffic through OVS, as I outlined here. I’ll use the name mgmt0 to refer to the management interface that’s running through OVS for host management traffic. We’ll use the IP address 192.168.100.10 for the mgmt0 interface.
  • I’ll assume that you’re running tunnel traffic through an OVS internal interface named tep0. (This helps provide some consistency with my walk-through on using GRE tunnels with OVS.) We’ll use the IP address 192.168.200.10 for the tep0 interface.
  • I’ll assume that the default gateway on each subnet uses the .1 address on that subnet.

With these assumptions out of the way, let’s look at how you would set this up.

First, you’ll create a custom policy routing table, as outlined here. I’ll use the name “tunnel” for my new table:

echo 200 tunnel >> /etc/iproute2/rt_tables

Next, you’ll need to modify /etc/network/interfaces for the tep0 interface so that a custom policy routing rule and custom route are installed whenever this interface is brought up. The new configuration stanza would look something like this:

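Since the embedded configuration snippet isn’t reproduced here, the following is a minimal sketch of what such a stanza might look like, given the tep0 addressing and the “tunnel” table assumed above (your addresses, gateway, and table name will naturally vary):

auto tep0
iface tep0 inet static
    address 192.168.200.10
    netmask 255.255.255.0
    post-up ip route add 192.168.200.0/24 dev tep0 table tunnel
    post-up ip route add default via 192.168.200.1 dev tep0 table tunnel
    post-up ip rule add from 192.168.200.10 table tunnel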

Finally, you’ll want to ensure that mgmt0 is properly configured in /etc/network/interfaces. No special configuration is required there, just the use of the gateway directive to install the default route. Ubuntu will install the default route into the main table automatically, making it a “system-wide” default route that will be used unless a policy routing rule dictates otherwise.
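
For completeness, a mgmt0 stanza matching the assumptions above might look something like this:

auto mgmt0
iface mgmt0 inet static
    address 192.168.100.10
    netmask 255.255.255.0
    gateway 192.168.100.1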

With this configuration in place, you now have a system that:

  • Can communicate via mgmt0 with other systems in other subnets via the default gateway of 192.168.100.1.
  • Can communicate via tep0 to establish tunnels with other hypervisors in other subnets via the 192.168.200.1 gateway.

This configuration requires only the initial configuration (which could, quite naturally, be automated via a tool like Puppet) and does not require using additional routes as the environment scales to include new subnets for other hypervisors (either for management or tunnel traffic). Thus, organizations can use recommended practices for building scalable L3 networks with reasonably-sized L2 domains without sacrificing connectivity to/from the hypervisors in the environment.

(By the way, this is something that is not easily accomplished in the vSphere world today. ESXi has only a single routing table for all VMkernel interfaces, which means that management traffic, vMotion traffic, VXLAN traffic, etc., are all bound by that single routing table. To achieve full L3 connectivity, you’d have to install specific routes into the VMkernel routing table on each ESXi host. When additional subnets are added for scale, each host would have to be touched to add the additional route.)

Hopefully this gives you an idea of how Linux policy routing could be effectively used in environments leveraging virtualization, OVS, and overlay protocols. Feel free to add your thoughts, ideas, corrections, or questions in the comments below. Courteous comments are always welcome! (Please disclose vendor affiliations where applicable.)


I’m back with another “how to” article on Open vSwitch (OVS), this time taking a look at using GRE (Generic Routing Encapsulation) tunnels with OVS. OVS can use GRE tunnels between hosts as a way of encapsulating traffic and creating an overlay network. OpenStack Quantum can (and does) leverage this functionality, in fact, to help separate different “tenant networks” from one another. In this write-up, I’ll walk you through the process of configuring OVS to build a GRE tunnel to build an overlay network between two hypervisors running KVM.

Naturally, any sort of “how to” such as this always builds upon the work of others. In particular, I found a couple of Brent Salisbury’s articles (here and here) especially useful.

This process has 3 basic steps:

  1. Create an isolated bridge for VM connectivity.
  2. Create a GRE tunnel endpoint on each hypervisor.
  3. Add a GRE interface and establish the GRE tunnel.

These steps assume that you’ve already installed OVS on your Linux distribution of choice. I haven’t explicitly done a write-up on this, but there are numerous posts from a variety of authors (in this regard, Google is your friend).

We’ll start with an overview of the topology, then we’ll jump into the specific configuration steps.

Reviewing the Topology

The graphic below shows the basic topology of what we have going on here:

Topology overview

We have two hypervisors (CentOS 6.3 and KVM, in my case), both running OVS (an older version, version 1.7.1). Each hypervisor has one OVS bridge that has at least one physical interface associated with the bridge (shown as br0 connected to eth0 in the diagram). As part of this process, you’ll create the other internal interfaces (the tep and gre interfaces), as well as the second, isolated bridge to which VMs will connect. You’ll then create a GRE tunnel between the hypervisors and test VM-to-VM connectivity.

Creating an Isolated Bridge

The first step is to create the isolated OVS bridge to which the VMs will connect. I call this an “isolated bridge” because the bridge has no physical interfaces attached. (Side note: this idea of an isolated bridge is fairly common in OpenStack and NVP environments, where it’s usually called the integration bridge. The concept is the same.)

The command is very simple, actually:

ovs-vsctl add-br br2

Yes, that’s it. Feel free to substitute a different name for br2 in the command above, if you like, but just make note of the name as you’ll need it later.

To make things easier for myself, once I’d created the isolated bridge I then created a libvirt network for it so that it was dead-easy to attach VMs to this new isolated bridge.
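
If you want to do the same, the libvirt network definition for an OVS bridge is pretty simple—assuming your libvirt build is recent enough to understand Open vSwitch virtual ports, XML along these lines (the network name is arbitrary) should do the trick:

<network>
  <name>br2</name>
  <forward mode='bridge'/>
  <bridge name='br2'/>
  <virtualport type='openvswitch'/>
</network>

Save that to a file and load it with virsh net-define <file>, then virsh net-start br2 and virsh net-autostart br2.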

Configuring the GRE Tunnel Endpoint

The GRE tunnel endpoint is an interface on each hypervisor that will, as the name implies, serve as the endpoint for the GRE tunnel. My purpose in creating a separate GRE tunnel endpoint is to separate hypervisor management traffic from GRE traffic, thus allowing for an architecture that might leverage a separate management network (which is typically considered a recommended practice).

To create the GRE tunnel endpoint, I’m going to use the same technique I described in my post on running host management traffic through OVS. Specifically, we’ll create an internal interface and assign it an IP address.

To create the internal interface, use this command:

ovs-vsctl add-port br0 tep0 -- set interface tep0 type=internal

In your environment, you’ll substitute br0 with the name of the OVS bridge that has your physical interface attached—the tunnel endpoint needs to live on that bridge (not the isolated bridge) so it can reach the physical network. You could also use a different name than tep0. Since this name is essentially for human consumption only, use what makes sense to you. Since this is a tunnel endpoint, tep0 made sense to me.

Once the internal interface is established, assign it an IP address using ifconfig or ip, whichever you prefer. I’m still getting used to using ip (more on that in a future post, most likely), so I tend to use ifconfig, like this:

ifconfig tep0 192.168.200.20 netmask 255.255.255.0

Obviously, you’ll want to use an IP addressing scheme that makes sense for your environment. One important note: don’t use the same subnet that you’ve assigned to other interfaces on the hypervisor; otherwise, you won’t be able to control which interface the GRE tunnel traffic originates from (or terminates on), because the Linux routing table on the hypervisor controls how that traffic is routed. (You could use source routing, a topic I plan to discuss in a future post, but that’s beyond the scope of this article.)

Repeat this process on the other hypervisor, and be sure to make note of the IP addresses assigned to the GRE tunnel endpoint on each hypervisor; you’ll need those addresses shortly. Once you’ve established the GRE tunnel endpoint on each hypervisor, test connectivity between the endpoints using ping or a similar tool. If connectivity is good, you’re clear to proceed; if not, you’ll need to resolve that before moving on.

Establishing the GRE Tunnel

By this point, you’ve created the isolated bridge, established the GRE tunnel endpoints, and tested connectivity between those endpoints. You’re now ready to establish the GRE tunnel.

Use this command to add a GRE interface to the isolated bridge on each hypervisor:

ovs-vsctl add-port br2 gre0 -- set interface gre0 type=gre \
options:remote_ip=<GRE tunnel endpoint on other hypervisor>

Substitute the name of the isolated bridge you created earlier here for br2 and feel free to use something other than gre0 for the interface name. I think using gre as the base name for the GRE interfaces makes sense, but run with what makes sense to you.

Once you repeat this command on both hypervisors, the GRE tunnel should be up and running. (Troubleshooting the GRE tunnel is one area where my knowledge is weak; anyone have any suggestions or commands that we can use here?)

Testing VM Connectivity

As part of this process, I spun up an Ubuntu 12.04 server image on each hypervisor (using virt-install as I outlined here), attached each VM to the isolated bridge created earlier on that hypervisor, and assigned each VM an IP address from an entirely different subnet than the physical network was using (in this case, 10.10.10.x).

Here’s the output of the route -n command on the Ubuntu guest, to show that it has no knowledge of the “external” IP subnet—it knows only about its own interfaces:

ubuntu:~ root$ route -n
Kernel IP routing table
Destination  Gateway       Genmask        Flags Metric Ref Use Iface
0.0.0.0      10.10.10.254  0.0.0.0        UG    100    0   0   eth0
10.10.10.0   0.0.0.0       255.255.255.0  U     0      0   0   eth0

Similarly, here’s the output of the route -n command on the CentOS host, showing that it has no knowledge of the guest’s IP subnet:

centos:~ root$ route -n
Kernel IP routing table
Destination  Gateway        Genmask        Flags Metric Ref Use Iface
192.168.2.0  0.0.0.0        255.255.255.0  U     0      0   0   tep0
192.168.1.0  0.0.0.0        255.255.255.0  U     0      0   0   mgmt0
0.0.0.0      192.168.1.254  0.0.0.0        UG    0      0   0   mgmt0

In my case, VM1 (named web01) was given 10.10.10.1; VM2 (named web02) was given 10.10.10.2. Once I went through the steps outlined above, I was able to successfully ping VM2 from VM1, as you can see in this screenshot:

VM-to-VM connectivity over GRE tunnel

(Although it’s not shown here, connectivity from VM2 to VM1 was obviously successful as well.)

“OK, that’s cool, but why do I care?” you might ask.

In this particular context, it’s a bit of a science experiment. However, if you take a step back and begin to look at the bigger picture, then (hopefully) something starts to emerge:

  • We can use an encapsulation protocol (GRE in this case, but it could have just as easily been STT or VXLAN) to isolate VM traffic from the physical network and from other VM traffic. (Think multi-tenancy.)
  • While this process was manual, think about some sort of controller (an OpenFlow controller, perhaps?) that could help automate this process based on its knowledge of the VM topology.
  • Using a virtualized router or virtualized firewall, I could easily provide connectivity into or out of this isolated (encapsulated) private network. (This is probably something I’ll experiment with later.)
  • What if we wrapped some sort of orchestration framework around this, to help deploy VMs, create networks, add routers/firewalls automatically, all based on the customer’s needs? (OpenStack Networking, anyone?)

Anyway, I hope this is helpful to someone. As always, I welcome feedback and suggestions for improvement, so feel free to speak up in the comments below. Vendor disclosures, where appropriate, are greatly appreciated. Thanks!


Welcome to Technology Short Take #32, the latest installment in my irregularly-published series of link collections, thoughts, rants, raves, and miscellaneous information. I try to keep the information linked to data center technologies like networking, storage, virtualization, and the like, but occasionally other items slip through. I hope you find something useful.

Networking

  • Ranga Maddipudi (@vCloudNetSec on Twitter) has put together two blog posts on vCloud Networking and Security’s App Firewall (part 1 and part 2). These two posts are detailed, hands-on, step-by-step guides to using the vCNS App firewall—good stuff if you aren’t familiar with the product or haven’t had the opportunity to really use it.
  • The sentiment behind this post isn’t unique to networking (or networking engineers), but that was the original audience so I’m including it in this section. Nick Buraglio climbs on his SDN soapbox to tell networking professionals that changes in the technology field are part of life—but then provides some specific examples of how this has happened in the past. I particularly appreciated the latter part, as it helps people relate to the fact that they have undergone notable technology transitions in the past but probably just don’t realize it. As I said, this doesn’t just apply to networking folks, but to everyone in IT. Good post, Nick.
  • Some good advice here on scaling/sizing VXLAN in VMware deployments (as well as some useful background information to help explain the advice).
  • Jason Edelman goes on a thought journey connecting some dots around network APIs, abstractions, and consumption models. I’ll let you read his post for all the details, but I do agree that it is important for the networking industry to converge on a consistent set of abstractions. Jason and I disagree that OpenStack Networking (formerly Quantum) should be the basis here; he says it shouldn’t be (not well-known in the enterprise), I say it should be (already represents work created collaboratively by multiple vendors and allows for different back-end implementations).
  • Need a reasonable introduction to OpenFlow? This post gives a good introduction to OpenFlow, and the author takes care to define OpenFlow as accurately and precisely as possible.
  • SDN, NFV—what’s the difference? This post does a reasonable job of explaining the differences (and the relationship) between SDN and NFV.

Servers/Hardware

  • Chris Wahl provides a quick overview of the HP Moonshot servers, HP’s new ARM-based offerings. I think that Chris may have accidentally overlooked the fact that these servers are not x86-based; therefore, a hypervisor such as vSphere is not supported. Linux distributions that offer ARM support—like Ubuntu, RHEL, and SuSE—are supported, however. The target market for this is massively parallel workloads that will benefit from having many different cores available. It will be interesting to see how the support of a “Tier 1” hardware vendor like HP affects the adoption of ARM in the enterprise.

Security

  • Ivan Pepelnjak talks about a demonstration of an attack based on VM BPDU spoofing. In vSphere 5.1, VMware addressed this potential issue with a feature called BPDU Filter. Check out how to configure BPDU Filter here.

Cloud Computing/Cloud Management

  • Check out this post for some vCloud Director and RHEL 6.x interoperability issues.
  • Nick Hardiman has a good write-up on the anatomy of an AWS CloudFormation template.
  • If you missed the OpenStack Summit in Portland, Cody Bunch has a reasonable collection of Summit summary posts here (as well as materials for his hands-on workshops here). I was also there, and I have some session live blogs available for your pleasure.
  • We’ve probably all heard the “pets vs. cattle” argument applied to virtual machines in a cloud computing environment, but Josh McKenty of Piston Cloud Computing asks whether it is now time to apply that thinking to the physical hosts as well. Considering that the IT industry still seems to be struggling with applying this line of thinking to virtual systems, I suspect it might be a while before it applies to physical servers. However, Josh’s arguments are valid, and definitely worth considering.
  • I have to give Rob Hirschfeld some credit for—as a member of the OpenStack Board—acknowledging that, in his words, “we’ve created such a love fest for OpenStack that I fear we are drinking our own kool aide.” Open, honest, transparent dealings and self-assessments are critically important for a project like OpenStack to succeed, so kudos to Rob for posting a list of some of the challenges facing the project as adoption, visibility, and development accelerate.

Operating Systems/Applications

Nothing this time around, but I’ll stay alert for items to add next time.

Storage

  • Nigel Poulton tackles the question of whether ASIC (application-specific integrated circuit) use in storage arrays elongates the engineering cycles needed to add new features. This “double edged sword” argument is present in networking as well, but this is the first time I can recall seeing the question asked about modern storage arrays. While Nigel’s article specifically refers to the 3PAR ASIC and its relationship to “flash as cache” functionality, the broader question still stands: at what point do the drawbacks of ASICs begin to outweigh the benefits?
  • Quite some time ago I pointed readers to a post about Target Driven Zoning from Erik Smith at EMC. Erik recently announced that TDZ works after a successful test run in a lab. Awesome—here’s hoping the vendors involved will push this into the market.
  • Using iSER (iSCSI Extensions for RDMA) to accelerate iSCSI traffic seems to offer some pretty promising storage improvements (see this article), but I can’t help but feel like this is a really complex solution that may not offer a great deal of value moving forward. Is it just me?

Virtualization

  • Kevin Barrass has a blog post on the VMware Community site that shows you how to create VXLAN segments and then use Wireshark to decode and view the VXLAN traffic, all using VMware Workstation.
  • Andre Leibovici explains how Horizon View Multi-VLAN works and how to configure it.
  • Looking for a good list of virtualization and cloud podcasts? Look no further.
  • Need Visio stencils for VMware? Look no further.
  • It doesn’t look like it has changed much from previous versions, but nevertheless some people might find it useful: a “how to” on virtualization with KVM on CentOS 6.4.
  • Captain KVM (cute name, a take-off on Captain Caveman for those who didn’t catch it) has a couple of posts on maximizing 10Gb Ethernet on KVM and RHEV (the KVM post is here, the RHEV post is here). I’m not sure that I agree with his description of LACP bonds (“2 10GbE links become a single 20GbE link”), since any given flow in a LACP configuration can still only use one link out of the bond. It’s more accurate to say that aggregate bandwidth increases, but that’s a relatively minor nit overall. (See the bonding sketch after this list for an illustration of what I mean.)
  • Ben Armstrong has a write-up on how to install Hyper-V’s integration components when the VM is offline.
  • What are the differences between QuickPrep and Sysprep? Jason Boche’s got you covered.
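To illustrate the LACP point above, here’s a rough sketch of an 802.3ad bond on a Linux/KVM host; this is my own sketch, not something taken from Captain KVM’s posts, and the interface names (eth0, eth1, bond0) are placeholders. The bond’s aggregate bandwidth is 20Gb, but the transmit hash policy pins any single flow to one 10Gb member link:

    # Load the bonding driver and create the bond
    modprobe bonding
    echo +bond0 > /sys/class/net/bonding_masters

    # LACP (802.3ad) mode; hash on L3/L4 so different flows can land on different links
    echo 802.3ad > /sys/class/net/bond0/bonding/mode
    echo layer3+4 > /sys/class/net/bond0/bonding/xmit_hash_policy

    # Enslave the two 10GbE NICs (they must be down first), then bring up the bond
    ip link set eth0 down && echo +eth0 > /sys/class/net/bond0/bonding/slaves
    ip link set eth1 down && echo +eth1 > /sys/class/net/bond0/bonding/slaves
    ip link set bond0 up

Even with layer3+4 hashing, a single TCP stream still hashes to exactly one member link, which is why “a single 20GbE link” oversells it a bit.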

I suppose that’s enough information for now. As always, courteous comments are welcome, so feel free to add your thoughts in the comments below. Thanks for reading!


Technology Short Take #30

Welcome to Technology Short Take #30. This Technology Short Take is a bit heavy on the networking side, but I suppose that’s understandable given my recent job change. Enjoy!

Networking

  • Ben Cherian, Chief Strategy Officer for Midokura, helps make a case for network virtualization. (Note: Midokura makes a network virtualization solution.) If you’re wondering about network virtualization and why there is a focus on it, this post might help shed some light. Given that it was written by a network virtualization vendor, it might seem a bit rah-rah, so keep that in mind.
  • Brent Salisbury has a fantastic series on OpenFlow. It’s so good I wish I’d written it. He starts out with a post on proactive vs. reactive flows, in which he explains that OpenFlow performance is less about OpenFlow and more about how flows are inserted into the hardware. Next, he tackles the concerns over the scale of flow-based forwarding in his post on coarse vs. fine flows. I love this quote from that article: “The second misnomer is, flow based forwarding does not scale. Bad designs are what do not scale.” Great statement! The third post in the series tackles what Brent calls hybrid SDN deployment strategies and provides some great design considerations for organizations looking to deploy an SDN solution. I’m looking forward to the fourth and final article in the series! (To make the proactive/coarse idea a bit more concrete, I’ve added a small OVS flow sketch after this list.)
  • Also, if you’re looking for some additional context to the TCAM considerations that Brent discusses in his OpenFlow series, check out this Packet Pushers blog post on OpenFlow switching performance.
  • Another one from Brent, this time on Provider Bridging and Provider Backbone Bridging. Good explanation—it certainly helped me.
  • This article by Avi Chesla points out a potential security weakness in SDN, in the form of a DoS (Denial of Service) attack where many switching nodes request many flows from the central controller. It appears to me that this would only be an issue for networks using fine-grained, reactive flows. Am I wrong?
  • Scott Hogg has a nice list of 9 common Spanning Tree mistakes you shouldn’t make.
  • Schuberg Philis has a nice write-up of their CloudStack+NVP deployment here.
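Since a couple of the items above touch on proactive vs. reactive and coarse vs. fine flows, here’s a quick sketch (mine, not Brent’s) of what proactively installing coarse flows on an Open vSwitch bridge looks like; the bridge name (br0) and the port numbers are placeholders:

    # Pre-populate the flow table with deliberately coarse entries:
    # matching only on ingress port means one entry covers many conversations
    ovs-ofctl add-flow br0 "priority=100,in_port=1,actions=output:2"
    ovs-ofctl add-flow br0 "priority=100,in_port=2,actions=output:1"

    # Inspect the flow table
    ovs-ofctl dump-flows br0

In a reactive, fine-grained design the table would start out (nearly) empty and unmatched packets would be punted to the controller, which then installs a per-conversation flow on demand; that’s exactly where the scale concerns (and the DoS scenario Avi describes) come from.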

Servers/Hardware

  • Alex Galbraith recently posted a two-part series on what he calls the “NanoLab,” a home lab built on the Intel NUC (“Next Unit of Computing”). It’s a good read for those of you looking for some very quiet and very small home lab equipment, and Alex does a good job of providing all the details. Check out part 1 here and part 2 here.
  • At first, I thought this article was written from a sarcastic point of view, but it turns out that Kevin Houston’s post on 5 reasons why you may not want blade servers is the real deal. It’s nice to see someone who focuses on blade servers opening up about why they aren’t necessarily the best fit for all situations.

Security

  • Nick Buraglio has a good post on the potential impact of Arista’s new DANZ functionality on tap aggregation solutions in the security market. It will be interesting to see how this shapes up. BTW, Nick’s writing some pretty good content, so if you’re not subscribed to his blog I’d reconsider.

Cloud Computing/Cloud Management

  • Although this post is a bit older (it’s from September of last year), it’s still an interesting comparison of OpenStack and CloudStack. Note that the author apparently works for Mirantis, a company that provides OpenStack consulting services. In spite of that, he manages to take a reasonably balanced approach to comparing the two cloud management platforms. Both platforms (I believe) have had releases since then, so some of the points may no longer be valid.
  • Are you a CloudStack fan? If so, you should probably check out this collection of links from Aaron Delp. Aaron’s focused a lot more on CloudStack now that he’s at Citrix, so he might be a good resource if that is your cloud management platform of choice.

Operating Systems/Applications

  • If you’re just now getting into the whole configuration management scene where tools like Puppet, Chef, and others play, you might find this article helpful. It walks through the difference between configuring a system imperatively and configuring a system declaratively (hint: Puppet, Chef, and others are declarative). It does presume a small bit of programming knowledge in the examples, but even as a non-programmer I found it useful. (There’s a short sketch of the imperative/declarative distinction after this list.)
  • Here’s a three-part series on beginning Puppet that you might find helpful as well (Part 1, Part 2, and Part 3).
  • If you’re a developer-type person, I would first ask why you’re reading my site, then I’d point you to this post on the AMQP, MQTT, and STOMP messaging protocols.
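To put the imperative vs. declarative distinction in concrete terms, here’s a minimal sketch; the package and server names are just examples I made up:

    #!/bin/bash
    # Imperative approach: you spell out the steps, the ordering, and the
    # checks that make the script safe to re-run.
    if ! dpkg -s ntp >/dev/null 2>&1; then
        apt-get install -y ntp
    fi
    if ! grep -q '^server ntp.example.com' /etc/ntp.conf; then
        echo 'server ntp.example.com' >> /etc/ntp.conf
        service ntp restart
    fi

    # Declarative approach (Puppet, Chef, and friends): you describe only the
    # desired end state, e.g. the Puppet resource
    #   package { 'ntp': ensure => installed }
    # and the tool figures out which steps (if any) are needed on each run.

The declarative style is also what makes these tools naturally idempotent: running them a second time changes nothing if the system already matches the description.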

Storage

Virtualization

  • Although these posts are storage-related, the real focus is on how the storage stack is implemented in a virtualization solution, which is why I’m putting them in this section. Cormac Hogan has a series going titled “Pluggable Storage Architecture (PSA) Deep Dive” (part 1 here, part 2 here, part 3 here). If you want more PSA information, you’d be hard-pressed to find a better source. Well worth reading for VMware admins and architects.
  • In this post, Chris Colotti shares information on a little-known vSwitch advanced setting that helps resolve an issue with multicast traffic and NICs in promiscuous mode. (A configuration sketch follows this list.)
  • Frank Denneman reminds everyone in this post that the concurrent vMotion limit only goes to 8 concurrent vMotions when vSphere detects the NIC speed at 10Gbps; anything less and the limit remains at 4. For those of you using solutions like HP VirtualConnect or similar that allow you to slice and dice a 10Gb link into smaller links, this is a design consideration you’ll want to be sure to incorporate. Good post, Frank!
  • Interested in some OpenStack inception? See here. How about some oVirt inception? See here. What’s that? Not familiar with oVirt? No problem—see here.
  • Windows Backup has native Hyper-V support in Windows Server 2012. That’s cool, but are you surprised? I’m not.
  • Red Hat and IBM put out a press release today on improved I/O performance with RHEL 6.4 and KVM. The press release claims that a single KVM guest on RHEL 6.4 can support up to 1.5 million IOPS. (Cue timer until next virtualization vendor ups the ante…)
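Regarding the Chris Colotti item above: if memory serves, the advanced setting in question is Net.ReversePathFwdCheckPromisc, but treat that name as an assumption on my part and confirm it against his post before changing anything. The change itself would look something like this from the ESXi shell:

    # Check the current value of the (assumed) setting
    esxcli system settings advanced list -o /Net/ReversePathFwdCheckPromisc

    # Enable the reverse path forwarding check for promiscuous-mode NICs
    esxcli system settings advanced set -o /Net/ReversePathFwdCheckPromisc -i 1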

I guess I should wrap things up now, even though I still have more articles that I’d love to share with readers. Perhaps a “mini-TST”…

In any event, courteous comments are always welcome, so feel free to speak up below. Thanks for reading and I hope you’ve found something useful!

