Interoperability

This category contains posts that focus on interoperability between various technologies or products, with an emphasis on technical details on how to resolve interoperability issues.

It seems as if APIs are popping up everywhere these days. While this isn’t a bad thing, it does mean that IT professionals need to have a better understanding of how to interact with these APIs. In this post, I’m going to discuss how to use the popular command line utility curl to interact with a couple of RESTful APIs—specifically, the OpenStack APIs and the VMware NSX API.

Before I go any further, I want to note that to work with the OpenStack and VMware NSX APIs you’ll be sending and receiving information in JSON (JavaScript Object Notation). If you aren’t familiar with JSON, don’t worry—I have an introductory post on JSON that will help get you up to speed. (Mac users might find this post helpful as well.)

Also, please note that this post is not intended to be a comprehensive reference to the (quite extensive) flexibility of curl. My purpose here is to provide enough of a basic reference to get you started. The rest is up to you!

To make consuming this information easier (I hope), I’ll break this information down into a series of examples. Let’s start with passing some JSON data to a REST API to authenticate.

Example 1: Authenticating to OpenStack

Let’s say you’re working with an OpenStack-based cloud, and you need to authenticate to OpenStack using OpenStack Identity (“Keystone”). Keystone uses the idea of tokens, and to obtain a token you have to pass correct credentials. Here’s how you would perform that task using curl.

You’re going to use a couple of different command-line options:

  • The “-d” option allows you to pass data to the remote server (in this example, the remote server running OpenStack Identity). You can either embed the data in the command or pass the data using a file; I’ll show you both variations.
  • The “-H” option allows you to add an HTTP header to the request.

If you want to embed the authentication credentials into the command line, then your command would look something like this:

curl -d '{"auth":{"passwordCredentials":{"username": "admin",
"password": "secret"},"tenantName": "customer-A"}}' \
-H "Content-Type: application/json" http://192.168.100.100:5000/v2.0/tokens

I’ve wrapped the command above for readability, but it should still work as written: the line break inside the single quotes is just whitespace to the JSON parser, and the trailing backslash continues the command onto the next line. (Or you can run it all on one line.) You’ll naturally want to substitute the correct values for the username, password, tenant, and OpenStack Identity URL.

As you might have surmised by the use of the “-H” header in that command, the authentication data you’re passing via the “-d” parameter is actually JSON. (Run it through python -m json.tool and see.) Because it’s actually JSON, you could just as easily put this information into a file and pass it to the server that way. Let’s say you put this information (which you could format for easier readability) into a file named credentials.json. Then the command would look something like this (you might need to include the full path to the file):

curl -d @credentials.json -H "Content-Type: application/json" http://192.168.100.100:35357/v2.0/tokens
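For reference, here’s what credentials.json would contain (the same JSON from the earlier command, reformatted for readability):

{
    "auth": {
        "passwordCredentials": {
            "username": "admin",
            "password": "secret"
        },
        "tenantName": "customer-A"
    }
}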

What you’ll get back from OpenStack—assuming your command is successful—is a wealth of JSON. I highly recommend piping the output through python -m json.tool as it can be difficult to read otherwise. (Alternately, you could pipe the output into a file.) Of particular usefulness in the returned JSON is a section that gives you a token ID. Using this token ID, you can prove that you’ve authenticated to OpenStack, which allows you to run subsequent commands (like listing tenants, users, etc.).
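To put that together, here’s the file-based request piped through python -m json.tool, followed by a heavily trimmed sketch of roughly where the token ID appears in the Identity v2.0 response (the full response also includes the service catalog and user information):

curl -s -d @credentials.json -H "Content-Type: application/json" \
http://192.168.100.100:35357/v2.0/tokens | python -m json.tool

{
    "access": {
        "serviceCatalog": [ ... ],
        "token": {
            "expires": "...",
            "id": "<Token ID>"
        },
        "user": { ... }
    }
}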

Example 2: Authenticating to VMware NSX

Not all RESTful APIs handle authentication in the same way. In the previous example, I showed you how to pass some credentials in JSON-encoded format to authenticate. However, some systems use other methods for authentication. VMware NSX is one example.

In this example, you’ll need to use a different set of curl command-line options:

  • The “--insecure” option tells curl to ignore HTTPS certificate validation. VMware NSX controllers only listen on HTTPS (not HTTP).
  • The “-c” option stores data received from the server (one of the NSX controllers, in this case) into a cookie file. You’ll then re-use this data in subsequent commands with the “-b” option.
  • The “-X” option allows you to specify the HTTP method, which normally defaults to GET. In this case, you’ll use the POST method along with the “-d” parameter you saw earlier to pass authentication data to the NSX controller.

Putting all this together, the command to authenticate to VMware NSX would look something like this (naturally you’d want to substitute the correct username and password where applicable):

curl --insecure -c cookies.txt -X POST -d 'username=admin&password=admin' https://192.168.100.50/ws.v1/login

Example 3: Gathering Information from OpenStack

Once you’ve gotten an authentication token from OpenStack, as I showed you in Example 1 above, you can start using API requests to get information from OpenStack.

For example, let’s say you wanted to list the instances for a particular tenant. Once you’ve authenticated, you’d want to get the ID for the tenant in question, so you’d need to ask OpenStack to give you a list of the tenants (you’ll only see the tenants your credentials permit). The command to do that would look something like this:

curl -H "X-Auth-Token: <Token ID>" http://192.168.100.70:5000/v2.0/tenants

The value to be substituted for token ID in the above command is returned by OpenStack when you authenticate (that’s why it’s important to pay attention to the data being returned). In this case, the data returned by the command will be a JSON-encoded list of tenants, tenant IDs, and tenant descriptions, along the lines of this trimmed illustration:
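{
    "tenants": [
        {
            "description": "Customer A's tenant",
            "enabled": true,
            "id": "<Tenant ID>",
            "name": "customer-A"
        }
    ],
    "tenants_links": []
}

From that data, you can get the ID of the tenant for whom you’d like to list the instances, then use a command like this: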

curl -H "X-Auth-Token: <Token ID>" http://192.168.100.70:8774/v2/<Tenant ID>/servers

This will return a stream of JSON-encoded data that includes the list of instances and each instance’s unique ID—which you could then use to get more detailed information about that instance:

curl -H "X-Auth-Token: <Token ID>" http://192.168.100.70:8774/v2/<Tenant ID>/servers/<Server ID>

By and large, the API is reasonably well-documented; you just need to be sure that you are pointing the API call at the right endpoint. For example, authentication has to happen against the server running Keystone, which may or may not be the same server that is running the Nova API services. (In the examples I just provided, Keystone and the Nova API services are running on the same host, which is why the IP address is the same in the command lines.)

Example 4: Creating Objects in VMware NSX

Getting information from VMware NSX using the RESTful API is very much like what you’ve already seen in getting information from OpenStack. Of course, the API can also be used to create objects. To create objects—such as logical switches, logical switch ports, or ACLs—you’ll use a combination of curl options:

  • You’ll use the “-b” option to pass cookie data (stored when you authenticated to NSX) back for authentication.
  • The “-X” option allows you to specify the HTTP method (in this case, POST).
  • The “-d” option lets you transfer JSON-encoded data to form the request for the object you’re going to create. You’ll specify a filename here, preceded by the “@” symbol.
  • The “-H” option adds an appropriate “Content-Type: application/json” header to the request, since you’re passing JSON-encoded data to the NSX controller.

When you put it all together, it looks something like this (substituting appropriate values where applicable):

curl --insecure -b cookies.txt -d @new-switch.json \
-H "Content-Type: application/json" -X POST https://192.168.100.50/ws.v1/lswitch

As I mentioned earlier, you’re passing JSON-encoded data to the NSX controller; the new-switch.json file referenced in the above command holds the request body that describes the logical switch to be created.

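A minimal sketch of such a file might look like this (the display name is arbitrary, the transport zone UUID comes from your environment, and you should verify the exact field names against the NSX API documentation for your release):

{
    "display_name": "test-lswitch",
    "port_isolation_enabled": false,
    "transport_zones": [
        {
            "zone_uuid": "<Transport zone UUID>",
            "transport_type": "gre"
        }
    ]
}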

Once again, I recommend piping the output of this command through python -m json.tool, as what you’ll get back on a successful call is some useful JSON data that includes, among other things, the UUID of the object (logical switch, in this case) that you just created. You can use this UUID in subsequent API calls to list properties, change properties, add logical switch ports, etc.

Clearly, there is much more that can be done with the OpenStack and VMware NSX APIs, but this at least should give you a starting point from which you can continue to explore in more detail. If anyone has any corrections, clarifications, or questions, please feel free to post them in the comments section below. All courteous comments (with vendor disclosure, where applicable) are welcome!


In this post, I’m going to show you how I combined Linux network namespaces, VLANs, Open vSwitch (OVS), and GRE tunnels to do something interesting. Well, I found it interesting, even if no one else does. However, I will provide this disclaimer up front: while I think this is technically interesting, I don’t think it has any real, practical value in a production environment. (I’m happy to be proven wrong, BTW.)

This post builds on information I’ve provided in previous posts—in particular, my posts on Linux network namespaces, on using GRE tunnels with OVS, and on using VLANs with OVS. It may pull pieces from a few other posts, but the bulk of the info is found in those. If you haven’t already read them, you might want to take a few minutes and go do that—it will probably help make this post a bit more digestible.

After working a bit with network namespaces—and knowing that OpenStack Neutron uses network namespaces in certain configurations, especially to support overlapping IP address spaces—I wondered how one might go about integrating multiple network namespaces into a broader configuration using OVS and GRE tunnels. Could I use VLANs to multiplex traffic from multiple namespaces across a single GRE tunnel?

To test my ideas, I came up with the following design:

As you can see in the diagram, my test environment has two KVM hosts. Each KVM host has a network namespace and a running guest domain. Both the network namespace and the guest domain are connected to an OVS bridge; the network namespace via a veth pair and the guest domain via a vnet port. A GRE tunnel between the OVS bridges connects the two hosts.

The idea behind the test environment was that the VM on one host would communicate with the veth interface in the network namespace on the other host, using VLAN-tagged traffic over a GRE tunnel between them.

Let’s walk through how I built this environment to do the testing.

I built KVM Host 1 using Ubuntu 12.04.2, and installed KVM, libvirt, and OVS. On KVM Host 1, I built a guest domain, attached it to OVS via a libvirt network, and configured the VLAN tag for its OVS port with this command:

ovs-vsctl set port vnet0 tag=10

In the guest domain, I configured the OS (also Ubuntu 12.04.2) to use the IP address 10.1.1.2/24.

Also on KVM Host 1, I created the network namespace, created the veth pair, moved one of the veth interfaces, and attached the other to the OVS bridge. This set of commands is what I used:

ip netns add red
ip link add veth0 type veth peer name veth1
ip link set veth1 netns red
ip netns exec red ip addr add 10.1.2.1/24 dev veth1
ip netns exec red ip link set veth1 up
ovs-vsctl add-port br-int veth0
ovs-vsctl set port veth0 tag=20

Most of the commands listed above are taken straight from the network namespaces article I wrote, but let’s break it down anyway just for the sake of full understanding:

  • The first command adds the “red” namespace.
  • The second command creates the veth pair, creatively named veth0 and veth1.
  • The third command moves veth1 into the red namespace.
  • The next two commands assign an IP address to veth1 and bring the interface up.
  • The last two commands add the veth0 interface to an OVS bridge named br-int, and then set the VLAN tag for that port to 20.

When I’m done, I’m left with KVM Host 1 running a guest domain on VLAN 10 and a network namespace on VLAN 20. (Do you see how I got there?)

I repeated the process on KVM Host 2, installing Ubuntu 12.04.2 with KVM, libvirt, and OVS. Again, I built a guest domain (also running Ubuntu 12.04.2), configured the operating system to use the IP address 10.1.2.2/24, attached it to OVS via a libvirt network, and configured its OVS port:

ovs-vsctl set port vnet0 tag=20

Similarly, I also created a new network namespace and pair of veth interfaces, but I configured them as a “mirror image” of KVM Host 1, reversing the VLAN assignments for the guest domain (as shown above) and the network namespace:

ip netns add blue
ip link add veth0 type veth peer name veth1
ip link set veth1 netns blue
ip netns exec blue ip addr add 10.1.1.1/24 dev veth1
ip netns exec blue ip link set veth1 up
ovs-vsctl add-port br-int veth0
ovs-vsctl set port veth0 tag=10

That leaves me with KVM Host 2 running a guest domain on VLAN 20 and a network namespace on VLAN 10.

The final step was to create the GRE tunnel between the OVS bridges, using the same approach I described in my write-up on GRE tunnels with OVS. As a quick refresher, the GRE port is added with a command along these lines (run on each host, pointing remote_ip at the other host’s tunnel endpoint):
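ovs-vsctl add-port br-int gre0 -- set interface gre0 type=gre \
options:remote_ip=<GRE tunnel endpoint on other KVM host>

After the GRE tunnel was established, I configured the GRE port to be a VLAN trunk using this command (this command was necessary on both KVM hosts):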

ovs-vsctl set port gre0 trunks=10,20,30

So I now had the environment I’d envisioned for my testing. VLAN 10 had a guest domain on one host and a veth interface on the other; VLAN 20 had a veth interface on one host and a guest domain on the other. Between the two hosts was a GRE tunnel configured to act as a VLAN trunk.

Now came the critical test—would the guest domain be able to ping the veth interface? This screen shot shows the results of my testing; this is the guest domain on KVM Host 1 communicating with the veth1 interface in the separate network namespace on KVM Host 2:

Success! Although not shown here, I also tested all the other combinations as well, and they worked. (Note you’d have to use ip netns exec <namespace> ping <destination> to ping from the veth1 interface in the network namespace.) I now had a configuration where I could integrate multiple network namespaces with GRE tunnels and OVS. Unfortunately—and this is where the whole “technically interesting but practically useless” statement comes from—this isn’t really a usable configuration:

  • The VLAN configurations were manually applied to the OVS ports; this means they disappeared if the guest domains were power-cycled. (This could be fixed using libvirt portgroups, but I hadn’t bothered with building them in this environment.)
  • The GRE tunnel had to be manually established and configured.
  • Because this solution uses VLAN tags inside the GRE tunnel, you’re still limited to about 4,096 separate networks/network namespaces you could support.
  • The entire process was manual. If I needed to add another VLAN, I’d have to manually create the network namespace and veth pair, manually move one of the veth interfaces into the namespace, manually add the other veth interface to the OVS bridge, and manually update the GRE tunnel to trunk that VLAN. Not very scalable, IMHO.

However, the experiment was not a total loss. In figuring out how to tie together network namespaces and tunnels, I’ve gotten a better understanding of how all the pieces work. In addition, I have a lead on an even better way of accomplishing the same task: using OpenFlow rules and tunnel keys. This is the next area of exploration, and I’ll be sure to post something when I have more information to share.

In the meantime, feel free to share your thoughts and feedback on this post. What do you think—technically interesting or not? Useful in a real-world scenario or not? All courteous comments (with vendor disclosure, where applicable) are welcome.


I’m back with another “how to” article on Open vSwitch (OVS), this time taking a look at using GRE (Generic Routing Encapsulation) tunnels with OVS. OVS can use GRE tunnels between hosts as a way of encapsulating traffic and creating an overlay network. OpenStack Quantum can (and does) leverage this functionality, in fact, to help separate different “tenant networks” from one another. In this write-up, I’ll walk you through the process of configuring OVS to use a GRE tunnel to build an overlay network between two hypervisors running KVM.

Naturally, any sort of “how to” such as this always builds upon the work of others. In particular, I found a couple of Brent Salisbury’s articles (here and here) especially useful.

This process has 3 basic steps:

  1. Create an isolated bridge for VM connectivity.
  2. Create a GRE tunnel endpoint on each hypervisor.
  3. Add a GRE interface and establish the GRE tunnel.

These steps assume that you’ve already installed OVS on your Linux distribution of choice. I haven’t explicitly done a write-up on this, but there are numerous posts from a variety of authors (in this regard, Google is your friend).

We’ll start with an overview of the topology, then we’ll jump into the specific configuration steps.

Reviewing the Topology

The graphic below shows the basic topology of what we have going on here:

Topology overview

We have two hypervisors (CentOS 6.3 and KVM, in my case), both running OVS (an older version, version 1.7.1). Each hypervisor has one OVS bridge that has at least one physical interface associated with the bridge (shown as br0 connected to eth0 in the diagram). As part of this process, you’ll create the other internal interfaces (the tep and gre interfaces), as well as the second, isolated bridge to which VMs will connect. You’ll then create a GRE tunnel between the hypervisors and test VM-to-VM connectivity.

Creating an Isolated Bridge

The first step is to create the isolated OVS bridge to which the VMs will connect. I call this an “isolated bridge” because the bridge has no physical interfaces attached. (Side note: this idea of an isolated bridge is fairly common in OpenStack and NVP environments, where it’s usually called the integration bridge. The concept is the same.)

The command is very simple, actually:

ovs-vsctl add-br br2

Yes, that’s it. Feel free to substitute a different name for br2 in the command above, if you like, but just make note of the name as you’ll need it later.

To make things easier for myself, once I’d created the isolated bridge I then created a libvirt network for it so that it was dead-easy to attach VMs to this new isolated bridge.
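If you’d like to do the same, a libvirt network definition along these lines should work (a sketch assuming the bridge is named br2; import it with virsh net-define and start it with virsh net-start):

<network>
  <name>br2</name>
  <forward mode='bridge'/>
  <bridge name='br2'/>
  <virtualport type='openvswitch'/>
</network>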

Configuring the GRE Tunnel Endpoint

The GRE tunnel endpoint is an interface on each hypervisor that will, as the name implies, serve as the endpoint for the GRE tunnel. My purpose in creating a separate GRE tunnel endpoint is to separate hypervisor management traffic from GRE traffic, thus allowing for an architecture that might leverage a separate management network (which is typically considered a recommended practice).

To create the GRE tunnel endpoint, I’m going to use the same technique I described in my post on running host management traffic through OVS. Specifically, we’ll create an internal interface and assign it an IP address.

To create the internal interface, use this command:

ovs-vsctl add-port br0 tep0 -- set interface tep0 type=internal

In your environment, you’ll substitute br0 with the name of the OVS bridge that has the physical interface attached; the tunnel endpoint belongs on that bridge (not on the isolated bridge you created earlier), since the GRE traffic has to reach the other hypervisor over the physical network. You could also use a different name than tep0. Since this name is essentially for human consumption only, use what makes sense to you. Since this is a tunnel endpoint, tep0 made sense to me.

Once the internal interface is established, assign it an IP address using ifconfig or ip, whichever you prefer. I’m still getting used to using ip (more on that in a future post, most likely), so I tend to use ifconfig, like this:

ifconfig tep0 192.168.200.20 netmask 255.255.255.0
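The equivalent using ip would look like this:

ip addr add 192.168.200.20/24 dev tep0
ip link set tep0 up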

Obviously, you’ll want to use an IP addressing scheme that makes sense for your environment. One important note: don’t use the same subnet as you’ve assigned to other interfaces on the hypervisor, or else you can’t guarantee that the GRE tunnel will originate (or terminate) on the interface you intend. This is because the Linux routing table on the hypervisor will control how the traffic is routed. (You could use source routing, a topic I plan to discuss in a future post, but that’s beyond the scope of this article.)

Repeat this process on the other hypervisor, and be sure to make note of the IP addresses assigned to the GRE tunnel endpoint on each hypervisor; you’ll need those addresses shortly. Once you’ve established the GRE tunnel endpoint on each hypervisor, test connectivity between the endpoints using ping or a similar tool. If connectivity is good, you’re clear to proceed; if not, you’ll need to resolve that before moving on.

Establishing the GRE Tunnel

By this point, you’ve created the isolated bridge, established the GRE tunnel endpoints, and tested connectivity between those endpoints. You’re now ready to establish the GRE tunnel.

Use this command to add a GRE interface to the isolated bridge on each hypervisor:

ovs-vsctl add-port br2 gre0 -- set interface gre0 type=gre \
options:remote_ip=<GRE tunnel endpoint on other hypervisor>

Substitute the name of the isolated bridge you created earlier for br2, and feel free to use something other than gre0 for the interface name. I think using gre as the base name for the GRE interfaces makes sense, but run with what makes sense to you.

Once you repeat this command on both hypervisors, the GRE tunnel should be up and running. (Troubleshooting the GRE tunnel is one area where my knowledge is weak; anyone have any suggestions or commands that we can use here?)

Testing VM Connectivity

As part of this process, I spun up an Ubuntu 12.04 server image on each hypervisor (using virt-install as I outlined here), attached each VM to the isolated bridge created earlier on that hypervisor, and assigned each VM an IP address from an entirely different subnet than the physical network was using (in this case, 10.10.10.x).

Here’s the output of the route -n command on the Ubuntu guest, to show that it has no knowledge of the “external” IP subnet—it knows only about its own interfaces:

ubuntu:~ root$ route -n
Kernel IP routing table
Destination  Gateway       Genmask        Flags Metric Ref Use Iface
0.0.0.0      10.10.10.254  0.0.0.0        UG    100    0   0   eth0
10.10.10.0   0.0.0.0       255.255.255.0  U     0      0   0   eth0

Similarly, here’s the output of the route -n command on the CentOS host, showing that it has no knowledge of the guest’s IP subnet:

centos:~ root$ route -n
Kernel IP routing table
Destination  Gateway        Genmask        Flags Metric Ref Use Iface
192.168.2.0  0.0.0.0        255.255.255.0  U     0      0   0   tep0
192.168.1.0  0.0.0.0        255.255.255.0  U     0      0   0   mgmt0
0.0.0.0      192.168.1.254  0.0.0.0        UG    0      0   0   mgmt0

In my case, VM1 (named web01) was given 10.10.10.1; VM2 (named web02) was given 10.10.10.2. Once I went through the steps outlined above, I was able to successfully ping VM2 from VM1, as you can see in this screenshot:

VM-to-VM connectivity over GRE tunnel

(Although it’s not shown here, connectivity from VM2 to VM1 was obviously successful as well.)

“OK, that’s cool, but why do I care?” you might ask.

In this particular context, it’s a bit of a science experiment. However, if you take a step back and begin to look at the bigger picture, then (hopefully) something starts to emerge:

  • We can use an encapsulation protocol (GRE in this case, but it could have just as easily been STT or VXLAN) to isolate VM traffic from the physical network and from other VM traffic. (Think multi-tenancy.)
  • While this process was manual, think about some sort of controller (an OpenFlow controller, perhaps?) that could help automate this process based on its knowledge of the VM topology.
  • Using a virtualized router or virtualized firewall, I could easily provide connectivity into or out of this isolated (encapsulated) private network. (This is probably something I’ll experiment with later.)
  • What if we wrapped some sort of orchestration framework around this, to help deploy VMs, create networks, add routers/firewalls automatically, all based on the customer’s needs? (OpenStack Networking, anyone?)

Anyway, I hope this is helpful to someone. As always, I welcome feedback and suggestions for improvement, so feel free to speak up in the comments below. Vendor disclosures, where appropriate, are greatly appreciated. Thanks!


I had a reader contact me with a question on using Kerberos and LDAP for authentication into Active Directory, based on Active Directory integration work I did many years ago. I was unable to help him, but he did find the solution to the problem, and I wanted to share it here in case it might help others.

The issue was that he was experiencing a problem using native Kerberos authentication against Active Directory with SSH. Specifically, when he tried to open an SSH session to another system from a user account that had a Kerberos Ticket Granting Ticket (TGT), the remote system dropped the connection with a “connection closed” error message. (The expected behavior was for the remote system to authenticate the user automatically using the TGT.) However, when he stopped the SSH daemon and then ran it manually as root, the Kerberos authentication worked.

It’s been a number of years since I dealt with this sort of integration, so I wasn’t really sure where to start, to be honest, and I relayed this to the reader.

Fortunately, the reader contacted me a few days later with the solution. As it turns out, the problem was with SELinux. Because the keytab file had been copied over from a Windows KDC (an Active Directory domain controller), it lacked the right security context and was considered “foreign.” The fix, as my reader discovered, is to use the restorecon command to reset the security context on the Kerberos files, like this (the last command may not be necessary):

restorecon /etc/krb5.conf
restorecon /etc/krb5.keytab
restorecon /root/.k5login

Once the security context had been reset, the Kerberos authentication via SSH worked as expected. Thanks, Tomas!


I like to spend time examining the areas where different groups of technologies intersect. Personally, I find this activity fascinating, and perhaps that’s the reason that I find myself pursuing knowledge and experience in virtualization, networking, storage, and other areas simultaneously—it’s an effort to spend more time “on the border” between various technologies.

One border, in particular, is very interesting to me: the border between virtualization and networking. Time spent thinking about the border between networking and virtualization is what has generated posts like this one, this one, or this one. Because I’m not a networking expert (yet), most of the stuff I generate is junk, but at least it keeps me entertained—and it occasionally prods the Really Smart Guys (RSGs) to post something far more intelligent than anything I can create.

Anyway, I’ve been thinking more about some of these networking-virtualization chimeras, and I thought it might be interesting to talk about them, if for no other reason than to encourage the RSGs to correct me and help everyone understand a little better.

(A chimera, by the way, was a mythological fire-breathing creature that was part lion, part goat, and part serpent; more generically, the word refers to any sort of organism that has two groups of genetically distinct cells. In layman’s terms, it’s something that is a mix of two other things.)

Here are some of the networking-virtualization chimeras I’ve concocted:

  • FabricPath/TRILL on the hypervisor: See this blog post for more details. It turns out, at least at first glance, that this particular combination doesn’t seem to buy us much. The push for large L2 domains that seemed to fuel FabricPath and TRILL now seems to be abating in favor of network overlays and L3 routing.

  • MPLS-in-IP on the hypervisor: I also wrote about this strange concoction here. At first, I thought I was being clever and sidestepping some issues by bringing MPLS support into the hypervisor, but in thinking more about this I realize I’m wrong. Sure, we could encapsulate VM-to-VM traffic into MPLS, then encapsulate MPLS in UDP, but how is that any better than just encapsulating VM-to-VM traffic in VXLAN? It isn’t. (Not to mention that Ivan Pepelnjak set the record straight.)

  • LISP on the hypervisor: I thought this was a really good idea; by enabling LISP on the hypervisor and essentially making the hypervisor an ITR/ETR (see here for more LISP info), inter-DC vMotion becomes a snap. Want to use a completely routed access layer? No problem. Of course, that assumes all your WAN and data center equipment are LISP-capable and enabled/configured for LISP. I’m not the only one who thought this idea was cool, either. I’m sure there are additional problems/considerations of which I’m not aware, though—networking gurus, want to chime in and educate me on what I’m missing?

  • OTV on the hypervisor: This one isn’t really very interesting, as it bears great similarity to VXLAN (both OTV and VXLAN, to my knowledge, use very similar frame formats and encapsulation schemes). Is there something else here I’m missing?

  • VXLAN on physical switches: This one is interesting, even necessary according to some experts. Enabling VXLAN VTEP (VXLAN Tunnel End Point) termination on physical switches might also address some of the odd traffic patterns that would result from the use of VXLAN (see here for a simple example). Arista Networks demonstrated this functionality at VMworld 2012 in San Francisco, so this particular networking-virtualization mashup is probably closer to reality than any of the others.

  • OpenFlow on the hypervisor: Open vSwitch (OVS) already supports OpenFlow, so you might say that this mashup already exists. It’s not unreasonable to think Nicira might port OVS to VMware vSphere, which would bring an OpenFlow-compatible virtual switch to a much larger installed base. The missing piece is, of course, an OpenFlow controller. While an interesting mental exercise, I’m keenly interested to know what sort of real-world problems this might help solve, and would love to hear from any OpenFlow experts out there what they think.

  • Virtualizing physical switches: No, I’m not talking about running switch software on the hypervisor (think Nexus 1000V). Instead, I’m thinking more along the lines of FlowVisor, which in effect virtualizes a switch’s control plane so that multiple “slices” of a switch can be independently controlled by an external OpenFlow controller. If you’re familiar with NetApp, think of their “vfiler” construct, or think of the Virtual Device Contexts (VDCs) in a Nexus 7000. However, I’m thinking of something more device-independent than Nexus 7000 VDCs. As more and more switches move to x86 hardware, this seems like it might be something that could really take off. Multi-tenancy support (each “virtual switch instance” being independently managed), traffic isolation, QoS, VLAN isolation…lots of possibilities exist here.

Are there any other groupings that are worth exploring or discussing? Any other “you got your virtualization peanut butter in my networking chocolate” combinations that might help address some of the issues in data centers today? Feel free to speak up in the comments below. Courteous comments are invited and encouraged.


A short while ago, I talked about how to add client-side encryption to Dropbox using EncFS. In that post, I suggested using BoxCryptor to access your encrypted files. A short time later, though, I uncovered a potential issue with (what I thought to be) BoxCryptor. I have an update on that issue.

In case you haven’t read the comments to the original BoxCryptor-Markdown article, it turns out that the problem with using Markdown files with BoxCryptor doesn’t lie with BoxCryptor—it lies with Byword, the Markdown editor I was using on iOS. Robert, founder of BoxCryptor, suggested that Byword doesn’t properly register the necessary handlers for Markdown files, and that’s why BoxCryptor can’t preview the files or use “Open In…” functionality. On his suggestion, I tried Textastic.

It works flawlessly. I can preview Markdown files in the iOS BoxCryptor client, then use “Open In…” to send the Markdown files to Textastic for editing. I can even create new Markdown files in Textastic and then send them to BoxCryptor for encrypted upload to Dropbox (where I can, quite naturally, open them using my EncFS filesystem on my Mac systems). Very nice!

If you are thinking about using EncFS with Dropbox and using BoxCryptor to access those files from iOS, and those files are text-based files (like Markdown, plain text, HTML, and similar file formats), I highly recommend Textastic.


About a week ago, I published an article showing you how to use EncFS and BoxCryptor to provide client-side encryption of Dropbox data. After working with this configuration for a while, I’ve run across a problem (at least, a problem for me—it might not be a problem for you). The problem lies on the iPad end of things.

If you haven’t read the earlier post, the basic gist of the idea is to use EncFS—an open source encrypting file system—and OSXFUSE to provide file-level encryption of Dropbox data on your OS X system. This is client-side encryption where you are in the control of the encryption keys. To access these encrypted files from your iPad, you’ll use the BoxCryptor iOS client, which is compatible with EncFS and decrypts the files.

Sounds great, right? Well, it is…mostly. The problem arises from the way that the iPad handles files. BoxCryptor uses the built-in document preview functionality of iOS, which in turn allows you to access the iPad’s “Open In…” functionality. The only way to get to the “Open In…” menu is to first preview the document using the iOS document preview feature. Unfortunately, the iOS document preview functionality doesn’t recognize a number of files and file types. Most notably for me, it doesn’t recognize Markdown files (I’ve tried several different file extensions and none of them seem to work). Since the preview feature doesn’t recognize Markdown, then I can’t get to “Open In…” to open the documents in Byword (an iOS Markdown editor), and so I’m essentially unable to access my content.

To see if this was an iOS-wide problem or a problem limited to BoxCryptor, I tested accessing some non-encrypted files using the Dropbox iOS client. The Dropbox client will, at least, render Markdown and OPML files as plain text. The Dropbox iOS client still does not, unfortunately, know how to get the Markdown files into Byword. I even tried a MindManager mind map; the Dropbox client couldn’t preview it (not surprisingly), but it did give me the option to open it in the iOS version of MindManager. The BoxCryptor client also worked with a mind map, but refuses to work with plain text-based files like Markdown and OPML.

Given that I create the vast majority of my content in Markdown, this is a problem. If anyone has any suggestions, I’d love to hear them in the comments. Otherwise, I’ll post more here as soon as I learn more or find a workaround.


Lots of folks like using Dropbox, the ubiquitous store-and-sync cloud storage service; I am among them. However, concerns over the privacy and security of my data have kept me from using Dropbox for some projects. To help address that, I looked around to find an open, interoperable way of adding an extra layer of encryption onto my data. What I found is described in this post, and it involves using the open source EncFS and OSXFUSE projects along with an application from BoxCryptor to provide real-time, client-side AES-256 encryption.

Background

First, some background on why I went down this path. Of all the various cloud-based services out there, I’m not sure there is a service that I rely upon more than Dropbox. The Dropbox team has done a great job of creating an almost seamlessly integrated product that makes it much easier to keep your files accessible across locations and devices.

Of course, Dropbox is not without its flaws, and security and privacy are considered among the prime concerns. Dropbox states they use server-side encryption to protect your data on the Amazon S3 infrastructure, but Dropbox also controls those server-side encryption keys. Many individuals, myself among them, would prefer client-side encryption with control over our own encryption keys.

So, a fair number of companies have sprung up offering ways to help fix this. One of these is BoxCryptor, which offers an application for Windows, Mac, iOS, and Android that performs client-side encryption. From the Mac OS X perspective, BoxCryptor’s solution is, as far as I know, built on top of some fundamental building blocks:

  • The open source OSXFUSE project, which is a port of FUSE for Mac OS X
  • A Mac port of the open source EncFS FUSE filesystem

I would imagine that ports of these components for other operating systems are used in their other platforms, but I don’t know this for certain. Regardless, it’s possible to use BoxCryptor’s application to get client-side encryption across a variety of platforms. For those who want a quick, easy, simple solution, my recommendation is to use BoxCryptor. However, if you want a bit more flexibility, then using the individual components can give you the same effect. I chose to use the individual components, more for my own understanding than anything else, and that’s what is described in this post.

What You’ll Need

This post was written from the perspective of getting this solution running on Mac OS X; if you’re using a different operating system, the specifics will quite naturally be different (although the broad concepts are still applicable).

There are four main components you’ll need:

  • OSXFUSE: This is a port of FUSE to OS X, and is one of a couple of successors to the now-defunct MacFUSE project. OSXFUSE is available to download here.
  • Macfusion: Macfusion is a GUI to help simplify and automate the mounting of filesystems. While it’s not strictly necessary, it does make things a lot easier. Macfusion can be downloaded here.
  • EncFS: You’ll need a version of EncFS for Mac OS X. There are a variety of ways to get it; I used an installer actually made available by BoxCryptor here.
  • EncFS plugin for Macfusion: This is what enables Macfusion to mount or unmount EncFS filesystems, and is actually included in the EncFS installer above. You can also download the plugin here.

Setting Things Up

Once you have all the components you need, then you’re ready to start installing.

  1. First, install OSXFUSE. When installing OSXFUSE, be sure to select to install the MacFUSE Compatibility Layer. The OSXFUSE installer recommends rebooting after the installation, but I waited until I’d finished installing all the components.

  2. Once OSXFUSE is installed, install Macfusion. Macfusion is distributed as a ZIP file; simply unzip the file and move it to the location of your choice. I installed it to /Applications.

  3. Next, run the EncFS installer. During the installation, select to install only EncFS and the EncFS plugin for Macfusion. Do not install any of the other components. I rebooted here.

  4. You’ll need both a mount point as well as a directory to store the raw, encrypted data. Since the raw, encrypted data is intended to be synchronized via Dropbox, you’ll want to create the encrypted directory in the Dropbox hierarchy. I chose to use ~/Dropbox/Secure. For the mount point, I chose to use ~/.Secure. You can obviously modify both of these directories to better suit your own needs or preferences.

  5. Once you have all the components installed and the mount point and encrypted directories created, you’re ready to actually create the encrypted filesystem. Run the command encfs ~/Dropbox/Secure ~/.Secure. The encfs program will run through some questions; select “x” for Expert mode and configure it according to the guidelines described in this support article. When prompted for a passphrase, be sure to enter an appropriately complex passphrase—and make sure you remember it (you’ll need it later).

  6. When encfs finishes running, it will mount an encrypted volume on your desktop. It will have an odd name, but you won’t be able to change it. Go ahead and eject (unmount) this volume; we’ll remount it again shortly using Macfusion. Note that you might see some Dropbox activity here.

  7. Launch Macfusion, then re-add the encrypted filesystem created in step 5; you’ll need to supply the same passphrase you entered earlier. Here in Macfusion you’ll be able to specify a name for the encrypted filesystem and supply a custom icon as well. Mount the encrypted filesystem to be sure that everything is working as expected.

That’s it—any files you now copy into the encrypted filesystem—which is represented by an external drive on your Desktop—will be encrypted using AES-256 and then synchronized to Dropbox. Cool, huh?

Adding Another Computer

I have two Macs in my office (my 13″ MacBook Pro and my Mac Pro), so I had to repeat the process on the second Mac so that it could read the encrypted files. If you have more than one computer, you’ll need to do the same. Simply go through steps 1 through 5. In step 5, though, it will only prompt for the passphrase. You can even skip steps 5 and 6 to go straight to 7. As long as you have the passphrase for the encrypted filesystem, adding access for additional Dropbox-linked computers should be a piece of cake.

Adding Access from iOS

This is where BoxCryptor comes back into play again. Install the BoxCryptor app onto your device, then link it to your Dropbox account and select the directory within Dropbox where the raw, encrypted data is found. As long as you followed the configuration guidelines here, BoxCryptor should be able to decrypt the encrypted filesystem created with EncFS.

Following these instructions, you’ll gain a way to add AES-256 encryption to your Dropbox files (or a subset of your Dropbox files) while still maintaining access to those files from just about any location across a variety of devices.

If anyone has any questions or clarifications about what I’ve posted here, please speak up in the comments below. All courteous comments are welcome!


A few days ago, I posted an article about using VLANs with Open vSwitch (OVS) and libvirt. In that article, I stated that libvirt 1.0.0 was needed. Unfortunately (or maybe fortunately?), I was wrong. The libvirt-OVS integration works in earlier versions, too.

In my earlier OVS-libvirt post, I supplied this snippet of XML to define a libvirt network that you could use with OVS:
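The definition was along these lines (the names ovs-net, ovsbr0, and vlan-10 below are illustrative placeholders you’d adapt to your environment):

<network>
  <name>ovs-net</name>
  <forward mode='bridge'/>
  <bridge name='ovsbr0'/>
  <virtualport type='openvswitch'/>
  <portgroup name='vlan-10'>
    <vlan>
      <tag id='10'/>
    </vlan>
  </portgroup>
</network>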

You would use that libvirt network definition in conjunction with a domain configuration like this:
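Something like this, where the key is that the portgroup attribute matches a portgroup name from the network definition (again, the names are illustrative):

<interface type='network'>
  <source network='ovs-net' portgroup='vlan-10'/>
  <model type='virtio'/>
</interface>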

I just tested this on Ubuntu 12.04.1 with libvirt 0.10.2, and it worked just fine. I think the problems I experienced in my earlier testing were all related to incorrect XML configurations. Some other write-ups of the libvirt-OVS integration seem to imply that you need to use the profileid parameter to link the domain XML configuration and the network XML configuration, but my testing seems to show that the real link is the portgroup name.

I haven’t yet tested this on earlier versions of libvirt, but I can confirm that it works with Ubuntu 12.04 using manually compiled versions of libvirt 0.10.2 and libvirt 1.0.0.

If anyone has any additional information to share, please speak up in the comments.


In previous posts, I’ve shown you how to use Open vSwitch (OVS) with VLANs through fake bridges, as well as how to wrap a libvirt virtual network around OVS fake bridges. Both of these techniques are acceptable for configuring VLANs with OVS, but in this post I want to talk about using VLANs with OVS via a greater level of libvirt integration. This has been talked about elsewhere, but I wasn’t able to make it work until libvirt 1.0.0 was released. (Update: I was able to make it work with an earlier version. See here.)

First, let’s recap what we know so far. If you know the port to which a particular domain (guest VM) is connected, you can configure that particular port as a VLAN trunk like this:

ovs-vsctl set port <port name> trunks=10,11,12

This configuration would pass the VLAN tags for VLANs 10, 11, and 12 all the way up to the domain, where—assuming the OS installed in the domain has VLAN support—you could configure network connectivity appropriately. (I hope to have a blog post up on this soon.)

Along the same lines, if you know the port to which a particular domain is connected, you could configure that port as a VLAN access port with a command like this:

ovs-vsctl set port <port name> tag=15

This command makes the domain a member of VLAN 15, much like the use of the switchport access vlan 15 command on a Cisco switch. (I probably don’t need to state that this isn’t the only way—see the other OVS/VLAN related posts above for more techniques to put a domain into a particular VLAN.)

These commands work perfectly fine and are all well and good, but there’s a problem here—the VLAN information isn’t contained in the domain configuration. Instead, it’s in OVS, attached to an ephemeral port—meaning that when the domain is shut down, the port and the associated configuration disappears. What I’m going to show you in this post is how to use VLANs with OVS in conjunction with libvirt for persistent VLAN configurations.

This document was written using Ubuntu 12.04.1 LTS and Open vSwitch 1.4.0 (installed straight from the Precise Pangolin repositories using apt-get). Libvirt was compiled manually (see instructions here). Due to some bugs, it appears you need at least version 1.0.0 of libvirt. Although the Silicon Loons article I referenced earlier mentions an earlier version of libvirt, I was not able to make it work until the 1.0.0 release. Your mileage may vary, of course—I freely admit that I might have been doing something wrong in my earlier testing.

To make VLANs work with OVS and libvirt, two things are necessary:

  1. First, you must define a libvirt virtual network that contains the necessary portgroup definitions.
  2. Second, you must include the portgroup reference to the virtual network in the domain (guest VM) configuration.

Let’s look at each of these steps.

Creating the Virtual Network

The easiest way I’ve found to create the virtual network is to craft the network XML definition manually, then import it into libvirt using virsh net-define.

Here’s some sample XML code (I’ll break down the relevant parts after the code):
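(The listing below is representative; adjust the network name, bridge name, and VLAN IDs to suit your environment.)

<network>
  <name>ovs-network</name>
  <forward mode='bridge'/>
  <bridge name='ovsbr0'/>
  <virtualport type='openvswitch'/>
  <portgroup name='vlan-01'/>
  <portgroup name='vlan-10'>
    <vlan>
      <tag id='10'/>
    </vlan>
  </portgroup>
  <portgroup name='vlan-20'>
    <vlan>
      <tag id='20'/>
    </vlan>
  </portgroup>
  <portgroup name='vlan-all'>
    <vlan trunk='yes'>
      <tag id='10'/>
      <tag id='20'/>
    </vlan>
  </portgroup>
</network>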

The key takeaways from this snippet of XML are:

  1. First, note that the OVS bridge is specified as the target bridge in the <bridge name=...> element. You’ll need to edit this as necessary to match your specific OVS configuration. For example, in my configuration, ovsbr0 refers to a bridge that handles only host management traffic.
  2. Second, note the <portgroup name=...> element. This is where the “magic” happens. Note that you can have no VLAN element (as in the vlan-01 portgroup), a VLAN tag (as in the vlan-10 or vlan-20 portgroups), or a set of VLAN tags to pass as a trunk (as in the vlan-all portgroup).

Once you’ve got the network definition in the libvirt XML format, you can import that configuration with virsh net-define <XML filename>. (Prepend this command with sudo if necessary.)

After it is imported, use virsh net-start <network name> to start the libvirt virtual network. If you make changes to the virtual network, such as adding or removing portgroups, be sure to restart the virtual network using virsh net-destroy <network name> followed by virsh net-start <network name>.

Now that the virtual network is defined, we can move on to creating the domain configuration.

Configuring the Domain Networking

As far as I’m aware, to include the appropriate network definitions in the domain XML configuration, you’ll have to edit the domain XML manually.

Here’s the relevant snippet of domain XML configuration:
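(A representative example; the network and portgroup names must match the virtual network defined earlier.)

<interface type='network'>
  <source network='ovs-network' portgroup='vlan-10'/>
  <model type='virtio'/>
</interface>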

You’ll likely have more configuration entries in your domain configuration, but the important one is the <source network=...> element, where you’ll specify both the name of the network you created as well as the name of the portgroup to which this domain should be attached.

With this configuration in place, when you start the domain, it will pass the necessary parameters to OVS to apply the desired VLAN configuration automatically. In other words, once you define the desired configuration in the domain XML, it’s maintained persistently inside the domain XML (instead of on the ephemeral port in OVS), re-applied anytime the domain is started.

Verifying the Configuration

Once the appropriate configuration is in place, you can see the OVS configuration created by libvirt when a domain is started by simply using ovs-vsctl show or—for more detailed information—ovs-vsctl list port <port name>. Of particular interest when using ovs-vsctl list port <port name> are the tag and/or trunks values; these are where VLAN configurations are applied.

Summary

In this post, I’ve shown you how to create libvirt virtual networks that integrate with OVS to provide persistent VLAN configurations for domains connected to an OVS bridge. The key benefit that arises from this configuration is that you no longer need to know to which OVS port a given domain is connected. Because the VLAN configuration is stored with the domain and applied to OVS automatically when the domain is started, you can be assured that a domain will always be attached to the correct VLAN when it starts.

As usual, I encourage your feedback on this article. If you have questions, thoughts, corrections, or clarifications, you are invited to speak up in the comments below.

