It seems as if APIs are popping up everywhere these days. While this isn’t a bad thing, it does mean that IT professionals need to have a better understanding of how to interact with these APIs. In this post, I’m going to discuss how to use the popular command line utility curl to interact with a couple of RESTful APIs—specifically, the OpenStack APIs and the VMware NSX API.

Before I go any further, I want to note that to work with the OpenStack and VMware NSX APIs you’ll be sending and receiving information in JSON (JavaScript Object Notation). If you aren’t familiar with JSON, don’t worry—I have an introductory post on JSON that will help get you up to speed. (Mac users might find this post helpful as well.)

Also, please note that this post is not intended to be a comprehensive reference to the (quite extensive) flexibility of curl. My purpose here is to provide enough of a basic reference to get you started. The rest is up to you!

To make consuming this information easier (I hope), I’ll break this information down into a series of examples. Let’s start with passing some JSON data to a REST API to authenticate.

Example 1: Authenticating to OpenStack

Let’s say you’re working with an OpenStack-based cloud, and you need to authenticate to OpenStack using OpenStack Identity (“Keystone”). Keystone uses the idea of tokens, and to obtain a token you have to pass correct credentials. Here’s how you would perform that task using curl.

You’re going to use a couple of different command-line options:

  • The “-d” option allows you to pass data to the remote server (in this example, the remote server running OpenStack Identity). You can either embed the data in the command or pass the data using a file; I’ll show you both variations.
  • The “-H” option allows you to add an HTTP header to the request.

If you want to embed the authentication credentials into the command line, then your command would look something like this:

curl -d '{"auth":{"passwordCredentials":{"username": "admin",
"password": "secret"},"tenantName": "customer-A"}}'
-H "Content-Type: application/json" http://192.168.100.100:5000/v2.0/tokens

I’ve wrapped the text above for readability, but on the actual command line it would all run together with no breaks. (So don’t try to copy and paste; it probably won’t work.) You’ll naturally want to substitute the correct values for the username, password, tenant, and OpenStack Identity URL.

As you might have surmised from the “Content-Type: application/json” header added with the “-H” option, the authentication data you’re passing via the “-d” parameter is actually JSON. (Run it through python -m json.tool and see.) Because it’s JSON, you could just as easily put this information into a file and pass it to the server that way. Let’s say you put this information (which you could format for easier readability) into a file named credentials.json. Then the command would look something like this (you might need to include the full path to the file):

curl -d @credentials.json -H "Content-Type: application/json" http://192.168.100.100:35357/v2.0/tokens
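
For reference, here’s the same JSON from the earlier command as it might appear in credentials.json, reformatted for readability:

{
    "auth": {
        "passwordCredentials": {
            "username": "admin",
            "password": "secret"
        },
        "tenantName": "customer-A"
    }
}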

What you’ll get back from OpenStack—assuming your command is successful—is a wealth of JSON. I highly recommend piping the output through python -m json.tool, as it can be difficult to read otherwise. (Alternatively, you could pipe the output into a file.) Of particular usefulness in the returned JSON is a section that gives you a token ID. Using this token ID, you can prove that you’ve authenticated to OpenStack, which allows you to run subsequent commands (like listing tenants, users, etc.).
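
As a rough illustration, here’s what the token section of the response looks like after being piped through python -m json.tool (heavily trimmed, and with made-up values; an actual response also includes the service catalog, user details, and more):

curl -s -d @credentials.json -H "Content-Type: application/json" http://192.168.100.100:35357/v2.0/tokens | python -m json.tool

{
    "access": {
        "token": {
            "expires": "2014-02-27T12:00:00Z",
            "id": "0123456789abcdef0123456789abcdef"
        }
    }
}

It’s the value of that id field that you’ll supply via the X-Auth-Token header in Example 3 below.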

Example 2: Authenticating to VMware NSX

Not all RESTful APIs handle authentication in the same way. In the previous example, I showed you how to pass some credentials in JSON-encoded format to authenticate. However, some systems use other methods for authentication. VMware NSX is one example.

In this example, you’ll need to use a different set of curl command-line options:

  • The “--insecure” option tells curl to ignore HTTPS certificate validation. VMware NSX controllers only listen on HTTPS (not HTTP).
  • The “-c” option stores data received from the server (one of the NSX controllers, in this case) into a cookie file. You’ll then re-use this data in subsequent commands with the “-b” option.
  • The “-X” option allows you to specify the HTTP method, which normally defaults to GET. In this case, you’ll use the POST method along with the “-d” parameter you saw earlier to pass authentication data to the NSX controller.

Putting all this together, the command to authenticate to VMware NSX would look something like this (naturally you’d want to substitute the correct username and password where applicable):

curl --insecure -c cookies.txt -X POST -d 'username=admin&password=admin' https://192.168.100.50/ws.v1/login

Example 3: Gathering Information from OpenStack

Once you’ve gotten an authentication token from OpenStack, as I showed you in Example 1 above, you can start using API requests to get information from OpenStack.

For example, let’s say you wanted to list the instances for a particular tenant. Once you’ve authenticated, you’d want to get the ID for the tenant in question, so you’d need to ask OpenStack to give you a list of the tenants (you’ll only see the tenants your credentials permit). The command to do that would look something like this:

curl -H "X-Auth-Token: <Token ID>" http://192.168.100.70:5000/v2.0/tenants

The value to be substituted for token ID in the above command is returned by OpenStack when you authenticate (that’s why it’s important to pay attention to the data being returned). In this case, the data returned by the command will be a JSON-encoded list of tenants, tenant IDs, and tenant descriptions. From that data, you can get the ID of the tenant for whom you’d like to list the instances, then use a command like this:

curl -H "X-Auth-Token: <Token ID>" http://192.168.100.70:8774/v2/<Tenant ID>/servers

This will return a stream of JSON-encoded data that includes the list of instances and each instance’s unique ID—which you could then use to get more detailed information about that instance:

curl -H "X-Auth-Token: <Token ID>" http://192.168.100.70:8774/v2/<Tenant ID>/servers/<Server ID>

By and large, the API is reasonably well-documented; you just need to be sure that you are pointing the API call at the right endpoint. For example, authentication has to happen against the server running Keystone, which may or may not be the same server that is running the Nova API services. (In the examples I just provided, Keystone and the Nova API services are running on the same host, which is why the IP address is the same in the command lines.)

Example 4: Creating Objects in VMware NSX

Getting information from VMware NSX using the RESTful API is very much like what you’ve already seen in getting information from OpenStack. Of course, the API can also be used to create objects. To create objects—such as logical switches, logical switch ports, or ACLs—you’ll use a combination of curl options:

  • You’ll use the “-b” option to pass cookie data (stored when you authenticated to NSX) back for authentication.
  • The “-X” option allows you to specify the HTTP method (in this case, POST).
  • The “-d” option lets you pass JSON-encoded data that forms the request for the object you’re going to create. You’ll specify a filename here, preceded by the “@” symbol.
  • The “-H” option adds an appropriate “Content-Type: application/json” header to the request, since you’re passing JSON-encoded data to the NSX controller.

When you put it all together, it looks something like this (substituting appropriate values where applicable):

curl --insecure -b cookies.txt -d @new-switch.json
-H "Content-Type: application/json" -X POST https://192.168.100.50/ws.v1/lswitch

As I mentioned earlier, you’re passing JSON-encoded data to the NSX controller; the new-switch.json file referenced in the above command describes the logical switch to create. A minimal example might look something like this (the transport zone UUID below is a placeholder, and the transport type depends on how your transport zone is configured):

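{
    "display_name": "test-logical-switch",
    "transport_zones": [
        {
            "zone_uuid": "11111111-2222-3333-4444-555555555555",
            "transport_type": "stt"
        }
    ]
}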

Once again, I recommend piping the output of this command through python -m json.tool, as what you’ll get back on a successful call is some useful JSON data that includes, among other things, the UUID of the object (logical switch, in this case) that you just created. You can use this UUID in subsequent API calls to list properties, change properties, add logical switch ports, etc.

Clearly, there is much more that can be done with the OpenStack and VMware NSX APIs, but this at least should give you a starting point from which you can continue to explore in more detail. If anyone has any corrections, clarifications, or questions, please feel free to post them in the comments section below. All courteous comments (with vendor disclosure, where applicable) are welcome!


In this article, I’ll show you how to install Ubuntu packages from a specific repository. It’s not that this is a terribly difficult process, but it also isn’t necessarily intuitive for those who haven’t had to do it before. I ran into this while trying to install an alpha release of LXC 1.0.0 for my recent post on automatically connecting LXC to Open vSwitch (OVS).

Normally, you’d install a package using apt-get like this:

apt-get install package-name

However, when you want to install a package from a specific repository, the command syntax shifts slightly so that it looks like this instead:

apt-get install package-name/repository

So, in my particular case, I was trying to install LXC. However, I needed the alpha LXC 1.0.0 package from the precise-backports repository. So the command to do that looked like this:

apt-get install lxc/precise-backports
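
As a quick sanity check, apt-cache will show you which versions of a package are available and from which repositories, so you can confirm the repository name before you install:

apt-cache policy lxc

The output lists the installed version, the candidate version, and the repositories offering each available version.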

Are there other useful apt-get tidbits like this that other readers might find particularly useful? Feel free to share in the comments below. Thanks for reading!


Welcome to Technology Short Take #38, another installment in my irregularly-published series that collects links and thoughts on data center-related technologies from around the web. But enough with the introduction, let’s get on to the content already!

Networking

  • Jason Edelman does some experimenting with the Python APIs on a Cisco Nexus 3000. In the process, he muses about the value of configuration management tool chains such as Chef and Puppet in a world of “open switch” platforms such as Cumulus Linux.
  • Speaking of Cumulus Linux…did you see the announcement that Dell has signed a reseller agreement with Cumulus Networks? I’m pretty excited about this announcement, and I hope that Cumulus sees great success as a result. There are a variety of write-ups about the announcement; some good, many not so good. The not-so-good variety typically refers to Cumulus’ product as an SDN product when technically it isn’t. This article on Barron’s by Tiernan Ray is a pretty good summary of the announcement and some of its implications.
  • Pete Welcher has launched a series of articles discussing “practical SDN,” focusing on the key leaders in the market: NSX, DFA, and the yet-to-be-launched ACI. In the initial installment of the series, he does a good job of providing some basics around each of the products, although (as would be expected of a product that hasn’t launched yet) he has to do some guessing when it comes to ACI. The series continues with a discussion of L2 forwarding and L3 forwarding across the various products. Definitely worth reading, in my opinion.
  • Nick Buraglio takes away all your reasons for not collecting flow-based data from your environment with his write-up on installing nfsen and nfdump for NetFlow and/or sFlow collection.
  • Terry Slattery has a nice write-up on new network designs that are ideally suited for SDN. If you are looking for a primer on “next-generation” network designs, this is worth reviewing.
  • Need some Debian packages for Open vSwitch 2.0? Here’s another article from Nick Buraglio—he has some information to help you out.

Servers/Hardware

Nothing this time, but check back next time.

Security

Nothing from my end. Maybe you have something you’d like to share in the comments?

Cloud Computing/Cloud Management

  • Christian Elsen (who works in Integration Engineering at VMware) has a nice series of articles going on using OpenStack with vSphere and NSX. The series starts here, but follow the links at the bottom of that article for the rest of the posts. This is really good stuff—he includes the use of the NSX vSwitch with vSphere 5.5, and talks about vSphere OpenStack Virtual Appliance (VOVA) as well. All in all, well worth a read in my opinion.
  • Maish Saidel-Keesing (one of my co-authors on the first edition of VMware vSphere Design and also a super-sharp guy) recently wrote an article on how adoption of OpenStack will slow the adoption of SDN. While I agree that widespread adoption of OpenStack could potentially retard the evolution of enterprise IT, I’m not necessarily convinced that it will slow the adoption of SDN and network virtualization solutions. Why? Because, in part, I believe that the full benefits of something like OpenStack need a good network virtualization solution in order to be realized. Yes, some vendors are writing plugins for Neutron that manipulate physical switches. But for developers to get true isolation, application portability, the ability to re-create production environments in development—all that is going to require network virtualization.
  • Here’s a useful OpenStack CLI cheat sheet for some commonly-used commands.

Operating Systems/Applications

  • If you’re using Ansible (a product I haven’t had a chance to use but am closely watching), you might be interested in this article on an upcoming change to the SSH transport that Ansible uses. This change, referred to as “ssh_alt,” promises a significant performance increase for Ansible. Good stuff.
  • I don’t think I’ve mentioned this before, but Forbes Guthrie (my co-author on the VMware vSphere Design books and an all-around great guy) has a series going on using Linux as a domain controller for a vSphere-based lab. The series is up to four parts now: part 1, part 2, part 3, and part 4.
  • Need (or want) to increase the SCSI timeout for a KVM guest? See these instructions.
  • I’ve been recommending that IT pros get more familiar with Linux, as I think its influence in the data center will continue to grow. The problem I sometimes face is that experienced folks tend to share “super commands” that ordinary folks have a hard time decomposing. This site should make that easier. I’ve tried it—it’s actually pretty handy.

Storage

  • Jim Ruddy (an EMCer, former co-worker of mine, and an overall great guy) has a pretty cool series of articles discussing the use of EMC ViPR in conjunction with OpenStack. Want to use OpenStack Glance with EMC ViPR using ViPR’s Swift API support? See here. Want a multi-node Cinder setup with ViPR? Read how here. Multi-node Glance with ViPR? He’s got it. If you’re new to ViPR (who outside of EMC isn’t?), you might also find his articles on deploying EMC ViPR, setting up back-end storage for ViPR, or deploying object services with ViPR to also be helpful.
  • Speaking of ViPR, EMC has apparently decided to release it for free for non-commercial use. See here.
  • Looking for more information on VSAN? Look no further than Cormac Hogan’s extensive VSAN series (up to Part 14 at last check!). The best way to find this stuff is to check articles tagged VSAN on Cormac’s site. The official VMware vSphere blog also has a series of articles running; check out part 1 and part 2.

Virtualization

  • Did you happen to see this news about Microsoft Hyper-V Recovery Manager (HRM)? This is an Azure-hosted service that can be roughly compared to VMware’s Site Recovery Manager (SRM). However, unlike SRM (which is hosted on-premise), HRM is hosted by Microsoft Azure. As the article points out, it’s important to understand that this doesn’t mean your VMs are replicated to Azure—it’s just the orchestration portion of HRM that is running in Azure.
  • Oh, and speaking of Hyper-V…in early January Microsoft released version 3.5 of their Linux Integration Services, which primarily appears to be focused on adding Linux distribution support (CentOS/RHEL 6.5 is now supported).
  • Gregory Gee has a write-up on installing the Cisco CSR 1000V in VirtualBox. (I’m a recent VirtualBox convert myself; I find the vboxmanage command just so very handy.) Note that I haven’t tried this myself, as I don’t have a Cisco login to get the CSR 1000V code. If any readers have tried it, I’d love to hear your feedback. Gregory also has a few other interesting posts I’m planning to review in the next few weeks as well.
  • Sunny Dua, who works with VMware PSO in India, has a series of blog posts on architecting vSphere environments. It’s currently up to five parts; I don’t know how many more (if any) are planned. Here are the links: part 1 (clusters), part 2 (vCenter SSO), part 3 (storage), part 4 (design process), and part 5 (networking).

It’s time to wrap up now before this gets any longer. If you have any thoughts or tidbits you’d like to share, I welcome any and all courteous comments. Join (or start) the conversation!


I’ve discussed using Open vSwitch (OVS) with Linux Containers (LXC) in a couple of previous posts (here and here). In this post, I’m going to show you one way to have your containers automatically connected to OVS on startup without having to use libvirt.

I tested this configuration using Ubuntu 12.04 with the Linux 3.8.0 kernel and an alpha release of LXC 1.0.0 from the precise-backports repository. The version of LXC in the 12.04 repositories (version 0.7.5, if I recall correctly) isn’t new enough to support the specific feature I’m describing here, so plan accordingly.

If you aren’t familiar with LXC, I’d suggest you first read my LXC introductory post. You’ll probably also find the post on using LXC, OVS, and GRE tunnels useful, as some of the information there is applicable here also.

Ready? Let’s get started.

Configuring the Container

You can create your container using the standard LXC tools:

lxc-create -n cn-01 -t ubuntu

As you may already know, this will create a container named “cn-01” based on the Ubuntu template. The configuration for this container will be found, by default, at /var/lib/lxc/cn-01/config. Unless you’ve changed the configuration of your system, this container will be configured to use virtual Ethernet network interfaces and be attached to the default LXC bridge.

The changes required to make the container connect to OVS are, fortunately, quite minimal:

  1. First, remove or comment out the lxc.network.link line. This is the configuration parameter that causes the container to attach to the default LXC bridge (normally called “lxcbr0”).
  2. Add a configuration line to run a script after creating the network interfaces. In my examples here, I’ll assume the script is called “ovsup” and is stored in the /etc/lxc/ directory. The configuration parameter should look something like this:
lxc.network.script.up = /etc/lxc/ovsup

(Note that there is also a corresponding lxc.network.script.down configuration parameter, but I won’t be using it in this example.)
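
For reference, after making these changes the network-related portion of the container’s configuration might look something like this (a sketch; other lines in your configuration, such as the hardware address, stay as they were):

lxc.network.type = veth
# lxc.network.link = lxcbr0
lxc.network.flags = up
lxc.network.script.up = /etc/lxc/ovsup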

Once you’ve made these changes to the container’s configuration, you’re ready to create the actual script.

Creating the Network Attachment Script

Your script—the one referenced by the lxc.network.script.up line in the container’s configuration file—should look something like this (a minimal sketch; the bridge name br-int is an example, so adjust it for your environment):

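#!/bin/bash
# Minimal example script; the bridge name is an assumption.
BRIDGE="br-int"

# Make sure the appropriate OVS bridge exists, creating it if necessary.
ovs-vsctl --may-exist add-br $BRIDGE

# LXC passes the interface name as the fifth parameter ($5).
# Remove the interface from the bridge if it's already there,
# then add it (back) to the bridge.
ovs-vsctl --if-exists del-port $BRIDGE $5
ovs-vsctl --may-exist add-port $BRIDGE $5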

LXC passes five parameters to the script when it is called:

  1. The name of the container
  2. The configuration section of the container’s configuration (“net” in this case)
  3. Either “up” or “down”, depending on which configuration option is calling the script (lxc.network.script.up passes “up”, lxc.network.script.down passes “down”)
  4. The type of networking (“veth” in this case)
  5. The name of the interface (randomly generated unless you have included lxc.network.veth.peer in the container’s configuration)

This simple script doesn’t really need anything other than the interface name, so it only uses parameter 5 (the $5 in the script). The script first ensures that the appropriate OVS bridge exists (creating it if necessary), then deletes the interface from the OVS bridge (if it exists) and adds it back to the OVS bridge.

(Note: If you are using the lxc.network.script.down configuration parameter, you could eliminate the line to delete the port from the OVS bridge and place it in the down script instead. Or, you could write logic into the script to see if “down” is being called and delete the port. There are a variety of ways to approach the situation.)

Using this configuration, when you start the container the host-side virtual Ethernet interface created by LXC will be automatically added to OVS, and your container will have whatever network connectivity is dictated by the OVS configuration. This could include tunneled connectivity (as described here) or bridged connectivity.

If you have any questions, feedback, or corrections, please feel free to speak up in the comments below. I encourage reader interaction!


In this post, I’m going to show you a workaround for running Synergy on OS X Mavericks. If you visit the official Synergy page, you’ll note that the site indicates that full Mavericks support is still pending. However, if you’re willing to “get your hands dirty,” you can run Synergy on OS X Mavericks right now.

If you’re unfamiliar with Synergy, read this write-up (from 2 years ago) on how I use Synergy in my home office setup. The basic gist behind Synergy is that one computer will run the Synergy server; other computers will run the Synergy client and connect to the Synergy server. You’ll be able to use the keyboard and mouse attached to the Synergy server to control the Synergy clients.

Here’s how to get Synergy support running on OS X Mavericks now:

  1. Download the latest 10.8 Synergy build from the website. (I didn’t include a link here because the link changes as the version changes, so the link would become stale rather quickly.) This downloads as a .DMG file to your computer.
  2. Double-click the .DMG to open and mount it on your desktop. Inside the .DMG, you’ll see the Synergy app icon.
  3. Right-click (or Ctrl-click) on the Synergy app and select “Show Package Contents.”
  4. Double-click on Contents, then MacOS.
  5. In the MacOS folder, copy the synergys and synergyc files to a different location. It doesn’t really matter where; just make note of the location.
  6. Close all the windows and eject (unmount) the downloaded .DMG file.

For your Synergy server, you’ll need an appropriate configuration file. You can check my previously-mentioned Synergy post for an example configuration file, or you can peruse the official wiki. Either way, create an appropriate configuration file, and make note of its name and location.
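
If you just need something minimal to start from, a simple two-screen synergy.conf might look something like this (the screen names here are examples, and should match each machine’s hostname):

section: screens
    office-mac:
    laptop:
end

section: links
    office-mac:
        right = laptop
    laptop:
        left = office-mac
end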

When you’re ready, just launch the Synergy server from the OS X Terminal, like this (I’m assuming that synergys and its configuration file—creatively named synergy.conf—are stored in your home directory):

~/synergys -c ~/synergy.conf

Using whatever method you prefer, copy the previously-extracted synergyc file to your Synergy client(s). As before, it doesn’t really matter too much where you put the file, just make a note of the location. Then, using the OS X Terminal, run this (as before, I’m assuming synergyc is in your home directory):

~/synergyc <Name of Synergy server>

That’s it! You should now be able to use the keyboard and mouse on the Synergy server to control the Synergy client. I can verify that current builds of the Synergy client (synergyc) work just fine on OS X Mavericks, and I would imagine that the Synergy server would work fine as well (I just haven’t had time to test it). If anyone has tested it and would like to provide feedback in the comments, I’m sure other readers would appreciate it.

Enjoy! (By the way, if you do find Synergy to be useful, I’d recommend donating to the project.)


For the last couple of years, I’ve been sharing my annual “projects list” and then grading myself on the progress (or lack thereof) on the projects at the end of the year. For example, I shared my 2012 project list in early January 2012, then gave myself grades on my progress in early January 2013.

In this post, I’m going to grade myself on my 2013 project list. Here’s the project list I posted just under a year ago:

  1. Continue to learn German.
  2. Reinforce base Linux knowledge.
  3. Continue using Puppet for automation.
  4. Reinforce data center networking fundamentals.

So, how did I do? Here’s my assessment of my progress:

  1. Continue to learn German: I have made some progress here, though certainly not as much as I wanted. I’ve incorporated the use of Memrise, which has been helpful, but I still haven’t made the progress I’d like. If anyone has suggestions for additional tools, I’m open to your feedback. Grade: D (below average)

  2. Reinforce base Linux knowledge: I’ve been suggesting to VMUG attendees that they needed to learn Linux, as it’s popping up all over the place in all sorts of roles. In my original 2013 project list, I said that I was going to focus on RHEL and RHEL variants, but over the course of the year ended up focusing more on Debian and Ubuntu instead (due to more up-to-date packages and closer alignment with OpenStack). Despite that shift in focus, I think I’ve made decent progress here. There’s always room to grow, of course. Grade: B (above average)

  3. Continue using Puppet for automation: I’ve made reasonable progress here, expanding my use of Puppet to include managing Debian/Ubuntu software repositories (see here and here for examples), managing SSH keys, managing Open vSwitch (OVS) via a third-party module, and—most recently—exploring the use of Puppet with OpenStack (no blog posts—yet). There’s still quite a bit I need to learn (some of my manifests don’t work quite as well as I’d like), but I did make progress here. Grade: C (average)

  4. Reinforce data center networking fundamentals: Naturally, my role at VMware has me spending a great deal of time on how network virtualization affects DC networking, and this translated into some progress on this project. While I gained solid high-level knowledge on a number of DC networking topics, I think I was originally thinking I needed more low-level “in the weeds” knowledge. In that regard, I don’t feel like I did well; on the flip side, though, I’m not sure whether I really needed more low-level “in the weeds” knowledge. This highlights a key struggle for me personally: how to balance the deep, “in the weeds” knowledge with the high-level knowledge. Suggestions on how others have overcome this challenge are welcome. Grade: C (average)

In summary: not bad, but could have been better!

What’s not reflected in this project list is the progress I made with understanding OpenStack, or my deepened level of knowledge of OVS (just browse articles tagged OVS for an idea of what I’ve been doing in that area).

Over the next week or two, I’ll be reflecting on my progress with my 2013 projects and thinking about what projects I should be taking in 2014. In the meantime, I would love to hear any feedback, suggestions, or thoughts on projects I should consider, technologies that should be incorporated, or learning techniques I should leverage. Feel free to speak up in the comments below.


I’m back with another “Reducing the Friction” blog post, this time to talk about training an e-mail spam filter. As you may recall if you read one of the earlier posts (may I suggest this one and this one?), I use the phrase “reducing the friction” to talk about streamlining and simplifying commonly-performed tasks so as to allow you to improve your own personal efficiency and/or adopt more efficient habits or workflows.

I recently moved my e-mail services off Google and onto Fastmail. (I described the reasons why I made this move in this post.) Fastmail has been great—I find their e-mail service to be much faster than what I was seeing with Google. The one drawback, though, has been an increase in spam. Not a huge increase, mind you, but enough to notice. Fastmail recommends that you help train your personal spam filter by moving messages into a folder you designate, and then telling their systems to consider everything in that folder to be spam. While that’s not hard, it’s also not very streamlined, so I took up the task of making it even easier and faster.

(Note that, as a Mac user, most of my tips focus on Mac applications. If you’re a user of another platform, I do apologize—but I can only speak about what I use myself.)

To help make this easier, I came up with a bit of AppleScript; a minimal sketch of it looks like this (the two property values are examples you’d change):

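-- Minimal sketch; the account and mailbox names are examples.
property pAccountName : "Fastmail"
property pSpamMailbox : "Spam Training"

-- Designed to be run from within Apple's Mail app (e.g., via FastScripts):
-- moves the currently selected messages into the spam-training mailbox.
tell application "Mail"
    set theMessages to selection
    repeat with eachMessage in theMessages
        move eachMessage to mailbox pSpamMailbox of account pAccountName
    end repeat
end tell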

To make this work on your system, all you need to do is change the two property declarations at the top, setting them to the correct values for your environment.

As you can tell by the comments in the code, this script was designed to be run from within Apple’s Mail app itself. To make that easy, I use a fantastic tool called FastScripts (highly recommended!). Using FastScripts, I can easily designate an application-specific shortcut key (I use Ctrl-Cmd-S) to invoke the script from within Apple Mail. Boom! Just like that, you now have a super-easy way to both speed up processing your e-mail and help train your personal spam filter. (Note: if you are also a FastMail customer, refer to the FastMail help screens while logged in to get more details on marking a folder for spam learning.)

I hope this helps someone out there!


In this post, I’m going to show you how to manage Open vSwitch (OVS) using the popular open source configuration management tool Puppet. This is not the first time I’ve written about this topic; in the past I showed you how to automate OVS configuration with Puppet via a hack utilizing some RHEL-OVS integrations. This post, however, focuses on the use of an actual Puppet module that will manage the configuration of OVS, a much cleaner solution—in my view, at least—than leveraging the file-based integrations I discussed earlier.

The Puppet module I’ll be using and discussing in this post is the L23Network module (found here on GitHub). This is an extremely flexible and useful module, capable of not only configuring and managing network interfaces but also capable of managing the configuration of OVS. The latter functionality—managing the configuration of OVS—will be the primary focus of this article (with one exception).

The L23Network module is pretty well-documented, so I won’t bother regurgitating the documentation here. Instead, I’ll just try to provide some specific examples, and tie those examples back to some of the various OVS configurations I’ve shown you in earlier posts.

First, let’s get “the one exception” I mentioned earlier out of the way. In OVS environments, you’ll often need to bring up a physical interface without assigning that interface an IP address. For example, consider a physical interface that is providing bridged connectivity to guest domains (VMs) on an OVS bridge. You’ll want the interface to be up, but the interface does not need an IP address. Using the L23Network module, you can accomplish that with this piece of code in your manifest:

l23network::l3::ifconfig {'eth1': ipaddr => 'none'}

Now that eth1 is up, you could create a bridge to which to attach it with this code:

l23network::l2::bridge {'br-ex': }

And then you could actually attach eth1 like this:

l23network::l2::port {'eth1': bridge => 'br-ex'}

You could then provide multi-VLAN bridged connectivity to guest domains via libvirt as I explained in my post on using VLANs with libvirt and OVS. (Or, if you are using LXC with libvirt and OVS, you could provide multi-VLAN bridged connectivity to containers.)

The L23Network module can also work with other types of interfaces, not just physical interfaces. Want to create an internal interface, perhaps to use as a tunnel endpoint for a GRE tunnel as I described here? Use this snippet of Puppet code:

l23network::l2::port {'tep0': bridge => 'br-tun', type => 'internal'}

You could then assign the newly-created tep0 interface an IP address on your transport network like this:

l23network::l3::ifconfig {'tep0': ipaddr => '10.1.1.1/24'}

(In theory, you could also use the L23Network module to create an internal interface so as to run host management through OVS, but then you could run into issues communicating with the Puppet server over the very interfaces that Puppet is configuring.)

I haven’t yet used L23Network to create/manage patch ports or GRE ports, but the documentation indicates the module is capable of doing so. This is an area that I plan to explore in a bit more detail in the near future (in my copious free time).
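
To give you a sense of how the pieces fit together, here are the earlier tunnel-related snippets combined, as they might appear in a single manifest (a hypothetical fragment; the names and address come from the examples above):

l23network::l2::bridge {'br-tun': }
l23network::l2::port {'tep0': bridge => 'br-tun', type => 'internal'}
l23network::l3::ifconfig {'tep0': ipaddr => '10.1.1.1/24'}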

Based on the snippets I’ve given you above, it should be pretty straightforward how to combine these various pieces together to fully configure and manage OVS instances across a large number of systems. However, if you have any questions, feel free to post them in the comments below. I also welcome all other courteous feedback; you are encouraged to start (or join) the conversation.


Some time ago, I showed you how to use Puppet to add Ubuntu Cloud Archive support to your Ubuntu installation. Since that time, OpenStack has had a new release (the Havana release) and the Ubuntu Cloud Archive repository has been updated with new packages to support the Havana release. In this post, I’ll show you an updated snippet of code to take advantage of these newer packages in the Ubuntu Cloud Archive repository.

For reference, here’s the gist of the Puppet code I posted in the first article, shown below as a minimal sketch using the apt::source type from the puppetlabs-apt module (the ubuntu-cloud-keyring package is assumed to supply the repository’s signing key):

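apt::source { 'ubuntu-cloud-archive':
  location          => 'http://ubuntu-cloud.archive.canonical.com/ubuntu',
  release           => 'precise-updates/grizzly',
  repos             => 'main',
  required_packages => 'ubuntu-cloud-keyring',
}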

That points your Ubuntu installation to the Grizzly packages.

Here’s the updated code (the same sketch, with one value changed) that will point your installation to the appropriate packages to support OpenStack’s Havana release:

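apt::source { 'ubuntu-cloud-archive':
  location          => 'http://ubuntu-cloud.archive.canonical.com/ubuntu',
  release           => 'precise-updates/havana',
  repos             => 'main',
  required_packages => 'ubuntu-cloud-keyring',
}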

As you can see, there is only one small change between the two code snippets: changing “precise-updates/grizzly” in the first to “precise-updates/havana” in the second. (Naturally, this assumes you’re using Ubuntu 12.04, the latest LTS release as of this writing.) I know this seems like a pretty simple thing to post, but I wanted to include it here for the sake of completeness and the benefit of future readers.

Feel free to speak up in the comments with any questions, suggestions, or corrections.


I recently had the opportunity to conduct an e-mail interview with Jesse Proudman, founder and CEO of Blue Box. The interview is posted below. While it gets a bit biased toward Blue Box at times (he started the company, after all), there are some interesting points raised.

[Scott Lowe] Tell the readers here a little bit about yourself and Blue Box.

[Jesse Proudman] My name is Jesse Proudman. I love the Internet’s “plumbing”. I started working in the infrastructure space in 1997 to capitalize on my “gear head fascination” with the real-time nature of server infrastructure. In 2003, I founded Blue Box from my college dorm room to be a managed hosting company focused on overcoming the complexities of highly customized open source infrastructure running high traffic web applications. Unlike many hosting and cloud startups that evolved to become focused solely on selling raw infrastructure, Blue Box subscribes to the belief that many businesses demand fully rounded solutions vs. raw infrastructure that they must assemble.

In 2007, Blue Box developed proprietary container-based cloud technology for both our public and private cloud offerings. Blue Box customers combine bare metal infrastructure with on-demand cloud containers for a hybrid deployment coupled with fully managed support, including 24×7 monitoring. In Q3 of 2013, Blue Box launched OpenStack On-Demand, a hosted, single tenant private cloud offering. Capitalizing on our 10 years of infrastructure experience, this single-tenant hosted private cloud delivers on all six tenets today’s IT teams require as they evolve their cloud strategy.

Outside of Blue Box, I have an amazing wife and daughter, and I have a son due in February. I am a fanatical sports car racer and also am actively involved in the Seattle entrepreneurial community, guiding the next generation of young entrepreneurs through the University of Puget Sound Business Leadership, 9Mile Labs and University of Washington’s Entrepreneurial mentorship programs.

[SL] Can you tell me a bit more about why you see the continuing OpenStack API debate to be irrelevant?

[JP] First, I want to be specific that when I say irrelevant, I don’t mean unhealthy. This debate is a healthy one to be having. The sharing of ideas and opinions is the cornerstone of the open source philosophy.

But I believe the debate may be premature.

Imagine a true IaaS stack as a tree. Strong trees must have a strong trunk to support their many branches. For IaaS technology, the trunk is built of essential cloud core services: compute, networking and storage. In OpenStack, these equate to Nova, Neutron, Cinder and Swift (or Ceph). The branches then consist of everything else that evolves the offering and makes it more compelling and easier to use: everything that builds upon the strong foundation of the trunk. In OpenStack, these branches include services like Ceilometer, Heat, Trove and Marconi.

I consider API compatibility a branch.

Without a robust, reliable sturdy trunk, the branches become irrelevant, as there isn’t a strong supporting foundation to hold them up. And if neither the trunk, nor the branches are reliable, then the API to talk to them certainly isn’t relevant.

It is my belief that OpenStack needs to concentrate on strengthening the trunk before putting significant emphasis on the possibilities that exist in the upper reaches of the canopy.

OpenStack’s core is quite close. Grizzly was the first release many would define as “stable,” and Havana is the first release where that stability could convert into operational simplicity. But there is still room for improvement (particularly with projects like Neutron), so my argument is to focus on strengthening the core before exploring new projects.

Once the core is strong then the challenge becomes the development of the service catalogue. Amazon has over one hundred different services that can be integrated together into a powerful ecosystem. OpenStack’s service catalogue is still very young and evolving rapidly. Focus here is required to ensure this evolution is effective.

Long term, I certainly believe API compatibility with AWS (or Azure, or GCE) can bring value to the OpenStack ecosystem. Early cloud adopters who took to AWS before alternatives existed have technology stacks written to interface directly with Amazon’s APIs. Being able to provide compatibility for those prospects means preventing them from having to rewrite large sections of their tooling to work with OpenStack.

API compatibility provided via a higher-level proxy would allow for the breakout of maintenance to a specific group of engineers focused on that requirement (and remove that burden from the individual service teams). It’s important to remember that chasing external APIs will always be a moving target.

In the short run, I believe it wise to rally the community around a common goal: strengthen the trunk and intelligently engineer the branches.

[SL] What are your thoughts on public vs. private OpenStack?

[JP] For many, OpenStack draws much of its appeal from the availability of public, hosted private, and on-premise private implementations. While “cloud bursting” still lives more in the realm of fantasy than reality, the power of a unified API and service stack across multiple consumption models enables incredible possibilities.

Conceptually, public cloud is generally better defined and understood than private cloud. Private cloud is a relatively new phenomenon, and for many has really meant advanced virtualization. While it’s true private clouds have traditionally meant on-premise implementations, hosted private cloud technologies are empowering a new wave of companies who recognize the power of elastic capabilities, and the value that single-tenant implementations can deliver. These organizations are deploying applications into hosted private clouds, seeing the value proposition that can bring.

A single-sourced vendor or technology won’t dominate this world. OpenStack delivers flexibility through its multiple consumption models, and that only benefits the customer. Customers can use that flexibility to deploy workloads to the most appropriate venue, and that only will ensure further levels of adoption.

[SL] There’s quite a bit of discussion that private cloud is strictly a transitional state. Can you share your thoughts on that topic?

[JP] In 2012, we began speaking with IT managers across our customer base, and beyond. Through those interviews, we confirmed what we now call the “six tenets of private cloud.” Our customers and prospects are all evolving their cloud strategies in real time, and are looking for solutions that satisfy these requirements:

  1. Ease of use – new solutions should be intuitively simple. Engineers should be able to use existing tooling, and ops staff shouldn’t have to go learn an entirely new operational environment.

  2. Deliver IaaS and PaaS – IaaS has become a ubiquitous requirement, but we repeatedly heard requests for an environment that would also support PaaS deployments.

  3. Elastic capabilities – the desire for the ability to grow and contract private environments, much as they could in a public cloud.

  4. Integration with existing IT infrastructure – businesses have significant investments in existing data center infrastructure: load balancers, IDS/IPS, SAN, database infrastructure, etc. From our conversations, integration of those devices into a hosted cloud environment brought significant value to their cloud strategy.

  5. Security policy control – greater compliance pressures mean a physical “air gap” around their cloud infrastructure can help ensure compliance and provide peace of mind.

  6. Cost predictability and control – Customers didn’t want to need a PhD to understand how much they’ll owe at the end of the month. Budgets are projected a year in advance, and they needed to know they could project their budgeted dollars into specific capacity.

Public cloud deployments can certainly solve a number of these tenets, but we quickly discovered that no offering on the market today was solving all six in a compelling way.

This isn’t a zero sum game. Private cloud, whether it be on-premise or in a hosted environment, is here to stay. It will be treated as an additional tool in the toolbox. As buyers reallocate the more than $400 billion that’s spent annually on IT deployments, I believe we’ll see a whole new wave of adoption, especially when private cloud offerings address the six tenets of private cloud.

[SL] Thanks for your time, Jesse!

If anyone has any thoughts to share about some of the points raised in the interview, feel free to speak up in the comments. As Jesse points out, debate can be healthy, so I invite you to post your (courteous and professional) thoughts, ideas, or responses below. All feedback is welcome!

