Linux

This category contains posts related to Linux.

This post describes a fix I found for an issue I had when booting KVM guest domains on the Ubuntu/KVM hypervisors in my home lab. I’d been struggling with this issue for quite some time, but only recently found what I believe to be the final fix for the problem.

First, allow me to provide a bit of background. Some time ago—I’d say around August 2012, when I left the vSpecialist team at EMC to join an OpenStack-focused team in another part of EMC—I moved my home lab over completely to Ubuntu 12.04 LTS with the KVM hypervisor. This was an important step in educating myself on Linux, KVM, libvirt, and Open vSwitch (OVS), all of which are critical core components in most installations of OpenStack.

Ever since making that change—particularly after adding some new hardware, a pair of Dell C6100 servers, to my home lab—I would experience intermittent problems booting a KVM guest. The guest would appear to boot properly, but then hang shortly after a message about activating swap space and fsck reporting that the file system was clean. Sometimes rebooting the guest would work; many times it wouldn’t. Re-installing the guest sometimes worked, but sometimes it didn’t. There didn’t appear to be any consistency with regard to the host (the issue occurred on all hosts) or guest configuration. The only consistency appeared to be with Ubuntu, as virtually (no pun intended) all my KVM guests were running Ubuntu.

Needless to say, this was quite frustrating. I tried all the troubleshooting I could imagine—deleting and recreating swap space, manually checking the file system(s), various different installation routines—and nothing seemed to make any difference.

Finally, just in the last few weeks, I stumbled across this page, which indicated that adding “nomodeset” to the grub command line fixed the problem. This was a standard part of my build (it kept the console from getting too large when using VNC to connect to the guest), but it required that I be able to boot the VM successfully first. I’d noted that once I had been able to successfully boot a guest and add “nomodeset” to the grub configuration, I didn’t have any further issues with that particular guest; however, I’d explained that away by assuming the intermittent boot issue was some sort of first-time boot issue.

In any case, that page linked to this ServerFault entry, which also indicated that the use of “nomodeset” helped fix some (seemingly) random boot problems. The symptoms described there—recovery mode worked fine, booting normally after booting into recovery mode resulted in an “initctl: event failed” error—were consistent with what I’d been seeing as well.

So, I took one of the VMs that was experiencing this problem, booted it into recovery mode, edited the /etc/default/grub file to include “nomodeset” on the GRUB_CMDLINE_LINUX_DEFAULT line, and rebooted. The KVM guest booted without any issues. Problem fixed (apparently).
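For reference, the modified line in /etc/default/grub ends up looking something like this (assuming the stock Ubuntu “quiet splash” values were already present):

GRUB_CMDLINE_LINUX_DEFAULT="quiet splash nomodeset"

Keep in mind that changes to /etc/default/grub don’t take effect until you run sudo update-grub to regenerate the actual GRUB configuration, so be sure to do that before rebooting.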

Thus far, this has fixed the intermittent boot issue on every KVM guest I’ve tried, so I’m relatively comfortable recommending it as a potential change you should explore if you experience the same problem/symptoms. I can’t guarantee it will work, but it has worked for me so far.

Good luck!


Welcome to Technology Short Take #42, another installation in my ongoing series of irregularly published collections of news, items, thoughts, rants, raves, and tidbits from around the Internet, with a focus on data center-related technologies. Here’s hoping you find something useful!

Networking

  • Anthony Burke’s series on VMware NSX continues with part 5.
  • Aaron Rosen, a Neutron contributor, recently published a post about a Neutron extension called Allowed-Address-Pairs and how you can use it to create high availability instances using VRRP (via keepalived). Very cool stuff, in my opinion.
  • Bob McCouch has a post over at Network Computing (where I’ve recently started blogging as well—see my first post) discussing his view on how software-defined networking (SDN) will trickle down to small and mid-sized businesses. He makes comparisons among server virtualization, 10 Gigabit Ethernet, and SDN, and feels that in order for SDN to really hit this market it needs to be “not a user-facing feature, but rather a means to an end” (his words). I tend to agree—focusing on SDN is focusing on the mechanism, rather than focusing on the problems the mechanism can address.
  • Want or need to use multiple external networks in your OpenStack deployment? Lars Kellogg-Stedman shows you how in this post on multiple external networks with a single L3 agent.

Servers/Hardware

  • There was some noise this past week about Cisco UCS moving into the top x86 blade server spot for North America in Q1 2014. Kevin Houston takes a moment in this post to explore some ideas about why Cisco has been so successful. I agree that Cisco had some innovative ideas in UCS—integrated management and server profiles come to mind—but my biggest beef with UCS right now is that it is still primarily a north/south (server-to-client) architecture in a world where east/west (server-to-server) traffic is becoming increasingly critical. Can UCS hold on in the face of a fundamental shift like that? I don’t know.

Security

  • Need to scramble some data on a block device? Check out this command. (I love the commandlinefu.com site. It reminds me that I still have so much yet to learn.)

Cloud Computing/Cloud Management

  • Want to play around with OpenDaylight and OpenStack? Brent Salisbury has a write-up on how to run OpenStack Icehouse (deployed via DevStack) together with OpenDaylight.
  • Puppet Labs has released a module that allows users to programmatically (via Puppet) provision and configure Google Compute Engine (GCE) instances. More details are available in the Puppet Labs blog post.
  • I love how developers come up with these themes around certain projects. Case in point: “Heat” is the name of the project for orchestrating resources in OpenStack, HOT is the name for the format of Heat templates, and Flame is the name of a new project to automatically generate Heat templates.

Operating Systems/Applications

  • I can’t imagine that anyone has been immune to the onslaught of information on Docker, but here’s an article that might be helpful if you’re still looking for a quick and practical introduction.
  • Many of you are probably familiar with Razor, the project that former co-workers Nick Weaver and Tom McSweeney created when they were at EMC. Tom has since moved on to CSC (via the vCHS team at VMware) and has launched a “next-generation” version of Razor called Hanlon. Read more about Hanlon and why this is a new/separate project in Tom’s blog post here.
  • Looking for a bit of clarity around CoreOS and Project Atomic? I found this post by Major Hayden to be extremely helpful and informative. Both of these projects are on my radar, though I’ll probably focus on CoreOS first as the (currently) more mature solution.
  • Linux Journal has a nice multi-page write-up on Docker containers that might be useful if you are still looking to understand Docker’s basic building blocks.
  • I really enjoyed Donnie Berkholz’ piece on microservices and the migrating Unix philosophy. It was a great view into how composability can (and does) shift over time. Good stuff, I highly recommend reading it.
  • cURL is an incredibly useful utility, especially in today’s age of HTTP-based REST APIs. Here’s a list of 9 uses for cURL that are worth knowing. This article on testing REST APIs with cURL is handy, too.
  • And for something entirely different…I know that folks love to beat up AppleScript, but it’s cross-application tasks like this that make it useful.

Storage

  • Someone recently brought the open source Open vStorage project to my attention. Open vStorage compares itself to VMware VSAN, but it supports multiple storage backends and multiple hypervisors. Like a lot of other solutions, it’s implemented as a VM that presents NFS back to the hypervisors. If anyone out there has used it, I’d love to hear your feedback.
  • Erik Smith at EMC has published a series of articles on “virtual storage networks.” There’s some interesting content there—I haven’t finished reading all of the posts yet, as I want to be sure to take the time to digest them properly. If you’re interested, I suggest starting out with his introductory post (which, strangely enough, wasn’t the first post in the series), then moving on to part 1, part 2, and part 3.

Virtualization

  • Did you happen to see this write-up on migrating a VMware Fusion VM to VMware’s vCloud Hybrid Service? For now—I believe there are game-changing technologies out there that will alter this landscape—one of the very tangible benefits of vCHS is its strong interoperability with your existing vSphere (and Fusion!) workloads.
  • Need a listing of the IP addresses in use by the VMs on a given Hyper-V host? Ben Armstrong shares a bit of PowerShell code that produces just such a listing. As Ben points out, this can be pretty handy when you’re trying to track down a particular VM.
  • vCenter Log Insight 2.0 was recently announced; Vladan Seget has a decent write-up. I’m thinking of putting this into my home lab soon for gathering event information from VMware NSX, OpenStack, and the underlying hypervisors. I just need more than 24 hours in a day…
  • William Lam has an article on lldpnetmap, a little-known utility for mapping ESXi interfaces to physical switches. As the name implies, this relies on LLDP, so switches that don’t support LLDP or that don’t have LLDP enabled won’t work correctly. Still, a useful utility to have in your toolbox.
  • Technology previews of the next versions of Fusion (Fusion 7) and Workstation (Workstation 11) are available; see Eric Sloof’s articles (here and here for Fusion and Workstation, respectively) for more details.
  • vSphere 4 (and associated pieces) are no longer under general support. Sad face, but time waits for no man (or product).
  • Having some problems with VMware Fusion’s networking? Cody Bunch channels his inner Chuck Norris to kick VMware Fusion networking in the teeth.
  • Want to preview OS X Yosemite? Check out William Lam’s guide to using Fusion or vSphere to preview the new OS X beta release.

I’d better wrap this up now, or it’s going to turn into one of Chad’s posts. (Just kidding, Chad!) Thanks for taking the time to read this far!


Today marked the start of Dockercon, and Docker, Inc., the commercial entity behind the incredibly popular open source Docker project, is taking this opportunity to make its move. Already the focus of quite a bit of attention (some might say a disproportionate amount of attention), Docker is making several announcements today that indicate an intent to broaden its reach:

  • Docker (also referred to as Docker Engine) 1.0 is released today, signaling the project’s readiness to be officially used in production environments (although many firms were/are already using pre-1.0 releases in production)
  • Docker is introducing a new Enterprise Support program to give enterprises the warm cuddlies that using Docker in production environments is safe. The new program aims to provide (in Docker’s words) “training, expertise, and support necessary to stand-up mission-critical workloads built on the Docker platform.”
  • Docker is announcing a set of ten systems integrator partners to help drive commercial adoption.
  • Docker is also unveiling Docker Hub, a cloud-based service (naturally!) for users, content, and workflows. Docker Hub will offer a registry for a comprehensive list of “Dockerized” applications (taking the place of the Docker Index, I assume), along with a management console, user authentication, an automated build service, the Docker Hub API, and collaboration tools to help users manage and share applications.
  • A key part of Docker Hub is Official Repositories; these are “Dockerized” applications that are maintained, supported, and available to all Docker Hub users. This will initially include applications like MongoDB, MySQL, Nginx, Redis, and WordPress, but it is open to any community group or ISV. (Obviously, there are some requirements around committing to maintenance of the application on an ongoing basis.)

I’m certainly a fan of using Docker (or other container technologies) where it makes sense. I fear, though, that the intense focus (some might say hype) around Docker will lead more than a few organizations down a path of trying to make Docker containers their “be all end all” solution. If that trend grows too large, that could be as damaging (if not more damaging) to Docker than being ignored. Time will tell. In the meantime, I’d strongly recommend getting a grasp on the basics of Docker so that you can better understand how it might fit into your overall solution. (My introductory post on Docker, while a bit dated, might prove useful in that regard.)


Some time ago, I introduced you to the idea of Linux network namespaces, and provided an overview of some of the commands needed to interact with network namespaces. In this post, I’ll follow up on that post with some additional information on using network namespaces with other types of network interfaces.

In the previous network namespaces article, I mentioned (incorrectly, it turns out) that you had to use virtual Ethernet (veth) interfaces in order to connect a namespace to the physical network:

It turns out you can only assign virtual Ethernet (veth) interfaces to a network namespace.

I say “incorrectly” because you are able to assign more than just virtual Ethernet interfaces to a Linux network namespace. I’m not sure why I arrived at that conclusion, because subsequent testing—using Ubuntu 12.04 LTS as with the original testing—showed no problem assigning physical interfaces to a particular network namespace. The VMs I used for the subsequent testing are using a different kernel (the 3.11 kernel instead of whatever I used in the previous testing), so it’s entirely possible that’s the difference. If I get the opportunity, I’ll try with earlier kernel builds to see if that makes any difference.

In any case, assigning other types of network interfaces to a network namespace is just like assigning veth interfaces. First, you create the network namespace:

ip netns add <new namespace name>

Then, you’d assign the interface to the namespace:

ip link set <device name> netns <namespace name>

For example, if you wanted to assign eth1 to the “blue” namespace, you’d run this:

ip link set eth1 netns blue
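
You can then verify that the interface has moved by listing the interfaces inside the namespace (this uses ip netns exec, which I’ll discuss in more detail shortly):

ip netns exec blue ip link list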

Please note that I haven’t found a way to unassign an interface from a network namespace other than deleting the namespace entirely.

If you want to assign a VLAN interface to a namespace, the process is slightly different. You’ll have to create the namespace first, as with physical and veth interfaces, but you’ll also have to create the VLAN interface in the “default” namespace first, then move it over to the desired namespace.

For example, first you’d create the namespace “red”:

ip netns add red

Then you’d create the VLAN interface for VLAN 100 on physical interface eth1:

ip link add link eth1 name eth1.100 type vlan id 100

The generic form of this command is this:

ip link add link <physical device> name <VLAN device name> type vlan id <VLAN ID>

Note that you can’t use ip netns exec to run the command to create the VLAN interface in the network namespace directly; it won’t work because the parent interface upon which the VLAN interface is based doesn’t exist in the namespace. So you’ll create the VLAN interface in the default namespace first, then move it over.

Once the VLAN interface is created, then move it to the target namespace:

ip link set eth1.100 netns red

One (sort of) interesting thing I noted in my testing was that link status and IP addresses don’t move between namespaces. Therefore, don’t bother assigning an IP address or setting the link state of an interface before you move it to the final namespace, because you’ll just have to do it again.

To make the interface functional inside the target namespace, you’ll use ip netns exec to target the specific configuration commands against the desired namespace. For example, since the VLAN interface eth1.100 now exists in the namespace “red”, you’d run these commands:

ip netns exec red ip addr add 10.1.1.1/24 dev eth1.100

That adds the IP address to the interface in the namespace; then you’d use this command to set the link status to up:

ip netns exec red ip link set eth1.100 up

Then you can test network connectivity with our good friend ping like this:

ip netns exec red ping -c 4 10.1.1.2

(Obviously, you’d want to substitute an appropriate IP address there for your specific configuration and environment.)

I hope this additional information on working with Linux network namespaces is useful. As always, I invite and encourage any questions, thoughts, or corrections in the comments below. All courteous comments are welcome!


In this post, I’m going to talk about a way to modify QCOW2-based images without having to boot up a full-blown virtual machine based on the image. I found this process while trying to change the behavior of cloud-init on the official Ubuntu cloud images from Canonical.

While I had successfully built my own Ubuntu 12.04 LTS image for my OpenStack-based home lab (and it worked just fine), I was keen to use Canonical’s official cloud images as they were much smaller (around 250MB versus 1.5GB for my home-built image). Given that disk space is at a bit of a premium in my home lab, using the smaller images would be quite beneficial. The problem was that cloud-init wasn’t configured to operate in a way that worked for my home lab environment.

The question, of course, was…how do I do this? I tried a couple different approaches:

  1. I installed VirtualBox on one of my OS X systems, because I knew VirtualBox supported QCOW2 images. Unfortunately, trying to launch a VM using one of the official cloud images as a root disk failed. (I presume this is because the cloud images assume the use of the virtio block driver, which VirtualBox doesn’t support.)

  2. Next, I installed a plain jane Ubuntu VM in VirtualBox and attached the official cloud image QCOW2 file as a second hard disk. That didn’t work either; the Ubuntu VM refused to recognize the filesystem(s) on the QCOW2 image.

I was about at my wits’ end when I found this post. Aha! I’d heard of guestfish before, but I hadn’t thought to try it in this instance. Guestfish is part of the larger libguestfs project.

So I tried the instructions in the post I’d found. Unfortunately, it failed the first time I tried, but I later—thanks to qemu-img check—found that the Ubuntu 12.04 cloud image I’d downloaded was somehow damaged/corrupted. Using a new download of the image was successful.

Here are the steps I used to modify the contents of a QCOW2 disk image using guestfish:

  1. Starting from a vanilla Ubuntu 12.04 LTS install, use apt-get update and apt-get install guestfish to install guestfish. (You’ll probably need/want to use sudo with both of these commands.)

  2. During the installation of guestfish, you can choose to create a “virtual appliance” that guestfish uses. If you choose not to do this during installation, you can always do it later with sudo update-guestfs-appliance.

  3. Once guestfish is installed and the virtual appliance is created/updated, launch guestfish itself with sudo guestfish. Note that if you don’t have root privileges or don’t use sudo, guestfish will still launch and act like everything is working, but operations against the disk image will fail. Once guestfish is up and running, you’ll be sitting at a “> <fs>” prompt.

  4. Type add <QCOW2 image filename> (obviously substituting the full path and correct filename for the QCOW2 image you’d like to modify).

  5. Type run and press Enter. You’ll see a progress bar that updates the status of the guestfs virtual appliance. Once it’s complete, you’ll be returned to the “> <fs>” prompt.

  6. Enter list-filesystems. Guestfish should return a list of filesystems on the QCOW2 disk image. In the case of the official Canonical cloud images, only “/dev/vda1” is returned.

  7. To gain access to the filesystem in the QCOW2 image, enter mount <device specification> <guestfish mount point>. For example, when I was modifying the Canonical cloud images, I used mount /dev/vda1 /. Keep in mind you’re not mounting to the system on which guestfish is running; you’re mounting to the guestfish virtual appliance.

  8. Now just edit the files in the QCOW2 image using whatever method you prefer. In my case, I needed to edit /etc/cloud/cloud.cfg, so I just used vi to edit the file and make whatever changes were needed.

  9. When you’re done, type exit to quit guestfish and commit changes to the QCOW2 image.
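
Putting steps 3 through 9 together, a complete guestfish session might look something like this (the image filename is just an example, and guestfish’s edit command opens the file in your default editor):

sudo guestfish
> <fs> add precise-server-cloudimg-amd64-disk1.img
> <fs> run
> <fs> list-filesystems
/dev/vda1: ext4
> <fs> mount /dev/vda1 /
> <fs> edit /etc/cloud/cloud.cfg
> <fs> exit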

And that’s it! Using this technique I was able to quickly and easily modify the Ubuntu cloud images so that cloud-init behaved in the way that I wanted.

Hopefully this information helps someone else. If you have any questions or corrections, I invite you to share in the comments below.


If you’ve been following trends in the “cloud” world, you’ve probably heard of Docker. In this post I’m going to provide a quick introduction to Docker, which might be helpful if you’re wondering what Docker is and why it’s garnered so much attention.

The best way to describe Docker is to use the phrase from the Docker web site—Docker is “an open source project to pack, ship and run any application as a lightweight container.” The idea is to provide a comprehensive abstraction layer that allows developers to “containerize” or “package” any application and have it run on any infrastructure. The use of container here refers more to the consistent, standard packaging of applications rather than referring to any underlying technology (a distinction that will be important in a moment). The most common analogy used to help people understand Docker is saying that Docker containers are like shipping containers: they provide a standard, consistent way of shipping just about anything. Docker containers provide a standard, consistent way of packaging just about any application. (At least, that’s the idea.)

So what are the underlying technologies that make up Docker?

  • Docker leverages LXC (Linux Containers), which encompasses Linux features like cgroups and namespaces for strong process isolation and resource control. If you’re not familiar with LXC, I suggest having a look at my introductory post on LXC. Docker’s use of LXC is why I mentioned earlier that Docker’s use of container is more about consistent/standard packaging than any specific technology. While Docker currently leverages LXC, the Docker architecture isn’t necessarily limited to LXC. In theory, Docker could leverage KVM to do the same things it does today with LXC. (Update: While in the process of finishing this post, Docker released version 0.9, which does offer the ability to replace LXC with a different back-end “engine” called libcontainer.)
  • Docker leverages a copy-on-write filesystem (currently AUFS, but other filesystems are being investigated). This allows Docker to instantiate containers (using that term to refer to Docker’s construct, not the LXC construct) very quickly—instead of having to make full copies of whatever files comprise a container, Docker can basically use “pointers” back to existing files. It seems to me this is a big part of what makes Docker so useful.
  • Perhaps due in part to the use of a copy-on-write file system, Docker containers are easily “linked” to other containers. Some people refer to this as stacking containers (continuing the shipping container analogy); to me, it makes more sense to talk about layering one container on top of another. For example, you might create a container that is based on a base Ubuntu image, and then in turn create another container that is based on the first container.
  • Docker uses a “plain text” configuration language (I suppose you could think of it as a domain-specific language, or DSL) to control the configuration of an application container, such as what files should (or should not) be included, what network ports should be open, and what processes/applications should be running inside the container. (I’ll provide an example of a Docker file in just a few moments.)

Now that you have an idea of the basics, let’s take a deeper look. For the purposes of this article, I’m using Ubuntu 12.04 LTS as the Linux platform. Please keep in mind that different distributions may use different commands.

Installing Docker

To install Docker on Ubuntu, it’s a pretty straightforward process.

  1. First, you’ll want to make sure you are running at least the 3.8 kernel. (Newer versions of 12.04 LTS, such as 12.04.4, appear to be installing the 3.11 kernel by default, so you may not have to do this step.) To install the 3.8 kernel (in case you’re running something older), just run these two commands and reboot:
sudo apt-get update
sudo apt-get install linux-image-generic-lts-raring \
linux-headers-generic-lts-raring
  2. Next, add the Docker repository key to your local Apt keychain:
sudo apt-key adv --keyserver keyserver.ubuntu.com \
--recv-keys 36A1D7869245C8950F966E92D8576A8BA88D21E9
  3. Add the Docker repository to your list of repositories:
sudo sh -c "echo deb http://get.docker.io/ubuntu docker main \
> /etc/apt/sources.list.d/docker.list"
  4. Finally, install Docker with an apt-get combo:
sudo apt-get update
sudo apt-get install lxc-docker
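
Once the installation finishes, a quick sanity check is to ask the Docker daemon for its version, which should confirm everything is up and running:

sudo docker version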

Note that this automatically installs LXC, but an “older” version (0.7.5). I haven’t tested to see if Docker will work with the newer alpha release of LXC 1.0.0 that is available in the precise-backports repository (see this post for more information).

Launching a Simple Docker Container

Now that Docker is installed, you can launch a quick Docker container like this:

sudo docker run -i -t ubuntu /bin/bash

If everything is working, the first time you run this command it will automatically download an Ubuntu image (as specified by “ubuntu” on that command line) and create a simple Docker container that just runs the bash shell. You’ll get dropped into an odd prompt that will look something like this (the numbers will change from system to system):

root@4a2f737d6e2e:/#

You’re now running in an Ubuntu environment, but it’s a clean environment—there are no daemons running, nothing listening on various network ports, nothing else except the bash shell. If you run ifconfig, you’ll see that eth0 has an IP address, but that’s it. When you type exit, the container will terminate and you’ll be returned to the prompt for your host system.

Now, let’s take a look at what happened as a result of running that command. If you run sudo docker images, you will see a list of available Docker images that were downloaded to your local system. They’ll all be listed as “ubuntu” in the first column, and it’s the second column that will differentiate them—you’ll see listings for 12.04, 12.10, 13.04, 13.10, etc. You could run a specific Ubuntu image with a command like this:

sudo docker run -i -t ubuntu:13.10 /bin/bash

If you run this command, you’ll note that you’re launched into this new container very quickly. This is one of the strengths of Docker: once an image has been downloaded, launching containers based on that image is very fast.

One other thing you’ll note about Docker is that the containers you start up are ephemeral—meaning that changes you make to the container aren’t persistent. Try this: start up a container (you can use a simple container like the examples I’ve provided already), create a file, and then exit the container. Start the container again, and you’ll see that the file you created previously no longer exists. There are workarounds for this, but for the sake of keeping this post to a manageable size I’ll save discussing those workarounds for another time.
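
Spelled out, that experiment looks something like this (the container IDs in the prompts will differ on your system):

sudo docker run -i -t ubuntu /bin/bash
root@4a2f737d6e2e:/# touch /tmp/testfile
root@4a2f737d6e2e:/# exit
sudo docker run -i -t ubuntu /bin/bash
root@77b2ed1d33cf:/# ls /tmp
root@77b2ed1d33cf:/# exit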

Launching More Complex Docker Containers

So what if you need a Docker container that does need to have a daemon running and listening on a network port? It’s great that you can run a container for the bash shell, but that’s not particularly useful. What if you wanted to run a MySQL container, an SSH container, or a web server container?

To do something like that, you’ll need to utilize a Dockerfile. A Dockerfile is a plain text file that tells Docker exactly how to construct a Docker image. (I mentioned earlier that Docker had its own DSL; this is what I’m talking about.) Here’s an example Dockerfile:

FROM ubuntu:12.04
MAINTAINER Joe Shmoe "[email protected]"
RUN echo "deb http://archive.ubuntu.com/ubuntu precise main universe" > /etc/apt/sources.list
RUN apt-get update
RUN apt-get install -y nginx
EXPOSE 80
CMD ["nginx"]

The syntax for a Dockerfile is INSTRUCTION arguments, where “INSTRUCTION” refers to commands like FROM, RUN, or CMD. Here are a few quick notes:

  • The FROM command is what enables Docker containers to be based on other Docker containers. In this case, the Docker container is based on the Ubuntu image; specifically, the Ubuntu 12.04 image.
  • The RUN commands execute various commands within the new container. You’d use RUN instructions to install packages, modify files, etc.
  • The EXPOSE command exposes a listening port on a Docker container.
  • The CMD instruction tells what command/daemon to run when the container launches.

I don’t have room here for a full Dockerfile tutorial, but the Docker folks have a great one. Suffice it to say that if you want anything more complex than a simple bash shell, you’re probably going to need to use a Dockerfile.

Once you have a working Dockerfile (just as a heads-up, the above example doesn’t actually work—sorry), you can create a Docker image from that Dockerfile using the docker build command. Once you have a working image, you can then base other Docker containers on that image (using the FROM instruction in a subsequent Dockerfile).
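
For example, assuming a working Dockerfile sits in the current directory, building and tagging an image might look something like this (the “mystuff/nginx” tag is purely illustrative):

sudo docker build -t mystuff/nginx .

You could then launch containers from that image by referencing the tag in a docker run command, just as you did with the stock Ubuntu images earlier.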

Exploring Other Docker Commands

There are also some other Docker commands you’ll find useful:

  • The docker images command lists all the Docker images available on the system. This includes images that you’ve downloaded from elsewhere as well as images you’ve created.
  • As I mentioned earlier, docker build allows you to create your own images from a Dockerfile.
  • To see which Docker containers are running, use docker ps.
  • The docker rmi command removes Docker images.
  • Finally, just run docker help if you need to see what other commands are available.

Trying Out Docker

You don’t need a lot to try out Docker; a simple Ubuntu 12.04 LTS VM running on VirtualBox, VMware Fusion, or VMware Workstation will work fine. Nested virtualization support isn’t required when using LXC, and the Docker containers themselves are pretty lightweight. To try out Docker, you only need to create a VM and follow the instructions provided earlier for installing Docker. Once you’ve got that, you’re ready to explore it for yourself.

For additional information or another perspective on Docker, I recommend taking a look at this presentation on SlideShare by James Turnbull (formerly with Puppet Labs, now with Docker).

As always, all courteous questions, thoughts, suggestions, or corrections are welcome.


It seems as if APIs are popping up everywhere these days. While this isn’t a bad thing, it does mean that IT professionals need to have a better understanding of how to interact with these APIs. In this post, I’m going to discuss how to use the popular command line utility curl to interact with a couple of RESTful APIs—specifically, the OpenStack APIs and the VMware NSX API.

Before I go any further, I want to note that to work with the OpenStack and VMware NSX APIs you’ll be sending and receiving information in JSON (JavaScript Object Notation). If you aren’t familiar with JSON, don’t worry—I have an introductory post on JSON that will help get you up to speed. (Mac users might find this post helpful as well.)

Also, please note that this post is not intended to be a comprehensive reference to the (quite extensive) flexibility of curl. My purpose here is to provide enough of a basic reference to get you started. The rest is up to you!

To make consuming this information easier (I hope), I’ll break this information down into a series of examples. Let’s start with passing some JSON data to a REST API to authenticate.

Example 1: Authenticating to OpenStack

Let’s say you’re working with an OpenStack-based cloud, and you need to authenticate to OpenStack using OpenStack Identity (“Keystone”). Keystone uses the idea of tokens, and to obtain a token you have to pass correct credentials. Here’s how you would perform that task using curl.

You’re going to use a couple of different command-line options:

  • The “-d” option allows us to pass data to the remote server (in this example, the remote server running OpenStack Identity). We can either embed the data in the command or pass the data using a file; I’ll show you both variations.
  • The “-H” option allows you to add an HTTP header to the request.

If you want to embed the authentication credentials into the command line, then your command would look something like this:

curl -d '{"auth":{"passwordCredentials":{"username": "admin",
"password": "secret"},"tenantName": "customer-A"}}'
-H "Content-Type: application/json" http://192.168.100.100:5000/v2.0/tokens

I’ve wrapped the text above for readability, but on the actual command line it would all run together with no breaks. (So don’t try to copy and paste; it probably won’t work.) You’ll naturally want to substitute the correct values for the username, password, tenant, and OpenStack Identity URL.

As you might have surmised by the use of the “-H” header in that command, the authentication data you’re passing via the “-d” parameter is actually JSON. (Run it through python -m json.tool and see.) Because it’s actually JSON, you could just as easily put this information into a file and pass it to the server that way. Let’s say you put this information into a file named credentials.json; formatted for easier readability, its contents would look like this:
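
{
    "auth": {
        "passwordCredentials": {
            "username": "admin",
            "password": "secret"
        },
        "tenantName": "customer-A"
    }
}

Then the command would look something like this (you might need to include the full path to the file):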

curl -d @credentials.json -H "Content-Type: application/json" http://192.168.100.100:35357/v2.0/tokens

What you’ll get back from OpenStack—assuming your command is successful—is a wealth of JSON. I highly recommend piping the output through python -m json.tool as it can be difficult to read otherwise. (Alternately, you could pipe the output into a file.) Of particular usefulness in the returned JSON is a section that gives you a token ID. Using this token ID, you can prove that you’ve authenticated to OpenStack, which allows you to run subsequent commands (like listing tenants, users, etc.).

Example 2: Authenticating to VMware NSX

Not all RESTful APIs handle authentication in the same way. In the previous example, I showed you how to pass some credentials in JSON-encoded format to authenticate. However, some systems use other methods for authentication. VMware NSX is one example.

In this example, you’ll need to use a different set of curl command-line options:

  • The “--insecure” option tells curl to ignore HTTPS certificate validation. VMware NSX controllers only listen on HTTPS (not HTTP).
  • The “-c” option stores data received by the server (one of the NSX controllers, in this case) into a cookie file. You’ll then re-use this data in subsequent commands with the “-b” option.
  • The “-X” option allows you to specify the HTTP method, which normally defaults to GET. In this case, you’ll use the POST method along with the “-d” parameter you saw earlier to pass authentication data to the NSX controller.

Putting all this together, the command to authenticate to VMware NSX would look something like this (naturally you’d want to substitute the correct username and password where applicable):

curl --insecure -c cookies.txt -X POST -d 'username=admin&password=admin' https://192.168.100.50/ws.v1/login

Example 3: Gathering Information from OpenStack

Once you’ve gotten an authentication token from OpenStack as I showed you in example #1 above, then you can start using API requests to get information from OpenStack.

For example, let’s say you wanted to list the instances for a particular tenant. Once you’ve authenticated, you’d want to get the ID for the tenant in question, so you’d need to ask OpenStack to give you a list of the tenants (you’ll only see the tenants your credentials permit). The command to do that would look something like this:

curl -H "X-Auth-Token: <Token ID>" http://192.168.100.70:5000/v2.0/tenants

The value to be substituted for token ID in the above command is returned by OpenStack when you authenticate (that’s why it’s important to pay attention to the data being returned). In this case, the data returned by the command will be a JSON-encoded list of tenants, tenant IDs, and tenant descriptions. From that data, you can get the ID of the tenant for whom you’d like to list the instances, then use a command like this:

curl -H "X-Auth-Token: <Token ID>" http://192.168.100.70:8774/v2/<Tenant ID>/servers

This will return a stream of JSON-encoded data that includes the list of instances and each instance’s unique ID—which you could then use to get more detailed information about that instance:

curl -H "X-Auth-Token: <Token ID>" http://192.168.100.70:8774/v2/<Tenant ID>/servers/<Server ID>

By and large, the API is reasonably well-documented; you just need to be sure that you are pointing the API call against the right endpoint. For example, authentication has to happen against the server running Keystone, which may or may not be the same server that is running the Nova API services. (In the examples I just provided, Keystone and the Nova API services are running on the same host, which is why the IP address is the same in the command lines.)

Example 4: Creating Objects in VMware NSX

Getting information from VMware NSX using the RESTful API is very much like what you’ve already seen in getting information from OpenStack. Of course, the API can also be used to create objects. To create objects—such as logical switches, logical switch ports, or ACLs—you’ll use a combination of curl options:

  • You’ll use the “-b” option to pass cookie data (stored when you authenticated to NSX) back for authentication.
  • The “-X” option allows you to specify the HTTP method (in this case, POST).
  • The “-d” option lets us transfer JSON-encoded data to form the request for the object we are going to create. We’ll specify a filename here, preceded by the “@” symbol.
  • The “-H” option adds an appropriate “Content-Type: application/json” header to the request, since we are passing JSON-encoded data to the NSX controller.

When you put it all together, it looks something like this (substituting appropriate values where applicable):

curl --insecure -b cookies.txt -d @new-switch.json 
-H "Content-Type: application/json" -X POST https://192.168.100.50/ws.v1/lswitch

As I mentioned earlier, you’re passing JSON-encoded data to the NSX controller; here’s the sort of content that might go into the new-switch.json file referenced in the above command example (a minimal sketch, with a placeholder display name and transport zone UUID):

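{
    "display_name": "test-lswitch",
    "transport_zones": [
        {
            "zone_uuid": "<transport zone UUID>",
            "transport_type": "stt"
        }
    ]
}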

Once again, I recommend piping the output of this command through python -m json.tool, as what you’ll get back on a successful call is some useful JSON data that includes, among other things, the UUID of the object (logical switch, in this case) that you just created. You can use this UUID in subsequent API calls to list properties, change properties, add logical switch ports, etc.
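
For example, reading back the properties of the logical switch you just created should be a simple GET against the switch’s UUID. This is a sketch based on the URL pattern from the creation call above:

curl --insecure -b cookies.txt https://192.168.100.50/ws.v1/lswitch/<lswitch UUID>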

Clearly, there is much more that can be done with the OpenStack and VMware NSX APIs, but this at least should give you a starting point from which you can continue to explore in more detail. If anyone has any corrections, clarifications, or questions, please feel free to post them in the comments section below. All courteous comments (with vendor disclosure, where applicable) are welcome!


In this article, I’ll show you how to install Ubuntu packages from a specific repository. It’s not that this is a terribly difficult process, but it also isn’t necessarily intuitive for those who haven’t had to do it before. I ran into this while trying to install an alpha release of LXC 1.0.0 for my recent post on automatically connecting LXC to Open vSwitch (OVS).

Normally, you’d install a package using apt-get like this:

apt-get install package-name

However, when you want to install a package from a specific repository, the command syntax shifts slightly so that it looks like this instead:

apt-get install package-name/repository

So, in my particular case, I was trying to install LXC. However, I needed the alpha LXC 1.0.0 package from the precise-backports repository. So the command to do that looked like this:

apt-get install lxc/precise-backports
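
A related tidbit: before installing, you can use apt-cache policy to see which versions of a package are available from which repositories (using lxc as the example here):

apt-cache policy lxc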

Are there other useful apt-get tidbits like this that other readers might find particularly useful? Feel free to share in the comments below. Thanks for reading!


I’ve previously discussed using Open vSwitch (OVS) with Linux Containers (LXC) in a couple of previous posts (here and here). In this post, I’m going to show you one way to have your containers automatically connected to OVS on startup without having to use libvirt.

I tested this configuration using Ubuntu 12.04 with the Linux 3.8.0 kernel and an alpha release of LXC 1.0.0 from the precise-backports repository. The version of LXC in the 12.04 repositories (version 0.7.5, if I recall correctly) isn’t new enough to support the specific feature I’m describing here, so plan accordingly.

If you aren’t familiar with LXC, I’d suggest you first read my LXC introductory post. You’ll probably also find the post on using LXC, OVS, and GRE tunnels useful, as some of the information there is applicable here also.

Ready? Let’s get started.

Configuring the Container

You can create your container using the standard LXC tools:

lxc-create -n cn-01 -t ubuntu

As you may already know, this will create a container named “cn-01” based on the Ubuntu template. The configuration for this container will be found, by default, at /var/lib/lxc/cn-01/config. By default, unless you’ve changed the configuration of your system, this container will be configured to use virtual Ethernet network interfaces and be attached to the default LXC bridge.

The changes required to make the container connect to OVS are, fortunately, quite minimal:

  1. First, remove or comment out the lxc.network.link line. This is the configuration parameter that causes the container to attach to the default LXC bridge (normally called “lxcbr0”).
  2. Add a configuration line to run a script after creating the network interfaces. In my examples here, I’ll assume the script is called “ovsup” and is stored in the /etc/lxc/ directory. The configuration parameter should look something like this:
lxc.network.script.up = /etc/lxc/ovsup
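
With both changes in place, the network-related portion of the container’s configuration file would look something like this (a sketch showing only the relevant lines; veth is the default network type mentioned earlier):

lxc.network.type = veth
# lxc.network.link = lxcbr0
lxc.network.script.up = /etc/lxc/ovsup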

(Note that there is also a corresponding lxc.network.script.down configuration parameter, but I won’t be using it in this example.)

Once you’ve made these changes to the container’s configuration, then you’re ready to create the actual script.

Creating the Network Attachment Script

Your script—the one referenced by the lxc.network.script.up parameter in the container’s configuration file—should look something like this (a minimal sketch based on the behavior described below; the bridge name “ovsbr0” is an assumption):

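#!/bin/bash
# Ensure the appropriate OVS bridge exists (create it if necessary)
BRIDGE="ovsbr0"
ovs-vsctl --may-exist add-br $BRIDGE
# Delete the interface from the OVS bridge (if it exists), then add it back;
# $5 is the interface name passed by LXC (see the parameter list below)
ovs-vsctl --if-exists del-port $BRIDGE $5
ovs-vsctl --may-exist add-port $BRIDGE $5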

LXC passes five parameters to the script when it is called:

  1. The name of the container
  2. The configuration section of the container’s configuration (“net” in this case)
  3. Either “up” or “down”, depending on which configuration option is calling the script (lxc.network.script.up passes “up”, lxc.network.script.down passes “down”)
  4. The type of networking (“veth” in this case)
  5. The name of the interface (randomly generated unless you have included lxc.network.veth.peer in the container’s configuration)

This simple script doesn’t really need anything other than the interface name, so it only uses parameter 5 (the $5 in the script). The script first ensures that the appropriate OVS bridge exists (creating it if necessary), then deletes the interface from the OVS bridge (if it exists) and adds it back to the OVS bridge.

(Note: If you are using the lxc.network.script.down configuration parameter, you could eliminate the line to delete the port from the OVS bridge and place it in the down script instead. Or, you could write logic into the script to see if “down” is being called and delete the port. There are a variety of ways to approach the situation.)

Using this configuration, when you start the container the host-side virtual Ethernet interface created by LXC will be automatically added to OVS, and your container will have whatever network connectivity is dictated by the OVS configuration. This could include tunneled connectivity (as described here) or bridged connectivity.

If you have any questions, feedback, or corrections, please feel free to speak up in the comments below. I encourage reader interaction!


For the last couple of years, I’ve been sharing my annual “projects list” and then grading myself on the progress (or lack thereof) on the projects at the end of the year. For example, I shared my 2012 project list in early January 2012, then gave myself grades on my progress in early January 2013.

In this post, I’m going to grade myself on my 2013 project list. Here’s the project list I posted just under a year ago:

  1. Continue to learn German.
  2. Reinforce base Linux knowledge.
  3. Continue using Puppet for automation.
  4. Reinforce data center networking fundamentals.

So, how did I do? Here’s my assessment of my progress:

  1. Continue to learn German: I have made some progress here, though certainly not the progress that I wanted to make. I’ve incorporated the use of Memrise, which has been helpful, but I still haven’t progressed as far as I’d like. If anyone has any suggestions for additional tools, I’m open to your feedback. Grade: D (below average)

  2. Reinforce base Linux knowledge: I’ve been suggesting to VMUG attendees that they needed to learn Linux, as it’s popping up all over the place in all sorts of roles. In my original 2013 project list, I said that I was going to focus on RHEL and RHEL variants, but over the course of the year ended up focusing more on Debian and Ubuntu instead (due to more up-to-date packages and closer alignment with OpenStack). Despite that shift in focus, I think I’ve made decent progress here. There’s always room to grow, of course. Grade: B (above average)

  3. Continue using Puppet for automation: I’ve made reasonable progress here, expanding my use of Puppet to include managing Debian/Ubuntu software repositories (see here and here for examples), managing SSH keys, managing Open vSwitch (OVS) via a third-party module, and—most recently—exploring the use of Puppet with OpenStack (no blog posts—yet). There’s still quite a bit I need to learn (some of my manifests don’t work quite as well as I’d like), but I did make progress here. Grade: C (average)

  4. Reinforce data center networking fundamentals: Naturally, my role at VMware has me spending a great deal of time on how network virtualization affects DC networking, and this translated into some progress on this project. While I gained solid high-level knowledge on a number of DC networking topics, I think I was originally thinking I needed more low-level “in the weeds” knowledge. In that regard, I don’t feel like I did well; on the flip side, though, I’m not sure whether I really needed more low-level “in the weeds” knowledge. This highlights a key struggle for me personally: how to balance the deep, “in the weeds” knowledge with the high-level knowledge. Suggestions on how others have overcome this challenge are welcome. Grade: C (average)

In summary: not bad, but could have been better!

What’s not reflected in this project list is the progress I made with understanding OpenStack, or my deepened level of knowledge of OVS (just browse articles tagged OVS for an idea of what I’ve been doing in that area).

Over the next week or two, I’ll be reflecting on my progress with my 2013 projects and thinking about what projects I should be taking in 2014. In the meantime, I would love to hear any feedback, suggestions, or thoughts on projects I should consider, technologies that should be incorporated, or learning techniques I should leverage. Feel free to speak up in the comments below.

