Crossing the Threshold

Last week while attending the CloudStack Collaboration Conference in my home city of Denver, I had a bit of a realization. I wanted to share it here in the hopes that it might serve as an encouragement for others out there.

Long-time readers know that one of my projects over the last couple of years has been to become more fluent in Linux (refer back to my 2012 project list and my 2013 project list). I gave myself a B+ for my efforts last year, feeling that I had made good progress over the course of the year. Even so, I felt like there was still so much that I needed to learn. As so many of us are inclined to do, I was more focused on what I hadn’t yet learned instead of taking stock of what I had learned.

This is where last week comes in. Before the conference started, I participated in a couple of “mini boot camps” focused on CloudStack and related tools/clients/libraries. (You may have seen some of my tweets about tools like cloudmonkey, Apache libcloud, and awscli/ec2stack.) As I worked through the boot camps, I could hear the questions that other attendees were asking and see the tasks with which others were struggling. Folks were wrestling with what I thought were pretty simple tasks; these were not, after all, very complex exercises. Sure, the lab guide wasn’t entirely complete or correct, but you should be able to figure it out, right?

Then it hit me. I’m a Linux guy now.

That’s right—I had crossed the threshold between “working on being a Linux guy” and “being a Linux guy.” It’s not that I know everything there is to know (far from it!), but my base level of knowledge had finally accrued to the point where—upon closer inspection—I realized I was fluent enough to perform most common tasks without a great deal of effort. I knew what to do when something didn’t work or wasn’t configured properly, and I knew the general direction in which to look when trying to determine exactly what was going on.

At this point you might be wondering, “What does that have to do with encouraging me?” That’s a fair question.

As IT professionals—especially those on the individual contributor (IC) track instead of the management track—we are expected to constantly learn new products, new technologies, and new methodologies. Because the learning never stops (and that isn’t a bad thing, in my humble opinion), we tend to focus on what we haven’t yet mastered. We forget to look at what we have learned and at the progress we have made. Maybe, like me, you’re on a journey of learning and education to move from being a specialist in one type of technology to a practitioner of another. If that’s the case, perhaps it’s time you stop saying “I will be a <new technology> person” and start saying “I am a <new technology> person.” Perhaps it’s time for you to cross the threshold.

Welcome to Technology Short Take #40. The content is a bit light this time around; I thought I’d give you, my readers, a little break. Hopefully there’s still some useful and interesting stuff here. Enjoy!

Networking

  • Bob McCouch has a nice write-up on options for VPNs to AWS. If you’re needing to build out such a solution, you might want to read his post for some additional perspectives.
  • Matthew Brender touches on a networking issue present in VMware ESXi with regard to VMkernel multi-homing. This is something others have touched on before (including myself, back in 2008—not 2006 as I tweeted one day), but Matt’s write-up is concise and to the point. You’ll definitely want to keep this consideration in mind for your designs. Another thing to consider: vSphere 5.5 introduces the idea of multiple TCP/IP stacks, each with its own routing table. As the ability to use multiple TCP/IP stacks extends throughout vSphere, it’s entirely possible this limitation will eventually go away.
  • YAOFC (Yet Another OpenFlow Controller), interesting only because it focuses on issues of scale (tens of thousands of switches with hundreds of thousands of endpoints). See here for details.

Servers/Hardware

  • Intel recently announced a refresh of the E5 CPU line; Kevin Houston has more details here.

Security

  • This one slipped past me in the last Technology Short Take, so I wanted to be sure to include it here. Mike Foley—whom I’m sure many of you know—recently published an ESXi security whitepaper. His blog post provides more details, as well as a link to download the whitepaper.
  • The OpenSSL “Heartbleed” vulnerability has captured a great deal of attention (justifiably so). Here’s a quick article on how to assess if your Linux-based server is affected.

Cloud Computing/Cloud Management

  • I recently built a Windows Server 2008 R2 image for use in my OpenStack home lab. This isn’t as straightforward as building a Linux image (no surprises there), but I did find a few good articles that helped along the way. If you find yourself needing to build a Windows image for OpenStack, check out creating a Windows image on OpenStack (via Gridcentric) and building a Windows image for OpenStack (via Brent Salisbury). You might also check out Cloudbase.it, which offers a version of cloud-init for Windows as well as some prebuilt evaluation images. (Note: I was unable to get the prebuilt images to download, but YMMV.)
  • Speaking of building OpenStack images, here’s a “how to” guide on building a Debian 7 cloud image for OpenStack.
  • Sean Roberts recently launched a series of blog posts about various OpenStack projects that he feels are important. The first project he highlights is Congress, a policy management project that has recently gotten a fair bit of attention (see a reference to Congress at the end of this recent article on the mixed messages from Cisco on OpFlex). In my opinion, Congress is a big deal, and I’m really looking forward to seeing how it evolves.
  • I have a related item below under Virtualization, but I wanted to point this out here: work is being done on a VIF driver to connect Docker containers to Open vSwitch (and thus to OpenStack Neutron). Very cool. See here for details.
  • I love that Cody Bunch thinks a lot like I do, as evidenced by this quote from a recent post sharing some links on OpenStack Heat: “That generally means I’ve got way too many browser tabs open at the moment and need to shut some down. Thus, here comes a huge list of OpenStack links and resources.” Classic! Anyway, check out the list of Heat resources; you’re bound to find something useful there.

Operating Systems/Applications

  • A short while back I had a Twitter conversation about spinning up a Minecraft server for my kids in my OpenStack home lab. That led to a few other discussions, one of which was how cool it would be if you could use Heat autoscaling to scale Minecraft. Then someone sent me this.
  • Per the Microsoft Windows Server Team’s blog post, the Windows Server 2012 R2 Update is now generally available (there’s also a corresponding update for Windows 8.1).

Storage

  • Did you see that EMC released a virtual edition of VPLEX? It’s being called the “data plane” for software-defined storage. VPLEX is an interesting product, no doubt, and the introduction of a virtual edition is intriguing (but not entirely unexpected). I did find it unusual that the release of the virtual edition signalled the addition of a new feature called “MetroPoint”, which allows two sites to replicate back to a single site. See Chad Sakac’s blog post for more details.
  • This discussion on MPIO and in-guest iSCSI is a great reminder that designing solutions in a virtualized data center (or, dare I say it—a software-defined data center?) isn’t the same as designing solutions in a non-virtualized environment.

Virtualization

  • Ben Armstrong talks briefly about Hyper-V protected networks, which are a way to protect a VM against a network outage by migrating the VM to a different host if a link failure occurs. This is kind of handy, but it requires Windows Server clustering in order to function (since live migration in Hyper-V requires Windows Server clustering). A question for readers: is Windows Server clustering still much the same as it was in years past? It was a great solution then, but now it seems outdated.
  • At the same time, though, Microsoft is making some useful networking features easily accessible in Hyper-V. Two more of Ben’s articles show off the DHCP Guard and Router Guard features available in Hyper-V on Windows Server 2012.
  • There have been a pretty fair number of posts talking about nested ESXi (ESXi running as a VM on another hypervisor), either on top of ESXi or on top of VMware Fusion/VMware Workstation. What I hadn’t seen—until now—was how to get that working with OpenStack. Here’s how Mathias Ewald made it work.
  • And while we’re talking nested hypervisors, be sure to check out William Lam’s post on running a nested Xen hypervisor with VMware Tools on ESXi.
  • Check out this potential way to connect Docker containers with Open vSwitch (which then in turn opens up all kinds of other possibilities).
  • Jason Boche regales us with a tale of a vCenter 5.5 Update 1 upgrade that results in missing storage providers. Along the way, he also shares some useful information about Profile-Driven Storage in general.
  • Eric Gray shares information on how to prepare an ESXi ISO for PXE booting.
  • PowerCLI 5.5 R2 has some nice new features. Skip over to Alan Renouf’s blog to read up on what is included in this latest release.

I should close things out now, but I do have one final link to share. I really enjoyed Nick Marshall’s recent post about the power of a tweet. In the post, Nick shares how three tweets—one with Duncan Epping, one with Cody Bunch, and one with me—have dramatically altered his life and his career. It’s pretty cool, if you think about it.

Anyway, enough is enough. I hope that you found something useful here. I encourage readers to contribute to the discussion in the comments below. All courteous comments are welcome.

Some time ago, I introduced you to the idea of Linux network namespaces, and provided an overview of some of the commands needed to interact with network namespaces. In this post, I’ll follow up on that post with some additional information on using network namespaces with other types of network interfaces.

In the previous network namespaces article, I mentioned (incorrectly, it turns out) that you had to use virtual Ethernet (veth) interfaces in order to connect a namespace to the physical network:

It turns out you can only assign virtual Ethernet (veth) interfaces to a network namespace.

I say “incorrectly” because you are able to assign more than just virtual Ethernet interfaces to a Linux network namespace. I’m not sure why I arrived at that conclusion, because subsequent testing—using Ubuntu 12.04 LTS, as with the original testing—showed no problem assigning physical interfaces to a particular network namespace. The VMs I used for the subsequent testing are running a different kernel (the 3.11 kernel instead of whatever I used in the previous testing), so it’s entirely possible that’s the difference. If I get the opportunity, I’ll try with earlier kernel builds to see if that makes any difference.

In any case, assigning other types of network interfaces to a network namespace is just like assigning veth interfaces. First, you create the network namespace:

ip netns add <new namespace name>

Then, you’d assign the interface to the namespace:

ip link set <device name> netns <namespace name>

For example, if you wanted to assign eth1 to the “blue” namespace, you’d run this:

ip link set eth1 netns blue

Please note that I haven’t found a way to unassign an interface from a network namespace other than deleting the namespace entirely.

If you want to assign a VLAN interface to a namespace, the process is slightly different. You’ll still create the namespace as you would for physical and veth interfaces, but you’ll have to create the VLAN interface in the “default” namespace first, then move it over to the desired namespace.

For example, first you’d create the namespace “red”:

ip netns add red

Then you’d create the VLAN interface for VLAN 100 on physical interface eth1:

ip link add link eth1 name eth1.100 type vlan id 100

The generic form of this command is this:

ip link add link <physical device> name <VLAN device name> type vlan id <VLAN ID>

Note that you can’t use ip netns exec to run the command to create the VLAN interface in the network namespace directly; it won’t work because the parent interface upon which the VLAN interface is based doesn’t exist in the namespace. So you’ll create the VLAN interface in the default namespace first, then move it over.

Once the VLAN interface is created, then move it to the target namespace:

ip link set eth1.100 netns red

One (sort of) interesting thing I noted in my testing was that link status and IP addresses don’t move between namespaces. Therefore, don’t bother assigning an IP address or setting the link state of an interface before you move it to the final namespace, because you’ll just have to do it again.

To make the interface functional inside the target namespace, you’ll use ip netns exec to run the specific configuration commands against the desired namespace. Continuing the example above, with the VLAN interface eth1.100 now in the namespace “red”, you’d run these commands:

ip netns exec red ip addr add 10.1.1.1/24 dev eth1.100

That adds the IP address to the interface in the namespace; then you’d use this command to set the link status to up:

ip netns exec red ip link set eth1.100 up

Then you can test network connectivity with our good friend ping like this:

ip netns exec red ping -c 4 10.1.1.2

(Obviously, you’d want to substitute an appropriate IP address there for your specific configuration and environment.)
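
To recap the VLAN example from start to finish, here is the complete sequence (using the same illustrative interface, namespace, and addresses as above):

ip netns add red
ip link add link eth1 name eth1.100 type vlan id 100
ip link set eth1.100 netns red
ip netns exec red ip addr add 10.1.1.1/24 dev eth1.100
ip netns exec red ip link set eth1.100 up
ip netns exec red ping -c 4 10.1.1.2

If you want to double-check your work along the way, ip netns list will show the namespaces that exist, and ip netns exec red ip addr list will show the interface configuration from inside the namespace.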

I hope this additional information on working with Linux network namespaces is useful. As always, I invite and encourage any questions, thoughts, or corrections in the comments below. All courteous comments are welcome!

In this post, I’m going to talk about a way to modify QCOW2-based images without having to boot up a full-blown virtual machine based on the image. I found this process while trying to change the behavior of cloud-init on the official Ubuntu cloud images from Canonical.

While I had successfully built my own Ubuntu 12.04 LTS image for my OpenStack-based home lab (and it worked just fine), I was keen to use Canonical’s official cloud images as they were much smaller (around 250MB versus 1.5GB for my home-built image). Given that disk space is at a bit of a premium in my home lab, using the smaller images would be quite beneficial. The problem was that cloud-init wasn’t configured to operate in a way that worked for my home lab environment.

The question, of course, was…how do I do this? I tried a couple different approaches:

  1. I installed VirtualBox on one of my OS X systems, because I knew VirtualBox supported QCOW2 images. Unfortunately, trying to launch a VM using one of the official cloud images as a root disk failed. (I presume this is because the cloud images assume the use of the virtio block driver, which VirtualBox doesn’t support.)

  2. Next, I installed a plain jane Ubuntu VM in VirtualBox and attached the official cloud image QCOW2 file as a second hard disk. That didn’t work either; the Ubuntu VM refused to recognize the filesystem(s) on the QCOW2 image.

I was about at my wits’ end when I found this post. Aha! I’d heard of guestfish before, but I hadn’t thought to try it in this instance. Guestfish is part of the larger libguestfs project.

So I tried the instructions in the post I’d found. Unfortunately, it failed the first time I tried, but I later—thanks to qemu-img check—found that the Ubuntu 12.04 cloud image I’d downloaded was somehow damaged/corrupted. Using a new download of the image was successful.
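
If you want to check an image before spending time troubleshooting it, the check is a one-liner (the filename here is just an example):

qemu-img check precise-server-cloudimg-amd64-disk1.img

A healthy image should report that no errors were found.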

Here are the steps I used to modify the contents of a QCOW2 disk image using guestfish:

  1. Starting from a vanilla Ubuntu 12.04 LTS install, use apt-get update and apt-get install guestfish to install guestfish. (You’ll probably need/want to use sudo with both of these commands.)

  2. During the installation of guestfish, you can choose to create a “virtual appliance” that guestfish uses. If you choose not to do this during installation, you can always do it later with sudo update-guestfs-appliance.

  3. Once guestfish is installed and the virtual appliance is created/updated, launch guestfish itself with sudo guestfish. Note that if you don’t have root privileges or don’t use sudo, guestfish will still launch and act like everything is working—but it will fail. Once guestfish is up and running, you’ll be sitting at a “> <fs>” prompt.

  4. Type add <QCOW2 image filename> (obviously substituting the full path and correct filename for the QCOW2 image you’d like to modify).

  5. Type run and press Enter. You’ll see a progress bar that updates the status of the guestfs virtual appliance. Once it’s complete, you’ll be returned to the “> <fs>” prompt.

  6. Enter list-filesystems. Guestfish should return a list of filesystems on the QCOW2 disk image. In the case of the official Canonical cloud images, only “/dev/vda1” is returned.

  7. To gain access to the filesystem in the QCOW2 image, enter mount <device specification> <guestfish mount point>. For example, when I was modifying the Canonical cloud images, I used mount /dev/vda1 /. Keep in mind you’re not mounting to the system on which guestfish is running; you’re mounting to the guestfish virtual appliance.

  8. Now just edit the files in the QCOW2 image using whatever method you prefer. In my case, I needed to edit /etc/cloud/cloud.cfg, so I just used vi to edit the file and make whatever changes were needed.

  9. When you’re done, type exit to quit guestfish and commit changes to the QCOW2 image.

And that’s it! Using this technique I was able to quickly and easily modify the Ubuntu cloud images so that cloud-init behaved in the way that I wanted.
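
For reference, here’s the whole process condensed into the sequence of guestfish commands (the image filename is just an example; use the path to your own QCOW2 file):

sudo guestfish

add precise-server-cloudimg-amd64-disk1.img
run
list-filesystems
mount /dev/vda1 /
vi /etc/cloud/cloud.cfg
exit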

Hopefully this information helps someone else. If you have any questions or corrections, I invite you to share in the comments below.

Welcome to part 11 of the Learning NSX blog series, in which I provide a high-level overview of the basics on integrating VMware NSX into an OpenStack deployment using OpenStack Neutron. In case you’re just now catching up on this blog series, I encourage you to visit my Learning NVP/NSX page, which has a brief summary of all the posts in the series.

Now that I’ve shown you how to build all the different components of NSX (NSX controllers, NSX Manager, gateway appliances, and service nodes), it’s time to add integration with a cloud management platform. VMware NSX was designed to be integrated into a cloud management platform. Because VMware NSX offers a full-featured RESTful API, you could—in theory—integrate NSX into just about any cloud management platform. However, for the purposes of this series, I’ll limit the discussion to focus on how one would integrate VMware NSX with OpenStack via the OpenStack Neutron (formerly “Quantum”) project, which provides virtual networking functionality for OpenStack-based clouds.

The challenge in discussing OpenStack-NSX integration is that one must first understand the basics of OpenStack Neutron before looking at how to integrate VMware NSX into Neutron. Therefore, the primary goal of this post in the series is to provide an overview of Neutron, and then discuss how NSX integrates into Neutron. The next post in the series will provide more in-depth technical details on exactly how the integration is configured.

Let’s start with a generic overview of OpenStack Neutron and its components.

Examining OpenStack Neutron Components

OpenStack Neutron itself has a number of different components:

  • The Neutron server, which supplies the Neutron API and is typically—but not required to be—deployed co-resident on an OpenStack “controller” node (not to be confused with an NSX controller).
  • The Neutron DHCP agent, which provides DHCP services for the various logical networks created in Neutron (at least, whenever DHCP is enabled for a logical network).
  • The Neutron L3 agent, which provides L3 (routed) connectivity for Neutron logical networks. This includes both L3 between logical networks as well as L3 in and out of logical networks.
  • The Neutron metadata agent, which is responsible for providing connectivity between instances and the Nova metadata service (for customizing instances appropriately).
  • Finally, for many Open vSwitch (OVS)-based installations, there is the Neutron OVS agent, which provides a way to program OVS to do what Neutron needs it to do.

The Neutron DHCP, L3, and metadata agents are typically—but not required to be—installed on a so-called “network” node. This network node thus provides DHCP services to the various logical networks (often using Linux network namespaces, if the Linux distribution supports them), routed connectivity in and out of logical networks (once again with network namespaces, Linux bridges, and iptables rules), and metadata service connectivity.

From a traffic flow perspective, traffic from one subnet in a logical network to another subnet in a logical network must hairpin through the network node (where the L3 agent resides). Traffic headed out of a logical network must flow through the network node, where iptables rules will perform the necessary network address translation (NAT) functions associated with the use of floating IPs in an OpenStack environment. And, as I’ve already mentioned, the network node provides DHCP services to all the logical networks as well.

This is, by necessity, a very high-level overview of OpenStack Neutron and the core components. Let’s now take a look at how these components are affected when you choose to use VMware NSX with OpenStack Neutron.

Reviewing OpenStack Neutron With NSX

When you’re using VMware NSX as the mechanism behind OpenStack Neutron, some of Neutron’s functionality is provided by NSX itself:

  • You no longer need the L3 agent. L3 connectivity is provided by the NSX gateway appliances and logical gateway services (refer to part 6 and part 9 of the series, respectively).
  • You won’t need the OVS agent on the hypervisors, because the NSX controllers are responsible for configuring/programming OVS on transport nodes. More details on this interaction are provided in part 4 of the series. (“Transport nodes” is a generic term referring to nodes that participate in the data plane, such as hypervisors, gateways, and service nodes.)
  • You will still need the Neutron server, the DHCP agent, and the metadata agent.

Therefore, if you choose to deploy the OpenStack Neutron components in a “typical” fashion, you’d have a setup something like this:

  • An OpenStack “controller” node would host the Neutron server, which provides the API with which other parts of OpenStack will interact. This node generally does not have OVS installed, but would have the NSX plugin for Neutron installed. This plugin implements the integration between OpenStack and NSX, and will communicate with NSX via the NSX northbound RESTful API.
  • An OpenStack “network” node would host the Neutron DHCP agent and the Neutron metadata agent. This node would have OVS installed, and would be registered into NSX as a hypervisor (even though it is not a hypervisor and will not host any VMs).

At this point, you should have a pretty good understanding of how, at a high level, NSX integrates with and affects OpenStack Neutron. In the next post in the series, I’ll provide more details on exactly how to configure the integration between VMware NSX and OpenStack Neutron.

If you’ve been following trends in the “cloud” world, you’ve probably heard of Docker. In this post I’m going to provide a quick introduction to Docker, which might be helpful if you’re wondering what Docker is and why it’s garnered so much attention.

The best way to describe Docker is to use the phrase from the Docker web site—Docker is “an open source project to pack, ship and run any application as a lightweight container.” The idea is to provide a comprehensive abstraction layer that allows developers to “containerize” or “package” any application and have it run on any infrastructure. The use of container here refers more to the consistent, standard packaging of applications rather than referring to any underlying technology (a distinction that will be important in a moment). The most common analogy used to help people understand Docker is saying that Docker containers are like shipping containers: they provide a standard, consistent way of shipping just about anything. Docker containers provide a standard, consistent way of packaging just about any application. (At least, that’s the idea.)

So what are the underlying technologies that make up Docker?

  • Docker leverages LXC (Linux Containers), which encompasses Linux features like cgroups and namespaces for strong process isolation and resource control. If you’re not familiar with LXC, I suggest having a look at my introductory post on LXC. Docker’s use of LXC is why I mentioned earlier that Docker’s use of container is more about consistent/standard packaging than any specific technology. While Docker currently leverages LXC, the Docker architecture isn’t necessarily limited to LXC. In theory, Docker could leverage KVM to do the same things it does today with LXC. (Update: While in the process of finishing this post, Docker released version 0.9, which does offer the ability to replace LXC with a different back-end “engine” called libcontainer.)
  • Docker leverages a copy-on-write filesystem (currently AUFS, but other filesystems are being investigated). This allows Docker to instantiate containers (using that term to refer to Docker’s construct, not the LXC construct) very quickly—instead of having to make full copies of whatever files comprise a container, Docker can basically use “pointers” back to existing files. It seems to me this is a big part of what makes Docker so useful.
  • Perhaps due in part to the use of a copy-on-write file system, Docker containers are easily “linked” to other containers. Some people refer to this as stacking containers (continuing the shipping container analogy); to me, it makes more sense to talk about layering one container on top of another. For example, you might create a container that is based on a base Ubuntu image, and then in turn create another container that is based on the first container.
  • Docker uses a “plain text” configuration language (I suppose you could think of it as a domain-specific language, or DSL) to control the configuration of an application container, such as what files should (or should not) be included, what network ports should be open, and what processes/applications should be running inside the container. (I’ll provide an example of a Docker file in just a few moments.)

Now that you have an idea of the basics, let’s take a deeper look. For the purposes of this article, I’m using Ubuntu 12.04 LTS as the Linux platform. Please keep in mind that different distributions may use different commands.

Installing Docker

To install Docker on Ubuntu, it’s a pretty straightforward process.

  1. First, you’ll want to make sure you are running at least the 3.8 kernel. (Newer versions of 12.04 LTS, such as 12.04.4, appear to be installing the 3.11 kernel by default, so you may not have to do this step.) To install the 3.8 kernel (in case you’re running something older), just run these two commands and reboot:
sudo apt-get update
sudo apt-get install linux-image-generic-lts-raring \
linux-headers-generic-lts-raring
  2. Next, add the Docker repository key to your local Apt keychain:
sudo apt-key adv --keyserver keyserver.ubuntu.com \
--recv-keys 36A1D7869245C8950F966E92D8576A8BA88D21E9
  3. Add the Docker repository to your list of repositories:
sudo sh -c "echo deb http://get.docker.io/ubuntu docker main \
> /etc/apt/sources.list.d/docker.list"
  4. Finally, install Docker with an apt-get combo:
sudo apt-get update
sudo apt-get install lxc-docker

Note that this automatically installs LXC, but an “older” version (0.7.5). I haven’t tested to see if Docker will work with the newer alpha release of LXC 1.0.0 that is available in the precise-backports repository (see this post for more information).
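
Although it’s not strictly necessary, it’s worth confirming the installation before moving on; asking Docker for its version information is a quick way to do that:

sudo docker version

If that returns client and server version details without errors, Docker is installed and the daemon is running.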

Launching a Simple Docker Container

Now that Docker is installed, you can launch a quick Docker container like this:

sudo docker run -i -t ubuntu /bin/bash

If everything is working, the first time you run this command it will automatically download an Ubuntu image (as specified by “ubuntu” on that command line) and create a simple Docker container that just runs the bash shell. You’ll get dropped into an odd prompt that will look something like this (the numbers will change from system to system):

root@4a2f737d6e2e:/#

You’re now running in an Ubuntu environment, but it’s a clean environment—there are no daemons running, nothing listening on various network ports, nothing else except the bash shell. If you run ifconfig, you’ll see that eth0 has an IP address, but that’s it. When you type exit, the container will terminate and you’ll be returned to the prompt for your host system.

Now, let’s take a look at what happened as a result of running that command. If you run sudo docker images, you will see a list of available Docker images that were downloaded to your local system. They’ll all be listed as “ubuntu” in the first column, and it’s the second column that will differentiate them—you’ll see listings for 12.04, 12.10, 13.04, 13.10, etc. You could run a specific Ubuntu image with a command like this:

sudo docker run -i -t ubuntu:13.10 /bin/bash

If you run this command, you’ll note that you’re dropped into the new container very quickly. This is one of the strengths of Docker: once an image has been downloaded, launching containers based on that image is very fast.

One other thing you’ll note about Docker is that the containers you start up are ephemeral—meaning that changes you make to the container aren’t persistent. Try this: start up a container (you can use a simple container like the examples I’ve provided already), create a file, and then exit the container. Start the container again, and you’ll see that the file you created previously no longer exists. There are workarounds for this, but for the sake of keeping this post to a manageable size I’ll save discussing those workarounds for another time.
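
Here’s roughly what that experiment looks like (the container IDs in the prompts are made up and will differ on your system):

sudo docker run -i -t ubuntu /bin/bash
root@4a2f737d6e2e:/# touch /tmp/testfile
root@4a2f737d6e2e:/# exit
sudo docker run -i -t ubuntu /bin/bash
root@77b2ed6f3a21:/# ls /tmp/testfile
ls: cannot access /tmp/testfile: No such file or directory

The second container starts from the same clean image, so the file created in the first container is nowhere to be found.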

Launching More Complex Docker Containers

So what if you need a Docker container that does need to have a daemon running and listening on a network port? It’s great that you can run a container for the bash shell, but that’s not particularly useful. What if you wanted to run a MySQL container, an SSH container, or a web server container?

To do something like that, you’ll need to utilize a Dockerfile. A Dockerfile is a plain text file that tells Docker exactly how to construct a Docker image. (I mentioned earlier that Docker had its own DSL; this is what I’m talking about.) Here’s an example Dockerfile:

FROM ubuntu:12.04
MAINTAINER Joe Shmoe "[email protected]"
RUN echo "deb http://archive.ubuntu.com/ubuntu precise main universe" > /etc/apt/sources.list
RUN apt-get update
RUN apt-get install -y nginx
EXPOSE 80
CMD ["nginx"]

The syntax for a Dockerfile is INSTRUCTION arguments, where “INSTRUCTION” refers to commands like FROM, RUN, or CMD. Here are a few quick notes:

  • The FROM command is what enables Docker containers to be based on other Docker containers. In this case, the Docker container is based on the Ubuntu image; specifically, the Ubuntu 12.04 image.
  • The RUN commands execute various commands within the new container. You’d use RUN instructions to install packages, modify files, etc.
  • The EXPOSE command exposes a listening port on a Docker container.
  • The CMD instruction tells what command/daemon to run when the container launches.

I don’t have room here for a full Dockerfile tutorial, but the Docker folks have a great one. Suffice it to say that if you want anything more complex than a simple bash shell, you’re probably going to need to use a Dockerfile.

Once you have a working Dockerfile (just as a heads-up, the above example doesn’t actually work—sorry), you can create a Docker image from that Dockerfile using the docker build command. Once you have a working image, you can then base other Docker containers on that image (using the FROM instruction in a subsequent Dockerfile).
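
In case you’re wondering why that example doesn’t work as written: my best guess (an assumption on my part, not something I’ve verified against this exact file) is that nginx detaches into the background by default, so the container’s main process exits as soon as it starts. A commonly used fix is to force nginx to stay in the foreground:

CMD ["nginx", "-g", "daemon off;"]

With that change (and assuming the packages install cleanly), the container should keep running and serving on the exposed port.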

Exploring Other Docker Commands

There are also some other Docker commands you’ll find useful:

  • The docker images command lists all the Docker images available on the system. This includes images that you’ve downloaded from elsewhere as well as images you’ve created.
  • As I mentioned earlier, docker build allows you to create your own images from a Dockerfile.
  • To see which Docker containers are running, use docker ps.
  • The docker rmi command removes Docker images.
  • Finally, just run docker help if you need to see what other commands are available.
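
To tie a few of those together, a typical working session might look something like this (the image name “myimage” and the tag are just examples):

sudo docker images
sudo docker ps
sudo docker build -t myimage .
sudo docker rmi ubuntu:13.10

The docker build command assumes there is a Dockerfile in the current directory (that is what the trailing dot refers to).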

Trying Out Docker

You don’t need a lot to try out Docker; a simple Ubuntu 12.04 LTS VM running on VirtualBox, VMware Fusion, or VMware Workstation will work fine. Nested virtualization support isn’t required when using LXC, and the Docker containers themselves are pretty lightweight. To try out Docker, you only need to create a VM and follow the instructions provided earlier for installing Docker. Once you’ve got that, you’re ready to explore it for yourself.

For additional information or another perspective on Docker, I recommend taking a look at this presentation on SlideShare by James Turnbull (formerly with Puppet Labs, now with Docker).

As always, all courteous questions, thoughts, suggestions, or corrections are welcome.

Welcome to Technology Short Take #39, in which I share a random assortment of links, articles, and thoughts from around the world of data center-related technologies. I hope you find something useful—or at least something interesting!

Networking

  • Jason Edelman has been talking about the idea of a Common Programmable Abstraction Layer (CPAL). He introduces the idea, then goes on to explore—as he puts it—the power of a CPAL. I can’t help but wonder if this is the right level at which to put the abstraction layer. Is the abstraction layer better served by being integrated into a cloud management platform, like OpenStack? Naturally, the argument then would be, “Not everyone will use a cloud management platform,” which is a valid argument. For those customers who won’t use a cloud management platform, I would then ask: will they benefit from a CPAL? I mean, if they aren’t willing to embrace the abstraction and automation that a cloud management platform brings, will abstraction and automation at the networking layer provide any significant benefit? I’d love to hear others’ thoughts on this.
  • Ethan Banks also muses on the need for abstraction.
  • Craig Matsumoto of SDN Central helps highlight a recent (and fairly significant) development in networking protocols—the submission of the Generic Network Virtualization Encapsulation (Geneve) proposal to the IETF. Jointly authored by VMware, Microsoft, Red Hat, and Intel, this new protocol proposal attempts to bring together the strengths of the various network virtualization encapsulation protocols out there today (VXLAN, STT, NVGRE). This is interesting enough that I might actually write up a separate blog post about it; stay tuned for that.
  • Lee Doyle provides an analysis of the market for network virtualization, which includes some introductory information for those who might be unfamiliar with what network virtualization is. I might contend that Open vSwitch (OVS) alone isn’t an option for network virtualization, but that’s just splitting hairs. Overall, this is a quick but worthy read if you are trying to get started in this space.
  • Don’t think this “software-defined networking” thing is going to take off? Read this, and then let me know what you think.
  • Chris Margret has a nice dissection of how bash completion works, particularly in regards to the Cumulus Networks implementation.

Servers/Hardware

  • Via Kevin Houston, you can get more details on the Intel E7 v2 and new blade servers based on the new CPU. x86 marches on!
  • Another interesting tidbit regarding hardware: it seems as if we are now seeing the emergence of another round of “hardware offloads.” The first round came around 2006, when Intel and AMD first started releasing their hardware assists for virtualization (Intel VT and AMD-V, respectively). That technology was only “so-so” at first (VMware ESX continued to use binary translation [BT] because it was still faster than the hardware offloads), but it quickly matured and is now leveraged by every major hypervisor on the market. This next round of hardware offloads seems targeted at network virtualization and related technologies. Case in point: a relatively small company named Netronome (I’ve spoken about them previously, first back in 2009 and again a year later) recently announced a new set of network interface cards (NICs) expressly designed to provide hardware acceleration for software-defined networking (SDN), network functions virtualization (NFV), and network virtualization solutions. You can get more details from the Netronome press release. This technology is actually quite interesting; I’m currently talking with Netronome about testing it with VMware NSX and will provide more details as that evolves.

Security

  • Ben Rossi tackles the subject of security in a software-defined world, talking about how best to integrate security into SDN-driven architectures and solutions. It’s a high-level article and doesn’t get into a great level of detail, but does point out some of the key things to consider.

Cloud Computing/Cloud Management

  • “Racker” James Denton has some nice articles on OpenStack Neutron that you might find useful. He starts out with discussing the building blocks of Neutron, then goes on to discuss building a simple flat network, using VLAN provider networks, and Neutron routers and the L3 agent. And if you need a breakdown of provider vs. tenant networks in Neutron, this post is also quite handy.
  • Here are a couple (first one, second one) of quick walk-throughs on installing OpenStack. They don’t provide any in-depth explanations of what’s going on, why you’re doing what you’re doing, or how it relates to the rest of the steps, but you might find something useful nevertheless.
  • Thinking of building your own OpenStack cloud in a home lab? Kevin Jackson—who along with Cody Bunch co-authored the OpenStack Cloud Computing Cookbook, 2nd Edition—has three articles up on his home OpenStack setup. (At least, I’ve only found three articles so far.) Part 1 is here, part 2 is here, and part 3 is here. Enjoy!
  • This post attempts to describe some of the core (mostly non-technical) differences between OpenStack and OpenNebula. It is published on the OpenNebula.org site, so keep that in mind as it is (naturally) biased toward OpenNebula. It would be quite interesting to me to see a more technically-focused discussion of the two approaches (and, for that matter, let’s include CloudStack as well). Perhaps this already exists—does anyone know?
  • CloudScaling recently added a Google Compute Engine (GCE) API compatibility module to StackForge, to allow users to leverage the GCE API with OpenStack. See more details here.
  • Want to run Hyper-V in your OpenStack environment? Check this out. Also from the same folks is a version of cloud-init for Windows instances in cloud environments. I’m testing this in my OpenStack home lab now, and hope to have more information soon.

Operating Systems/Applications

Storage

Virtualization

  • Brendan Gregg of Joyent has an interesting write-up comparing virtualization performance between Zones (apparently referring to Solaris Zones, a form of OS virtualization/containerization), Xen, and KVM. I might disagree that KVM is a Type 2 hardware virtualization technology, pointing out that Xen also requires a Linux-based dom0 in order to function. (The distinction between a Type 1 that requires a general purpose OS in a dom0/parent partition and a Type 2 that runs on top of a general purpose OS is becoming increasingly blurred, IMHO.) What I did find interesting was that they (Joyent) run a ported version of KVM inside Zones for additional resource controls and security. Based on the results of his testing—performed using DTrace—it would seem that the “double-hulled virtualization” doesn’t really impact performance.
  • Pete Koehler—via Jason Langer’s blog—has a nice post on converting in-guest iSCSI volumes to native VMDKs. If you’re in a similar situation, check out the post for more details.
  • This is interesting. Useful, I’m not so sure about, but definitely interesting.
  • If you are one of the few people living under a rock who doesn’t know about PowerCLI, Alan Renouf is here to help.

It’s time to wrap up; this post has already run longer than usual. There was just so much information that I wanted to share with you! I’ll be back soon-ish with another post, but until then feel free to join (or start) the conversation by adding your thoughts, ideas, links, or responses in the comments below.

Welcome to part 10 of the Learning NSX blog series, in which I will walk through adding an NSX service node to your NSX configuration.

In the event you’ve joined this series mid-way, here’s what I’ve covered thus far:

In this installment of the series, I’ll walk you through setting up an NSX service node and adding it to the NSX domain. Before I do that, though, it’s probably useful to set some context around the role a service node plays in an NSX environment.

Reviewing Service Nodes in VMware NSX

VMware NSX offers two different ways of handling BUM (Broadcast, Unknown unicast, and Multicast) traffic:

  • NSX can perform source replication, which means that each hypervisor is responsible for replicating BUM packets and transmitting them onto the logical network(s). In small environments, this is probably fine.
  • NSX can also perform service node replication, which—as you probably guessed—uses dedicated service node appliances to offload BUM packet replication and transmission. (Service nodes also play a role in multi-DC deployments with remote gateways, but that’s a topic for a different day.)

My environment is pretty small and limited on resources, so I don’t really need a service node. However, the current implementation of the integration between OpenStack Neutron and NSX assumes the presence of a service node. There is a workaround (I’ll probably blog about that later), but I figured I would just go ahead and add a service node to make things easier.

Building an NSX Service Node

Like the NSX controllers, the NSX gateways, and NSX Manager, the NSX service node software is distributed as an ISO. To install a service node on a physical server, you’d just burn the ISO to an optical disk and boot the server from the optical disk. From the boot menu, select to perform an automated installation, and in a few minutes you’re done.

While it is possible to run a service node as a VM (that’s what I’m doing), be aware this isn’t a supported configuration. In addition, if you think about it, it’s kind of crazy—you’re building a VM that runs on a hypervisor to offload packet replication from the hypervisor. Doesn’t really make sense, does it?

Once the service node has finished installing, you’re ready to configure it and then add it to NSX.

Configuring an NSX Service Node

As with the controllers, the gateways, and NSX Manager, configuring an NSX service node is pretty straightforward:

  1. Set a password for the admin user (optional, but highly recommended).

  2. Set the hostname for the service node (also optional, but recommended as well).

  3. Assign IP addresses to the service node.

  4. Configure DNS and NTP settings.

Let’s take a look at each of these steps.

To set the password for the default admin user, just use this command:

set user admin password

You’ll be prompted to supply the new password, then retype it for confirmation. Easy, right? (And pretty familiar if you’ve used Linux before.)

Setting the hostname for the service node is equally straightforward:

set hostname <hostname>

Now you’re ready to assign IP addresses to the service node. Note that I said “IP addresses” (plural). This is because the service node needs to have connectivity on the management network (so that it can communicate with the NSX controller cluster) as well as the transport network (so that it can set up tunnels with other transport nodes, like hypervisors and gateways). Use this command to see the network interfaces that are present in the service node:

show network interfaces

You’ll note that for each physical interface in the system, the NSX service node installation procedure created a corresponding bridge (this is actually an OVS bridge). So, for a server that has two interfaces (eth0 and eth1), the installation process will automatically create breth0 and breth1. Generally, you’ll want to assign your IP addresses to the bridge interfaces, and not to the physical interfaces.

Let’s say that you wanted to assign an IP address to breth0, which corresponds to the physical eth0 interface. You’d use this command:

set network interface breth0 ip config static 192.168.1.5 255.255.255.0

Naturally, you’d want to substitute the correct IP address and subnet mask in that command. Once the interface is configured, you can use the standard ping command to test connectivity (note, though, that you can’t use any switches to ping, as they aren’t supported by the streamlined NSX appliance CLI). For a service node, you’ll want to assign breth0 an IP address on the management network, and assign breth1 an IP address on your transport network.

Note that you may also need to add a default route using this command:

add network route 0.0.0.0 0.0.0.0 <Default gateway IP address>

Assuming connectivity is good, you’re ready to add DNS and NTP servers to your configuration. Use these commands:

add network dns-server <DNS server IP address>
add network ntp-server <NTP server IP address>

Repeat these commands as needed to add multiple DNS and/or NTP servers. If you accidentally fat finger an IP address, you can remove the incorrect IP address using the remove command, like this:

remove network dns-server <Incorrect DNS IP address>

Substitute ntp-server for dns-server in the above command to remove an incorrect NTP server address.

To add a DNS search domain to the service node, use this command:

add network dns-search-domain <Domain name>

If you are using DHCP and your appliance happened to pick up some settings from the DHCP server, you may need to use the clear network dns-servers and/or clear network routes command before you can add DNS servers or routes to the service node.
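
Pulling all of these commands together, a complete configuration session for a service node might look something like this (the hostname, IP addresses, and search domain are illustrative values; substitute ones appropriate for your environment):

set user admin password
set hostname nsx-svcnode-01
set network interface breth0 ip config static 192.168.1.5 255.255.255.0
set network interface breth1 ip config static 10.10.10.5 255.255.255.0
add network route 0.0.0.0 0.0.0.0 192.168.1.1
add network dns-server 192.168.1.254
add network ntp-server 192.168.1.254
add network dns-search-domain example.com

Here breth0 is assumed to face the management network and breth1 the transport network, matching the guidance above.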

Once you’ve added IP addresses, DNS servers, NTP servers, and successfully tested connectivity over both the management and transport networks, then you’re ready to proceed with adding the service node to NSX.

Adding Service Nodes to NSX

As with adding a gateway appliance in part 6, you’ll use NSX Manager (which you set up in part 3) to add the new service node to NSX. Once you’ve logged into the NSX Manager web UI via your browser, you’re ready to start the process of adding the service node.

  1. From the NSX Manager web UI, click on the Dashboard link across the top. If you’ve just logged into NSX Manager, you’re probably already at the Dashboard and can skip this step.

  2. In the Summary of Transport Components box, click the Add button on the row for Service Nodes. This opens the Create Service Node dialog box.

  3. In step 1 (Type), the Transport Node Type drop-down should already be set to “Service Node.” Click Next.

  4. Set the display name and (optionally) add one or more tags to the service node object. Click Next to proceed.

  5. Make sure that “Admin Status Enabled” is selected, and leave the other options untouched (unless you know you need to change them). Click Next.

  6. On step 4 (Credentials), you’ll need the SSL security certificate from the service node. Since you have established network connectivity to the service node, just SSH into the new service node and issue show switch certificate. Then copy the output and paste it into the Security Certificate box in NSX Manager. Click Next to continue.

  7. The final step in NSX Manager is to add a transport connector. A transport connector tells NSX how transport nodes can communicate over a transport zone (I described transport zones back in part 5). Click Add Connector, then specify the transport type (which tunneling protocol to use), the transport zone, and the IP address. The IP address you specify should match an IP address you assigned to the service node’s interface on the transport network. Click OK, then click Save.

At this point, you’ll see the number of registered service nodes increment to 1 (assuming this is the first) in the Summary of Transport Components box in NSX Manager. Active, however, will remain zero until you perform the final step.

The final step is performed back on the service node itself. If you opened an SSH session earlier to get the switch certificate, you can just re-use that connection. On the service node, set up communications with the NSX controller cluster using the command set switch manager-cluster W.X.Y.Z, where W.X.Y.Z represents the IP address of one of the controllers in your controller cluster. This IP address should be reachable across the interface on the service node assigned to management traffic (which would typically be breth0).

Now go back and refresh the Summary of Transport Components box in NSX Manager, and you should see both Registered and Active Service Nodes set to 1 (again, assuming this is the first).

That’s really all there is to it. Given their role, you likely won’t have a lot of interaction with the service nodes directly. Still, if you want to use NSX with OpenStack Neutron today, you’ll want to have a service node present (and, honestly, if you’re using NSX and OpenStack in any sort of production environment, you’re probably big enough to want a service node anyway).

As always, feel free to post any questions, comments, thoughts, or ideas below. All courteous comments, with vendor disclosures where applicable, are welcome.

Welcome to part 9 in the Learning NVP/NSX blog series, in which I’ll discuss adding a gateway service to a logical network.

If you are just now joining me for this series, let me bring you up to speed real quick:

This installment in the series builds upon all the previous articles; in particular, I assume that you have a logical network created and configured and that you have an NSX gateway appliance up and running and added to the NSX domain. In this post, I’ll show you how to add a logical gateway service to your network so that you can provide routed (layer 3) connectivity into and out of your logical network.

Before I start, I think it’s important to distinguish between a gateway appliance and a gateway service. A gateway appliance is a physical installation (or a VM; it’s supported as a VM in certain configurations) of the NSX gateway software; this is what I showed you how to set up in part 6 of the series. A gateway service, on the other hand, is a logical construct. Typically, you would have multiple gateway appliances. Then you would create a gateway service that would be instantiated across multiple gateway appliances for redundancy and availability. Gateway services can be either layer 2 (providing bridged connectivity between a logical network and VLANs on the physical network) or layer 3 (providing routed—with or without NAT—connectivity between a logical network and the physical network).

It’s also important to understand the distinction between a gateway service and a logical router. A gateway service is an NSX construct; a logical router, on the other hand, is usually a construct of the cloud management platform (like OpenStack). A single gateway service can host many logical routers.

In this series, I’m only going to focus on layer 3 gateway services (primarily because I have limited resources in my environment and can’t run both layer 2 and layer 3 gateway services).

To create a layer 3 gateway service, you’ll follow these steps from within NSX Manager (formerly NVP Manager, which I showed you how to set up in part 3):

  1. From the menu across the top of the NSX Manager page, select Network Components > Services > Gateway Services. This will take you to a page titled “Network Components Query Results,” where NSX Manager has precreated and executed a query for the list of gateway services. Your list will be empty, naturally.

  2. Click the Add button. This will open the Create Gateway Service dialog.

  3. Select “L3 Gateway Service” from the list. Other options in this list include “L2 Gateway Service” (to create a layer 2 gateway service) and “VTEP L2 Gateway Service” (to integrate a third-party top-of-rack [ToR] switch into NSX). Click Next, or click on the “2. Basics” button on the left.

  4. Provide a display name for the new layer 3 gateway service, then click Next (or click on “3. Transport Nodes” on the left). You can optionally add tags here as well, in case you wanted to associate additional metadata with this logical object in NSX.

  5. On the Transport Nodes screen, click Add Gateway to select a gateway appliance (which is classified as a transport node within NSX; hypervisors are also transport nodes) to host this layer 3 gateway service.

  6. From the Edit Gateway dialog box that pops up, you’ll need to select a transport node, a device ID, and a failure zone ID. The first option, the transport node, is pretty straightforward; this is a gateway appliance on which to host this gateway service. The device ID is the bridge (recall that NSX gateway appliances, by default, create OVS bridges to map to their interfaces) connected to the external network. The failure zone ID lets you “group” gateway appliances with regard to their relationship to the gateway service. For example, you could choose to host a gateway service on a gateway from failure zone 1 as well as failure zone 2. (Failure zones are intended to help represent different failure domains within your data center.)

  7. Once you’ve added at least two gateway appliances as transport nodes for your gateway service, click Save to create the gateway service and return to NSX Manager. That’s it—you’re done!

Note that this is more of an implementation task than an operational task. In other words, you’d generally deploy your gateway services when you first set up NSX and your cloud management platform, or when you are adding capacity to your environment. This isn’t something that you have to do when a cloud tenant (customer) needs a logical router; that’s handled automatically through the integration between NSX and the cloud management platform. As I mentioned earlier, a single layer 3 gateway service could host many logical routers.

In the next installment of the series, I’ll walk you through setting up an NSX service node to offload packet replication (for broadcast, unknown unicast, and multicast traffic).

As always, your feedback is welcome and encouraged, so feel free to speak up in the comments below.

It seems as if APIs are popping up everywhere these days. While this isn’t a bad thing, it does mean that IT professionals need to have a better understanding of how to interact with these APIs. In this post, I’m going to discuss how to use the popular command line utility curl to interact with a couple of RESTful APIs—specifically, the OpenStack APIs and the VMware NSX API.

Before I go any further, I want to note that to work with the OpenStack and VMware NSX APIs you’ll be sending and receiving information in JSON (JavaScript Object Notation). If you aren’t familiar with JSON, don’t worry—I have an introductory post on JSON that will help get you up to speed. (Mac users might find this post helpful as well.)

Also, please note that this post is not intended to be a comprehensive reference to the (quite extensive) flexibility of curl. My purpose here is to provide enough of a basic reference to get you started. The rest is up to you!

To make this information easier to consume (I hope), I’ll break it down into a series of examples. Let’s start with passing some JSON data to a REST API to authenticate.

Example 1: Authenticating to OpenStack

Let’s say you’re working with an OpenStack-based cloud, and you need to authenticate to OpenStack using OpenStack Identity (“Keystone”). Keystone uses the idea of tokens, and to obtain a token you have to pass correct credentials. Here’s how you would perform that task using curl.

You’re going to use a couple of different command-line options:

  • The “-d” option allows you to pass data to the remote server (in this example, the remote server running OpenStack Identity). You can either embed the data in the command or pass it using a file; I’ll show you both variations.
  • The “-H” option allows you to add an HTTP header to the request.

If you want to embed the authentication credentials into the command line, then your command would look something like this:

curl -d '{"auth":{"passwordCredentials":{"username": "admin",
"password": "secret"},"tenantName": "customer-A"}}'
-H "Content-Type: application/json" http://192.168.100.100:5000/v2.0/tokens

I’ve wrapped the text above for readability, but on the actual command line it would all run together with no breaks. (So don’t try to copy and paste; it probably won’t work.) You’ll naturally want to substitute the correct values for the username, password, tenant, and OpenStack Identity URL.

As you might have surmised from the “Content-Type: application/json” header added with the “-H” option in that command, the authentication data you’re passing via the “-d” parameter is actually JSON. (Run it through python -m json.tool and see.) Because it’s JSON, you could just as easily put this information into a file and pass it to the server that way. Let’s say you put this information (formatted for easier readability) into a file named credentials.json. Then the command would look something like this (you might need to include the full path to the file):

curl -d @credentials.json -H "Content-Type: application/json" http://192.168.100.100:35357/v2.0/tokens
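For reference, a pretty-printed credentials.json containing the same data as the inline example earlier would look like this (again, substituting your own username, password, and tenant name):

{
    "auth": {
        "passwordCredentials": {
            "username": "admin",
            "password": "secret"
        },
        "tenantName": "customer-A"
    }
}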

What you’ll get back from OpenStack—assuming your command is successful—is a wealth of JSON. I highly recommend piping the output through python -m json.tool as it can be difficult to read otherwise. (Alternately, you could pipe the output into a file.) Of particular usefulness in the returned JSON is a section that gives you a token ID. Using this token ID, you can prove that you’ve authenticated to OpenStack, which allows you to run subsequent commands (like listing tenants, users, etc.).
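For example, assuming the same credentials.json file and endpoint as above, the full command might look something like this:

curl -s -d @credentials.json -H "Content-Type: application/json" http://192.168.100.100:35357/v2.0/tokens | python -m json.tool

(The “-s” option simply suppresses curl’s progress output so it doesn’t clutter the piped JSON.) In the pretty-printed output from the Keystone v2.0 API, the token ID you’ll need for subsequent requests should appear under access > token > id.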

Example 2: Authenticating to VMware NSX

Not all RESTful APIs handle authentication in the same way. In the previous example, I showed you how to pass some credentials in JSON-encoded format to authenticate. However, some systems use other methods for authentication. VMware NSX is one example.

In this example, you’ll need to use a different set of curl command-line options:

  • The “--insecure” option tells curl to ignore HTTPS certificate validation; VMware NSX controllers listen only on HTTPS (not HTTP).
  • The “-c” option stores data received from the server (one of the NSX controllers, in this case) into a cookie file. You’ll then re-use this data in subsequent commands with the “-b” option.
  • The “-X” option allows you to specify the HTTP method, which normally defaults to GET. In this case, you’ll use the POST method along with the “-d” parameter you saw earlier to pass authentication data to the NSX controller.

Putting all this together, the command to authenticate to VMware NSX would look something like this (naturally you’d want to substitute the correct username and password where applicable):

curl --insecure -c cookies.txt -X POST -d 'username=admin&password=admin' https://192.168.100.50/ws.v1/login
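As a quick sanity check (a sketch that assumes a GET against the /ws.v1/lswitch URI, used later in this post, returns a list of logical switches), you can confirm the cookie works by re-using it with the “-b” option:

curl --insecure -b cookies.txt https://192.168.100.50/ws.v1/lswitch | python -m json.tool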

Example 3: Gathering Information from OpenStack

Once you’ve obtained an authentication token from OpenStack, as I showed you in Example 1 above, you can start using API requests to get information from OpenStack.

For example, let’s say you wanted to list the instances for a particular tenant. Once you’ve authenticated, you’d want to get the ID for the tenant in question, so you’d need to ask OpenStack to give you a list of the tenants (you’ll only see the tenants your credentials permit). The command to do that would look something like this:

curl -H "X-Auth-Token: <Token ID>" http://192.168.100.70:5000/v2.0/tenants

The value to be substituted for token ID in the above command is returned by OpenStack when you authenticate (that’s why it’s important to pay attention to the data being returned). In this case, the data returned by the command will be a JSON-encoded list of tenants, tenant IDs, and tenant descriptions. From that data, you can get the ID of the tenant for whom you’d like to list the instances, then use a command like this:

curl -H "X-Auth-Token: <Token ID>" http://192.168.100.70:8774/v2/<Tenant ID>/servers

This will return a stream of JSON-encoded data that includes the list of instances and each instance’s unique ID—which you could then use to get more detailed information about that instance:

curl -H "X-Auth-Token: <Token ID>" http://192.168.100.70:8774/v2/<Tenant ID>/servers/<Server ID>

By and large, the API is reasonably well-documented; you just need to be sure that you are pointing the API call against the right endpoint. For example, authentication has to happen against the server running Keystone, which may or may not be the same server that is running the Nova API services. (In the examples I just provided, Keystone and the Nova API services are running on the same host, which is why the IP address is the same in the command lines.)
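One way to find the right endpoints (assuming the Keystone v2.0 API used in these examples) is to look at the service catalog returned when you authenticate; each entry should include a publicURL for a service such as “compute” (Nova) or “identity” (Keystone). A quick-and-dirty way to pull those URLs out might look something like this:

curl -s -d @credentials.json -H "Content-Type: application/json" http://192.168.100.70:5000/v2.0/tokens | python -m json.tool | grep publicURL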

Example 4: Creating Objects in VMware NSX

Getting information from VMware NSX using the RESTful API is very much like what you’ve already seen in getting information from OpenStack. Of course, the API can also be used to create objects. To create objects—such as logical switches, logical switch ports, or ACLs—you’ll use a combination of curl options:

  • You’ll use the “-b” option to pass cookie data (stored when you authenticated to NSX) back for authentication.
  • The “-X” option allows you to specify the HTTP method (in this case, POST).
  • The “-d” option lets you transfer JSON-encoded data that forms the request for the object you’re going to create. You’ll specify a filename here, preceded by the “@” symbol.
  • The “-H” option adds an appropriate “Content-Type: application/json” header to the request, since you’re passing JSON-encoded data to the NSX controller.

When you put it all together, it looks something like this (substituting appropriate values where applicable):

curl --insecure -b cookies.txt -d @new-switch.json
-H "Content-Type: application/json" -X POST https://192.168.100.50/ws.v1/lswitch

As I mentioned earlier, you’re passing JSON-encoded data to the NSX controller; the new-switch.json file referenced in the above command contains the JSON that describes the logical switch you want to create.

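As a rough sketch only (the exact schema may vary with your NSX version), a minimal new-switch.json might look something like this; the display name is arbitrary, and the transport zone UUID is a placeholder you’d replace with a value from your own environment:

{
    "display_name": "test-switch",
    "transport_zones": [
        {
            "zone_uuid": "<Transport Zone UUID>",
            "transport_type": "stt"
        }
    ]
}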

Once again, I recommend piping the output of this command through python -m json.tool, as what you’ll get back on a successful call is some useful JSON data that includes, among other things, the UUID of the object (logical switch, in this case) that you just created. You can use this UUID in subsequent API calls to list properties, change properties, add logical switch ports, etc.
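For example (a sketch that assumes the common REST pattern of appending the UUID to the resource URI), retrieving the properties of the logical switch you just created might look something like this:

curl --insecure -b cookies.txt https://192.168.100.50/ws.v1/lswitch/<Logical Switch UUID> | python -m json.tool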

Clearly, there is much more that can be done with the OpenStack and VMware NSX APIs, but this at least should give you a starting point from which you can continue to explore in more detail. If anyone has any corrections, clarifications, or questions, please feel free to post them in the comments section below. All courteous comments (with vendor disclosure, where applicable) are welcome!

Tags: , , , ,
