In this post, I’m going to provide a brief introduction to working with Linux containers via LXC. Linux containers are getting a fair amount of attention these days (perhaps due to Docker, which leverages LXC on the back-end) as a lightweight alternative to full machine virtualization such as that provided by “traditional” hypervisors like KVM, Xen, or ESXi.

Both full machine virtualization and containers have their advantages and disadvantages. Full machine virtualization offers greater isolation at the cost of greater overhead, as each virtual machine runs its own full kernel and operating system instance. Containers, on the other hand, generally offer less isolation but lower overhead through sharing certain portions of the host kernel and operating system instance. In my opinion full machine virtualization and containers are complementary; each offers certain advantages that might be useful in specific situations.

Now that you have a rough idea of what containers are, let’s take a closer look at using containers with LXC. I’m using Ubuntu 12.04.3 LTS for my testing; if you’re using something different, keep in mind that certain commands may differ from what I show you here.

Installing LXC is pretty straightforward, at least on Ubuntu. To install LXC, simply use apt-get:

apt-get install lxc

Once you have LXC installed, your next step is creating a container. To create a container, you’ll use the lxc-create command and supply the name of the container template as well as the name you want to assign to the new container:

lxc-create -t <template> -n <container name>

You’ll need Internet access to run this command, as it will download (via your configured repositories) the necessary files to build a container according to the template you specified on the command line. For example, to use the “ubuntu” template and create a new container called “cn-01”, the command would look like this:

lxc-create -t ubuntu -n cn-01

Note that the “ubuntu” template specified in this command supports some additional options; for example, you can opt to create a container with a different release of Ubuntu (it defaults to the latest LTS) or a different architecture (it defaults to the host’s architecture).
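
For example, template-specific options are passed after a double dash. Here’s a hedged sketch (the exact option names are defined by the template script itself, so treat this as illustrative):

# illustrative only; option names come from the ubuntu template
lxc-create -t ubuntu -n cn-02 -- --release quantal --arch i386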

Once you have at least one container created, you can list the containers that exist on your host system:

lxc-list

This will show you all the containers that have been created, grouped according to whether the container is stopped, frozen (paused), or running.

To start a container, use the lxc-start command:

lxc-start -n <container name>

Using the lxc-start command as shown above is fine for initial testing of your container, to ensure that it boots up as you expect. However, you won’t want to run your containers long-term like this, as the container “takes over” your console with this command. Instead, you want the container to run in the background, detached from the console. To do that, you’ll add the “-d” parameter to the command:

lxc-start -d -n <container name>

This launches your container in the background. To attach to the console of the container, you can use the lxc-console command:

lxc-console -n <container name>

To escape out of the container’s console back to the host’s console, use the “Ctrl-a q” key sequence (press Ctrl-a, release, then press q).

You can freeze (pause) a container using the lxc-freeze command:

lxc-freeze -n <container name>

Once frozen, you can unfreeze (resume) a container just as easily with the lxc-unfreeze command:

lxc-unfreeze -n <container name>

You can also make a clone (a copy) of a container:

lxc-clone -o <existing container> -n <new container name>

On Ubuntu, LXC stores its containers in /var/lib/lxc by default. Each container has its own directory there, and the container’s configuration is stored in a file named config inside that directory. I’m not going to provide a comprehensive breakdown of the settings available in the container’s configuration (this is a brief introduction), but I will call out a few that are worth noting in my opinion (there’s a sample excerpt after the list):

  • The lxc.network.type option controls what kind of networking the container will use. The default is “veth”; this uses virtual Ethernet pairs. (If you aren’t familiar with veth pairs, see my post on Linux network namespaces.)
  • The lxc.network.veth.pair configuration option controls the name of the veth interface created in the host. By default, a container sees one side of the veth pair as eth0 (as would be expected), and the host sees the other side as either a random name (default) or whatever you specify here. Personally, I find it useful to rename the host interface so that it’s easier to tell which veth interface goes to which container, but YMMV.
  • lxc.network.link specifies a bridge to which the host side of the veth pair should be attached. If you leave this blank, the host veth interface is unattached.
  • The configuration option lxc.rootfs specifies where the container’s root file system is stored. By default it is /var/lib/lxc/<container name>/rootfs.
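
To make that concrete, here’s a hedged sketch of what those settings might look like in a container’s config file. The values are illustrative; lxcbr0 is the bridge the Ubuntu LXC packages typically create by default, and you’d adjust the names and paths for your own container:

# illustrative example; adjust names and paths for your container
lxc.network.type = veth
lxc.network.link = lxcbr0
lxc.network.veth.pair = veth-cn-01
lxc.rootfs = /var/lib/lxc/cn-01/rootfs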

There are a great many other configuration options, naturally; check out man 5 lxc.conf for more information. You may also find this Ubuntu page on LXC to be helpful; I certainly did.

I’ll have more posts on Linux containers in the future, but this should suffice to at least help you get started. If you have any questions, any suggestions for additional resources other readers should consider, or any feedback on the post, please add your comment below. I’d love to hear from you (courteous comments are always welcome).


In this post, I’m going to provide an update on using GRE tunnels with Open vSwitch (OVS) to include more than two hosts. I previously showed you how to use GRE tunnels with OVS to connect VMs on different hypervisor hosts, but in my testing I didn’t use this technique with more than two hypervisors. A few readers posted comments to that article asking how to extend the solution to more than two hypervisors, but I hadn’t had the time to test anything further.

Now, as a result of some related work I’ve been doing, I have an update on using this technique for more than two hosts. If you didn’t read the post on using GRE tunnels with OVS, go back and read that now. Also, be sure to read my post on examining OVS traffic patterns, as this is also useful information. Finally, note that this information applies to any use of GRE tunnels with OVS, not just GRE tunnels with OVS on hypervisors.

Let’s say you have three hosts:

  1. HostA, with an IP address of 10.1.1.1
  2. HostB, with an IP address of 10.1.1.2
  3. HostC, with an IP address of 10.1.1.3

To connect entities (VMs, containers, etc.) on these hosts using GRE tunnels, you’d need to manually configure OVS on each of the hosts:

  • On HostA, you’d need a GRE tunnel to HostB (10.1.1.2) and a GRE tunnel to HostC (10.1.1.3)
  • On HostB, you’d need a GRE tunnel to HostA (10.1.1.1) and one to HostC (10.1.1.3)
  • On HostC, you’d need two GRE tunnels, one to HostA (10.1.1.1) and one to HostB (10.1.1.2)

I won’t repeat the specific commands to create those tunnels here, as it is well explained in my earlier article. What this creates is a virtual topology like this:

GRE tunnel full mesh
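
As a quick refresher, though, the tunnel configuration on HostA would look roughly like this, assuming an OVS bridge named br0 (see the earlier article for the full walkthrough and adjust the bridge and port names to match your environment):

# assumes an OVS bridge named br0 on HostA
ovs-vsctl add-port br0 gre0 -- set interface gre0 type=gre options:remote_ip=10.1.1.2
ovs-vsctl add-port br0 gre1 -- set interface gre1 type=gre options:remote_ip=10.1.1.3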

What you’ll find when you try this yourself is that everything works fine when there are just two hosts; this is what I also found when I first wrote the article. When you add the third host, though, you’ll find—assuming you created a full mesh of GRE tunnels—that everything stops working.

Here’s how to fix that. Run this command on each of the hosts running OVS:

ovs-vsctl set bridge <bridge name> stp_enable=true

Yes, that’s right: looping is the culprit here. Look back at the topology figure above. In the physical world, a topology like that using switches (without STP) would take down your network because of a bridging loop. The same applies here as well. In both cases (physical or virtual) you have two choices: you can either avoid creating a full mesh topology (you could use a star topology, for example) or you can run STP. It’s up to you.
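
If you want to double-check that STP actually got enabled on a bridge, something like this should return true (again assuming a bridge named br0):

ovs-vsctl get bridge br0 stp_enable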

Assuming you turn on STP, then you’ll find after a few minutes that you’ll be able to happily ping between VMs on these hypervisors.

I do want to share one final note before I wrap up. STP is needed in this instance because we are relying on OVS in MAC learning mode (just like a physical switch). If we were to add an OpenFlow controller to this mix and push flow rules down to OVS, OVS would stop using MAC learning, and we would no longer need STP in order to build full-mesh topologies of tunnels.

Feel free to post any questions or comments below. All courteous comments are welcome!


In this post I’m going to show you how to make JSON (JavaScript Object Notation) output more readable using a BBEdit Text Filter. This post comes out of some recent work I’ve done in learning how to interact with various REST APIs. My initial REST API explorations have focused on the NVP/NSX API, but I plan to soon expand my explorations to include other APIs, like OpenStack.

<aside>You might be wondering why I’m exploring REST APIs and stuff like JSON. I believe that having a better understanding of the APIs these products use will help drive a deeper and more complete understanding of the underlying products. I could be wrong…time will tell.</aside>

BBEdit Text Filters, as you may already know, simply take the current text (or selected text) in BBEdit, do something to it, and then output the result. The “do something to it” is, of course, the magic. You can, for example—and this is something that I do—use the MultiMarkdown command-line executable to transform a (Multi)Markdown document in BBEdit to HTML. All that is required is to place the script (or a link to the script) in the ~/Library/Application Support/BBEdit/Text Filters directory. The script just needs to accept input on STDIN, transform it in whatever way you want, and spit out the results on STDOUT. BBEdit does the rest.

In this case, you’re going to use an extremely simple Bash shell script containing a single Python command to transform JSON-serialized output into a more human-readable format.

First, let’s take a look at some JSON-serialized output. Here’s the output from an API call to NVP/NSX to list the logical switches:

(To view the information if the code block isn’t available, click here.)

It is human-readable, but just barely. How can we make this a bit easier for humans to read and parse? Well, it turns out that OS X (and probably most recent flavors of Linux) comes with a version of Python pre-installed, and the pre-installed version of Python comes with the ability to “prettify” (make more human-readable) JSON text. (In the case of OS X 10.8 “Mountain Lion”, the pre-installed version of Python is version 2.7.2.) With grateful thanks to the folks on Twitter who introduced me to this trick, the command you would use in this instance is as follows:

python -m json.tool
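
Here’s a quick illustration using a trivial bit of JSON (not the NVP output above, just sample data) to show what the command does:

echo '{"results": [{"display_name": "test"}], "result_count": 1}' | python -m json.tool

The output comes back indented and with the keys sorted:

{
    "result_count": 1,
    "results": [
        {
            "display_name": "test"
        }
    ]
}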

Very simple, right? To turn this into a BBEdit Text Filter, we need only wrap this into a very simple shell script, such as this:

(If you aren’t able to see the code block above, please click here.)
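
In case that code block doesn’t render for you either, the script is essentially nothing more than this (a minimal sketch):

#!/usr/bin/env bash
# prettify JSON read from STDIN and write it to STDOUT
python -m json.tool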

Place this script (or a link to this script) in the ~/Library/Application Support/BBEdit/Text Filters directory, restart BBEdit, and you should be good to go. Now you can copy and paste the output from an API call like the output above, run it through this text filter, and get output that looks like this:

(Click here if the code block above isn’t visible.)

Given that I’m new to a lot of this stuff, I’m sure that I have probably overlooked something along the way. There might be better and/or more efficient ways of handling this, or better tools to use. If you have any suggestions on how to improve any of this—or just suggestions on how I might do better in my API explorations—feel free to speak out in the comments below.


As part of some work I’ve been doing to stretch myself and my boundaries, I’ve recently started diving a bit deeper into working with REST APIs. As I started this exploration, one thing that kept coming up again and again was JSON. In this post, I’m going to try to provide an introduction to JSON for non-programmers (like me).

Let’s start with the acronym: “JSON” stands for “JavaScript Object Notation”. It’s a lightweight, text-based format, and is frequently used in conjunction with REST APIs and web-based services. You can find more details on the specifics of the JSON format at the JSON web site.

The basic structures of JSON are:

  • A set of name/value pairs
  • An ordered list of values

Now, that sounds simple enough, but let’s look at some examples to really bring this home. The examples that I’ll use are taken from API responses in my virtualized NVP/NSX lab using the NVP/NSX API.

First, here’s an example of a set of name/value pairs (I’ve taken the liberty of making the raw output from the API calls more readable for clarity’s sake; raw JSON data typically wouldn’t have line returns or whitespace):

(Click here if you don’t see a code block above.)
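
If the code block doesn’t render, here’s an illustrative stand-in with the same overall shape (the field names and values inside each result are hypothetical, not the actual NVP/NSX fields):

{
    "result_count": 3,
    "results": [
        {
            "uuid": "aaaa-1111",
            "display_name": "logical-switch-1",
            "type": "LogicalSwitchConfig"
        },
        {
            "uuid": "bbbb-2222",
            "display_name": "logical-switch-2",
            "type": "LogicalSwitchConfig"
        },
        {
            "uuid": "cccc-3333",
            "display_name": "logical-switch-3",
            "type": "LogicalSwitchConfig"
        }
    ]
}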

Let’s break that down a bit:

  • Each object is surrounded by curly braces (referred to just as braces by some). The entire JSON response is itself an object—at least this is how I view it—so it is surrounded by braces. It contains three objects, which are part of the “results” array (more on that in just a second).
  • Each object may have multiple name/value pairs separated by a comma. Name/value pairs may represent a single value (as with “result_count”) or multiple values in an array (as with “results”). So, in this example, there are two name/value pairs: one named “result_count” and one named “results”. Note the use of the colon separating the name from the associated value(s).
  • The second item (object, if you will) in the API response is named “results”, but note that its value isn’t a single value; rather, it’s an array of values. Arrays are surrounded by brackets, and each element/item in the array is separated by a comma. In this particular case—and this will vary from API to API, as far as I know—note that the “result_count” value tells you exactly how many items are in the “results” array, making it incredibly easy to iterate through the items in the array.
  • In the “results” array, there are three items (or objects). Each of these items—each surrounded by braces—has three name/value pairs, separated by commas, with a colon separating the name from the value.

As you can see, JSON has a very simple structure and format, once you’re able to break it down.

There are numerous other examples and breakdowns of JSON around the web; here are a few that I found helpful in my education (which is still ongoing):

JSON Basics: What You Need to Know
JSON: What It Is, How It Works, & How to Use It (This one gets a bit deep for non-programmers, but you might find it helpful nevertheless.)
JSON Tutorial

You may also see the term “JSON-serialized”; this generally refers to data that has been formatted as JSON. To JSON-serialize data means to put it into JSON format; to deserialize JSON data means to parse (or deconstruct) the JSON output into some other format.

I’m sure there’s a great deal more that could (and perhaps should) be said about JSON, but I did say this would be a non-programmer’s introduction to JSON. If you have any questions, thoughts, suggestions, or clarifications, please feel free to speak up in the comments below.

UPDATE: I’ve edited the text above based on some feedback in the comments. Thanks for your feedback; the post is better for it!


Welcome to part 7 of the Learning NVP blog series, in which I will discuss transitioning from a focus on NVP to looking at NSX.

If you’re just now joining me for this series, here’s what’s transpired thus far:

When I first started this series back in May of this year, I said this:

Before continuing, it might be useful to set some context around NVP and NSX… The architecture I’m describing here will also be applicable to NSX, which VMware announced in early March. Because NSX will leverage NVP’s architecture, spending some time with NVP now will pay off with NSX later.

Well, the “later” that I referenced is now upon us. I had hoped to be much farther along with this blog series by now, but it has proven more difficult than I had anticipated to get this content written and published. Given that NSX officially GA’d last week at VMworld EMEA in Barcelona, I figured it was time to make the transition from NVP to NSX.

The way I’ll handle the transition from talking NVP to discussing VMware NSX is through an upgrade. I have a completely virtualized environment that is currently running all the NVP components: three controllers, NVP Manager, three nested hypervisors running Ubuntu+KVM+OVS, two gateways, and a service node. (I know, I know—I haven’t written about service nodes yet. Sorry.) The idea is to take you through the upgrade process, upgrading my environment from NVP 3.1.1 to NVP 3.2.1 and then to NSX 4.0.0. From that point forward, the series will change from “Learning NVP” to “Learning NSX”, and I’ll continue with discussing all the topics that I have planned. These include (among others):

  • Deploying service nodes
  • Using an L2 gateway service
  • Using an L3 gateway service
  • Enabling distributed east-west routing
  • Many, many more topics…

Unfortunately, my travel schedule over the next few weeks is pretty hectic, which will probably limit my ability to move quickly on performing and documenting the upgrade process. Nevertheless, I will press forward as quickly as possible, so stay tuned to the site for more updates as soon as I’m able to get them published.

Questions? Comments? Feel free to add them below. All I ask for is common courtesy and disclosure of vendor affiliations, where applicable. Thanks!


In this post, I’ll share my thoughts on the Timbuk2 Commute messenger bag. It was about two months ago that I tweeted that I bought a new bag:

@scott_lowe: I picked up a new @timbuk2 messenger bag yesterday. Looking forward to seeing how well it works on my next business trip.

The bag I ended up purchasing was the Timbuk2 Commute in black. Since I bought it in early September (just after returning from San Francisco for VMworld), I’ve traveled with it to various points in the US, to Canada, and most recently to Barcelona for VMworld EMEA. Here are my thoughts on the bag now that I’ve logged a decent amount of travel with it:

  • Although it’s a “small” bag—the smallest size offered in the Commute line—I’ve found that it has plenty of space to carry the stuff that I regularly need. I regularly carry my 13″ MacBook Air, my full-size iPad, my Bose QC15 headphones, all my various power adapters/chargers/cables, a small notebook, and I still have some room left over. (If you have a larger laptop, you’ll need a larger bag; the small Commute only accommodates up to a 13″ laptop.)
  • The default shoulder pad that comes with the bag is woefully inadequate. I strongly recommend getting the Deluxe Strap Pad. My first couple of trips were with the default pad, and after a few hours the bag’s presence was noticeable. With the Deluxe Strap Pad, carrying my bag for a few hours is a breeze, and carrying it for 12 hours a day during VMworld EMEA was bearable (I can’t imagine doing it with the default shoulder pad.)
  • The TSA-friendly “lie flat” design doesn’t necessarily lie very flat, especially if the main compartment is full. This can make it a little challenging in the security line, but this is a very minor nit overall. The design does, however, make it super easy to get to my laptop (or anything else in that compartment).
  • While getting to my laptop is easy, getting to stuff in the bag isn’t quite so easy. (This is probably by design.) If you have smaller items in your bag that you’re going to need to get out and put back in frequently, the clips+velcro on the Commute’s flap make this a bit more work. Again, this is probably by design (to prevent other people from being able to easily get into your bag).
  • The zip-open rear compartment has a space on one side for the laptop; here my 13" MacBook Air (with a Speck case) fits very well. On the opposite side is a pair of slightly smaller compartments separated by a velcro divider. These smaller compartments are just a tad too small to hold a full-size iPad, though I suspect an iPad mini (or similarly-sized tablet) would fit quite well there.
  • A full-size iPad does fit well, however, in the pocket on the inside of the main compartment.
  • The complement of pockets and organizers inside the main compartment makes it easy to store (and subsequently find) all the small things you often need when traveling. In my case, the pockets and organizers easily keep my chargers and charging cables, pens, business cards, and such neatly organized and accessible.

Overall, I’m pretty happy with the bag, and I would recommend it to anyone who travels relatively light and is looking for a messenger-style bag. This bag wouldn’t have worked in years past when I was doing implementations/installations at customer sites (you invariably end up having to carry a ton of cables, tools, software, connectors, etc. in situations like that), but now that I’m able to focus on core essentials—laptop, tablet, notebook, and limited accessories—this bag is perfect.

If you have any additional feedback on the Timbuk2 Commute bag you’d like to share, I’d love to hear it (and I’m sure other readers would as well). Feel free to add your thoughts in the comments below.


Recently a couple of open source software (OSS)-related announcements have passed through my Inbox, so I thought I’d make brief mention of them here on the site.

Mirantis OpenStack

Last week Mirantis announced the general availability of Mirantis OpenStack, its own commercially-supported OpenStack distribution. Mirantis joins a number of other vendors also offering OpenStack distributions, though Mirantis claims to be different on the basis that its OpenStack distribution is not tied to a particular Linux distribution. Mirantis is also differentiating through support for some additional projects:

  • Fuel (Mirantis’ own OpenStack deployment tool)
  • Savanna (for running Hadoop on OpenStack)
  • Murano (a service for assisting in the deployment of Windows-based services on OpenStack)

It’s fairly clear to me that at this stage in OpenStack’s lifecycle, professional services are a big play in helping organizations stand up OpenStack (few organizations have the deep expertise to really stand up sizable installations of OpenStack on their own). However, I’m not yet convinced that building and maintaining your own OpenStack distribution is going to be as useful and valuable for the smaller players, given the pending competition from the major open source players out there. Of course, I’m not an expert, so I could be wrong.

Inktank Ceph Enterprise

Ceph, the open source distributed storage system, is now coming in a fully-supported version aimed at enterprise markets. Inktank has announced Inktank Ceph Enterprise, a bundle of software and support aimed at increasing Ceph adoption among enterprise customers. Inktank Ceph Enterprise will include:

  • Open source Ceph (version 0.67)
  • New “Calamari” graphical manager that provides management tools and performance data with the intent of simplifying management and operation of Ceph clusters
  • Support services provided by Inktank; this includes technical support, hot fixes, bug prioritization, and roadmap input

Given Ceph’s integration with OpenStack, CloudStack, and open source hypervisors and hypervisor management tools (such as libvirt), it will be interesting to see how Inktank Ceph Enterprise takes off. Will the adoption of Inktank Ceph Enterprise be gated by enterprise adoption of these related open source technologies, or will it help drive their adoption? I wonder if it would make sense for Inktank to pursue some integration with VMware, given VMware’s strong position in the enterprise market. One thing is for certain: it will be interesting to see how things play out.

As always, feel free to speak up in the comments to share your thoughts on these announcements (or any other related topic). All courteous comments are welcome.


Welcome to part 6 of the Learning NVP blog series. In this part, I’m going to show you how to add an NVP gateway appliance to your NVP environment. In future posts, you’ll use this NVP gateway to host either L2 or L3 gateway services (more on those in a moment). First, though, let’s take a quick recap of what’s transpired so far:

In this part, I’m going to walk you through setting up an NVP gateway appliance. If you’ll recall from our introductory high-level architecture overview, the role of the gateway is to provide L2 (switched/bridged) and L3 (routed) connectivity between logical networks and physical networks. So, adding a gateway would then enable you to extend the logical network you created in part 4 to include either L2 or L3 connectivity to the outside world.

<aside>Many of you have probably seen some of the announcements from VMworld about NSX integrations from various networking suppliers (Arista, Brocade, Dell, and Juniper, for example). These announcements will allow NSX—which I’ve said before will leverage a great deal of NVP’s architecture—to use these hardware devices as L2 gateways, providing bridged/switched connectivity between logical networks and physical networks.</aside>

This post will focus only on getting the gateway appliance set up; in future posts, I’ll show you how to actually add the L2 or L3 connectivity to your logical network.

Building the NVP Gateway

The NVP gateway software is distributed as an ISO, like the NVP controller software. You’d typically install this software on a bare metal server, though with recent releases of NVP, installing the gateway in a VM is also supported (refer to the latest NVP release notes for more details). As with the NVP controllers and NVP Manager, the gateway is built on Ubuntu 12.04, and the installation process is completely automated. Once you boot from the ISO, the installation will proceed automatically; when completed, you’ll be left at the login prompt.

Configuring the NVP Gateway

Once the NVP gateway software is installed, configuring the gateway is really straightforward. In fact, it feels a lot like configuring NVP controllers (I suspect this is by design). Here are the steps:

  1. Set the password for the admin user (optional, but highly recommended).

  2. Set the hostname for the gateway appliance (also optional, but strongly recommended).

  3. Configure the network interfaces; you’ll need management, transport, and external connectivity. (I’ll explain those in more detail later.)

  4. Configure DNS and NTP settings.

Let’s take a closer look at these steps. The first step is to set the password for the admin user, which you can accomplish with this command:

set user admin password

From here, you can proceed with setting the hostname for the gateway:

set hostname <hostname>

(So far, these commands should be pretty familiar. They are the same commands used when you set up the NVP controllers and NVP Manager.)

The next step is to configure network connectivity; you’ll start by listing the available network interfaces with this command:

show network interfaces

As you’ve seen with the other NVP appliances, the NVP gateway software builds an Open vSwitch (OVS) bridge for each physical interface. In the case of a gateway, you’ll need at least three interfaces—a management interface, a transport network interface, and an external network interface. The diagram below provides a bit more context around how these interfaces are used:

NVP gateway appliance interfaces

Since these interfaces have very different responsibilities, it’s important that you properly configure them. Otherwise, things won’t work as expected. Take the time to identify which interface listed in the show network interfaces output corresponds to each function. You’ll first want to establish management connectivity, so that should be the first interface to configure. Assuming that breth1 (the bridge matching the physical eth1 interface) is your management interface, you’ll configure it using this command:

set network interface breth1 ip config static 192.168.1.12 255.255.255.0

You’ll want to repeat this command for the other interfaces in the gateway, assigning appropriate IP addresses to each of them.
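
For example, a hedged sketch for the transport interface (assuming breth2 is your transport interface; the address here is purely illustrative):

set network interface breth2 ip config static 192.168.2.12 255.255.255.0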

You may also need to configure the routing for the gateway. Check the routing table(s) with this command:

show network routes

If there is no default route, you can set one using this command:

add network route 0.0.0.0 0.0.0.0 <Default gateway IP address>

Once the appropriate network connectivity has been established, then you can proceed with the next step: adding DNS and NTP servers. Here are the commands for this step:

add network dns-server <DNS server IP address>
add network ntp-server <NTP server IP address>

If you accidentally fat-finger an IP address or hostname along the way, use the remove network dns-server or remove network ntp-server command to remove the incorrect entry, then re-add it correctly with the commands above.

Congrats! The NVP gateway appliance is now up and running. You’re ready to add it to NVP. Once it’s added to NVP, you’ll be able to use the gateway appliance to add gateway services to your logical networks.

Adding the Gateway to NVP

To add the new gateway appliance to NVP, you’ll use NVP Manager (I showed you how to set up NVP Manager in part 3 of the series). Once you’ve opened a web browser, navigated to the NVP Manager web UI, and logged in, then you can start the process of adding the gateway to NVP.

  1. Once you’re logged into NVP Manager, click on the Dashboard link across the top. (If you’re already at the Dashboard, you can skip this step.)

  2. In the Summary of Transport Components box, click the Connect & Add Transport Node button. This will open the Connect to Transport Node dialog box.

  3. Supply the management IP address of the gateway appliance, along with the appropriate username and password, then click Connect.

  4. After a moment, the Connect to Transport Node dialog box will show details of the gateway appliance, such as the interfaces, the bridges, the NIC bonds (if any), and the gateway’s SSL certificate. Click Configure at the bottom of the dialog box to continue.

  5. Supply a display name (something like nvp-gw-01) and, optionally, one or more tags. Click Next.

  6. Unless you know you need to select any of the options on the next screen (I’ll try to cover them in a later blog post), just click Next.

  7. On the final screen, you’ll need to establish connectivity to a transport zone. You’ll want to select the appropriate interface (in my example environment, it was breth2) and the appropriate encapsulation protocol (STT is generally recommended for connectivity back to hypervisors). Then select the appropriate transport zone from the drop-down list. In the end, you’ll have a screen that looks something like this (note that your interfaces, IP addresses, and transport zone information will likely be different):

Adding a gateway to NVP

  8. Click Save to finish the process. The number of gateways listed in the Summary of Transport Components box should increment by 1 in the Registered column. However, the Active column will remain unchanged—that’s because there’s one more step needed.

  9. Back on the gateway appliance itself, run this command (you can use the IP address of any controller in the NVP controller cluster):

  set switch manager-cluster <NVP controller IP address>

  10. Back in NVP Manager, refresh the Summary of Transport Components box (there’s a small refresh icon in the corner), and you’ll see the Active column update to show the gateway appliance is now registered and active in NVP.

That’s it—you’re all done adding a gateway appliance to NVP. In future posts, you’ll leverage the gateway appliance to add L2 (bridged) and L3 (routed) connectivity in and out of logical networks. First, though, I’ll need to address the transition from NVP to NSX, so look for that coming soon. In the meantime, feel free to post any questions, thoughts, or suggestions in the comments below. I welcome all courteous comments (even if you disagree with something I’ve said!).


Welcome to Technology Short Take #36. In this episode, I’ll share a variety of links from around the web, along with some random thoughts and ideas along the way. I try to keep things related to the key technology areas you’ll see in today’s data centers, though I do stray from time to time. In any case, enough with the introduction—bring on the content! I hope you find something useful.

Networking

  • This post is a bit older, but still useful if you’re interested in learning more about OpenFlow and OpenFlow controllers. Nick Buraglio has put together a basic reference OpenFlow controller VM—a KVM guest running CentOS 6.3 with the Floodlight open source controller installed.
  • Paul Fries takes on defining SDN, breaking it down into two “flavors”: host dominant and network dominant. This is a reasonable way of grouping the various approaches to SDN (using SDN in the very loose industry sense, not the original control plane-data plane separation sense). I’d like to add to Paul’s analysis that it’s important to understand that, in reality, host dominant and network dominant systems can coexist. It’s not at all unreasonable to think that you might have a fabric controller that is responsible for managing/optimizing traffic flows across the physical transport network/fabric, and an overlay controller—like VMware NSX—that integrates tightly with the hypervisor(s) and workloads running on those hypervisors to create and manage logical connectivity and logical network services.
  • This is an older post from April 2013, but still useful, I think. In his article titled “OpenFlow Test Deployment Options“, Brent Salisbury—a rock star new breed network engineer emerging in the new world of SDN—discusses some practical deployment strategies for deploying OpenFlow into an existing network topology. One key statement that I really liked from this article was this one: “SDN does not represent the end of networking as we know it. More than ever, talented operators, engineers and architects will be required to shape the future of networking.” New technologies don’t make talented folks who embrace change obsolete; if anything, these new technologies make them more valuable.
  • Great post by Ivan (is there a post by Ivan that isn’t great?) on flow table explosion with OpenFlow. He does a great job of explaining how OpenFlow works and why OpenFlow 1.3 is needed in order to see broader adoption of OpenFlow.

Servers/Hardware

  • Intel announced the E5 2600 v2 series of CPUs back at Intel Developer Forum (IDF) 2013 (you can follow my IDF 2013 coverage by looking at posts with the IDF2013 tag). Kevin Houston followed up on that announcement with a useful post on vSphere compatibility with the E5 2600 v2. You can also get more details on the E5 2600 v2 itself in this related post by Kevin as well. (Although I’m just now catching Kevin’s posts, they were published almost immediately after the Intel announcements—thanks for the promptness, Kevin!)

Security

Nothing this time around, but I’ll keep my eyes posted for content to share with you in future posts.

Cloud Computing/Cloud Management

Operating Systems/Applications

  • I found this refresher on some of the most useful apt-get/apt-cache commands to be helpful. I don’t use some of them on a regular basis, so it’s hard to remember the specific command and/or syntax when I do need one of them.
  • I wouldn’t have initially considered comparing Docker and Chef, but considering that I’m not an expert in either technology it could just be my limited understanding. However, this post on why Docker and why not Chef does a good job of looking at ways that Docker could potentially replace certain uses for Chef. Personally, I tend to lean toward the author’s final conclusions that it is entirely possible that we’ll see Docker and Chef being used together. However, as I stated, I’m not an expert in either technology, so my view may be incorrect. (I reserve the right to revise my view in the future.)

Storage

  • Using Dell EqualLogic with VMFS? Better read this heads-up from Cormac Hogan and take the recommended action right away.
  • Erwin van Londen proposes some enhancements to FC error detection and notification aimed at making hosts more aware of path errors and able to “route” around them. It’s interesting stuff; as Erwin points out, though, even if the T11 accepted the proposal it would be a while before this capability showed up in actual products.

Virtualization

That’s it for this time around, but feel free to continue to conversation in the comments below. If you have any additional information to share regarding any of the topics I’ve mentioned, please take the time to add that information in the comments. Courteous comments are always welcome!


In this post, I’ll show you how I extended my solution for managing user accounts with Puppet to include managing SSH authorized keys. With this solution in place, user accounts managed through Puppet can also include their SSH public key, and that public key will automatically be installed on hosts where the account is realized. All in all, I think it’s a pretty cool solution.

Just to refresh your memory, here’s the original Puppet manifest code I posted in the original article; this code uses define-based virtual user resources that you then realize on a per-host basis.

(If the code block showing the Puppet code isn’t appearing above, click here.)

Since I posted this original code, I’ve made a few changes. I switched some of the hard-coded values to parameters (stored in a separate subclass), and I made a few stylistic/syntactic changes based on running the code through puppet-lint. But, by and large, this is still quite similar to the code I’m running right now.

Here’s the code after I modified it to include managing SSH authorized keys for user accounts:

(Can’t see the code block? Click here.)
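
For reference, in case the code block doesn’t render, here’s a rough sketch of the shape of the modified define. The parameter names other than $sshkeytype and $sshkey, and the accounts::params values, are assumptions on my part rather than the exact code:

# Hedged sketch of accounts::virtual; parameter names and values are illustrative
define accounts::virtual (
  $uid,
  $realname,
  $pass,
  $sshkeytype,
  $sshkey
) {
  include accounts::params

  # Create the user account itself
  user { $title:
    ensure     => present,
    uid        => $uid,
    comment    => $realname,
    shell      => $accounts::params::shell,
    home       => "/home/${title}",
    managehome => true,
    password   => $pass,
  }

  # Install the user's SSH public key into ~/.ssh/authorized_keys
  ssh_authorized_key { "${title}_sshkey":
    ensure => present,
    user   => $title,
    type   => $sshkeytype,
    key    => $sshkey,
  }
}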

Let’s walk through the changes between the two snippets of code:

  • Two new parameters are added, $sshkeytype and $sshkey. These parameters hold, quite naturally, the SSH key type and the SSH key itself.
  • Several values are parameterized, pulling values from the accounts::params manifest.
  • You can note a number of stylistic and syntactical changes.
  • The accounts::virtual class now includes a stanza using the built-in ssh_authorized_key resource type. This is the real heart of the changes—by adding this to the virtual user resource, it makes sure that when users are realized, their SSH public keys are added to the host.

With this code in place, you’d then define a user like this:

(Click here if the code block doesn’t appear above.)
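
Again, in case the code block doesn’t render, the definition looks roughly like this (the username, uid, and key values are obviously placeholders):

# illustrative values only
@accounts::virtual { 'jdoe':
  uid        => 1001,
  realname   => 'Jane Doe',
  pass       => '<password hash>',
  sshkeytype => 'ssh-rsa',
  sshkey     => 'AAAAB3NzaC1yc2EAAA...<rest of public key>',
  require    => Class['accounts::config'],
}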

The requirement for Class['accounts::config'] is to ensure that various configuration tasks are finished before the user account is defined; I discussed this in more detail in this post on Puppet, user accounts, and configuration files. Now, when I realize a virtual user resource, Puppet will also ensure that the user’s SSH public key is automatically added to the user’s .ssh/authorized_keys file on that host. Pretty sweet, eh? Further, if the key ever changes, you need only change it on the Puppet server itself, and on the next Puppet agent run the hosts will update themselves.

I freely admit that I’m not a Puppet expert, so there might be better/faster/more efficient ways of doing this. If you are a Puppet expert, please feel free to weigh in below in the comments. I welcome all courteous comments!
