
If you’ve been following trends in the “cloud” world, you’ve probably heard of Docker. In this post I’m going to provide a quick introduction to Docker, which might be helpful if you’re wondering what Docker is and why it’s garnered so much attention.

The best way to describe Docker is to use the phrase from the Docker web site—Docker is “an open source project to pack, ship and run any application as a lightweight container.” The idea is to provide a comprehensive abstraction layer that allows developers to “containerize” or “package” any application and have it run on any infrastructure. The use of container here refers more to the consistent, standard packaging of applications rather than referring to any underlying technology (a distinction that will be important in a moment). The most common analogy used to help people understand Docker is saying that Docker containers are like shipping containers: they provide a standard, consistent way of shipping just about anything. Docker containers provide a standard, consistent way of packaging just about any application. (At least, that’s the idea.)

So what are the underlying technologies that make up Docker?

  • Docker leverages LXC (Linux Containers), which encompasses Linux features like cgroups and namespaces for strong process isolation and resource control. If you’re not familiar with LXC, I suggest having a look at my introductory post on LXC. Docker’s use of LXC is why I mentioned earlier that Docker’s use of container is more about consistent/standard packaging than any specific technology. While Docker currently leverages LXC, the Docker architecture isn’t necessarily limited to LXC. In theory, Docker could leverage KVM to do the same things it does today with LXC. (Update: While in the process of finishing this post, Docker released version 0.9, which does offer the ability to replace LXC with a different back-end “engine” called libcontainer.)
  • Docker leverages a copy-on-write filesystem (currently AUFS, but other filesystems are being investigated). This allows Docker to instantiate containers (using that term to refer to Docker’s construct, not the LXC construct) very quickly—instead of having to make full copies of whatever files comprise a container, Docker can basically use “pointers” back to existing files. It seems to me this is a big part of what makes Docker so useful.
  • Perhaps due in part to the use of a copy-on-write file system, Docker containers are easily “linked” to other containers. Some people refer to this as stacking containers (continuing the shipping container analogy); to me, it makes more sense to talk about layering one container on top of another. For example, you might create a container that is based on a base Ubuntu image, and then in turn create another container that is based on the first container.
  • Docker uses a “plain text” configuration language (I suppose you could think of it as a domain-specific language, or DSL) to control the configuration of an application container, such as what files should (or should not) be included, what network ports should be open, and what processes/applications should be running inside the container. (I’ll provide an example of a Docker file in just a few moments.)

Now that you have an idea of the basics, let’s take a deeper look. For the purposes of this article, I’m using Ubuntu 12.04 LTS as the Linux platform. Please keep in mind that different distributions may use different commands.

Installing Docker

Installing Docker on Ubuntu is a pretty straightforward process.

  1. First, you’ll want to make sure you are running at least the 3.8 kernel. (Newer versions of 12.04 LTS, such as 12.04.4, appear to be installing the 3.11 kernel by default, so you may not have to do this step.) To install the 3.8 kernel (in case you’re running something older), just run these two commands and reboot:
sudo apt-get update
sudo apt-get install linux-image-generic-lts-raring
  1. Next, add the Docker repository key to your local Apt keychain:
sudo apt-key adv --keyserver \
--recv-keys 36A1D7869245C8950F966E92D8576A8BA88D21E9
  1. Add the Docker repository to your list of repositories:
sudo sh -c "echo deb docker main \
> /etc/apt/sources.list.d/docker.list"
  1. Finally, install Docker with an apt-get combo:
sudo apt-get update
sudo apt-get install lxc-docker

Note that this automatically installs LXC, but an “older” version (0.7.5). I haven’t tested to see if Docker will work with the newer alpha release of LXC 1.0.0 that is available in the precise-backports repository (see this post for more information).

Launching a Simple Docker Container

Now that Docker is installed, you can launch a quick Docker container like this:

sudo docker run -i -t ubuntu /bin/bash

If everything is working, the first time you run this command it will automatically download an Ubuntu image (as specified by “ubuntu” on that command line) and create a simple Docker container that just runs the bash shell. You’ll get dropped into an odd prompt that will look something like this (the numbers will change from system to system):
root@4a2f737d6e2e:/#

You’re now running in an Ubuntu environment, but it’s a clean environment—there are no daemons running, nothing listening on various network ports, nothing else except the bash shell. If you run ifconfig, you’ll see that eth0 has an IP address, but that’s it. When you type exit, the container will terminate and you’ll be returned to the prompt for your host system.
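To see just how bare the environment is, run ps inside the container. An illustrative session might look something like this (the container ID and PIDs are made up and will differ on your system):

```
root@4a2f737d6e2e:/# ps aux
USER  PID ... COMMAND
root    1 ... /bin/bash
root   12 ... ps aux
root@4a2f737d6e2e:/# exit
```

Note that bash is PID 1 inside the container—there’s no init system, no daemons, nothing else at all.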

Now, let’s take a look at what happened as a result of running that command. If you run sudo docker images, you will see a list of available Docker images that were downloaded to your local system. They’ll all be listed as “ubuntu” in the first column, and it’s the second column that will differentiate them—you’ll see listings for 12.04, 12.10, 13.04, 13.10, etc. You could run a specific Ubuntu image with a command like this:

sudo docker run -i -t ubuntu:13.10 /bin/bash

If you run this command, you’ll note that you’re launched into this new container very quickly. This is one of the strengths of Docker: once an image has been downloaded, launching containers based on that image is very fast.

One other thing you’ll note about Docker is that the containers you start up are ephemeral—meaning that changes you make to the container aren’t persistent. Try this: start up a container (you can use a simple container like the examples I’ve provided already), create a file, and then exit the container. Start the container again, and you’ll see that the file you created previously no longer exists. There are workarounds for this, but for the sake of keeping this post to a manageable size I’ll save discussing those workarounds for another time.
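To make the ephemeral behavior concrete, here’s an illustrative session (the container IDs shown are made up and will vary on your system):

```
$ sudo docker run -i -t ubuntu /bin/bash
root@c3f279d17e0a:/# echo "remember me" > /tmp/testfile
root@c3f279d17e0a:/# exit
$ sudo docker run -i -t ubuntu /bin/bash
root@05e3bd02d3a6:/# cat /tmp/testfile
cat: /tmp/testfile: No such file or directory
```

The key thing to notice is that the second docker run creates a brand-new container from the image rather than restarting the first container, which is why the file is gone.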

Launching More Complex Docker Containers

So what if you need a Docker container that does need to have a daemon running and listening on a network port? It’s great that you can run a container for the bash shell, but that’s not particularly useful. What if you wanted to run a MySQL container, an SSH container, or a web server container?

To do something like that, you’ll need to utilize a Dockerfile. A Dockerfile is a plain text file that tells Docker exactly how to construct a Docker image. (I mentioned earlier that Docker had its own DSL; this is what I’m talking about.) Here’s an example Dockerfile:

FROM ubuntu:12.04
MAINTAINER Joe Shmoe "[email protected]"
RUN echo "deb precise main universe" > /etc/apt/sources.list
RUN apt-get update
RUN apt-get install -y nginx
CMD ["nginx"]

The syntax for a Dockerfile is INSTRUCTION arguments, where “INSTRUCTION” refers to commands like FROM, RUN, or CMD. Here are a few quick notes:

  • The FROM command is what enables Docker containers to be based on other Docker containers. In this case, the Docker container is based on the Ubuntu image; specifically, the Ubuntu 12.04 image.
  • The RUN commands execute various commands within the new container. You’d use RUN instructions to install packages, modify files, etc.
  • The EXPOSE command (not shown in the example above) exposes a listening port on a Docker container.
  • The CMD instruction tells what command/daemon to run when the container launches.

I don’t have room here for a full Dockerfile tutorial, but the folks at Docker have a great one. Suffice it to say that if you want anything more complex than a simple bash shell, you’re probably going to need to use a Dockerfile.

Once you have a working Dockerfile (just as a heads-up, the above example doesn’t actually work—sorry), you can create a Docker image from that Dockerfile using the docker build command. Once you have a working image, you can then base other Docker containers on that image (using the FROM instruction in a subsequent Dockerfile).
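For comparison, here’s a minimal variant along the same lines that should actually build and run. The main fix is running nginx in the foreground: a container exits as soon as the process named in CMD terminates, and nginx daemonizes by default. (The base image tag is the same as above, and the package comes from the standard Ubuntu repositories.)

```dockerfile
FROM ubuntu:12.04
MAINTAINER Joe Shmoe "[email protected]"
# Install nginx from the stock Ubuntu repositories
RUN apt-get update && apt-get install -y nginx
# Make the web server port reachable
EXPOSE 80
# Run nginx in the foreground so the container keeps running
CMD ["nginx", "-g", "daemon off;"]
```

You’d build this with something like sudo docker build -t mynginx . (the mynginx tag is just an example name) and then launch it with sudo docker run -d -p 80:80 mynginx.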

Exploring Other Docker Commands

There are also some other Docker commands you’ll find useful:

  • The docker images command lists all the Docker images available on the system. This includes images that you’ve downloaded from elsewhere as well as images you’ve created.
  • As I mentioned earlier, docker build allows you to create your own images from a Dockerfile.
  • To see which Docker containers are running, use docker ps.
  • The docker rmi command removes Docker images.
  • Finally, just run docker help if you need to see what other commands are available.

Trying Out Docker

You don’t need a lot to try out Docker; a simple Ubuntu 12.04 LTS VM running on VirtualBox, VMware Fusion, or VMware Workstation will work fine. Nested virtualization support isn’t required when using LXC, and the Docker containers themselves are pretty lightweight. To try out Docker, you only need to create a VM and follow the instructions provided earlier for installing Docker. Once you’ve got that, you’re ready to explore it for yourself.

For additional information or another perspective on Docker, I recommend taking a look at this presentation on SlideShare by James Turnbull (formerly with Puppet Labs, now with Docker).

As always, all courteous questions, thoughts, suggestions, or corrections are welcome.


Welcome to Technology Short Take #39, in which I share a random assortment of links, articles, and thoughts from around the world of data center-related technologies. I hope you find something useful—or at least something interesting!


Networking

  • Jason Edelman has been talking about the idea of a Common Programmable Abstraction Layer (CPAL). He introduces the idea, then goes on to explore—as he puts it—the power of a CPAL. I can’t help but wonder if this is the right level at which to put the abstraction layer. Is the abstraction layer better served by being integrated into a cloud management platform, like OpenStack? Naturally, the argument then would be, “Not everyone will use a cloud management platform,” which is a valid argument. For those customers who won’t use a cloud management platform, I would then ask: will they benefit from a CPAL? I mean, if they aren’t willing to embrace the abstraction and automation that a cloud management platform brings, will abstraction and automation at the networking layer provide any significant benefit? I’d love to hear others’ thoughts on this.
  • Ethan Banks also muses on the need for abstraction.
  • Craig Matsumoto of SDN Central helps highlight a recent (and fairly significant) development in networking protocols—the submission of the Generic Network Virtualization Encapsulation (Geneve) proposal to the IETF. Jointly authored by VMware, Microsoft, Red Hat, and Intel, this new protocol proposal attempts to bring together the strengths of the various network virtualization encapsulation protocols out there today (VXLAN, STT, NVGRE). This is interesting enough that I might actually write up a separate blog post about it; stay tuned for that.
  • Lee Doyle provides an analysis of the market for network virtualization, which includes some introductory information for those who might be unfamiliar with what network virtualization is. I might contend that Open vSwitch (OVS) alone isn’t an option for network virtualization, but that’s just splitting hairs. Overall, this is a quick but worthy read if you are trying to get started in this space.
  • Don’t think this “software-defined networking” thing is going to take off? Read this, and then let me know what you think.
  • Chris Margret has a nice dissection of how bash completion works, particularly in regards to the Cumulus Networks implementation.


Servers/Hardware

  • Via Kevin Houston, you can get more details on the Intel E7 v2 and new blade servers based on the new CPU. x86 marches on!
  • Another interesting tidbit regarding hardware: it seems as if we are now seeing the emergence of another round of “hardware offloads.” The first round came about around 2006 when Intel and AMD first started releasing their hardware assists for virtualization (Intel VT and AMD-V, respectively). That technology was only “so-so” at first (VMware ESX continued to use binary translation [BT] because it was still faster than the hardware offloads), but it quickly matured and is now leveraged by every major hypervisor on the market. This next round of hardware offloads seems targeted at network virtualization and related technologies. Case in point: a relatively small company named Netronome (I’ve spoken about them previously, first back in 2009 and again a year later), recently announced a new set of network interface cards (NICs) expressly designed to provide hardware acceleration for software-defined networking (SDN), network functions virtualization (NFV), and network virtualization solutions. You can get more details from the Netronome press release. This technology is actually quite interesting; I’m currently talking with Netronome about testing it with VMware NSX and will provide more details as that evolves.


Security

  • Ben Rossi tackles the subject of security in a software-defined world, talking about how best to integrate security into SDN-driven architectures and solutions. It’s a high-level article and doesn’t get into a great level of detail, but does point out some of the key things to consider.

Cloud Computing/Cloud Management

  • “Racker” James Denton has some nice articles on OpenStack Neutron that you might find useful. He starts out with discussing the building blocks of Neutron, then goes on to discuss building a simple flat network, using VLAN provider networks, and Neutron routers and the L3 agent. And if you need a breakdown of provider vs. tenant networks in Neutron, this post is also quite handy.
  • Here’s a couple (first one, second one) of quick walk-throughs on installing OpenStack. They don’t provide any in-depth explanations of what’s going on, why you’re doing what you’re doing, or how it relates to the rest of the steps, but you might find something useful nevertheless.
  • Thinking of building your own OpenStack cloud in a home lab? Kevin Jackson—who along with Cody Bunch co-authored the OpenStack Cloud Computing Cookbook, 2nd Edition—has three articles up on his home OpenStack setup. (At least, I’ve only found three articles so far.) Part 1 is here, part 2 is here, and part 3 is here. Enjoy!
  • This post attempts to describe some of the core (mostly non-technical) differences between OpenStack and OpenNebula. It is published on the OpenNebula site, so keep that in mind as it is (naturally) biased toward OpenNebula. It would be quite interesting to me to see a more technically-focused discussion of the two approaches (and, for that matter, let’s include CloudStack as well). Perhaps this already exists—does anyone know?
  • CloudScaling recently added a Google Compute Engine (GCE) API compatibility module to StackForge, to allow users to leverage the GCE API with OpenStack. See more details here.
  • Want to run Hyper-V in your OpenStack environment? Check this out. Also from the same folks is a version of cloud-init for Windows instances in cloud environments. I’m testing this in my OpenStack home lab now, and hope to have more information soon.

Operating Systems/Applications

Virtualization

  • Brendan Gregg of Joyent has an interesting write-up comparing virtualization performance between Zones (apparently referring to Solaris Zones, a form of OS virtualization/containerization), Xen, and KVM. I might disagree that KVM is a Type 2 hardware virtualization technology, pointing out that Xen also requires a Linux-based dom0 in order to function. (The distinction between a Type 1 that requires a general purpose OS in a dom0/parent partition and a Type 2 that runs on top of a general purpose OS is becoming increasingly blurred, IMHO.) What I did find interesting was that they (Joyent) run a ported version of KVM inside Zones for additional resource controls and security. Based on the results of his testing—performed using DTrace—it would seem that the “double-hulled virtualization” doesn’t really impact performance.
  • Pete Koehler—via Jason Langer’s blog—has a nice post on converting in-guest iSCSI volumes to native VMDKs. If you’re in a similar situation, check out the post for more details.
  • This is interesting. Useful, I’m not so sure about, but definitely interesting.
  • If you are one of the few people living under a rock who doesn’t know about PowerCLI, Alan Renouf is here to help.

It’s time to wrap up; this post has already run longer than usual. There was just so much information that I wanted to share with you! I’ll be back soon-ish with another post, but until then feel free to join (or start) the conversation by adding your thoughts, ideas, links, or responses in the comments below.


Welcome to part 10 of the Learning NSX blog series, in which I will walk through adding an NSX service node to your NSX configuration.

In the event you’ve joined this series mid-way, here’s what I’ve covered thus far:

In this installment of the series, I’ll walk you through setting up an NSX service node and adding it to the NSX domain. Before I do that, though, it’s probably useful to set some context around the role a service node plays in an NSX environment.

Reviewing Service Nodes in VMware NSX

VMware NSX offers two different ways of handling BUM (Broadcast, Unknown unicast, and Multicast) traffic:

  • NSX can perform source replication, which means that each hypervisor is responsible for replicating BUM packets and transmitting them onto the logical network(s). In small environments, this is probably fine.
  • NSX can also perform service node replication, which—as you probably guessed—uses dedicated service node appliances to offload BUM packet replication and transmission. (Service nodes also play a role in multi-DC deployments with remote gateways, but that’s a topic for a different day.)

My environment is pretty small and limited on resources, so I don’t really need a service node. However, the current implementation of the integration between OpenStack Neutron and NSX assumes the presence of a service node. There is a workaround (I’ll probably blog about that later), but I figured I would just go ahead and add a service node to make things easier.

Building an NSX Service Node

Like the NSX controllers, the NSX gateways, and NSX Manager, the NSX service node software is distributed as an ISO. To install a service node on a physical server, you’d just burn the ISO to an optical disk and boot the server from the optical disk. From the boot menu, select to perform an automated installation, and in a few minutes you’re done.

While it is possible to run a service node as a VM (that’s what I’m doing), be aware this isn’t a supported configuration. In addition, if you think about it, it’s kind of crazy—you’re building a VM that runs on a hypervisor to offload packet replication from the hypervisor. Doesn’t really make sense, does it?

Once the service node has finished installing, you’re ready to configure it and then add it to NSX.

Configuring an NSX Service Node

Like the controllers, the gateway, and NSX Manager, the configuration of an NSX service node is pretty straightforward:

  1. Set a password for the admin user (optional, but highly recommended).

  2. Set the hostname for the service node (also optional, but recommended as well).

  3. Assign IP addresses to the service node.

  4. Configure DNS and NTP settings.

Let’s take a look at each of these steps.

To set the password for the default admin user, just use this command:

set user admin password

You’ll be prompted to supply the new password, then retype it for confirmation. Easy, right? (And pretty familiar if you’ve used Linux before.)

Setting the hostname for the service node is equally straightforward:

set hostname <hostname>

Now you’re ready to assign IP addresses to the service node. Note that I said “IP addresses” (plural). This is because the service node needs to have connectivity on the management network (so that it can communicate with the NSX controller cluster) as well as the transport network (so that it can set up tunnels with other transport nodes, like hypervisors and gateways). Use this command to see the network interfaces that are present in the service node:

show network interfaces

You’ll note that for each physical interface in the system, the NSX service node installation procedure created a corresponding bridge (this is actually an OVS bridge). So, for a server that has two interfaces (eth0 and eth1), the installation process will automatically create breth0 and breth1. Generally, you’ll want to assign your IP addresses to the bridge interfaces, and not to the physical interfaces.

Let’s say that you wanted to assign an IP address to breth0, which corresponds to the physical eth0 interface. You’d use this command:

set network interface breth0 ip config static <IP address> <Netmask>

Naturally, you’d want to substitute the correct IP address and subnet mask in that command. Once the interface is configured, you can use the standard ping command to test connectivity (note, though, that you can’t use any switches to ping, as they aren’t supported by the streamlined NSX appliance CLI). For a service node, you’ll want to assign breth0 an IP address on the management network, and assign breth1 an IP address on your transport network.

Note that you may also need to add a default route using this command:

add network route <Default gateway IP address>

Assuming connectivity is good, you’re ready to add DNS and NTP servers to your configuration. Use these commands:

add network dns-server <DNS server IP address>
add network ntp-server <NTP server IP address>

Repeat these commands as needed to add multiple DNS and/or NTP servers. If you accidentally fat finger an IP address, you can remove the incorrect IP address using the remove command, like this:

remove network dns-server <Incorrect DNS IP address>

Substitute ntp-server for dns-server in the above command to remove an incorrect NTP server address.

To add a DNS search domain to the service node, use this command:

add network dns-search-domain <Domain name>

If you are using DHCP and your appliance happened to pick up some settings from the DHCP server, you may need to use the clear network dns-servers and/or clear network routes command before you can add DNS servers or routes to the service node.

Once you’ve added IP addresses, DNS servers, NTP servers, and successfully tested connectivity over both the management and transport networks, then you’re ready to proceed with adding the service node to NSX.
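Putting the whole sequence together, the console session on a new service node might look something like this (the hostname and all addresses shown are purely illustrative; substitute values appropriate for your own management and transport networks):

```
set user admin password
set hostname nsx-svcnode-01
set network interface breth0 ip config static 192.168.10.15 255.255.255.0
set network interface breth1 ip config static 172.16.10.15 255.255.255.0
add network route 192.168.10.1
add network dns-server 192.168.10.5
add network ntp-server 192.168.10.5
show network interfaces
```

Here breth0 carries management traffic and breth1 carries transport traffic, matching the interface roles described earlier.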

Adding Service Nodes to NSX

As with adding a gateway appliance in part 6, you’ll use NSX Manager (which you set up in part 3) to add the new service node to NSX. Once you’ve logged into the NSX Manager web UI via your browser, you’re ready to start the process of adding the service node.

  1. From the NSX Manager web UI, click on the Dashboard link across the top. If you’ve just logged into NSX Manager, you’re probably already at the Dashboard and can skip this step.

  2. In the Summary of Transport Components box, click the Add button on the row for Service Nodes. This opens the Create Service Node dialog box.

  3. In step 1 (Type), the Transport Node Type drop-down should already be set to “Service Node.” Click Next.

  4. Set the display name and (optionally) add one or more tags to the service node object. Click Next to proceed.

  5. Make sure that “Admin Status Enabled” is selected, and leave the other options untouched (unless you know you need to change them). Click Next.

  6. On step 4 (Credentials), you’ll need the SSL security certificate from the service node. Since you have established network connectivity to the service node, just SSH into the new service node and issue show switch certificate. Then copy the output and paste it into the Security Certificate box in NSX Manager. Click Next to continue.

  7. The final step in NSX Manager is to add a transport connector. A transport connector tells NSX how transport nodes can communicate over a transport zone (I described transport zones back in part 5). Click Add Connector, then specify the transport type (which tunneling protocol to use), the transport zone, and the IP address. The IP address you specify should match an IP address you assigned to the service node’s interface on the transport network. Click OK, then click Save.

At this point, you’ll see the number of registered service nodes increment to 1 (assuming this is the first) in the Summary of Transport Components box in NSX Manager. Active, however, will remain zero until you perform the final step.

The final step is performed back on the service node itself. If you opened an SSH session earlier to get the switch certificate, you can just re-use that connection. On the service node, set up communications with the NSX controller cluster using the command set switch manager-cluster W.X.Y.Z, where W.X.Y.Z represents the IP address of one of the controllers in your controller cluster. This IP address should be reachable across the interface on the service node assigned to management traffic (which would typically be breth0).

Now go back and refresh the Summary of Transport Components box in NSX Manager, and you should see both Registered and Active Service Nodes set to 1 (again, assuming this is the first).

That’s really all there is to it. Given their role, you likely won’t have a lot of interaction with the service nodes directly. Still, if you want to use NSX with OpenStack Neutron today, you’ll want to have a service node present (and, honestly, if you’re using NSX and OpenStack in any sort of production environment, you’re probably big enough to want a service node anyway).

As always, feel free to post any questions, comments, thoughts, or ideas below. All courteous comments, with vendor disclosures where applicable, are welcome.


Welcome to part 9 in the Learning NVP/NSX blog series, in which I’ll discuss adding a gateway service to a logical network.

If you are just now joining me for this series, let me bring you up to speed real quick:

This installment in the series builds upon all the previous articles; in particular, I assume that you have a logical network created and configured and that you have an NSX gateway appliance up and running and added to the NSX domain. In this post, I’ll show you how to add a logical gateway service to your network so that you can provide routed (layer 3) connectivity into and out of your logical network.

Before I start, I think it’s important to distinguish between a gateway appliance and a gateway service. A gateway appliance is a physical installation (or a VM; it’s supported as a VM in certain configurations) of the NSX gateway software; this is what I showed you how to set up in part 6 of the series. A gateway service, on the other hand, is a logical construct. Typically, you would have multiple gateway appliances. Then you would create a gateway service that would be instantiated across multiple gateway appliances for redundancy and availability. Gateway services can be either layer 2 (providing bridged connectivity between a logical network and VLANs on the physical network) or layer 3 (providing routed—with or without NAT—connectivity between a logical network and the physical network).

It’s also important to understand the distinction between a gateway service and a logical router. A gateway service is an NSX construct; a logical router, on the other hand, is usually a construct of the cloud management platform (like OpenStack). A single gateway service can host many logical routers.

In this series, I’m only going to focus on layer 3 gateway services (primarily because I have limited resources in my environment and can’t run both layer 2 and layer 3 gateway services).

To create a layer 3 gateway service, you’ll follow these steps from within NSX Manager (formerly NVP Manager, which I showed you how to set up in part 3):

  1. From the menu across the top of the NSX Manager page, select Network Components > Services > Gateway Services. This will take you to a page titled “Network Components Query Results,” where NSX Manager has precreated and executed a query for the list of gateway services. Your list will be empty, naturally.

  2. Click the Add button. This will open the Create Gateway Service dialog.

  3. Select “L3 Gateway Service” from the list. Other options in this list include “L2 Gateway Service” (to create a layer 2 gateway service) and “VTEP L2 Gateway Service” (to integrate a third-party top-of-rack [ToR] switch into NSX). Click Next, or click on the “2. Basics” button on the left.

  4. Provide a display name for the new layer 3 gateway service, then click Next (or click on “3. Transport Nodes” on the left). You can optionally add tags here as well, in case you wanted to associate additional metadata with this logical object in NSX.

  5. On the Transport Nodes screen, click Add Gateway to select a gateway appliance (which is classified as a transport node within NSX; hypervisors are also transport nodes) to host this layer 3 gateway service.

  6. From the Edit Gateway dialog box that pops up, you’ll need to select a transport node, a device ID, and a failure zone ID. The first option, the transport node, is pretty straightforward; this is a gateway appliance on which to host this gateway service. The device ID is the bridge (recall that NSX gateway appliances, by default, create OVS bridges to map to their interfaces) connected to the external network. The failure zone ID lets you “group” gateway appliances with regard to their relationship to the gateway service. For example, you could choose to host a gateway service on a gateway from failure zone 1 as well as failure zone 2. (Failure zones are intended to help represent different failure domains within your data center.)

  7. Once you’ve added at least two gateway appliances as transport nodes for your gateway service, click Save to create the gateway service and return to NSX Manager. That’s it—you’re done!

Note that this is more of an implementation task than an operational task. In other words, you’d generally deploy your gateway services when you first set up NSX and your cloud management platform, or when you are adding capacity to your environment. This isn’t something that you have to do when a cloud tenant (customer) needs a logical router; that’s handled automatically through the integration between NSX and the cloud management platform. As I mentioned earlier, a single layer 3 gateway service could host many logical routers.

In the next installment of the series, I’ll walk you through setting up an NSX service node to offload packet replication (for broadcast, unknown unicast, and multicast traffic).

As always, your feedback is welcome and encouraged, so feel free to speak up in the comments below.


Welcome to Technology Short Take #38, another installment in my irregularly-published series that collects links and thoughts on data center-related technologies from around the web. But enough with the introduction, let’s get on to the content already!

Networking
  • Jason Edelman does some experimenting with the Python APIs on a Cisco Nexus 3000. In the process, he muses about the value of configuration management tool chains such as Chef and Puppet in a world of “open switch” platforms such as Cumulus Linux.
  • Speaking of Cumulus Linux…did you see the announcement that Dell has signed a reseller agreement with Cumulus Networks? I’m pretty excited about this announcement, and I hope that Cumulus sees great success as a result. There are a variety of write-ups about the announcement; some good, many not so good. The not-so-good variety typically refers to Cumulus’ product as an SDN product when technically it isn’t. This article on Barron’s by Tiernan Ray is a pretty good summary of the announcement and some of its implications.
  • Pete Welcher has launched a series of articles discussing “practical SDN,” focusing on the key leaders in the market: NSX, DFA, and the yet-to-be-launched ACI. In the initial installment of the series, he does a good job of providing some basics around each of the products, although (as would be expected of a product that hasn’t launched yet) he has to do some guessing when it comes to ACI. The series continues with a discussion of L2 forwarding and L3 forwarding across the various products. Definitely worth reading, in my opinion.
  • Nick Buraglio takes away all your reasons for not collecting flow-based data from your environment with his write-up on installing nfsen and nfdump for NetFlow and/or sFlow collection.
  • Terry Slattery has a nice write-up on new network designs that are ideally suited for SDN. If you are looking for a primer on “next-generation” network designs, this is worth reviewing.
  • Need some Debian packages for Open vSwitch 2.0? Here’s another article from Nick Buraglio—he has some information to help you out.

Servers/Hardware
Nothing this time, but check back next time.

Security
Nothing from my end. Maybe you have something you’d like to share in the comments?

Cloud Computing/Cloud Management

  • Christian Elsen (who works in Integration Engineering at VMware) has a nice series of articles going on using OpenStack with vSphere and NSX. The series starts here, but follow the links at the bottom of that article for the rest of the posts. This is really good stuff—he includes the use of the NSX vSwitch with vSphere 5.5, and talks about vSphere OpenStack Virtual Appliance (VOVA) as well. All in all, well worth a read in my opinion.
  • Maish Saidel-Keesing (one of my co-authors on the first edition of VMware vSphere Design and also a super-sharp guy) recently wrote an article on how adoption of OpenStack will slow the adoption of SDN. While I agree that widespread adoption of OpenStack could potentially retard the evolution of enterprise IT, I’m not necessarily convinced that it will slow the adoption of SDN and network virtualization solutions. Why? Because, in part, I believe that the full benefits of something like OpenStack need a good network virtualization solution in order to be realized. Yes, some vendors are writing plugins for Neutron that manipulate physical switches. But for developers to get true isolation, application portability, the ability to re-create production environments in development—all that is going to require network virtualization.
  • Here’s a useful OpenStack CLI cheat sheet for some commonly-used commands.

Operating Systems/Applications

  • If you’re using Ansible (a product I haven’t had a chance to use but am closely watching), you’ll want to have a look at this article I came across on an upcoming change to the SSH transport that Ansible uses. This change, referred to as “ssh_alt,” promises a significant performance increase for Ansible. Good stuff.
  • I don’t think I’ve mentioned this before, but Forbes Guthrie (my co-author on the VMware vSphere Design books and an all-around great guy) has a series going on using Linux as a domain controller for a vSphere-based lab. The series is up to four parts now: part 1, part 2, part 3, and part 4.
  • Need (or want) to increase the SCSI timeout for a KVM guest? See these instructions.
  • I’ve been recommending that IT pros get more familiar with Linux, as I think its influence in the data center will continue to grow. The problem I sometimes face is that experienced folks tend to share these “super commands” that ordinary folks have a hard time decomposing. This site should help make that easier. I’ve tried it—it’s actually pretty handy.

Storage
  • Jim Ruddy (an EMCer, former co-worker of mine, and an overall great guy) has a pretty cool series of articles discussing the use of EMC ViPR in conjunction with OpenStack. Want to use OpenStack Glance with EMC ViPR using ViPR’s Swift API support? See here. Want a multi-node Cinder setup with ViPR? Read how here. Multi-node Glance with ViPR? He’s got it. If you’re new to ViPR (who outside of EMC isn’t?), you might also find his articles on deploying EMC ViPR, setting up back-end storage for ViPR, or deploying object services with ViPR to also be helpful.
  • Speaking of ViPR, EMC has apparently decided to release it for free for non-commercial use. See here.
  • Looking for more information on VSAN? Look no further than Cormac Hogan’s extensive VSAN series (up to Part 14 at last check!). The best way to find this stuff is to check articles tagged VSAN on Cormac’s site. The official VMware vSphere blog also has a series of articles running; check out part 1 and part 2.

Virtualization
  • Did you happen to see this news about Microsoft Hyper-V Recovery Manager (HRM)? This is an Azure-hosted service that can be roughly compared to VMware’s Site Recovery Manager (SRM). However, unlike SRM (which is hosted on-premise), HRM is hosted by Microsoft Azure. As the article points out, it’s important to understand that this doesn’t mean your VMs are replicated to Azure—it’s just the orchestration portion of HRM that is running in Azure.
  • Oh, and speaking of Hyper-V…in early January Microsoft released version 3.5 of their Linux Integration Services, which primarily appears to be focused on adding Linux distribution support (CentOS/RHEL 6.5 is now supported).
  • Gregory Gee has a write-up on installing the Cisco CSR 1000V in VirtualBox. (I’m a recent VirtualBox convert myself; I find the vboxmanage command just so very handy.) Note that I haven’t tried this myself, as I don’t have a Cisco login to get the CSR 1000V code. If any readers have tried it, I’d love to hear your feedback. Gregory also has a few other interesting posts I’m planning to review in the next few weeks as well.
  • Sunny Dua, who works with VMware PSO in India, has a series of blog posts on architecting vSphere environments. It’s currently up to five parts; I don’t know how many more (if any) are planned. Here are the links: part 1 (clusters), part 2 (vCenter SSO), part 3 (storage), part 4 (design process), and part 5 (networking).

It’s time to wrap up now before this gets any longer. If you have any thoughts or tidbits you’d like to share, I welcome any and all courteous comments. Join (or start) the conversation!


A couple of weeks ago I had the privilege of joining Richard Campbell on RunAs Radio to talk VMware NSX and network virtualization. If you’d like to hear us get geeky about network virtualization and where it might take our industry, head over and listen to episode 346. I’d love to hear your feedback!


Welcome to Technology Short Take #37, the latest in my irregularly-published series in which I share interesting articles from around the Internet, miscellaneous thoughts, and whatever else I feel like throwing in. Here’s hoping you find something useful!

Networking
  • Ivan does a great job of describing the difference between the management, control, and data planes, as well as providing examples. Of course, the distinction between control plane protocols and data plane protocols isn’t always perfectly clear.
  • You’ve heard me talk about snowflake servers before. In this post on why networking needs a Chaos Monkey, Mike Bushong applies the term to networks—a snowflake network is an intricately crafted network that is carefully tailored to utilize a custom subset of networking features unique to your environment. What is the fix—if one exists—for snowflake networks? Designing your network for resiliency and unleashing a Chaos Monkey on it is one way, as Mike points out. A fan of network virtualization might also say that decomposing today’s complex physical networks into multiple simple logical networks on top of a simpler physical transport network—similar to Mike’s suggestion of converging on a smaller set of reference architectures—might also help. (Of course, I am a fan of network virtualization, since I work with/on VMware NSX.)
  • Martijn Smit has launched a series of articles on VMware NSX. Check out part 1 (general introduction) and part 2 (distributed services) for more information.
  • The elephants and mice post at Network Heresy has sparked some discussion across the “blogosphere” about how to address this issue. (Note that my name is on the byline for that Network Heresy post, but I didn’t really contribute all that much.) Jason Edelman took up the idea of using OpenFlow to provide a dedicated core/spine for elephant flows, while Marten Terpstra at Plexxi talks about how Plexxi’s Affinities could be used to help address the problem of elephant flows. Peter Phaal speaks up in the comments to Marten’s article about how sFlow can be used to rapidly detect elephant flows, and points to a demo taking place during SC13 that shows sFlow tracking elephant flows on SCinet (the SC13 network).
  • Want some additional information on layer 2 and layer 3 services in VMware NSX? Here’s a good source.
  • This looks interesting, but I’m not entirely sure how I might go about using it. Any thoughts?

Servers/Hardware
Nothing this time around, but I’ll keep my eyes peeled for something to include next time!

Security
I don’t have anything to share this time—feel free to suggest something to include next time.

Cloud Computing/Cloud Management

Operating Systems/Applications

  • I found this post on getting the most out of HAProxy—in which Twilio walks through some of the configuration options they’re using and why—to be quite helpful. If you’re relatively new to HAProxy, as I am, then I’d recommend giving this post a look.
  • This list is reasonably handy if you’re not a Terminal guru. While written for OS X, most of these tips apply to Linux or other Unix-like operating systems as well. I particularly liked tip #3, as I didn’t know about that particular shortcut.
  • Mike Preston has a great series going on tuning Debian Linux running under vSphere. In part 1, he covered installation, primarily centered around LVM and file system mount options. In part 2, Mike discusses things like using the appropriate virtual hardware, the right kernel modules for VMXNET3, getting rid of unnecessary hardware (like the virtual floppy), and similar tips. Finally, in part 3, he talks about a hodgepodge of tips—things like blacklisting other unnecessary kernel drivers, time synchronization, and modifying the Linux I/O scheduler. All good stuff, thanks Mike!

Storage
  • “Captain KVM,” aka Jon Benedict, takes on the discussion of enterprise storage vs. open source storage solutions in OpenStack environments. One good point that Jon makes is that solutions need to be evaluated on a variety of criteria. In other words, it’s not just about cost nor is it just about performance. You need to use the right solution for your particular needs. It’s nice to see Jon say that if your needs are properly met by an open source solution, then “by all means stick with Ceph, Gluster, or any of the other cool software storage solutions out there.” More vendors need to adopt this viewpoint, in my humble opinion. (By the way, if you’re thinking of using NetApp storage in an OpenStack environment, here’s a “how to” that Jon wrote.)
  • Duncan Epping has a quick post about a VMware KB article update regarding EMC VPLEX and Storage DRS/Storage IO Control. The update is actually applicable to all vMSC configurations, so have a look at Duncan’s article if you’re using or considering the use of vMSC in your environment.
  • Vladan Seget has a look at Microsoft ReFS.


I’d better wrap it up here so this doesn’t get too long for folks. As always, your courteous comments and feedback are welcome, so feel free to start (or join) the discussion below.


In part 7 of the Learning NVP series, I mentioned that I was planning to transition this series from NVP to NSX through an upgrade. I had an existing NVP installation running (all virtually) inside an OpenStack cloud, and I would just upgrade that to NSX 4.0.0. Here’s a quick update on that plan and the NVP-to-NSX transition.

As I mentioned, I have an installation of NVP 3.1.1 running successfully in a nested (virtualized) environment. (Yes, it is possible to run all of NVP completely virtualized, though we don’t support that for production environments.) Starting with NVP 3.1.x, NVP offered an “Update Coordinator” that coordinated and orchestrated the upgrade of the various components within an NVP domain. Since I was running NVP 3.1.1, I could just use the Update Coordinator to upgrade my installation and walk you (the readers) through the process along the way.

Using the Update Coordinator (which is built into NVP Manager), an NVP upgrade would typically look something like this:

  • You’d log into NVP Manager and go to the Update Coordinator screen.
  • If you hadn’t already, you’d upload the update files (appliance update files and OVS update files) to NVP Manager.
  • Once all the update files were uploaded, you’d select the version to which you’re upgrading and kick it off.
  • NVP Manager itself is upgraded first.
  • Next, the Update Coordinator pushes the appliance update files (sometimes called NUB files because of their .nub extension) out to all the appliances (service nodes, gateways, and controllers).
  • Next, the non-hypervisor transport nodes (the service nodes and gateways) are upgraded.
  • Following that, the hypervisors need to be upgraded, though this isn’t handled by the Update Coordinator. (You could, of course, leverage a tool like Puppet or Chef or similar to help automate this process.)
  • After you’ve verified that the hypervisors have been updated, then the Update Coordinator upgrades the controller nodes.
  • Following the successful upgrade of the controller nodes, there is a cleanup phase and then you’re all set.

This is really high-level and I’m glossing over some details, naturally. Because an NVP upgrade is a pretty big deal—it could have an effect on the network connectivity of all the VMs and hypervisors within the NVP domain—it typically involves lots of planning, lots of testing, proper backups of all the components, and so on. However, since this was a lab environment and not a real production environment, just running through the Update Coordinator should have been fine.

As it turns out, though, I ran into a few problems—not problems with NVP, but problems with how I had deployed it. Basically, I didn’t do my due diligence and read the documentation.

When I first deployed the virtualized NVP appliances, I selected VMs that had a 10GB root disk. While this was enough to get NVP up and running, it turns out that it is not enough space to perform an upgrade. Specifically, it’s not enough space to do an upgrade on the controllers; the transport nodes upgraded successfully. After the initial installation, the controllers were left with only a couple gigabytes of free space. A fair portion of that was then taken up by the appliance update file, which did not leave enough room to actually perform the controller software upgrade.

Unfortunately, there was no easy workaround. Because the NVP controller cluster is scale-out and highly available, I could have taken the controllers out (one at a time), rebuilt them with more disk space, and then re-joined the cluster—a rolling upgrade, if you will. However, because NVP 3.1.1 is a much older build of NVP, it wasn’t possible to rebuild the controllers with a matching software version (not easily, anyway).

So, long story short: instead of wasting cycles trying to fix a deployment issue that is completely my fault (and, by the way, completely documented—had I paid closer attention to the documentation, I wouldn’t have found myself in this position), I’m simply going to rebuild my lab environment from scratch using NSX 4.0.0. I had really hoped to be able to walk you through the upgrade process, but sadly it just doesn’t make sense to do so.

This will be the last post titled “Learning NVP”; moving forward, all future posts will be titled “Learning NSX.” The next post will discuss adding a gateway service to a logical network; this builds on information from part 5 (creating a logical network) and part 6 (adding a gateway appliance).

As always, your feedback is welcome and encouraged, so feel free to speak up in the comments below.


In this post I’m going to expand a little bit on using libvirt to connect Linux containers (created using LXC) to Open vSwitch (OVS). I made brief mention of this in my post on using LXC with libvirt, but did not provide any details. This post aims to provide those details.

I’m assuming that you’re already familiar with LXC, OVS, and libvirt. If you aren’t familiar with these projects, I suggest you have a look back at other articles I’ve written about them in the past. One of the easiest ways to do that is to browse articles tagged LXC, tagged OVS, and/or tagged Libvirt. Further, I’m using Ubuntu 12.04 LTS in my environment, so if you’re using another Linux distribution please note that some commands and/or package names might be different.

The basic process for connecting a Linux container to OVS using libvirt looks something like this:

  1. Create one or more virtual networks in libvirt to “front-end” OVS.
  2. Create your container(s) using standard LXC user-space tools.
  3. Create libvirt XML definitions for your container(s).
  4. Start the container(s) using virsh.

Steps 2, 3, and 4 were covered in my previous post on using LXC and libvirt, so I won’t repeat them here. Step 1 is the focus here. (If you are a long-time reader and/or well-versed with libvirt and OVS, there isn’t a great deal of new information here; I just wanted to present it in the context of LXC for the sake of completeness.)

To create a libvirt virtual network to front-end OVS, you need to create an XML definition that you can use with virsh to define the virtual network. Here’s an example XML definition:

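A minimal sketch of such a definition (matching the notes that follow; the UUID value and the portgroup name are placeholders of mine) might look like this:

```xml
<network>
  <name>bridged</name>
  <!-- libvirt normally generates the UUID; this value is a placeholder -->
  <uuid>9aeb80c1-0000-0000-0000-000000000000</uuid>
  <forward mode='bridge'/>
  <bridge name='br-ex'/>
  <virtualport type='openvswitch'/>
  <!-- a single portgroup for untagged traffic; the name is my assumption -->
  <portgroup name='untagged' default='yes'/>
</network>
```

Saved to a file, this could be loaded with virsh net-define <file>, then activated with virsh net-start bridged (and virsh net-autostart bridged to have it start automatically).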

A few notes about this XML definition:

  • You normally wouldn’t include the UUID, as that is generated automatically by libvirt. If you were using this XML to create the virtual network from scratch, I would recommend just deleting the UUID line.
  • The network is named “bridged”, and points to the OVS bridge named br-ex. In this particular case, br-ex is a simple OVS bridge that contains a single physical interface.
  • This particular virtual network only has a single portgroup configured for untagged traffic. If you wanted to provide a virtual network that supported multiple VLANs, you could add more portgroups with the VLAN tags as I describe in my post on using VLANs with OVS and libvirt. You’d then modify the container’s XML definition to point to the appropriate portgroup, and in this way you could easily support running multiple containers across multiple VLANs on a single host.
  • A libvirt virtual network can only point to a single bridge, so if you wanted to support both bridged (as shown here) as well as tunneled connectivity (perhaps as described in my post on LXC, OVS, and GRE tunnels), you would need to create a second XML definition that creates a separate virtual network. You could then modify the container’s XML definition to point to the new network you just created.

In the bullets above, I mentioned modifying the container’s XML definition. In particular, I’m referring to the <interface type='network'> portion of the container’s XML definition. To use a libvirt network for a container’s network connectivity, you’d specify <source network='bridged'/> (replacing “bridged” with whatever the name of your virtual network is; I’m using the name provided in the sample XML code above). For multiple interfaces in the container, simply supply multiple <interface type='network'> entries in the container’s XML definition, and configure the source network for each of them appropriately.
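As a hypothetical fragment (using the network name “bridged” from the bullets above), the relevant portion of the container’s XML definition would look something like this:

```xml
<interface type='network'>
  <!-- "bridged" is the libvirt virtual network that front-ends OVS;
       add portgroup='...' to the source element to select a tagged portgroup -->
  <source network='bridged'/>
</interface>
```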

Hopefully this post provides some additional details and information on using libvirt to connect Linux containers to OVS. If you have any questions, or if you have more information to share on this topic, please feel free to speak up in the comments below. I encourage and welcome all courteous feedback!


In this post, I’ll discuss how you could use Open vSwitch (OVS) and GRE tunnels to connect bare metal workloads. While OVS is typically used in conjunction with a hypervisor such as KVM or Xen, you’re certainly not restricted to only using it on hypervisors. Similarly, while GRE tunnels are commonly used to connect VMs or containers, you’re definitely not restricted from using them with bare metal workloads as well. In this post, I’ll explore how you would go about connecting bare metal workloads over GRE tunnels managed by OVS.

This post, by the way, was sparked in part by a comment on my article on using GRE tunnels with OVS, in which the reader asked: “Is there a way to configure bare Linux (Ubuntu)…with OVS installed…to serve as a tunnel endpoint…?” Hopefully this post helps answer that question. (By the way, the key to understanding how this works is in understanding OVS traffic patterns. If you haven’t yet read my post on examining OVS traffic patterns, I highly recommend you go have a look right now. Seriously.)

Once you have OVS installed (maybe this is helpful?), then you need to create the right OVS configuration. That configuration can be described, at a high level, like this:

  • Assign an IP address to a physical interface. This interface will be considered the “tunnel endpoint,” and therefore should have an IP address that is correct for use on the transport network.
  • Create an OVS bridge that has no physical interfaces assigned.
  • Create an OVS internal interface on this OVS bridge, and assign it an IP address for use inside the GRE tunnel(s). This interface will be considered the primary interface for the OS instance.
  • Create the GRE tunnel for connecting to other tunnel endpoints.

Each of these areas is described in a bit more detail in the following sections.

Setting Up the Transport Interface

When setting up the physical interface—which I’ll refer to as the transport interface moving forward, since it is responsible for transporting the GRE tunnel across to the other endpoints—you’ll just need to use an IP address and routing entries that enable it to communicate with other tunnel endpoints.

Let’s assume that all of the tunnel endpoints will reside on a single transport subnet. On the bare metal OS instance, you’d configure a physical interface (I’ll assume eth0, but it could be any physical interface) to have an IP address on that subnet. You could do this automatically via DHCP or manually; the choice is yours. Other than ensuring that the bare metal OS instance can communicate with other tunnel endpoints, no additional configuration is required. (I’m using “required” as in “necessary to make it work.” You may want to increase the MTU on your physical interface and network equipment to accommodate the GRE headers and optimize performance, but that isn’t required to make it work.)
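As an illustration (192.168.1.0/24 and the addresses below are example values only), the manual approach might look like this:

```shell
# Assign an example transport-network address to the physical interface
ip addr add 192.168.1.10/24 dev eth0
ip link set eth0 up
# If other tunnel endpoints live on different subnets, add routes to reach them,
# e.g.: ip route add 192.168.2.0/24 via 192.168.1.1
```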

Once you have the transport interface configured and operational, you can move on to configuring OVS.

Configuring OVS

If you’ve been following along at home with all of my OVS-related posts (you can browse all posts using the OVS tag), you can probably guess what this will look like (hint: it will look a little bit like the configuration I described in my post on running host management through OVS). Nevertheless, I’ll walk through the configuration for the benefit of those who are new to OVS.

First, you’ll need to create an OVS bridge that has no physical interfaces—the so-called “isolated bridge” because it is isolated from the physical network. You can call this bridge whatever you want. I’ll use the name br-int (the “integration bridge”) because it’s commonly used in other environments like OpenStack and NVP/NSX.

To create the isolated bridge, use ovs-vsctl:

ovs-vsctl add-br br-int

Naturally, you would substitute whatever name you’d like to use in the above command. Once you’ve created the bridge, then add an OVS internal interface; this internal interface will become the bare metal workload’s primary network interface:

ovs-vsctl add-port br-int mgmt0 -- set interface mgmt0 type=internal

You can use a name other than mgmt0 if you so desire. Next, configure this new OVS internal interface at the operating system level, assigning it an IP address. This IP address should be taken from a subnet “inside” the GRE tunnel, because it is only via the GRE tunnel that you’ll want the workload to communicate.

The following commands will take care of this part for you:

ip addr add <IP address>/<prefix> dev mgmt0
ip link set mgmt0 up

The process of ensuring that the mgmt0 interface comes up automatically when the system boots is left as an exercise for the reader (hint: use /etc/network/interfaces).
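Picking up that hint, a sketch of the relevant /etc/network/interfaces stanza (the address is a placeholder on your in-tunnel subnet) might look something like this:

```
auto mgmt0
iface mgmt0 inet static
    address 10.0.0.10
    netmask 255.255.255.0
    # recreate the OVS internal port, if needed, before bringing the interface up
    pre-up ovs-vsctl --may-exist add-port br-int mgmt0 -- set interface mgmt0 type=internal
```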

At this point, the bare metal OS instance will have two network interfaces:

  • A physical interface (we’re assuming eth0) that is configured for use on the transport network. In other words, it has an IP address and routes necessary for communication with other tunnel endpoints.
  • An OVS internal interface (I’m using mgmt0) that is configured for use inside the GRE tunnel. In other words, it has an IP address and routes necessary to communicate with other workloads (bare metal, containers, VMs) via the OVS-hosted GRE tunnel(s).

Because the bare metal OS instance sees two interfaces (and therefore has visibility into the routes both “inside” and “outside” the tunnel), you may need to apply some policy routing configuration. See my introductory post on Linux policy routing if you need more information.
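For instance, a policy routing sketch along those lines (the table name/number and addresses are all placeholders) might look like this:

```shell
# Create a dedicated routing table for transport-network traffic
echo "200 transport" >> /etc/iproute2/rt_tables
# Traffic sourced from the transport interface's address uses that table...
ip rule add from 192.168.1.10 table transport
# ...with its own default route via the transport network's gateway
ip route add default via 192.168.1.1 table transport
```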

The final step is establishing the GRE tunnel.

Establishing the GRE Tunnel

The commands for establishing the GRE tunnel have been described numerous times, but once again I’ll walk through the process just for the sake of completeness. I’m assuming that you’ve already completed the steps in the previous section, and that you are using an OVS bridge named br-int.

First, add the GRE port to the bridge:

ovs-vsctl add-port br-int gre0

Next, configure the GRE interface on that port:

ovs-vsctl set interface gre0 type=gre options:remote_ip=<IP address of remote tunnel endpoint>

Let’s say that you’ve assigned 192.168.1.10 to the transport interface on this system (the bare metal OS instance), and that the remote tunnel endpoint (which could be a host with multiple containers, or a hypervisor running VMs) has a transport IP address of 192.168.1.20. (Both addresses are examples only; use whatever is appropriate for your transport network.) On the bare metal system, you’d configure the GRE interface like this:

ovs-vsctl set interface gre0 type=gre options:remote_ip=192.168.1.20

On the remote tunnel endpoint, you’d configure the GRE interface like this:

ovs-vsctl set interface gre0 type=gre options:remote_ip=192.168.1.10

In other words, each GRE interface points to the transport IP address on the opposite end of the tunnel.

Once the configuration on both ends is done, you should be able to go into the bare metal OS instance and ping an IP address inside the GRE tunnel. For example, I used this configuration to connect a bare metal Ubuntu 12.04 instance, a container running on an Ubuntu host, and a KVM VM running on an Ubuntu host (I had a full mesh topology with STP enabled, as described here). I was able to successfully ping between the bare metal OS instance, the container, and the VM, all inside the GRE tunnel.

Summary, Caveats, and Other Thoughts

While this configuration is interesting as a “proof of concept” that OVS and GRE tunnels can be used to connect bare metal OS instances and workloads, there are a number of considerations and/or caveats that you’ll want to think about before trying something like this in a production environment:

  • The bare metal OS instance has visibility both “inside” and “outside” the tunnel, so there isn’t an easy way to prevent the bare metal OS instance from communicating outside the tunnel to other entities. This might be OK—or it might not. It all depends on your requirements, and what you are trying to achieve. (In theory, you might be able to provide some isolation using network namespaces, but I haven’t tested this at all.)
  • If you want to create a full mesh topology of GRE tunnels, you’ll need to enable STP on OVS.
  • There’s nothing preventing you from attaching an OpenFlow controller to the OVS instances (including the OVS instance on the bare metal OS) and pushing flow rules down. This would eliminate the need for STP, since OVS won’t be in MAC learning mode. This means you could easily incorporate bare metal OS instances into a network virtualization-type environment. However…
  • There’s no easy way to provide a separation of OVS and the bare metal OS instance. This means that users who are legitimately allowed to make administrative changes to the bare metal OS instance could also make changes to OVS, which could easily “break” the configuration and cause problems. My personal view is that this is why you rarely see this sort of OVS configuration used in conjunction with bare metal workloads.
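The STP requirement mentioned above is a single setting per bridge; using the br-int bridge from this post:

```shell
# Enable spanning tree on the isolated bridge to break loops
# in a full mesh of GRE tunnels
ovs-vsctl set bridge br-int stp_enable=true
```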

I still see value in explaining how this works because it provides yet another example of how to configure OVS and how to use OVS to help provide advanced networking capabilities in a variety of environments and situations.

If you have any questions, I encourage you to add them in the comments below. Likewise, if I have overlooked something, made any mistakes, or if I’m just plain wrong, please speak up below (courteously, of course!). I welcome all useful/pertinent feedback and interaction.

