Nicira


Welcome to part 7 of the Learning NVP blog series, in which I will discuss transitioning from a focus on NVP to looking at NSX.

If you’re just now joining me for this series, here’s what’s transpired thus far:

  • In part 1, I introduced you to the high-level architecture of NVP.
  • In part 2, I walked you through setting up a cluster of NVP controllers.
  • In part 3, I showed you how to install and configure NVP Manager.
  • In part 4, I discussed how to add hypervisors (KVM hosts, in this case) to NVP.
  • In part 5, I created a logical network and attached VMs to that logical network.
  • In part 6, I showed you how to add an NVP gateway appliance to the environment.

When I first started this series back in May of this year, I said this:

Before continuing, it might be useful to set some context around NVP and NSX… The architecture I’m describing here will also be applicable to NSX, which VMware announced in early March. Because NSX will leverage NVP’s architecture, spending some time with NVP now will pay off with NSX later.

Well, the “later” that I referenced is now upon us. I had hoped to be much farther along with this blog series by now, but it has proven more difficult than I had anticipated to get this content written and published. Given that NSX officially GA’d last week at VMworld EMEA in Barcelona, I figured it was time to make the transition from NVP to NSX.

The way I’ll handle the transition from talking NVP to discussing VMware NSX is through an upgrade. I have a completely virtualized environment that is currently running all the NVP components: three controllers, NVP Manager, three nested hypervisors running Ubuntu+KVM+OVS, two gateways, and a service node. (I know, I know—I haven’t written about service nodes yet. Sorry.) The idea is to take you through the upgrade process, upgrading my environment from NVP 3.1.1 to NVP 3.2.1 and then to NSX 4.0.0. From that point forward, the series will change from “Learning NVP” to “Learning NSX”, and I’ll continue discussing all the topics that I have planned. These include (among others):

  • Deploying service nodes
  • Using an L2 gateway service
  • Using an L3 gateway service
  • Enabling distributed east-west routing
  • Many, many more topics…

Unfortunately, my travel schedule over the next few weeks is pretty hectic, which will probably limit my ability to move quickly on performing and documenting the upgrade process. Nevertheless, I will press forward as quickly as possible, so stay tuned to the site for more updates as soon as I’m able to get them published.

Questions? Comments? Feel free to add them below. All I ask for is common courtesy and disclosure of vendor affiliations, where applicable. Thanks!

Welcome to part 6 of the Learning NVP blog series. In this part, I’m going to show you how to add an NVP gateway appliance to your NVP environment. In future posts, you’ll use this NVP gateway to host either L2 or L3 gateway services (more on those in a moment). First, though, here’s a quick recap of what’s transpired so far:

  • In part 1, I introduced you to the high-level architecture of NVP.
  • In part 2, I walked you through setting up a cluster of NVP controllers.
  • In part 3, I showed you how to install and configure NVP Manager.
  • In part 4, I discussed how to add hypervisors (KVM hosts, in this case) to NVP.
  • In part 5, I created a logical network and attached VMs to that logical network.

In this part, I’m going to walk you through setting up an NVP gateway appliance. If you’ll recall from our introductory high-level architecture overview, the role of the gateway is to provide L2 (switched/bridged) and L3 (routed) connectivity between logical networks and physical networks. Adding a gateway, then, enables you to extend the logical network you created in part 5 to include either L2 or L3 connectivity to the outside world.

Many of you have probably seen some of the announcements from VMworld about NSX integrations from various networking suppliers (Arista, Brocade, Dell, and Juniper, for example). These announcements will allow NSX—which I’ve said before will leverage a great deal of NVP’s architecture—to use these hardware devices as L2 gateways, providing bridged/switched connectivity between logical networks and physical networks.

This post will focus only on getting the gateway appliance set up; in future posts, I’ll show you how to actually add the L2 or L3 connectivity to your logical network.

Building the NVP Gateway

The NVP gateway software is distributed as an ISO, like the NVP controller software. You’d typically install this software on a bare metal server, though recent releases of NVP also support installing the gateway in a VM (refer to the latest NVP release notes for more details). As with the NVP controllers and NVP Manager, the gateway is built on Ubuntu 12.04, and the installer process is completely automated. Once you boot from the ISO, the installation will proceed automatically; when completed, you’ll be left at the login prompt.

Configuring the NVP Gateway

Once the NVP gateway software is installed, configuring the gateway is really straightforward. In fact, it feels a lot like configuring NVP controllers (I suspect this is by design). Here are the steps:

  1. Set the password for the admin user (optional, but highly recommended).

  2. Set the hostname for the gateway appliance (also optional, but strongly recommended).

  3. Configure the network interfaces; you’ll need management, transport, and external connectivity. (I’ll explain those in more detail later.)

  4. Configure DNS and NTP settings.

Let’s take a closer look at these steps. The first step is to set the password for the admin user, which you can accomplish with this command:

set user admin password

From here, you can proceed with setting the hostname for the gateway:

set hostname <hostname>

(So far, these commands should be pretty familiar. They are the same commands used when you set up the NVP controllers and NVP Manager.)

The next step is to configure network connectivity; you’ll start by listing the available network interfaces with this command:

show network interfaces

As you’ve seen with the other NVP appliances, the NVP gateway software builds an Open vSwitch (OVS) bridge for each physical interface. In the case of a gateway, you’ll need at least three interfaces—a management interface, a transport network interface, and an external network interface. The diagram below provides a bit more context around how these interfaces are used:

NVP gateway appliance interfaces

Since these interfaces have very different responsibilities, it’s important that you configure them properly; otherwise, things won’t work as expected. Take the time to identify which interface listed in the show network interfaces output corresponds to each function. You’ll first want to establish management connectivity, so that should be the first interface to configure. Assuming that breth0 (the bridge matching the physical eth0 interface) is your management interface, you’ll configure it using this command:

set network interface breth0 ip config static 192.168.1.12 255.255.255.0

You’ll want to repeat this command for the other interfaces in the gateway, assigning appropriate IP addresses to each of them.
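
For example, assuming breth1 is your transport network interface and breth2 is your external network interface (your mapping may differ, so check the show network interfaces output), the commands might look like this, with purely illustrative addressing:

set network interface breth1 ip config static 10.20.30.12 255.255.255.0
set network interface breth2 ip config static 172.16.1.12 255.255.255.0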

You may also need to configure the routing for the gateway. Check the routing table(s) with this command:

show network routes

If there is no default route, you can set one using this command:

add network route 0.0.0.0 0.0.0.0 <Default gateway IP address>

Once the appropriate network connectivity has been established, then you can proceed with the next step: adding DNS and NTP servers. Here are the commands for this step:

add network dns-server <DNS server IP address>
add network ntp-server <NTP server IP address>

If you accidentally fat-finger an IP address or hostname along the way, use the remove network dns-server or remove network ntp-server command to remove the incorrect entry, then re-add it correctly with the commands above.
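
For example, correcting a mistyped DNS server entry might look like this (the addresses are illustrative):

remove network dns-server 192.168.1.235
add network dns-server 192.168.1.254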

Congrats! The NVP gateway appliance is now up and running. You’re ready to add it to NVP. Once it’s added to NVP, you’ll be able to use the gateway appliance to add gateway services to your logical networks.

Adding the Gateway to NVP

To add the new gateway appliance to NVP, you’ll use NVP Manager (I showed you how to set up NVP Manager in part 3 of the series). Once you’ve opened a web browser, navigated to the NVP Manager web UI, and logged in, then you can start the process of adding the gateway to NVP.

  1. Once you’re logged into NVP Manager, click on the Dashboard link across the top. (If you’re already at the Dashboard, you can skip this step.)

  2. In the Summary of Transport Components box, click the Connect & Add Transport Node button. This will open the Connect to Transport Node dialog box.

  3. Supply the management IP address of the gateway appliance, along with the appropriate username and password, then click Connect.

  4. After a moment, the Connect to Transport Node dialog box will show details of the gateway appliance, such as the interfaces, the bridges, the NIC bonds (if any), and the gateway’s SSL certificate. Click Configure at the bottom of the dialog box to continue.

  5. Supply a display name (something like nvp-gw-01) and, optionally, one or more tags. Click Next.

  6. Unless you know you need to select any of the options on the next screen (I’ll try to cover them in a later blog post), just click Next.

  7. On the final screen, you’ll need to establish connectivity to a transport zone. You’ll want to select the appropriate interface (in my example environment, it was breth2) and the appropriate encapsulation protocol (STT is generally recommended for connectivity back to hypervisors). Then select the appropriate transport zone from the drop-down list. In the end, you’ll have a screen that looks something like this (note that your interfaces, IP addresses, and transport zone information will likely be different):

  Adding a gateway to NVP

  8. Click Save to finish the process. The number of gateways listed in the Summary of Transport Components box should increment by 1 in the Registered column. However, the Active column will remain unchanged—that’s because there’s one more step needed.

  9. Back on the gateway appliance itself, run this command (you can use the IP address of any controller in the NVP controller cluster):

     set switch manager-cluster <NVP controller IP address>

  10. Back in NVP Manager, refresh the Summary of Transport Components box (there’s a small refresh icon in the corner), and you’ll see the Active column update to show the gateway appliance is now registered and active in NVP.

That’s it—you’re all done adding a gateway appliance to NVP. In future posts, you’ll leverage the gateway appliance to add L2 (bridged) and L3 (routed) connectivity in and out of logical networks. First, though, I’ll need to address the transition from NVP to NSX, so look for that coming soon. In the meantime, feel free to post any questions, thoughts, or suggestions in the comments below. I welcome all courteous comments (even if you disagree with something I’ve said!).

I’m back with more NVP goodness; this time, I’ll be walking you through the process of creating a logical network and attaching VMs to that logical network. This work builds on the stuff that has come before it in this series:

  • In part 1, I introduced you to the high-level architecture of NVP.
  • In part 2, I walked you through setting up a cluster of NVP controllers.
  • In part 3, I showed you how to install and configure NVP Manager.
  • In part 4, I discussed how to add hypervisors (KVM hosts, in this case) to your NVP environment.

Just a quick reminder in case you’ve forgotten: although VMware recently introduced VMware NSX at VMworld 2013, the architecture of NSX when used in a multi-hypervisor environment is very similar to what you can see today in NVP. (In pure vSphere environments, the NSX architecture is a bit different.) As a result, time spent with NVP now will pay off later when NSX becomes available. Might as well be a bit proactive, right?

At the end of part 4, I mentioned that I was going to revisit a few concepts before proceeding to the logical network piece, but after deeper consideration I’ve decided to proceed with creating a logical network. I still believe there will be a time when I need to stop and revisit some concepts, but it didn’t feel right just yet. Soon, I think.

Before I get into the details on how to create a logical network and attach VMs, I want to first talk about my assumptions regarding your environment.

Assumptions

This walk-through assumes that you have an NVP controller cluster up and running, an instance of NVP Manager connected to that cluster, at least 2 hypervisors installed and added to NVP, and at least 1 VM running on each hypervisor. I further assume that your environment is using KVM and libvirt.

Pursuant to these assumptions, my environment is running KVM on Ubuntu 12.04.2, with libvirt 1.0.2 installed from the Ubuntu Cloud Archive. I have the NVP controller cluster up and running, and an instance of NVP Manager connected to that cluster. I also have an NVP Gateway and an NVP Service Node, two additional components that I haven’t yet discussed. I’ll cover them in a near-future post.

Additionally, to make it easier for myself, I’ve created a libvirt network for the Open vSwitch (OVS) integration bridge, as outlined here (and an update here). This allows me to simply point virsh at the libvirt network, and the guest domain will attach itself to the integration bridge.
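
In case it’s helpful, here’s a minimal sketch of what such a libvirt network definition might look like; the network name is hypothetical, and br-int is the integration bridge created by the NVP build of OVS:

<network>
  <name>nvp-int-bridge</name>
  <forward mode='bridge'/>
  <bridge name='br-int'/>
  <virtualport type='openvswitch'/>
</network>

Save that to a file and load it with virsh net-define nvp-int-bridge.xml, then virsh net-start nvp-int-bridge (and virsh net-autostart nvp-int-bridge if you want it to persist across reboots). Guest domains can then reference the network by name instead of a raw bridge device.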

Revisiting Transport Zones

I showed you how to create a transport zone in part 4; it was necessary to have a transport zone present in order to add a hypervisor to NVP. But what is a transport zone? I didn’t explain it there, so let me do that now.

NVP uses the idea of transport zones to provide connectivity models based on the topology of the underlying network. For example, you might have hypervisors that connect to one network for management traffic, but use an entirely different network for VM traffic. The combination of a transport zone plus the transport connectors tells NVP how to form tunnels between hypervisors for the purposes of providing logical connectivity.

For example, consider this graphic:

The transport zones (TZ-01 and TZ-02) help NVP understand which interfaces on the hypervisors can communicate with which other interfaces on other hypervisors for the purposes of establishing overlay tunnels. These separate transport zones could be different trust zones, or just reflect the realities of connectivity via the underlying physical network.

Now that I’ve explained transport zones in a bit more detail, hopefully their role in adding hypervisors makes a bit more sense. You’ll also need a transport zone already created in order to create a logical switch, which is what I’ll show you next.

Creating the Logical Switch

Before I get started taking you through this process, I’d like to point out that this process is going to seem laborious. When you’re operating outside of a CMP such as CloudStack or OpenStack, using NVP will require you to do things manually that you might not have expected. So, keep in mind that NVP was designed to be integrated into a CMP, and what you’re seeing here is what it looks like without a CMP. Cool?

The first step is creating the logical switch. To do that, you’ll log into NVP Manager, which will dump you (by default) into the Dashboard. From there, in the Summary of Logical Components section, you’ll click the Add button to add a switch. To create a logical switch, there are four sections in the NVP Manager UI where you’ll need to supply various pieces of information:

  1. First, you’ll need to provide a display name for the new logical switch. Optionally, you can also specify any tags you’d like to assign to the new logical switch.
  2. Next, you’ll need to decide whether to use port isolation (sort of like PVLANs; I’ll come back to these later) and how you want to handle packet replication (for BUM traffic). For now, leave port isolation unchecked and (since I haven’t shown you how to set up a service node) leave packet replication set to Source Nodes.
  3. Third, you’ll need to select the transport zone to which this logical switch should be bound. As I described earlier, transport zones (along with connectors) help define connectivity between various NVP components.
  4. Finally, you’ll select the logical router, if any, to which this switch should be connected. We won’t be using a logical router here, so just leave that blank.

Once the logical switch is created, the next step is to add logical switch ports.

Adding Logical Switch Ports

Naturally, in order to connect to a logical switch, you need logical switch ports. You’ll add a logical switch port for each VM that needs to be connected to the logical switch.

To add a logical switch port, you’ll just click the Add button on the line for Switch Ports in the Summary of Logical Components section of the NVP Manager Dashboard. To create a logical switch port, you’ll need to provide the following information:

  1. You’ll need to select the logical switch to which this port will be added. The drop-down list will show all the logical switches; once one is selected, that switch’s UUID will automatically populate.
  2. The switch port needs a display name, and (optionally) one or more tags.
  3. In the Properties section, you can select a port number (leave blank for the next port), whether the port is administratively enabled, and whether or not there is a logical queue you’d like to assign (queues are used for QoS; leave it blank for no queue/no QoS).
  4. If you want to mirror traffic from one port to another, the Mirror Ports section is where you’ll configure that. Otherwise, just leave it all blank.
  5. The Attachment section is where you “plug” something into this logical switch port. I’ll come back to this—for now, just leave it blank.
  6. Under Port Security you can specify what address pairs are allowed to communicate with this port.
  7. Finally, under Security Profiles, you can attach an existing security profile to this logical port. Security profiles allow you to create ingress/egress access-control lists (ACLs) that are applied to logical switch ports.

In many cases, all you’ll need is the logical switch name, the display name for this logical switch port, and the attachment information. Speaking of attachment information, let’s take a closer look at attachments.

Editing Logical Switch Port Attachment

As I mentioned earlier, the attachment configuration is what “plugs” something into the logical switch. NVP logical switch ports support 6 different types of attachment:

  • None is exactly that—nothing. No attachment means an empty logical port.
  • VIF is used for connecting VMs to the logical switch.
  • Extended Network Bridge is a deprecated option for an older method of bridging logical and physical space. This has been replaced by L2 Gateway (below) and should not be used. (It will likely be removed in future NVP releases.)
  • Multi-Domain Interconnect (MDI) is used in specific configurations where you are federating multiple NVP domains.
  • L2 Gateway is used for connecting an L2 gateway service to the logical switch (this allows you to bring physical network space into logical network space). This is one I’ll discuss later when I talk about L2 gateways.
  • Patch is used to connect a logical switch to a logical router. I’ll discuss this in greater detail when I get around to talking about logical routing.

For now, I’m just going to focus on attaching VMs to the logical switch port, so you’ll only need to worry about the VIF attachment type. However, before we can attach a VM to the logical switch, you’ll first need a VM powered on and attached to the integration bridge. (Hint: If you’re using KVM, use virsh start <VM name> to start the VM. Or just read this.)

Once you have a VM powered on, you’ll need to be sure you know the specific OVS port on that hypervisor to which the VM is attached. To do that, you would use ovs-vsctl show to get a list of the VM ports (typically designated as vnetX), and then use ovs-vsctl list port vnetX to get specific details about that port. Here’s the output you might get from that command:

In particular, note the external_ids row, where it stores the MAC address of the attached VM. You can use this to ensure you know which VM is mapped to which OVS port.
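
If you have several VMs running on the hypervisor, a quick way to cross-check the mapping is to ask libvirt which vnetX interface and MAC address belong to a given guest, then inspect that specific port in OVS. For example (the VM name here is hypothetical):

virsh domiflist web01
sudo ovs-vsctl list port vnet0

The first command lists the guest’s interface name and MAC address; the second shows the corresponding OVS port details, including the external_ids values mentioned above.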

Once you have the mapping information, you can go back to NVP Manager, select Network Components > Logical Switch Ports from the menu, and then highlight the empty logical switch port you’d like to edit. There is a gear icon at the far right of the row; click that and select Edit. Then click “4. Attachment” to edit the attachment type for that particular logical switch port. From there, it’s pretty straightforward:

  1. Select “VIF” from the Attachment Type drop-down.
  2. Select your specific hypervisor (must already be attached to NVP per part 4) from the Hypervisor drop-down.
  3. Select the OVS port (which you verified just a moment ago) using the VIF drop-down.

Click Save, and that’s it—your VM is now attached to an NVP logical network! A single VM attached to a logical network all by itself is no fun, so repeat this process (start up the VM if not already running, verify the OVS port, create a logical switch port [if needed], edit the attachment) to attach a few other VMs to the same logical network. Just for fun, be sure that at least one of the other VMs is on a different hypervisor—this will ensure that you have an overlay tunnel created between the hypervisors. That’s something I’ll be discussing in a near-future post (possibly part 6, maybe part 7).

Once your VMs are attached to the logical network, assign IP addresses to them (there’s no DHCP in your logical network, unless you installed a DHCP server on one of your VMs) and test connectivity. If everything went as expected, you should be able to ping VMs, SSH from one to another, etc., all within the confines of the new NVP logical network you just created.
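
As a quick sketch of that final test (the addressing is purely illustrative):

# inside the first VM
sudo ip addr add 10.10.10.1/24 dev eth0
sudo ip link set eth0 up

# inside a second VM on a different hypervisor
sudo ip addr add 10.10.10.2/24 dev eth0
sudo ip link set eth0 up
ping -c 3 10.10.10.1

If the ping succeeds, traffic is passing across the overlay tunnel between the two hypervisors.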

There’s so much more to show you yet, but I’ll wrap this up here—this post is already way too long. Feel free to post any questions, corrections, or clarifications in the comments below. Courteous comments (with vendor disclosure, where applicable) are always welcome!

This is part 4 of the Learning NVP blog series. Just to quickly recap what’s happened so far, in part 1 I provided the high-level architecture of NVP and discussed the role of the components in broad terms. In part 2, I focused on the NVP controllers, showing you how to build/configure the NVP controllers and establish a controller cluster. Part 3 focused on NVP Manager, which allowed us to perform a few additional NVP configuration tasks. In this post, I’ll show you how to add hypervisors to NVP so that you can turn up your first logical network.

Assumptions

In this post, I’m using Ubuntu 12.04.2 LTS with the KVM hypervisor. I’m assuming that you’ve already gone through the process of getting KVM installed on your Linux host; if you need help with that, a quick Google search should turn up plenty of “how to” articles (it’s basically a sudo apt-get install kvm operation). If you are using a different Linux distribution or a different hypervisor, the commands you’ll use as well as the names of the prerequisite packages you’ll need to install will vary slightly. Please keep that in mind.
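
If you do want a quick reference, the basic installation on Ubuntu 12.04 looks something like this (package names will differ on other distributions):

sudo apt-get update
sudo apt-get install qemu-kvm cpu-checker
kvm-ok

The kvm-ok utility simply confirms that hardware virtualization support is present and enabled on the host.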

Installing Prerequisites

To get a version of libvirt that supports Open vSwitch (OVS), you’ll want to enable the Ubuntu Cloud Archive. The Ubuntu Cloud Archive is technically intended to allow users of Ubuntu LTS releases to install newer versions of OpenStack and its dependent packages (including libvirt). Instructions for enabling and using the Ubuntu Cloud Archive are found here. However, I’ve found using the Ubuntu Cloud Archive to be an easy way to get a newer version of libvirt (version 1.0.2) on Ubuntu 12.04 LTS.
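
For reference, enabling the Cloud Archive on Ubuntu 12.04 looked something like this at the time (the grizzly pocket shown here is an assumption based on the libvirt 1.0.2 requirement, so double-check the Cloud Archive documentation for the repository appropriate to your situation):

sudo apt-get install ubuntu-cloud-keyring
echo "deb http://ubuntu-cloud.archive.canonical.com/ubuntu precise-updates/grizzly main" | \
  sudo tee /etc/apt/sources.list.d/ubuntu-cloud-archive.list
sudo apt-get update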

Once you get the Ubuntu Cloud Archive working, go ahead and install libvirt:

sudo apt-get install libvirt-bin

Next, go ahead and install some prerequisite packages you’ll need to get OVS installed and working:

sudo apt-get install dkms make libc6-dev

Now you’re ready to install OVS.

Installing OVS

Once your hypervisor node has the appropriate prerequisites installed, you’ll need to install an NVP-specific build of OVS. This build of OVS is identical to the open source build in every way except that it adds the ability to create STT tunnels and includes some extra NVP-specific utilities for integrating OVS into NVP. For Ubuntu, this NVP-specific version of OVS is distributed as a compressed tar archive. First, you’ll need to extract the files like this:

tar -xvzf <file name>

This will extract a set of Debian packages. For ease of use, I recommend moving these files into a separate directory. Once the files are in their own directory, you would install them like this:

cd <directory where the files are stored>
sudo dpkg -i *.deb

Note that if you don’t install the prerequisites listed above, the installation of the DKMS package (for the OVS kernel datapath) will fail. Trying to then run apt-get install <package list> at that point will also fail; you’ll need to run apt-get -f install. This will “fix” the broken packages and allow the DKMS installation to proceed (which it will do automatically).
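
Putting the extraction and installation steps together, the whole process might look like this (the archive file name and directory are hypothetical, and the apt-get -f install fallback covers the dependency failure described above):

mkdir ~/nvp-ovs
tar -xvzf nvp-ovs-<version>.tgz -C ~/nvp-ovs
cd ~/nvp-ovs
sudo dpkg -i *.deb || sudo apt-get -f install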

These particular OVS installation packages do a couple of different things:

  • They install OVS (of course), including the kernel module.
  • They automatically create and configure something called the integration bridge, which I’ll describe in more detail in a few moments.
  • They automatically generate some self-signed certificates you’ll need later.

Now that you have OVS installed, you’re ready for the final step: to add the hypervisor to NVP.

Adding the Hypervisor to NVP

For this process, you’ll need access to NVP Manager as well as SSH access to the hypervisor. I strongly recommend using the same system for access to both NVP Manager and the hypervisor via SSH, as this will make things easier.

Adding the hypervisor to NVP is a two-step process. First, you’ll configure the hypervisor; second, you’ll configure NVP. Let’s start with the hypervisor.

Verifying the Hypervisor Configuration

Before you can add the hypervisor to NVP, you’ll need to be sure it’s configured correctly, so I’ll walk you through a couple of verification steps. NVP expects there to be an integration bridge, an OVS bridge that it will control. This integration bridge will be separate from any other bridges configured on the system, so any bridge you’re using (or will use) outside of NVP remains separate. I’ll probably expand upon this in more detail in a future post, but for now let’s just verify that the integration bridge exists and is configured properly.

To verify the presence of the integration bridge, use the command ovs-vsctl show (you might need to use sudo to get the appropriate permissions). The output of the command should look something like this:

Note the bridge named “br-int”—this is the default name for the integration bridge. As you can see by this output, the integration bridge does indeed exist. However, you must also verify that the integration bridge is configured appropriately. For that, you’ll use the ovs-vsctl list bridge br-int command, which will produce output that looks something like this:

Now there’s a lot of goodness here (“Hey, look—NetFlow support! Mirrors! sFlow! STP support even!”), but try to stay focused. I want to draw your attention to the external_ids line, where the value “bridge-id” has been set to “br-int”. This is exactly what you want to see, and I’ll explain why in just a moment.
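
If you’d rather check just that one value without wading through the full output, OVS provides a shortcut, along with a way to set the value should it ever be missing (normally the NVP packages take care of this for you):

sudo ovs-vsctl br-get-external-id br-int bridge-id
sudo ovs-vsctl br-set-external-id br-int bridge-id br-int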

One final verification step is needed. NVP uses self-signed certificates to authenticate hypervisors, so you’ll need to be sure that the certificates have been generated (they should have been generated during the installation of OVS). You can verify by running ls -la /etc/openvswitch and looking for the file “ovsclient-cert.pem”. If it’s there, you should be good to go.

Next, you’ll need to do a couple of things in NVP Manager.

Create a Transport Zone

Before you can actually add the hypervisor to NVP, you first need to ensure that you have a transport zone defined. I’ll assume that you don’t and walk you through creating one.

  1. In NVP Manager (log in if you aren’t already logged in), select Network Components > Transport Zones.
  2. In the Network Components Query Results, you’ll probably see no transport zones listed. Click Add.
  3. In the Create Transport Zone dialog, specify a name for the transport zone, and optionally add tags (tags are used for searching and sorting information in NVP Manager).
  4. Click Save.

That’s it, but it’s an important bit. I’ll explain more in a moment.

Add the Hypervisor

Now that you’ve verified the hypervisor configuration and created a transport zone, you’re ready to add the hypervisor to NVP. Here’s how.

  1. Log into NVP Manager, if you aren’t already, and click on Dashboard in the upper left corner.
  2. In the Summary of Transport Components section, click the Add button on the line for Hypervisors.
  3. Ensure that Hypervisor is listed in the Transport Node Type drop-down list, then click Next.
  4. Specify a display name (I use the hostname of the hypervisor itself), and—optionally—add one or more tags. (Tags are used for searching/sorting data in NVP Manager.) Click Next.
  5. In the Integration Bridge ID text box, type “br-int”. (Do you know why? It’s not because that’s the name of the integration bridge. It’s because that’s the value set in the external_ids section of the integration bridge.) Be sure that Admin Status Enabled is checked, and optionally you can check Tunnel Keep-Alive Spray. Click Next to continue.
  6. Next, you need to authenticate the hypervisor to NVP. This is a multi-step process. First, switch over to the SSH session with the hypervisor (you still have it open, right?) and run cat /etc/openvswitch/ovsclient-cert.pem. This will output the contents of the OVS client certificate to the screen. Copy everything between the BEGIN CERTIFICATE and the END CERTIFICATE lines, including those lines.
  7. Flip back over to NVP Manager. Ensure that Security Certificate is listed, then paste the clipboard contents into the Security Certificate box. The red X on the left of the dialog box should change to a green check mark, and you can click Next to continue.
  8. The final step is creating a transport connector. Click Add Connector.
  9. In the Create Transport Connector dialog, select STT as the Transport Type.
  10. Select the transport zone you created earlier, if it isn’t already populated in the Transport Zone UUID drop-down.
  11. Specify the IP address of the interface on the hypervisor that will be used as the source for all tunnel traffic. This is generally not the management interface. Click OK.
  12. Click Save.

This should return you to the NVP Manager Dashboard, where you’ll see the Hypervisors line in the Summary of Transport Components go from 0 to 1 (both in the Registered and the Active columns). You can refresh the display of that section only by clicking the little circular arrow button. You should also see an entry appear in the Hypervisor Software Version Summary section. This screen shot shows you what the dashboard would look like after adding 3 hypervisors:

NVP Manager dashboard

(Side note: This dashboard shows a gateway and service node also added, though I haven’t discussed those yet. It also shows a logical switch and logical switch ports, though I haven’t discussed those either. Be patient—they’re coming. I can only write so fast.)

Congratulations—you’ve added your first hypervisor to NVP! In the next part, I’m going to take a moment to remind readers of a few concepts that I’ve covered in the past, and show how those concepts relate to NVP. From there, we’ll pick back up with adding our first logical network and establishing connectivity between VMs on different hypervisors.

Until that time, feel free to post any questions, thoughts, corrections, or clarifications in the comments below. Please disclose vendor affiliations, where appropriate, and—as always—I welcome all courteous comments.

Welcome to part 3 of the Learning NVP blog series. In part 1, I provided an overview of the high-level architecture of NVP and discussed the role of the components in broad terms. In part 2, I focused on the NVP controllers, showing you how to build/configure the NVP controllers and establish a controller cluster. In this part, I’m going to turn my attention to NVP Manager, which will allow us to further configure NVP.

Reviewing the Role of the NVP Manager

NVP, as you may recall, was expressly designed to integrate with a variety of cloud management platforms (CMPs) via a set of northbound REST APIs. Those northbound REST APIs are actually implemented within the NVP controllers—not within NVP Manager. NVP Manager provides a web-based GUI that is used for certain NVP configuration tasks, such as:

  • Adding hypervisors
  • Adding transport nodes (gateways and service nodes)
  • Configuring transport zones
  • Troubleshooting and information gathering

NVP Manager is really more about the configuration of NVP and not the operation of NVP. To put it another way, you’d use NVP Manager to manage the components in NVP, like gateways and hypervisors, but the actual use of NVP—creating logical networks, logical routers, etc.—would generally be done from within the CMP via REST API calls to the NVP controllers. That being said, I’ll be using NVP Manager to do some of the things that would normally be handled by the CMP, simply because I don’t have a CMP instance to use.
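
To make that distinction a bit more concrete, here’s a rough sketch of what a CMP-style interaction with the controllers’ northbound REST API might look like using curl. Treat the resource paths, credentials, and IP address as illustrative only; the NVP API guide is the authoritative reference:

# authenticate to a controller and save the session cookie
curl -k -c cookies.txt -X POST -d 'username=admin&password=admin' https://192.168.1.5/ws.v1/login

# list logical switches using the saved session cookie
curl -k -b cookies.txt https://192.168.1.5/ws.v1/lswitch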

Now that you have a better idea of the role of NVP Manager, let’s have a look at building and configuring NVP Manager.

Building NVP Manager

Like the NVP controllers, NVP Manager is distributed as an ISO. In a production environment, you’d use the ISO to burn optical disks that, in turn, are used to build NVP Manager. In production environments today, NVP Manager is installed on a bare metal server, but it is certainly possible to run NVP Manager in a VM (that’s what I’ll be doing; I’ll actually be running it as a VM in an OpenStack cloud).

To build NVP Manager, just boot from the install media and allow the automated installation routine to complete. Like the controllers, NVP Manager runs on Ubuntu Server 12.04, and the installation routine automatically installs all the necessary components. After the reboot, NVP Manager will boot up to a login prompt. Once you log in, you’ll be dropped into the NVP CLI with an unconfigured NVP Manager instance.

Configuring NVP Manager

Configuring NVP Manager, like configuring the NVP controllers, is actually pretty straightforward:

  1. Set a password for the admin user (optional, but highly recommended).

  2. Set the hostname for NVP Manager (also optional, but highly recommended).

  3. Assign an IP address to NVP Manager so it can communicate with the NVP controller cluster.

  4. Add an NVP controller cluster to NVP Manager.

Let’s take a look at the commands needed to accomplish these steps.

First, setting the password for the admin user is done on NVP Manager just like it was on the NVP controllers:

set user admin password

You’ll be prompted to supply the new password, then retype it for confirmation. Next you can set the hostname, again using the same command as on the NVP controllers:

set hostname <hostname>

As with the NVP controllers, the NVP Manager installation routine automatically installs Open vSwitch (OVS) and creates a bridge interface for each physical interface. In my virtual NVP Manager instance, I provided only a single interface (recognized as eth0), so the installation created a bridge interface named breth0. You can assign the IP address to this bridge interface using this command:

set network interface breth0 ip config static 192.168.1.3 255.255.255.0

Naturally, you’d want to substitute the correct IP address and subnet mask in that command. Now that network connectivity has been established (you can test it with ping), you can add DNS and NTP servers with these commands:

add network dns-server <DNS server IP address>
add network ntp-server <NTP server IP address>

Repeat the commands to add multiple DNS and/or NTP servers. Use remove instead of add in the above commands to remove a DNS or NTP server address (especially useful if you typed in the wrong address).

One quick note: if you switch the interface from DHCP to static configuration, the DNS settings still show up in the CLI—but they don’t work. To fix it, use the commands clear network dns-servers and clear network dns-search-domains, then add those settings back in with the appropriate add command. This is a known issue with this release of NVP (3.1.1) and will be addressed in a future release.

Once full network connectivity has been established, we can move past the CLI and start using NVP Manager’s web interface. The first task we’ll need to accomplish is adding the NVP controller cluster we built in part 2. Here’s how to add the NVP controller cluster to NVP Manager:

  1. Log into the NVP Manager web GUI.
  2. Because there is no controller cluster, NVP Manager will automatically take you to the screen where you can add a cluster. Click Add Cluster.
  3. In the Connect to NVP Controller Cluster dialog, supply an IP address of one of the controllers in the cluster along with the appropriate username and password.
  4. Click Connect.
  5. Supply a name for this controller cluster, along with an (optional) description and contact.
  6. The “Automatically Use New IPs” box is checked by default; this tells NVP Manager to add all the IP addresses of this controller cluster as eligible to receive API calls from the NVP Manager. Recall from part 2 that it is possible to configure multiple interfaces on the controllers and designate certain interfaces to handle API calls, manage OVS devices, etc. If your configuration is such that only certain IP addresses should receive API calls, then uncheck this box.
  7. The “Export Logical Stats” checkbox allows you to configure the controllers to make this NVP Manager the collector of logical port statistics. Checking this box is normally the recommended setting, and generally no changes to the default settings are needed. This allows you to query (either via the NVP Manager GUI or via the API) for logical port statistics.
  8. The “Make Active Cluster” box should be checked.
  9. Click Save.
  10. Click Use This NVP Manager to configure the controllers to use the NVP Manager as their userlog server. Click Configure.

This should complete the process of adding the controller cluster to NVP Manager.

Now that NVP Manager is connected to the controller cluster, you can click Dashboard in the header across the top of the NVP Manager web GUI and get a screen that looks something like this:

NVP Manager dashboard

At this point, you now have a 3-node cluster of NVP controllers and an instance of NVP Manager connected to that cluster. The next step, which I’ll tackle in the next part of this series, is adding some hypervisors.

Until then, I encourage everyone to post any questions, suggestions, or clarifications in the comments below. Courteous comments are always welcome.

Welcome to part 2 of the Learning NVP blog series. In part 1 of the series, I provided a high-level overview of the NVP architecture and components. In this post, I’ll dive into a bit more detail on one key component of the architecture: the NVP controllers and controller cluster.

(Just a quick reminder: although I’m talking about NVP 3.1—our publicly available, GA product already running in production with customers—it’s important to remember that the NVP architecture serves as the basis for the upcoming VMware NSX product. Time spent learning NVP will be beneficial in understanding NSX later.)

I’ll start with first reviewing the role of the NVP controllers in the overall architecture.

Reviewing the Role of the NVP Controllers

As I mentioned in part 1, the purpose of the NVP controllers is to handle computing the network topology and providing configuration and flow information to create logical networks. They do this by managing all Open vSwitch (OVS) devices and enforcing consistency between the logical network view (which is defined via the northbound NVP API) and the transport network view as implemented by the programmable edge virtual switches.

To break this down a bit more:

  • If a change occurs in the transport network—this would be something like a VM starting, a VM powering off, or a VM migrating to a different host—the controllers update the necessary forwarding rules and state in the transport network so that the connectivity of a VM is consistent with the logical view. For example, when a new VM that should be part of a particular logical network powers on, the controller cluster will update the necessary forwarding rules so that the new VM has the connectivity appropriate for a member of that particular logical network.
  • Similarly, if an API request changes the configuration of the logical port used by a VM, the controllers modify the forwarding rules and state of all relevant devices in the transport network to ensure consistency with the API-driven change.

NVP requires 3 controllers configured as a controller cluster; this makes it possible to distribute work across the controllers and provides the high availability needed for the functions the controllers perform.

OK, now that I’ve reviewed the role of the controllers, let’s look at the controller build process.

Building the NVP Controllers

VMware distributes the NVP controller software as an ISO. In a production deployment, you would use this ISO to burn optical discs that, in turn, are used to build NVP controllers on bare metal servers. The system requirements for the NVP controllers, as outlined in the NVP 3.1 release notes, are 8 CPU cores, 64-bit CPUs, 64GB of RAM, 128GB of local hard disk space, and a NIC of at least 1 Gbps. While it’s certainly possible to run the NVP controllers as VMs (this is what I’m doing), this is not currently supported for production environments.

The NVP controllers run a build of Ubuntu Server 12.04, so when you boot a system from the ISO (or from an optical disc created from the ISO) it will run through a custom install of Ubuntu Server 12.04. The NVP controller software packages are slipstreamed into the install—if you’re careful you’ll see references to them during the install process—so when the custom Ubuntu installer is done you’re left with a blank NVP controller that is ready to configure.

Once you’ve booted from the install media and installed the NVP controller software, getting an NVP controller up and running is actually pretty straightforward. Here are the steps:

  1. Set the password for the admin user (optional, but highly recommended).

  2. Set the hostname for the controller (also optional, but highly recommended).

  3. Assign an IP address to the controller so that it can communicate across the network. (Obviously this is a fairly important step.)

  4. Configure DNS and NTP settings.

  5. Set the controller’s management IP address (more on this in a moment).

  6. Set the controller’s switch manager and API provider IP addresses (more on that in a moment).

Let’s take a look at these steps in a bit more detail. The NVP controller offers users a streamlined command-line interface (CLI) with context-sensitive help. If you get stuck with any of the commands, just press Tab to autocomplete the command, or press Tab twice to see a list of completion options.

To set the password for the default admin user, just use this command:

set user admin password

You’ll be prompted to supply the new password, then retype it for confirmation. Easy, right? (And pretty familiar if you’ve used Linux before.)

Setting the hostname for the controller is equally straightforward:

set hostname <hostname>

Now you’re ready to assign an IP address to the controller. Use this command to see the network interfaces that are present in the controller:

show network interfaces

You’ll note that for each physical interface in the system, the NVP installation procedure will create a corresponding bridge (this is actually an OVS bridge). So, for a server that has two interfaces (eth0 and eth1), the installation process will automatically create breth0 and breth1. Generally, you’ll want to assign your IP addresses to the bridge interfaces, and not to the physical interfaces.

Let’s say that you wanted to assign the IP address to breth0, which corresponds to the physical eth0 interface. You’d use this command:

set network interface breth0 ip config static 192.168.1.5 255.255.255.0

Naturally, you’d want to substitute the correct IP address and subnet mask in that command. Once the interface is configured, you can use the standard ping command to test connectivity (note, though, that you can’t pass any command-line switches to ping, as they aren’t supported by the streamlined NVP controller CLI).

Note that you may also need to add a default route using this command:

add network route 0.0.0.0 0.0.0.0 <Default gateway IP address>

Assuming connectivity is good, you’re ready to add DNS and NTP servers to your configuration. Use these commands:

add network dns-server <DNS server IP address>
add network ntp-server <NTP server IP address>

Repeat these commands as needed to add multiple DNS and/or NTP servers. If you mess up and accidentally fat finger an IP address (happens to me all the time!), you can remove the incorrect IP address using the remove command, like this:

remove network dns-server <Incorrect DNS IP address>

Substitute ntp-server for dns-server in the above command to remove an incorrect NTP server address.

It’s entirely possible that an NVP controller could have multiple IP addresses assigned, so the next few commands will tell the controller which IP address to use for various functions. This allows you to spread certain traffic types across different interfaces (and potentially different networks), should you so desire.

First, set the IP address the controller should use for management traffic:

set control-cluster management-address <IP address>

Then tell the NVP controller which IP address to use for the switch manager role (this is the role that communicates with OVS devices):

set control-cluster role switch_manager listen-ip <IP address>

And tell the controller which IP address to use for the API provider role (this is the role that handles northbound REST API traffic):

set control-cluster role api_provider listen-ip <IP address>

Once all this is done, you’re ready to turn up the controller cluster using the join control-cluster command. For the first controller, you’ll “join” it to itself. In the event the NVP controller has multiple IP addresses assigned, the IP address to use is the IP address you specified when you set the management IP address earlier. Here’s the command to build a controller cluster from the first controller:

join control-cluster <Own IP address>

For the second and third controllers in the cluster, you’ll point them to the IP address of the first controller in the cluster, like this:

join control-cluster <IP address of first controller>

(Side note: you can actually point the third controller to any available node in the cluster. I specified the first controller here just for succinctness.)
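
To recap, the full sequence for standing up the first controller ends up looking something like this (the addresses are illustrative, and this assumes a single interface where breth0 handles all traffic types):

set user admin password
set hostname nvp-controller-01
set network interface breth0 ip config static 192.168.1.5 255.255.255.0
add network dns-server 192.168.1.254
add network ntp-server 192.168.1.254
set control-cluster management-address 192.168.1.5
set control-cluster role switch_manager listen-ip 192.168.1.5
set control-cluster role api_provider listen-ip 192.168.1.5
join control-cluster 192.168.1.5

The second and third controllers follow the same pattern with their own addresses, except that their join control-cluster command points at 192.168.1.5 (or any other existing member of the cluster) rather than at themselves.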

Once the process of joining the controller cluster is done, you can check the status of the cluster in a couple of different ways. First, you can use the show control-cluster status command, which will tell you if this node is connected to the cluster majority (as well as if this controller can be safely restarted). You can also use the show control-cluster startup-nodes command, which lists all the controllers that are members of the cluster.

The output of both these commands is illustrated below.

Status and startup nodes commands

If you want to get a feel for the types of communication the NVP controllers will use, you can also use the show control-cluster connections command, which produces output that looks something like this:

Connections summary

Once the controller cluster is up and running, you’re ready to move on to adding other components of NVP to the environment. In the next part, I’ll walk through setting up NVP Manager, which will then allow us to continue with setting up NVP by adding gateways, service nodes, and hypervisors.

In the meantime, feel free to post any questions or thoughts in the comments below. Courteous comments (with vendor disclosure, where applicable) are always welcome.

This blog post kicks off a new series of posts describing my journey to become more knowledgeable about the Nicira Network Virtualization Platform (NVP). NVP is, in my opinion, an awesome platform, but there hasn’t been a great deal of information shared about the product, how it works, how you configure it, etc. That’s something I’m going to try to address in this series of posts. In this first post, I’ll start with a high-level description of the NVP architecture. Don’t worry—more in-depth information will come in future posts.

Before continuing, it might be useful to set some context around NVP and NSX. This series of posts will focus on NVP—a product that is available today and is currently in use in production. The architecture I’m describing here will also be applicable to NSX, which VMware announced in early March. Because NSX will leverage NVP’s architecture, spending some time with NVP now will pay off with NSX later. Make sense?

Let’s start with a figure. The diagram below graphically illustrates the NVP architecture at a high level:

High-level NVP architecture diagram

The key components of the NVP architecture include:

  • A scale-out controller cluster: The NVP controllers handle computing the network topology and providing configuration and flow information to create logical networks. The controllers support a scale-out model for high availability and increased scalability. The controller cluster supplies a northbound REST API that can be consumed by cloud management platforms such as OpenStack or CloudStack, or by home-grown cloud management systems.
  • A programmable virtual switch: NVP leverages Open vSwitch (OVS), an independent open source project with contributors from across the industry, to fill this role. OVS communicates with the NVP controller clusters to receive configuration and flow information.
  • Southbound communications protocols: NVP uses two open communications protocols to communicate southbound to OVS. For configuration information, NVP leverages OVSDB; for flow information, NVP uses OpenFlow. The management (OVSDB) communication between the controller cluster and OVS is encrypted using SSL.
  • Gateways: Gateways provide the “on-ramp” to enter or exit NVP logical networks. Gateways can provide either L2 gateway services (to bridge NVP logical networks onto physical networks) as well as L3 gateway services (to route between NVP logical networks and physical networks). In either case, the gateways are also built using a scale-out model that provides high availability and scalability for the L2 and L3 gateway services they host.
  • Encapsulation protocol: To provide full independence and isolation of logical networks from the underlying physical networks, NVP uses encapsulation protocols for transporting logical network traffic across physical networks. Currently, NVP supports both Generic Routing Encapsulation (GRE) and Stateless Transport Tunneling (STT), with additional encapsulation protocols planned for future releases.
  • Service nodes: To offload the handling of BUM (Broadcast, Unknown Unicast, and Multicast) traffic, NVP can optionally leverage one or more service nodes. Note that service nodes are optional; customers can choose to have BUM traffic handled locally on each hypervisor node. (Note that service nodes are not shown in the diagram above.)

Now that you have an idea of the high-level architecture, let me briefly outline how the rest of this series will be organized. The basic outline of this series will roughly correspond to how NVP would be deployed in a real-world environment.

  1. In the next post (or two), I’ll be focusing on getting the controller cluster built and diving a bit deeper into the controller cluster architecture.
  2. Once the controller cluster is up and running, I’ll take a look at getting NVP Manager up and running. NVP Manager is an application that consumes the northbound REST APIs from the controller cluster in order to view and manage NVP logical networks and NVP components. In most cases, this function is part of a cloud management platform (such as OpenStack or CloudStack), but using NVP Manager here allows me to focus on NVP instead of worrying about the details of the cloud management platform itself.
  3. The next step will be to bring hypervisor nodes into NVP. I’ll focus on using nodes running KVM, but keep in mind that Xen is also supported by NVP. If time (and resources) permit, I may try to look at bringing up Xen-based hypervisor nodes as well. Because NVP leverages OVS as the edge virtual switch, I’ll naturally be discussing some OVS-related tasks and topics as well.
  4. Following the addition of hypervisor nodes into NVP, I’ll look at creating a simple logical network, and we’ll examine how this logical network works with the underlying physical network.
  5. To add more flexibility to our logical network, we need to be able to bring physical resources into NVP logical networks. To enable that functionality, we’ll need to add gateways and gateway services to our configuration, so I’ll discuss gateways and L2 gateway services, how they work, and how we add them to an NVP configuration.
  6. The next step is to enable L3 (routing) functionality within NVP, and that is enabled by L3 gateway services. I’ll spend some time talking about the L3 gateway services, their architecture, adding them to NVP, and including L3 functionality in an NVP logical network. I’ll also explore distributed L3 routing, where the L3 routing is actually distributed across hypervisors in an NVP environment (this is a new feature just added in NVP 3.1).
  7. Now that we have both L2 and L3 gateway services in NVP, I’ll take a look at building more intricate logical networks.

Beyond that, it’s hard to say where the series will go. I’ll likely also take a look at some of NVP’s security features, and examine a few more complex NVP use cases. If there are additional topics you’d like to see beyond what I’ve outlined above, please feel free to speak up in the comments below.

I’m excited about this journey to learn NVP in more detail, and I’m looking forward to taking all of you along with me. Ready? Let’s go!

Life at VMware, Two Weeks In

Today marks my “two week anniversary” in my new role at VMware. So far, it’s been everything that I thought it would be—exciting, but also challenging.

My entire first week was taken up by new hire onboarding and some training on Nicira’s Network Virtualization Platform (NVP). I was pleased to find that the time and effort I’d spent familiarizing myself with OpenFlow and Open vSwitch (OVS) proved quite useful in hitting the ground running with NVP. There is still much to learn, naturally, but I feel like I have a good foundation upon which I can build.

I was also fortunate during my first week to have the opportunity to jump right into some important projects. Some of them I can’t discuss right now (naturally), but I can mention the joint session proposal I helped create for the April OpenStack Summit. It’s a joint presentation with VMware (me) and Canonical (James Page) talking about the improved vSphere support in OpenStack (including a demo!). Hopefully the session will get selected, but it looks like I’ll be at the OpenStack Summit in April either way. That’s pretty exciting.

I spent the majority of my second week getting settled into new procedures, new processes, and new tools. It’s no secret that VMware uses Socialcast internally, and I’m still wrestling with if/how to take advantage of such a tool. Other than that, it’s just a matter of becoming familiar with the tools and where they are located.

One key takeaway so far is that I need to deepen my networking knowledge. It’s clear that I really need to dig into a few key areas, like leaf/spine and L3 ECMP network designs. I’ve already started applying some of the techniques I’ve discussed in my presentations—grammar (terminology), logic (how), rhetoric (why)—to these topics, but the real challenge is finding good information sources. I have some incredible coworkers, but I can’t rely too heavily on them; they have work to get done too. If anyone has any ideas for good resources on these topics, I’m open to any and all suggestions.

That’s it for me, two weeks into my new role at VMware. I’m looking forward to the challenges that lie ahead (there are a few big ones), but also to the opportunities (there are a few big ones). Feel free to share your comments below; courteous comments are always accepted.

I suppose there’s no sense in beating around the bush. As the blog post title indicates, I’m taking on a new set of challenges (and a new set of opportunities) in 2013—and the way to do that is in a new role with a new company. So, effective 2/8/2013, I am leaving EMC Corporation to join the former Nicira group at VMware, working directly for Martin Casado. I’ll be working with folks like Brad Hedlund (see his announcement here), Bruce Davie, and Teemu Koponen. I’m truly awed by the talent on this team.

My time at EMC over the last three years has been great, and my choice to leave was a difficult one to make. The decision does not reflect anything bad about EMC, but rather reflects the magnitude of the opportunities for personal and professional growth that lie ahead with VMware’s virtual networking group. There is a saying among my former team at EMC that goes like this: “Once a vSpecialist, always a vSpecialist.” I don’t agree with this statement, because it implies a sense of permanence—something those of us in IT simply can’t afford to have. You must change, you must evolve, you must become something more than what you were in the past, or you will become irrelevant. While I appreciate my time at EMC—both my time as a vSpecialist and my time within the ESG CTO’s office—the time for growth and evolution has come. This move will help me further evolve and grow. I’ve always been interested in networking, but this will be the first time it will be the primary focus for me, and I’m really looking forward to expanding my knowledge, learning new concepts and ideas, and leveraging my existing experience and expertise with virtualization in new and exciting ways.

Although there are great opportunities ahead, there are also a few challenges. I’m not relocating (I love Denver too much!), but my travel schedule will ramp up quite a bit. Travel has been down for me for the last several months (since I left the vSpecialist team), but in the new role my travel will go back up again as I’ll be meeting with the rest of the virtual networking team in Palo Alto, meeting with strategic customers and partners, supporting community events (expect to see me at VMUG events), and educating field sales resources on virtual networking and why it’s important. Undoubtedly the increased travel will have an impact on Crystal and the rest of the family, and I appreciate everyone’s thoughts and prayers as we sort that out.

One other challenge will come from a shift in “allegiance.” I experienced a similar effect when I joined EMC. When I was with ePlus (it seems so long ago!), I was able to maintain reasonably good relationships with different storage vendors as well as different networking vendors. When I joined EMC, the other storage vendors no longer wanted to work with me. I suppose I can understand that. I was able, though, to continue to maintain reasonably good relationships with various networking vendors (and even a few other virtualization vendors). I suspect now, though, that my shift to VMware will alter that landscape again. I can only hope the relationships I’ve established with colleagues at “competing” organizations (real or perceived competition) aren’t negatively affected too much.

Long-time readers know that several transitions have occurred over the nearly 8 years that I’ve been writing here. As I’ve done for the last 8 years, I’ll continue to post as much useful, relevant, and interesting content here as I’m able. Will there be a shift in focus? Possibly; I can’t promise there won’t be. Still, I’ll strive to keep sharing as much as I’m able as together we grow, change, and evolve along with the IT industry. Thanks for the support, and I hope that it continues.

Courteous comments are always welcome, so if you have questions or thoughts you want to share, feel free to speak up below.
