Virtualization


This is a liveblog of the day 1 keynote at VMworld 2014 in San Francisco, CA. (Well, it’s a “semi-liveblog”; I’ll post this immediately after the keynote session wraps up.) As in previous years, the keynote is being held in the massive space in Moscone North, and this year—like previous years—it looks like there is quite a crowd gathered here this morning. I have the privilege of sitting with newly-minted VCDX-NVs and other VCDXes.

The keynote starts with a catchy, jazzy dance/music piece, and then Robin Matlock, Chief Marketing Officer for VMware, takes the stage. Matlock shares that over 22,000 people are gathered here for VMworld, which (if I recall correctly) is down a little bit from last year. Matlock talks about change, and how change can be either a barrier or an opportunity. That leads into a discussion of the building of the Golden Gate Bridge as an allegory for pushing boundaries, exploring endless possibilities, and expanding your knowledge and expertise. This fits into the “No Limits” theme for the conference this week.

After discussing “Operation Giveback” (a charity fundraiser backed by the VMware Foundation), Matlock turns the stage over to VMware’s CEO, Pat Gelsinger, who comes onto the stage in a dramatic “breaking through” effect.

Gelsinger starts his portion of the keynote by introducing the “liquid world,” where change is paramount. Gelsinger provides some examples of various businesses that have embraced the “liquid business” model. The biggest risk to success, according to Gelsinger, is perpetuating the status quo. Change is inevitable. The best way to predict the future is to invent it, and Gelsinger says that’s what brave IT leaders need to do. This leads Gelsinger into a discussion of what the word “brave” means, and that in turn leads to a family-friendly discussion of the first day of school, a kid’s penalty shot in soccer, a driving test, or a public speaking event. Next up is a “who’s who” list of brave business leaders, like Elon Musk and the leader of 23andMe (I didn’t catch the spelling for her name). VMware is on a voyage of brave transformation itself, and Gelsinger provides examples of VMware’s BYOD efforts and running all of VMware’s SAP ERP efforts on vCloud Air.

Gelsinger brings the “fluid” and “brave” themes together in that brave IT involves fluidity and choice (and a third item I couldn’t catch). This leads into a discussion of the software-defined data center (SDDC). As a result of this discussion, Gelsinger goes into some announcements:

  • VMware is announcing VMware vCloud Suite 5.8, which includes new suite “fit and finish” changes
  • VMware is announcing vSphere 6.0 beta
  • VMware is announcing VSAN 2.0 beta
  • As part of the vSphere 6.0 beta, VMware is also announcing Virtual Volumes (VVols)
  • VMware is rebranding the management suite as VMware vRealize Suite

The path to SDDC has three branches: “build your own”, converged infrastructure, and hyper-converged infrastructure. This leads Gelsinger to announce VMware EVO, which is VMware’s family of hyper-converged infrastructure offerings. EVO is the SDDC packaged with hardware to provide the fastest way to realize the benefits of the SDDC. The first member of the EVO family is EVO:RAIL, which is a building block sized at around 100 VMs of capacity and designed to be up and running in 15 minutes. Hardware partners for EVO:RAIL include Dell, EMC, Fujitsu, Inspur, Net One, and Supermicro.

The EVO Zone, found in the Solutions Exchange, has some examples of EVO products being brought to market.

The second member of the EVO family is EVO:RACK, which is being announced as a technical preview. EVO:RACK builds on EVO:RAIL for greater scale—basically building an “entire cloud” in 2 hours or less. There will be a range of certified hardware solutions available as EVO:RACK solutions. VMware is also joining the OpenCompute Project, and anticipates that EVO:RAIL and EVO:RACK will be available on OCP hardware.

Gelsinger also announces VMware Integrated OpenStack (VIO), VMware’s own OpenStack distribution that will incorporate key VMware technologies like vSphere, VSAN, and NSX. VIO is available in beta.

Continuing the discussion around next-generation platforms, Gelsinger announces partnerships with Docker, Google, and Pivotal around building open container APIs and making containers run better on VMware’s SDDC platform—”containers without compromise.”

That leads Gelsinger into a discussion of security, and that naturally involves VMware NSX. Microsegmentation is becoming a key use case for customers adopting VMware NSX.

After a brief discussion of end-user computing—admittedly brief because Sanjay Poonen will be discussing this in more detail in tomorrow’s keynote—Gelsinger moves on to a discussion of hybrid cloud. VMware vCloud Hybrid Service has been rebranded to vCloud Air, and all VMware products that are offered “as a service” will be rebranded as part of the Air family. Gelsinger brings Bill Fathers, EVP & GM for Hybrid Cloud at VMware, onto the stage.

Fathers recaps what’s happened with vCloud Air over the last 12 months; the service was officially launched last year in the day 1 keynote at VMworld 2013. vCloud Air has been successful in meeting all of the goals originally announced. Fathers reviews the key benefits of using public cloud, especially VMware’s public cloud: agility, efficiency, and cost savings. Fathers discusses the various phases of cloud adoption, and states that he believes the market is currently transitioning from the experimental phase to the professional era.

Companies that are successfully transitioning into the professional era—which involves much greater use of public cloud resources—are doing two things: 1) thinking of the apps first; and 2) putting the IT professionals back in charge. According to Fathers, vCloud Air provides the perfect platform for accomplishing both of these goals, in no small part because vCloud Air uses the same hypervisor, same VM formats, and same management tools that companies are using today in their private clouds.

Fathers announces a government-focused FedRAMP-compliant vCloud Air offering launching soon (September); this will be very useful for US government-focused partners, service providers, and systems integrators. VMware is also expanding the number of vCloud Air partners in the vCloud Air Network, with 3900 partners available around the world in over 100 countries. vCloud Air Network is, according to Fathers, the world’s largest hybrid cloud network.

Next Fathers moves into a discussion of new services and offerings available with vCloud Air. These new services include a continuous integration as a service offering (CIaaS?), a MySQL and MS SQL database-as-a-service offering (which will later expand to all popular RDBMSes and will offer DR services for on-premises databases), an object storage service based on EMC ViPR, AirWatch and Pivotal CloudFoundry mobile services, and vRealize Air Automation (hosted cloud management tools delivered as SaaS). Fathers also announces a beta of on-demand, PAYG (pay as you go) access to vCloud Air.

Fathers transitions back to Gelsinger, who wraps up the themes and announcements from the keynote, and introduces Carl Eschenbach, President and COO of VMware.

Eschenbach reinforces that the “engine of innovation” is alive and well at VMware, but it is the brave who drive change (referring to the VMworld attendees in the audience). That leads into a customer discussion between Eschenbach and three key customers: MedTronic, MIT IST (Information Systems and Technology), and Ford. Each of the customers comes out on the stage and has a brief conversation with Eschenbach.

At this point I had to leave to prepare for a session, so I wasn’t able to capture any of the rest of the keynote.


In this post, I’ll show you how I got Arista’s vEOS software running under KVM to create a virtualized Arista switch. There are a number of other articles that provide instructions on how to do this, but none of the ones I found included the use of libvirt and/or Open vSwitch (OVS).

In order to run vEOS, you must first obtain a copy of vEOS. I can’t provide you with a copy; you’ll have to register on the Arista Networks site (see here) in order to gain access to the download. The download consists of two parts:

  1. The Aboot ISO, which contains the boot loader
  2. The vEOS disk image, provided as a VMware VMDK

Both of these are necessary; you can’t get away with just one or the other. Further, although the vEOS disk image is provided as a VMware VMDK, KVM/QEMU is perfectly capable of using the VMDK without any conversion required (this is kind of nice).

Once you’ve downloaded these files, you can use the following libvirt domain XML definition to create a VM for running Arista vEOS (you’d use a command like virsh define <filename>).

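What follows is a minimal sketch of such a definition rather than my exact file; the domain name, memory and CPU sizing, file paths, and the OVS-backed libvirt network name “ovs-net” are placeholders you’d adjust for your environment, and the “trunked” portgroup must already exist on that libvirt network:

<domain type='kvm'>
  <name>veos-01</name>
  <!-- 2 GB of RAM, specified in KiB -->
  <memory>2097152</memory>
  <vcpu>1</vcpu>
  <os>
    <type arch='x86_64'>hvm</type>
    <!-- Boot from the Aboot ISO first, then fall back to the hard disk -->
    <boot dev='cdrom'/>
    <boot dev='hd'/>
  </os>
  <devices>
    <!-- Aboot boot loader ISO, attached via IDE as a raw image -->
    <disk type='file' device='cdrom'>
      <driver name='qemu' type='raw'/>
      <source file='/var/lib/libvirt/images/Aboot-veos.iso'/>
      <target dev='hdc' bus='ide'/>
      <readonly/>
    </disk>
    <!-- vEOS disk image, attached via IDE in its native VMDK format -->
    <disk type='file' device='disk'>
      <driver name='qemu' type='vmdk'/>
      <source file='/var/lib/libvirt/images/vEOS.vmdk'/>
      <target dev='hda' bus='ide'/>
    </disk>
    <!-- First NIC maps to Management1 inside vEOS -->
    <interface type='network'>
      <source network='ovs-net' portgroup='trunked'/>
      <model type='e1000'/>
    </interface>
    <!-- Additional NICs become the switch's Ethernet interfaces -->
    <interface type='network'>
      <source network='ovs-net' portgroup='trunked'/>
      <model type='e1000'/>
    </interface>
    <interface type='network'>
      <source network='ovs-net' portgroup='trunked'/>
      <model type='e1000'/>
    </interface>
    <graphics type='vnc' port='-1' autoport='yes'/>
  </devices>
</domain>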

There are a few key things to note about this libvirt domain XML:

  • Note the boot order; the VM must boot from the Aboot ISO first.
  • Both the Aboot ISO as well as the vEOS VMDK are attached to the VM as devices, and you must use an IDE bus. Arista vEOS will refuse to boot if you use a SCSI device, so make sure there are no SCSI devices in the configuration. Pay particular attention to the type= parameters that specify the correct disk formats for the ISO (type “raw”) and VMDK (type “vmdk”).
  • For the network interfaces, you’ll want to be sure to use the e1000 model.
  • This example XML definition includes three different network interfaces. (More are supported; up to 7 interfaces on QEMU/KVM.)
  • This XML definition leverages libvirt integration with OVS so that libvirt automatically attaches VMs to OVS and correctly applies VLAN tagging and trunking configurations. In this case, the network interfaces are attaching to a portgroup called “trunked”; this portgroup trunks VLANs up to the guest domain (the vEOS VM, in this case). In theory, this should allow the vEOS VM to support VLAN trunk interfaces, although I had some issues making this work as expected and had to drop back to tagged interfaces.

Once you have the guest domain defined, you can start it by using virsh start <guest domain name>. The first time it boots, it will take a long time to come up. (A really long time—I watched it for a good 10 minutes before finally giving up and walking away to do something else. It was up when I came back.) According to the documentation I’ve found, this is because EOS needs to make a backup copy of the flash partition (which in this case is the VMDK disk image). It might be quicker for you, but be prepared for a long first boot just in case.

Once it’s up and running, use virsh vncdisplay to get the VNC display of the vEOS guest domain, then use a VNC viewer to connect to the guest domain’s console. You won’t be able to SSH in yet, as all the network interfaces are still unconfigured. At the console, set an IP address on the Management1 interface (which will correspond to the first virtual network interface defined in the libvirt domain XML) and then you should have network connectivity to the switch for the purposes of management. Once you create a username and a password, then you’ll be able to SSH into your newly-running Arista vEOS switch. Have fun!
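If it helps, here’s a rough sketch of what that console configuration might look like once you’re logged in (the IP address, mask, and password are placeholders, not values from my environment):

enable
configure terminal
! Assign a management IP address to the first NIC (Management1)
interface Management1
   ip address 192.168.1.50/24
   no shutdown
exit
! Create a user you can SSH in with (placeholder password)
username admin privilege 15 secret <password>
end
! Save the configuration
write memory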

For additional information and context, here are some links to other articles I found on this topic while doing some research:

If you have any questions or need more information, feel free to speak up in the comments below. All courteous comments are welcome!


It’s that time again—time for community voting on sessions for the fall OpenStack Summit, being held in Paris this year in early November. I wanted to take a moment and share some of the sessions in which I’m involved and/or that I think might be useful. It would be great if you could take a moment to add your votes for the sessions.

My Sessions

I have a total of four session proposals submitted this year:

Congress Sessions

You may also be aware that I am involved with a project called Congress, which aims to bring an overarching policy service to OpenStack. Here are some sessions pertaining to Congress:

VMware Sessions

Arvind Soni, one of the product managers for OpenStack at VMware, kindly pulled together this list of VMware-related sessions, so feel free to have a look at any of these and vote on what sounds appealing to you.

Other Sessions

There are way too many sessions to list all the interesting ones, but here are a few that caught my eye:

There are a bunch more that looked interesting to me, but I’ll skip listing them all here—just hop over to the OpenStack site and vote for the sessions you want to see.


This post describes a fix I found for an issue I had when booting KVM guest domains on the Ubuntu/KVM hypervisors in my home lab. I’d been struggling with this problem for quite some time, but only recently found what I believe to be the final fix.

First, allow me to provide a bit of background. Some time ago—I’d say around August 2012, when I left the vSpecialist team at EMC to join an OpenStack-focused team in another part of EMC—I moved my home lab over completely to Ubuntu 12.04 LTS with the KVM hypervisor. This was an important step in educating myself on Linux, KVM, libvirt, and Open vSwitch (OVS), all of which are critical core components in most installations of OpenStack.

Ever since making that change—particularly after adding some new hardware, a pair of Dell C6100 servers, to my home lab—I would experience intermittent problems booting a KVM guest. The guest would appear to boot properly, but then hang shortly after a message about activating swap space and fsck reporting that the file system was clean. Sometimes, rebooting the guest would work; many times, rebooting the guest didn’t work. Re-installing the guest sometimes worked, but sometimes it didn’t. There didn’t appear to be any consistency with regard to the host (the issue occurred on all hosts) or guest configuration. The only consistency appeared to be with Ubuntu, as virtually (no pun intended) all my KVM guests were running Ubuntu.

Needless to say, this was quite frustrating. I tried all the troubleshooting I could imagine—deleting and recreating swap space, manually checking the file system(s), various different installation routines—and nothing seemed to make any difference.

Finally, just in the last few weeks, I stumbled across this page, which indicated that adding “nomodeset” to the grub command line fixed the problem. This was a standard part of my build (it kept the console from getting too large when using VNC to connect to the guest), but it required that I was able to successfully boot the VM first. I’d noted that once I had been able to successfully boot a guest and add “nomodeset” to the grub configuration, I didn’t have any further issues with that particular guest; however, I explained that away by saying that the intermittent boot issue must have been some sort of first-time boot issue.

In any case, that page linked to this ServerFault entry, which also indicated that the use of “nomodeset” helped fix some (seemingly) random boot problems. The symptoms described there—recovery mode worked fine, booting normally after booting into recovery mode resulted in an “initctl: event failed” error—were consistent with what I’d been seeing as well.

So, I took one of the VMs that was experiencing this problem, booted it into recovery mode, edited the /etc/default/grub file to include “nomodeset” on the GRUB_CMDLINE_LINUX_DEFAULT line, and rebooted. The KVM guest booted without any issues. Problem fixed (apparently).
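For reference, here’s roughly what that change looks like on an Ubuntu guest (the other kernel parameters shown are just the Ubuntu defaults and may differ on your systems):

# /etc/default/grub (excerpt)
GRUB_CMDLINE_LINUX_DEFAULT="quiet splash nomodeset"

# Regenerate /boot/grub/grub.cfg so the change takes effect, then reboot
sudo update-grub
sudo reboot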

Thus far, this has fixed the intermittent boot issue on every KVM guest I’ve tried, so I’m relatively comfortable recommending it as a potential change you should explore if you experience the same problem/symptoms. I can’t guarantee it will work, but it has worked for me so far.

Good luck!


This is part 15 of my Learning NSX blog series, in which I will spend some time diving a bit deeper into some of the components involved in the logical routing process I described in part 14. Specifically, I’ll be taking a deeper look at gateway appliances, gateway services, and logical routers, and the relationships among these various components.

If you haven’t read any of the prior posts in this series, it would be ideal to read all of them before continuing; you can find links on my Learning NVP/NSX page. In particular, I’d suggest reading part 6 (on adding a gateway appliance), part 9 (on adding a gateway service), and part 14 (on logical routing and logical routers).

Just for the sake of completeness and to reinforce what was introduced in those posts I referenced, let’s start with some terminology:

  • Gateway (or gateway appliance): When I use the terms gateway or gateway appliance, I’m referring to the NSX software gateway that acts as the “on-ramp/off-ramp” to and from logical networks. What makes this confusing is that we also use the term “gateway” (in particular, “IP gateway” or “default gateway”) to refer to a Layer 3 router that acts as the next hop for a system. I’ll do my best to make sure that I’m clearly distinguishing between these ambiguous uses.
  • Gateway service: A gateway service is a logical construct within NSX that allows you to group together multiple gateway appliances. For example, in an L2 gateway service, you can combine two gateway appliances so that you have redundancy in providing L2 bridging functionality between a logical network and a physical network. In an L3 gateway service, you can combine up to 10 gateway appliances together for redundancy and scale-out performance.
  • Logical router: As you might recall from part 14, a logical router is a logical construct within NSX that provides Layer 3 routing functionality, typically (but not always) on a per-tenant basis.

I have a few more terms I’ll introduce in this post, but that should be enough for now.

This diagram contains the bulk of what I’d like to discuss in this post—the relationship between gateway services, gateway appliances, and logical routers:

As I walk you through the details of this diagram, hopefully I’ll clarify the relationships between these components.

  • In this example, there are four gateway appliances combined into a single Layer 3 gateway service. As illustrated in the diagram, gateway services can contain more than one gateway appliance (the minimum recommended is two, for reasons to be explained shortly). Gateway services may be either Layer 2 (bridging/switching) or Layer 3 (routing), but not both.
  • A gateway appliance may be a member of only one gateway service at a time; therefore, a gateway appliance is either L2 or L3, but not both.
  • When adding a gateway appliance to a gateway service, the administrator or operator has the ability to specify a failure zone ID. The idea behind the failure zone ID is to help model fault domains within a single gateway service. For example, if GW Appliance 1 sits in a different fault domain—say, a different rack—the administrator or operator could assign it a different failure zone ID to reflect that. The significance of this functionality will be made clear in a moment.
  • Note that gateway services, gateway appliances, and failure zone IDs are not visible to tenants. Further, the configuration or management of these entities is handled through NSX (via API or NSX Manager), and isn’t tenant-specific. The CMP—OpenStack, for example—doesn’t get involved here.
  • The example diagram shows four different logical routers spread across three tenants. Each of these logical routers acts as an IP gateway (default gateway/default route) for the associated (or connected) logical network(s). Thus, a logical router is visible to a tenant.
  • Creating, managing, and configuring logical routers is handled by the CMP. With OpenStack, for example, you’d use the OpenStack Dashboard or the Neutron command-line client.
  • For redundancy, you’ll note that each logical router is instantiated on 2 different gateway appliances within the gateway service (hence why a minimum of 2 gateway appliances within a gateway service is recommended). This is completely invisible to the tenant and is handled automatically by NSX. If failure zone IDs—indicating different fault domains—are configured on the gateway appliances, then NSX will instantiate the logical router on gateway appliances in different failure zones. This is an attempt to minimize downtime by spreading the logical router across fault domains.

So far, everything I’ve shared with you has been true for centralized logical routers. For distributed logical routers, things are only slightly different. Distributed logical routers are normally instantiated on the hypervisors; a gateway service and its associated gateway appliances only get involved when you set the uplink for the distributed logical router (using the “Set Gateway” button in OpenStack Dashboard, for example). If you never set an uplink for the logical router, it will remain instantiated only on the hypervisors, and not on the gateway service/gateway appliances.

I hope this information helps in understanding the routing aspects of VMware NSX. Feel free to post any questions, clarifications, or thoughts in the comments below. Any input on other topics you’d like to see in the Learning NSX blog series is welcome as well!


Welcome to part 14 of the Learning NSX blog series, in which I discuss the ability for VMware NSX to do Layer 3 routing in logical networks. This post also includes a look at a very cool feature within VMware NSX known as distributed logical routing, examined within the context of an OpenStack environment that’s been integrated with VMware NSX. (Although NSX isn’t necessarily tied to OpenStack, I’ll assume you’re using OpenStack just to simplify the discussion.)

If you’re new to this series, you can find links to all the articles on my Learning NVP/NSX page. Ideally, I’d recommend you read all the articles, but if you’re just interested in some of the high-level concepts you probably don’t need to do that. For those interested in the deep technical details, I’d suggest catching up on the series before proceeding.

Overview of Logical Routing

One of the features of VMware NSX that can be useful, depending on customer requirements, is the ability to create complex network topologies. For example, creating a multi-tier network topology like the one shown below is easily accomplished via VMware NSX:

Sample network topology

Note that this topology has two tenant-specific routing entities—these are logical routers. A logical router is an abstraction created and maintained by VMware NSX on behalf of your cloud management platform (like OpenStack, which I’ll assume you’re using here). These logical entities perform the routing process just like a physical router would (forwarding traffic based on a routing table, changing the source and destination MAC address, maintaining an ARP cache of MAC addresses, decrementing the TTL, etc.). Of course, they are not exactly the same as physical routers; you can’t, for example, connect two logical routers directly to each other.

Logical routers also act as the logical boundary between one or more logical networks and an external network. Logical routers can be connected to multiple logical networks (each logical network with its own logical router interface), but can only be connected to a single external network. Thus, you can’t use a logical router as a transit path between two external networks (two VLANs, for example).

Now that you have a good understanding of logical routing, let’s take a closer look at the various components inside VMware NSX.

Components of Logical Routing

The components are pretty straightforward. In addition to the logical router abstraction that I’ve discussed already, you also have logical router ports (naturally, these are the ports on a logical router that connect it to a logical network or an external network), network address translation (NAT) rules (for handling address translation tasks), and a routing table (for…well, routing).

You can see all of these components in NSX Manager. Once you’re logged into NSX Manager, select Network Components > Logical Layer > Logical Routers, then click on a specific logical router from the list. This will display the screen shown below:

Logical router detail in NSX Manager

A few things to note here:

  • You’ll note that the logical router has a port whose attachment is listed as “L3GW”. This denotes an attachment to a Layer 3 Gateway Service, an entity I described in part 9 of the series. This Layer 3 Gateway Service is itself comprised of two NSX gateway appliances; part 6 in the series discussed how to add a gateway appliance to your installation. The relationship between logical router, Layer 3 Gateway Service, and gateway appliance can be confusing for some; I plan to discuss that in more detail in the next post.
  • This particular logical router is not configured as a distributed logical router. This means that the actual routing function resides on a Layer 3 Gateway Service. The routing functionality is instantiated in a highly available configuration on two different gateway appliances within the Layer 3 Gateway Service.
  • NAT Synchronization is set to on; this refers to keeping NAT state synchronized between the active and standby routing functions instantiated on the gateway appliances.
  • As noted under Replication Mode, this router uses an NSX service node (refer to part 10 for more details on service nodes) for packet replication/BUM traffic.
  • You might notice that one of the logical router ports is assigned the IP address 169.254.169.253 (and you’ll also note a corresponding “no NAT” rule and routing table entries for that same network). Astute readers recognize this as the network for Automatic Private IP Addressing (APIPA), also known as IPv4 Link-Local Addresses per RFC 3927. This exists to support an OpenStack-specific feature known as the metadata service, and is created automatically by OpenStack. (I’ll talk more about OpenStack later in this post.)

All of these components and settings are accessible via the NSX API, and since NSX Manager is completely an API client (it merely consumes NSX APIs and does not provide standalone functionality outside of some logging features), you could create, modify, and delete any of the logical routing components directly within NSX Manager. (Or, if you were so inclined, you could make the API calls yourself to do these tasks.) Typically, though, these tasks would be handled via integration between NSX and your cloud management platform, like OpenStack.

One key component of NSX’s logical routing functionality that you can’t see in NSX Manager is how the routing is actually implemented in the data plane. As with most features in NSX, the actual data plane implementation is handled via Open vSwitch (OVS) and a set of flow rules pushed down by the NSX controllers. These flow rules control the flow of traffic within and between logical networks (logical switches in NSX). You can see some of the flow rules in OVS using the ovs-dpctl dump-flows command, which will produce output something like what’s shown in this screenshot (note that the addresses are highlighted because I used grep to show only the flows matching a certain IP address):

List of flows in OVS

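If you want to try this yourself, the invocation is along these lines (the IP address is just an example; substitute an address from one of your own logical networks):

# Dump the kernel datapath flows and show only those referencing a particular IP
sudo ovs-dpctl dump-flows | grep '10.1.1.5'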

These flow rules include actions like re-writing source and destination MAC addresses and decrementing the TTL, both tasks carried out by “normal” routers when routing traffic between networks. These flow rules also provide some insight into the differences between a logical router and a distributed logical router. While both are logical entities, the way in which the data plane is implemented is different for each:

  • For a logical router, the flow rules will direct traffic to the appropriate gateway appliance in the Layer 3 Gateway Service. The logical router is actually instantiated on a gateway appliance, so all routed traffic must go to the logical router, get “routed” (routing table consulted, source and destination MAC re-written, TTL decremented, NAT rules applied, etc.), then get sent on to the final destination (which might be a VM on a hypervisor in NSX or might be a physical network outside of NSX).
  • For a distributed logical router, the flow rules will direct traffic either to the appropriate gateway appliance in the Layer 3 Gateway Service or to the destination hypervisor directly. Why the “either/or”? If the traffic is north/south traffic—that is, traffic being routed out of a logical network onto the physical network—then it must go to the gateway appliance (which, as I have mentioned before, is where traffic is unencapsulated and placed onto the physical network). However, if the traffic is east/west traffic—traffic that is moving from one server on a logical network to another server on a logical network—then the traffic is “routed” directly on the source hypervisor and then sent across an encapsulated connection to the hypervisor where the destination VM resides.

In both cases, there is only one logical router. For a non-distributed logical router, the data plane is instantiated on a gateway appliance only. For a distributed logical router, the data plane is instantiated both on the local hypervisors as well as on a gateway appliance. (This is assuming you’ve set an uplink on the logical router, meaning you have a north/south connection. If you haven’t set an uplink, then the routing functionality is instantiated on the hypervisors only.)

This should provide a good overview of how logical routing is implemented in VMware NSX, but there’s one more aspect I want to cover: logical routers in OpenStack with NSX.

Logical Routers in OpenStack

As you work with OpenStack Networking—Neutron, as it’s commonly called—you’ll find that the abstractions Neutron uses map really well to the abstractions that NSX uses. So, to create a logical router in NSX, you just create a logical router in OpenStack. Attaching an OpenStack logical router to a logical network tells NSX to create the logical switch port, create the logical router port, and connect the two ports together.

In OpenStack, there are a number of different ways to create a logical router:

  • OpenStack Dashboard (Horizon)
  • Command-line interface (CLI)
  • OpenStack Orchestration (Heat) template
  • API calls directly

When using the web-based Dashboard user interface, you can only create centralized logical routers, not distributed logical routers. The Dashboard UI also doesn’t provide any way of knowing if a logical router is distributed or not; for that, you’ll need the CLI (the command is provided shortly).

On a system with the neutron CLI client installed, you can create a logical router like this:

neutron router-create <router name>

This creates a centralized logical router. If you want to create a distributed logical router, it’s as simple as this:

neutron router-create <router name> --distributed True

The neutron router-show command will return output about the specified logical router; that output will tell you if it is a distributed logical router.

The neutron CLI client also offers commands to update a logical router’s routing table (to add or remove static routes, for example), or to connect a logical router to an external network (to set an uplink, in other words).

If you want to create a logical router as part of a stack created via OpenStack Orchestration (Heat), you could use a YAML snippet in a HOT-formatted template to create a distributed logical router, along the lines of the sketch below:
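This is a sketch rather than the exact snippet, assuming you pass the NSX-specific distributed flag through the value_specs property of the OS::Neutron::Router resource; the router name is a placeholder:

heat_template_version: 2013-05-23

description: Sketch of a stack that creates a distributed logical router

resources:
  dist_router:
    type: OS::Neutron::Router
    properties:
      name: distributed-router-01
      # Pass the NSX-specific "distributed" attribute through to Neutron
      value_specs:
        distributed: True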

OpenStack Heat also offers resource types for setting the router’s external gateway and creating router interfaces (logical router ports). If you aren’t familiar with OpenStack Heat, you might find this introduction useful.

That wraps up this post on logical routing with VMware NSX. As always, I welcome your courteous feedback, so feel free to speak up in the comments below. In the next post, I’ll spend a bit of time discussing logical routers, gateway services, and gateway appliances. See you next time!


Today marked the start of Dockercon, and Docker, Inc., the commercial entity behind the incredibly popular open source Docker project, is taking this opportunity to make its move. Already the focus of quite a bit of attention (some might say a disproportionate amount of attention), Docker is making several announcements today that indicate an intent to broaden its reach:

  • Docker (also referred to as Docker Engine) 1.0 is released today, signaling the project’s readiness to be officially used in production environments (although many firms were/are already using pre–1.0 releases in production)
  • Docker is introducing a new Enterprise Support program to give enterprises the warm cuddlies that using Docker in production environments is safe. The new program aims to provide (in Docker’s words) “training, expertise, and support necessary to stand-up mission-critical workloads built on the Docker platform.”
  • Docker is announcing a set of ten systems integrator partners to help drive commercial adoption.
  • Docker is also unveiling Docker Hub, a cloud-based service (naturally!) for users, content, and workflows. Docker Hub will offer a registry for a comprehensive list of “Dockerized” applications (taking the place of the Docker Index, I assume), along with a management console, user authentication, an automated build service, the Docker Hub API, and collaboration tools to help users manage and share applications.
  • A key part of Docker Hub is Official Repositories; these are “Dockerized” applications that are maintained, supported, and available to all Docker Hub users. This will initially include applications like MongoDB, MySQL, Nginx, Redis, and WordPress, but it is open to any community group or ISV. (Obviously, there are some requirements around committing to maintenance of the application on an ongoing basis.)

I’m certainly a fan of using Docker (or other container technologies) where it makes sense. I fear, though, that the intense focus (some might say hype) around Docker will lead more than a few organizations down a path of trying to make Docker containers their “be all end all” solution. If that trend grows too large, that could be as damaging (if not more damaging) to Docker than being ignored. Time will tell. In the meantime, I’d strongly recommend getting a grasp on the basics of Docker so that you can better understand how it might fit into your overall solution. (My introductory post on Docker, while a bit dated, might prove useful in that regard.)


Reader Brian Markussen—with whom I had the pleasure to speak at the Danish VMUG in Copenhagen earlier this month—brought to my attention an issue between VMware vSphere’s health check feature and Cisco UCS when using Cisco’s VIC cards. His findings, confirmed by VMware support and documented in this KB article, show that the health check feature doesn’t work properly with Cisco UCS and the VIC cards.

Here’s a quote from the KB article:

The distributed switch network health check, including the VLAN, MTU, and teaming policy check can not function properly when there are hardware virtual NICs on the server platform. Examples of this include but are not limited to Broadcom Flex10 systems and Cisco UCS systems.

(Ignore the fact that “UCS systems” is redundant.)

According to Brian, a fix for this issue will be available in a future update to vSphere. In the meantime, there doesn’t appear to be any workaround, so plan accordingly.


Welcome to part 13 of the Learning NSX blog series, in which I revisit the idea of logical networking with VMware NSX. This is a topic I first discussed in part 5 of this series, but I want to go back and look at it again, this time from a more practical perspective of what it looks like to use VMware NSX for logical networking in an OpenStack environment.

If you haven’t been keeping up with the Learning NVP/NSX series, you’ll probably want to go back and catch up. Links to all the articles are found on my Learning NVP/NSX page. You’ll particularly want to be sure that you’ve read part 11 and part 12, which cover the OpenStack integration I’ll be leveraging in this post.

To start things off, let’s first do a quick recap of what it looks like to manually create a logical network in VMware NSX (all of this is described in part 5 of the series):

  1. Create a logical switch.
  2. Add logical switch ports to the newly-created logical switch.
  3. Edit the attachment of the logical switch ports to connect a VM’s virtual network interface card (NIC).

These three steps will establish a simple logical network within VMware NSX. Of course, this logical network won’t have any Dynamic Host Configuration Protocol (DHCP) services, but it will still work (you could manually assign IP addresses to VMs attached to this logical network).

Now that we have VMware NSX integrated with OpenStack, let’s revisit this process to see what it looks like. (I’ll assume that you’re logged into the OpenStack dashboard and have the necessary permissions to create networks, launch instances, etc.)

First, you’d need to create a network in OpenStack. To do this, it’s as simple as selecting Networks > Create Network, then providing a name for the new network (you could also use the neutron net-create command):

Creating a logical network in OpenStack

To exactly mirror the process I showed you in part 5—which did not include DHCP services—you’d need to also go to the Subnet tab and uncheck “Create Subnet” as well as go to the Subnet Detail tab and uncheck “Enable DHCP.” Once you unselect those options and click Create, then OpenStack will (through the Neutron plugin for NSX) create a logical switch in NSX. You can pop into NSX Manager to see this:

New logical switch in NSX Manager

As I pointed out in part 12, the UUID and os_tid tag on this object in NSX will provide the necessary ties back to the corresponding object in OpenStack.

Now go spin up a new instance and attach that instance to the logical network you just created. What you’ll find is that OpenStack will automatically handle the creation of the logical switch ports as well as the attachment of the VM’s virtual NIC to the logical switch. This helps underscore how VMware NSX was designed to be used in conjunction with a cloud management/orchestration system like OpenStack. (You can verify that the logical switch port is automatically created using NSX Manager and comparing the number of logical switch ports both before and after launching the new instance.)

Now that we have OpenStack up and running, though, we can create a logical network that does have DHCP services:

  1. Use the neutron net-create command to create a new logical network:
     neutron net-create logical-net-02
  2. Use neutron subnet-create to create a subnet for the new network:
     neutron subnet-create --name logical-subnet-02 logical-net-02 10.1.1.0/24

If you log into NSX Manager, you’ll see that a new logical switch (whose name matches the name you gave the logical network above) has been created, and you’ll also note that 1 logical switch port is already in use—even though you haven’t launched any instances yet! The easiest way to find out what is attached to that port is via the OpenStack dashboard. Once logged into the dashboard, select Networks, then click on the network you just created, and scroll down to the list of Ports. You’ll see there that OpenStack has automatically created a logical switch port for the DHCP services associated with the subnet you created above:

Ports on a logical network

If you’re a command-line freak, you could also get this information from the CLI:

  1. Find the subnet associated with the logical network you just created:
     SUBNET_ID=$(neutron subnet-list | awk '/\ logical-net-02\ / {print $2}')
  2. List all the ports on that subnet:
     neutron port-list | grep $SUBNET_ID
  3. In this case, there is only one port on that subnet, so you can capture the ID of that port in order to get more information about the port:
     PORT_ID=$(neutron port-list | grep $SUBNET_ID | awk '{print $2}')
  4. List the information associated with that specific port, paying particular attention to the device_owner attribute (which should show “network:dhcp”):
     neutron port-show $PORT_ID

If you have been reading along diligently, you’ll probably be able to put 2 and 2 together here to realize that the “network:dhcp” port is actually a port on OVS on the network node (which, if you’ll recall, is registered as a hypervisor in VMware NSX). If you’ve really been following my stuff closely, you’ll probably also know that the OVS port is connected to a veth pair, which in turn connects to a network namespace where an instance of dnsmasq is running. (Want to learn more about network namespaces? See here.)
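If you want to see that for yourself, you can poke around on the network node with something like the following (Neutron names its DHCP namespaces qdhcp- followed by the network UUID, so the exact namespace name will vary):

# List the network namespaces on the network node
sudo ip netns list

# Look at the interfaces inside the DHCP namespace for your network
sudo ip netns exec qdhcp-<network-uuid> ip addr

# Confirm a dnsmasq instance is running for that network
ps aux | grep dnsmasq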

At this point, you should have a fairly clear understanding of how logical networking functions within an OpenStack environment with VMware NSX. I wanted to take the time to revisit this topic because future posts are going to assume that you understand these basic concepts and interactions as we explore more advanced functionality and more complex networking topologies.

Thanks for reading, and feel free to post any corrections, clarifications, or questions in the comments below.


Welcome to part 12 of the Learning NSX blog series, in which I continue the discussion around integration between OpenStack and VMware NSX, and in which I’ll provide more details about how exactly to integrate them.

If you are just now joining the series, I encourage you to visit the Learning NVP/NSX page, where you can find links to all the posts in the series. While you’ll want to be caught up on all the posts (they do build on one another in various ways), in particular you’ll want to make sure you’ve read part 11. Part 11 covers the basics of VMware NSX-OpenStack integration, and explains how the various components of OpenStack Neutron and VMware NSX will interact.

Once you understand how the different components of Neutron and NSX will interact, getting NSX integrated into OpenStack Neutron isn’t too terribly difficult. The basic steps look like this:

  1. Install the VMware NSX plugin for Neutron.
  2. Configure VMware NSX for Neutron.
  3. Configure Neutron for VMware NSX.

Let’s take a more in-depth look at each of these steps.

Installing the VMware NSX plugin for Neutron

VMware distributes a set of compiled binary packages for OpenStack Neutron plus the VMware NSX plugin from the VMware NSX support portal (available to VMware NSX customers). Source code is also available, if you’d prefer that. These builds provided by VMware represent the latest fixes to both Neutron and NSX based off the “official” OpenStack Neutron releases. A single download contains all the different components of Neutron that you need (it’s a tarred and gzipped file that you just unpack).

Once you have the packages (I’ll assume you’re using Ubuntu and therefore have downloaded and will use the Debian packages), then you can just use dpkg -i to install the appropriate package(s) on the appropriate node(s). Recall from part 11 that when implementing Neutron with VMware NSX, you’ll need both a Neutron server as well as a network node running the DHCP and metadata agents. Here’s a breakdown of which packages need to be installed on which nodes:

  • On the Neutron server, you’d install neutron-common, neutron-server, python-neutron, and neutron-plugin-nicira.
  • On the Neutron network node, you’d install (at a minimum) neutron-common, neutron-dhcp-agent, and neutron-metadata-agent. If you wanted LBaaS support, you’d also install neutron-lbaas-agent. You could optionally install the Python client with python-neutron as well.
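As a hedged example, assuming the .deb files are sitting in your current directory (exact filenames and versions will differ based on the build you downloaded), the installs might look like this:

# On the Neutron server
sudo dpkg -i neutron-common_*.deb python-neutron_*.deb neutron-server_*.deb neutron-plugin-nicira_*.deb

# On the network node (add neutron-lbaas-agent_*.deb if you want LBaaS support)
sudo dpkg -i neutron-common_*.deb neutron-dhcp-agent_*.deb neutron-metadata-agent_*.deb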

From here, you would proceed with setting up OpenStack Neutron as outlined in a variety of places, including the official OpenStack docs. If you do choose to use the official docs to get Neutron configured, here’s how the breakdown of the instructions map to the setup you’d need to build for use with VMware NSX:

  • The “Installing networking support on a dedicated controller node” section contains information for setting up Neutron on an OpenStack controller that does not run any of the underlying agents. Typically, this system would also run the API servers for some of the other OpenStack services as well (like Nova, Cinder, or Glance).
  • The “Installing networking support on a dedicated network node” section contains the information for setting up a network node that would run the DHCP and metadata agents. Recall that you don’t need the L3 agent, since that is handled by NSX. It might include the LBaaS agent, if you need that functionality.
  • The “Installing networking support on a dedicated compute node” section has the information for setting up your OpenStack compute nodes to interact with Neutron appropriately. Note that you don’t need to install an agent on the compute nodes; adding the compute nodes to NSX (as described in part 4) establishes the necessary communication between the NSX controllers and the compute nodes.

This helps you get Neutron up and running; in the “Configuring Neutron for VMware NSX” section below, I’ll provide additional specifics around how to configure Neutron to communicate with VMware NSX. For now, let’s make sure that VMware NSX is ready for Neutron.

Configuring VMware NSX for Neutron

VMware NSX was designed to be cloud platform-agnostic, so there isn’t a whole lot that needs to be done here. There are, however, a few tasks you’ll want to make sure you’ve done inside VMware NSX:

  1. You’ll want to ensure that you’ve added at least one NSX gateway appliance to your installation. (Part 6 describes how to add an NVP/NSX gateway appliance.)
  2. You’ll want to ensure that you’ve added an L3 gateway service, as described in part 9 of the series. The L3 gateway service replaces the L3 agent in OpenStack Neutron and is therefore necessary to provide routed/NAT’d connectivity into or out of logical (tenant) networks. Use NSX Manager to get the UUID of the L3 gateway service you’ve added; we’ll need that when we configure Neutron.
  3. You’ll need to make sure you’ve already created a transport zone, as described in part 4 and explained in greater detail in part 5. Use NSX Manager to get the UUID of the transport zone that you want Neutron to use when creating overlay networks; you’ll need that when configuring the NSX plugin for Neutron.

Configuring Neutron for VMware NSX

Now, let’s get into some nitty gritty specifics on how we configure OpenStack Neutron to interact with VMware NSX.

Most of the configuration is done within the NSX-specific configuration files, but there are two settings in neutron.conf on the controller node (where the Neutron API server is running) that you’ll want to set:

  • You’ll absolutely want to set the core_plugin value to neutron.plugins.nicira.NeutronPlugin.NvpPluginV2. (Note that in future releases of the NSX plugin, the name may change from “nicira” and “Nvp” to “vmware” and “Nsx”.)
  • You’ll probably also want to set allow_overlapping_ips to True so that Nova metadata works as you would expect. (I’ll have more on that in a moment.)
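In other words, the relevant bit of neutron.conf on the controller ends up looking something like this (an excerpt only; the rest of your neutron.conf stays as-is):

# /etc/neutron/neutron.conf (excerpt)
[DEFAULT]
core_plugin = neutron.plugins.nicira.NeutronPlugin.NvpPluginV2
allow_overlapping_ips = True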

The bulk of the rest of the configuration is found in nvp.ini, which is typically found in the /etc/neutron/plugins/nicira directory. Here are the relevant settings that you’ll want to configure:

  • You’ll want to set nvp_user and nvp_password appropriately for your VMware NSX installation.
  • Populate the nvp_controllers line with the addresses of the NSX controllers in your environment, in the form “W.X.Y.Z:443”. Separate the controllers’ IP addresses with commas.
  • Place the UUID of the transport zone that you want Neutron to use when creating overlay networks as the value for default_tz_uuid.
  • Place the UUID of the L3 gateway service that you want Neutron to use when creating logical routers with external gateways as the value for the default_l3_gw_service_uuid entry.
  • In the [database] section, make sure there is an appropriate MySQL connection entry for the Neutron database (assuming you are using MySQL). An example connection entry might look like “mysql://neutron:<password>@<database-host>/neutron” (or similar).
  • There are a couple different ways to provide Nova metadata to the instances; I prefer using a special metadata access network (I’ll likely talk more about that in a future post). To use this configuration, set metadata_mode to “access_network” and set enable_metadata_access_network to True. (You may also need to set metadata_dhcp_host_route to False.)
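Putting that all together, an nvp.ini might look something like the sketch below. The UUIDs, addresses, and credentials are placeholders, and the exact section each option lives in can vary a bit between plugin releases, so treat this as illustrative rather than authoritative:

# /etc/neutron/plugins/nicira/nvp.ini (excerpt)
[DEFAULT]
nvp_user = admin
nvp_password = <nsx-admin-password>
nvp_controllers = 192.168.100.11:443,192.168.100.12:443,192.168.100.13:443
default_tz_uuid = <transport-zone-uuid>
default_l3_gw_service_uuid = <l3-gateway-service-uuid>

[database]
connection = mysql://neutron:<password>@<database-host>/neutron

[nvp]
# Metadata handling via a dedicated metadata access network
metadata_mode = access_network
enable_metadata_access_network = True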

That should be all the settings you need on the controller node. However, you’ll also need to slightly configure the DHCP and metadata agents on the network node:

  • In the dhcp_agent.ini file, set enable_isolated_metadata and enable_metadata_network to True. If your Linux distribution supports network namespaces (Ubuntu does), then also set use_namespaces to True.
  • The metadata agent does not require any special configuration above and beyond what is needed to get Neutron running.
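A matching excerpt from dhcp_agent.ini on the network node would look something like this:

# /etc/neutron/dhcp_agent.ini (excerpt)
[DEFAULT]
enable_isolated_metadata = True
enable_metadata_network = True
use_namespaces = True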

Once you restart all relevant services so that they pick up the new settings, you should have Neutron talking to VMware NSX correctly. To test if everything is working correctly, use the Neutron CLI to create a logical network:

neutron net-create test-network

In my environment (running OpenStack Havana and NSX 4.0.0), that produced output that looked like this:

Neutron CLI output

If all is working as expected, then you should see a matching logical switch listed in NSX Manager:

Logical switches in NSX Manager


It may be obvious, but you’ll note that the ID returned by the neutron net-create command matches the UUID listed in NSX Manager. You’ll also note that the os_tid tag assigned to the logical switch in NSX Manager matches the tenant ID of the tenant who owns the logical switch in OpenStack. Finally, you’ll note that the bound transport zone’s UUID will match the UUID you specified in nvp.ini as I outlined earlier.

That’s it—you now have VMware NSX integrated with OpenStack Neutron!

In the next post, I’ll revisit the topic of logical networking and logical switches within VMware NSX, something I first discussed fairly early in the series. Once I’ve reviewed some concepts and established a firm foundation, future posts will take a look at how to take advantage of a very cool feature within VMware NSX: the distributed logical router.

In the meantime, feel free to post any questions, clarifications, or thoughts in the comments below. Please include any vendor affiliations, where applicable; otherwise, all courteous comments are welcome!

