It’s that time again—time for community voting on sessions for the fall OpenStack Summit, being held in Paris this year in early November. I wanted to take a moment and share some of the sessions in which I’m involved and/or that I think might be useful. It would be great if you could take a moment to add your votes for the sessions.

My Sessions

I have a total of four session proposals submitted this year:

Congress Sessions

You may also be aware that I am involved with a project called Congress, which aims to bring an overarching policy service to OpenStack. Here are some sessions pertaining to Congress:

VMware Sessions

Arvind Soni, one of the product managers for OpenStack at VMware, kindly pulled together this list of VMware-related sessions, so feel free to have a look at any of these and vote on what sounds appealing to you.

Other Sessions

There are way too many sessions to list all the interesting ones, but here are a few that caught my eye:

There are a bunch more that looked interesting to me, but I’ll skip listing them all here—just hop over to the OpenStack site and vote for the sessions you want to see.


Many of you know that my wife, Crystal, has been organizing activities—aka Spousetivities—at VMworld since 2008. This year is no different, and registration is already open for activities at VMworld US 2014 in San Francisco. This year Crystal has another great set of activities planned, including:

  • The usual “Getting to Know You” breakfast, held again this year at Jillian’s
  • Mission Street food tour
  • Whale watching tour
  • City tour
  • Wine tour including custom wine blending and wine pairing
  • Kayaking on the Russian River

As in previous years, sponsors make these activities possible. This year, Spousetivities is sponsored by VMware NSX, Origami Owl, vBackup, The Virtualization Practice, VMUG, HyTrust, Xangati, and Wine Shop at Home. The support of these sponsors is what allows Crystal to get private transportation for participants and to keep prices as low as possible.

Here’s my ask:

  • If you’re attending VMworld and thinking of bringing a spouse, significant other, family member, or just a friend, please consider registering for Spousetivities.
  • If you’re someone traveling with a VMworld conference attendee, please consider registering for Spousetivities.
  • If you’re neither of the above, please consider helping to spread the word about Spousetivities: blog about it, tweet about it, post updates to Facebook…whatever you can do.

You can register for Spousetivities here.


In this post, I’m going to provide a very quick introduction to CoreOS. CoreOS, in case you haven’t heard of it, is a highly streamlined Linux distribution designed with containers, massive server deployments, and distributed systems/applications in mind.

CoreOS is built around a number of key concepts/technologies:

  1. The OS is updated as a whole, not package-by-package. CoreOS uses the Omaha protocol—initially engineered by Google for updating things like the Chrome browser and Chrome OS—to stay up-to-date with new versions. CoreOS also employs an active/passive dual root partition scheme, which allows it to run off one root partition while updating the other; the system then reboots onto the updated partition once an update is complete. If the system fails to boot from the updated partition, rebooting it again reverts to the known-good installation on the other partition.
  2. All applications run in containers. CoreOS provides out-of-the-box support for Docker containers. In fact, all applications on CoreOS run in containers. This enables separation of applications from the underlying OS and further streamlines the CoreOS update process (because applications are essentially self-contained).
  3. CoreOS leverages systemd. systemd is not unique to CoreOS; it is the new standard system and service manager for Linux. (Debian has elected to use systemd; Ubuntu will adopt systemd with 14.10, if I understand correctly; and Red Hat and related distributions already use systemd.) In CoreOS, systemd unit files are used not only for system services, but also for running Docker containers (see the sample unit file just after this list).
  4. CoreOS has a distributed key-value data store called etcd. The etcd distributed key-value data store can be used for shared configuration and service discovery. etcd uses a simple REST API (HTTP+JSON) and leverages the Raft consensus protocol. Docker containers on CoreOS are able to access etcd via the loopback interface, and thus can use etcd to do dynamic service registration or discovery, for example. etcd is also configurable via cloud-init, which means it’s friendly to deployment on many cloud platforms including OpenStack. More information on etcd is available via the etcd GitHub site.
  5. CoreOS supports deploying containers across a cluster using fleet. Fleet is another open source project that leverages etcd and systemd to deploy Docker containers (written as systemd unit files) across a cluster of CoreOS systems. See this page for more information on clustering with CoreOS and fleet.
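To make items #2 and #3 above a bit more concrete, here's a minimal sketch of what a systemd unit file for running a Docker container on CoreOS might look like. The unit, container, and etcd key names are purely illustrative, and I'm assuming the stock nginx image from the public Docker registry:

[Unit]
Description=Example web server container
After=docker.service
Requires=docker.service

[Service]
# Remove any stale container with the same name, then launch a new one
ExecStartPre=-/usr/bin/docker rm -f web1
ExecStart=/usr/bin/docker run --name web1 -p 80:80 nginx
# Register the service in etcd so other units/hosts can discover it
ExecStartPost=/usr/bin/etcdctl set /services/web1 '{ "port": 80 }'
ExecStop=/usr/bin/docker stop web1
ExecStopPost=/usr/bin/etcdctl rm /services/web1

In practice you'd probably also pull the image in an ExecStartPre line, but I've kept this example as small as possible. A unit file of this same sort, with a few added [X-Fleet] options, is what fleet schedules across a cluster.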

Taken individually—the use of a minimal Linux distribution, systemd support, the distributed key-value data store, Docker support, dual root partition w/ recoverable system updates, fleet—these technologies are interesting, but not all that revolutionary. Put them all together, however, and you have (in my opinion) a very interesting solution.

I’m quite intrigued with CoreOS and do plan on spending more time with it in the near future, so stay tuned for additional posts. In the meantime, if you’d like to see something specific about CoreOS or any related technologies, please speak up in the comments. I’ll do my best to satisfy your requests!


In this post, I’ll share with you how I installed the Docker plugin for OpenStack Heat, so that Heat is able to orchestrate the creation of Docker containers in an OpenStack environment. I’m publishing this because I found the default instructions to be a bit too vague to be helpful. By sharing my experience, I hope that others interested in using Docker in their OpenStack environment will benefit.

Here are the steps I used to make the Docker plugin work with Heat. These steps assume you are using Ubuntu and already have OpenStack Heat installed and working correctly:

  1. If you are using the packaged version of Heat (in other words, you are installing Heat via a method like apt-get install on Ubuntu), then you’ll want to use the “stable/icehouse” branch of Heat, which contains the matching Docker plugin. In this case, you don’t want to use master—it won’t work (either the plugin won’t load or the Heat engine service won’t start). Download a ZIP copy of the correct branch of Heat from GitHub (for “stable/icehouse”, see here).
  2. Extract the contrib/docker folder from the downloaded ZIP copy of Heat.
  3. Delete the contrib/docker/docker/tests directory; in my testing, the plugin failed to load if this directory was left in place.
  4. Copy the contrib/docker folder to your OpenStack controller somewhere. On my controller, I chose to put it into an existing /var/lib/heat directory. When you’re done, you should have a docker directory in your chosen destination, and that directory should contain another subdirectory named docker. For example, on my system, the full path to the plugin was /var/lib/heat/docker/docker. Make note of the full path.
  5. In the top-level docker folder, run pip install -r requirements.txt. Note that you might need to do an apt-get install python-pip first. This will install the docker-py Python module, which is required by the Docker plugin.
  6. Modify your Heat configuration file (typically found at /etc/heat/heat.conf) and add the full path of the Docker plugin to the plugin_dirs setting. If you used /var/lib/heat as the base directory for the plugin, then the full path should be /var/lib/heat/docker/docker. (See the example heat.conf excerpt just after this list.)
  7. Restart the Heat engine (via something like sudo service heat-engine restart or similar).
  8. Run heat resource-type-list and verify that DockerInc::Docker::Container is listed in the results. If not, verify that you have the correct path to the plugin specified in the Heat configuration file, and verify that you used the correct branch of the Docker plugin (“stable/icehouse” if you are using packaged versions of OpenStack). Review the Heat log files for any errors if the resource type still isn’t listed.
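In case it helps, here's roughly what the relevant portion of heat.conf should end up looking like after step 6, assuming you used the /var/lib/heat base directory as I did (adjust the path if you chose a different location):

# /etc/heat/heat.conf (excerpt)
[DEFAULT]
plugin_dirs = /var/lib/heat/docker/docker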

Assuming you were successful, then you are ready to start deploying Docker containers via Heat. Stay tuned for an example Heat template that shows how to deploy a Docker container. Until then, feel free to share any corrections, clarifications, or questions in the comments below.


(This is a repost of an announcement from the Spousetivities web site. I wanted to include it here for broader coverage. —Scott)

For seven years, Spousetivities has been fortunate to be part of the VMware/VMworld community. Since 2008, we’ve been the only community-focused and community-driven spouse activities program, and it’s been an honor. Spousetivities exists thanks to the support of the community. However, Spousetivities also exists to provide support back to that same community.

Last week, a member of our community was tragically taken from us. Jim Ruddy died in a car accident, leaving behind his wife Stephanie and their children. This is a horrible loss, and the community continues to mourn. (My husband, Scott, worked with Jim at EMC for a number of years, as did many others.) In honor of Jim and to support the family he left behind, I worked with other members of the community to establish the Jim Ruddy Memorial Fund. As of this writing, that fund had raised over $15,000 to help support Stephanie and the kids in this very trying time.

No amount of money can replace Jim. However, this is a difficult time for Stephanie—not only emotionally and physically, but also financially. For that reason, Spousetivities is setting aside 10% of all proceeds raised by activities at VMworld 2014 to be donated to Jim Ruddy’s family via the Jim Ruddy Memorial Fund.

If you haven’t donated to the Jim Ruddy Memorial Fund yet, please consider doing so. If you (or your spouse/partner/significant other) are participating in Spousetivities at VMworld this year, please know that your participation also helps a family in their time of need.

Being part of the community means giving back to the community.


Long-time readers know that since VMworld 2008 in Las Vegas, my wife Crystal has been organizing spouse activities—aka “Spousetivities”—for spouses, partners, family members, and friends traveling with VMworld conference attendees. This is a true labor of love for Crystal; she works tirelessly to gather sponsors, organize activities, and handle the logistics for the events. Although registration is opening a bit later than usual this year (Crystal’s been sick and in the hospital recently), registration for Spousetivities at VMworld 2014 in San Francisco is now open.

Here’s a quick peek at some of the activities she’s organized:

  • The annual “Get to Know You” breakfast, where Spousetivities participants get to meet the other folks they’ll be hanging out with for the week, as well as possibly win some great prizes
  • Mission Street food tour
  • Kayaking on the Russian River
  • Wine tours (Crystal’s still sorting details on this one, so tickets aren’t available yet—but check back often!)
  • Whale watching

Crystal’s also working on adding some additional activities; as details for those become available, she’ll let the community know.

Although this is a labor of love for Crystal, it takes sponsors to make it happen. This year, VMworld’s first and only community-driven and community-focused spouse activities are brought to you by these sponsors:

If you appreciate the fact that your loved ones will have someone to watch out for them and something to do while you’re at the conference this year, please let the Spousetivities sponsors know. Visit their booths, or give them a shout-out on Twitter or Facebook.

Interested in signing up? Registration for Spousetivities is available here. Interested in being a sponsor? Drop me an e-mail (see my About page) and I’ll get you connected with more information.


This post describes a fix I found for an issue I'd been having when booting KVM guest domains on the Ubuntu/KVM hypervisors in my home lab. I'd been struggling with this problem for quite some time, but only recently found what I believe to be the final fix.

First, allow me to provide a bit of background. Some time ago—I’d say around August 2012, when I left the vSpecialist team at EMC to join an OpenStack-focused team in another part of EMC—I moved my home lab over completely to Ubuntu 12.04 LTS with the KVM hypervisor. This was an important step in educating myself on Linux, KVM, libvirt, and Open vSwitch (OVS), all of which are critical core components in most installations of OpenStack.

Ever since making that change—particularly after adding some new hardware, a pair of Dell C6100 servers, to my home lab—I would experience intermittent problems booting a KVM guest. The guest would appear to boot properly, but then hang shortly after a message about activating swap space and fsck reporting that the file system was clean. Sometimes, rebooting the guest would work; many times, rebooting the guest didn’t work. Re-installing the guest sometimes worked, but sometimes it didn’t. There didn’t appear to be any consistency with regard to the host (the issue occurred on all hosts) or guest configuration. The only consistency appeared to be with Ubuntu, as virtually (no pun intended) all my KVM guests were running Ubuntu.

Needless to say, this was quite frustrating. I tried all the troubleshooting I could imagine—deleting and recreating swap space, manually checking the file system(s), various different installation routines—and nothing seemed to make any difference.

Finally, just in the last few weeks, I stumbled across this page, which indicated that adding “nomodeset” to the grub command line fixed the problem. This was a standard part of my build (it kept the console from getting too large when using VNC to connect to the guest), but making that change required that I be able to successfully boot the VM first. I’d noted that once I had been able to successfully boot a guest and add “nomodeset” to the grub configuration, I didn’t have any further issues with that particular guest; however, I explained that away by assuming the intermittent boot issue was some sort of first-time boot problem.

In any case, that page linked to this ServerFault entry, which also indicated that the use of “nomodeset” helped fix some (seemingly) random boot problems. The symptoms described there—recovery mode worked fine, booting normally after booting into recovery mode resulted in an “initctl: event failed” error—were consistent with what I’d been seeing as well.

So, I took one of the VMs that was experiencing this problem, booted it into recovery mode, edited the /etc/default/grub file to include “nomodeset” on the GRUB_CMDLINE_LINUX_DEFAULT line, and rebooted. The KVM guest booted without any issues. Problem fixed (apparently).
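For reference, the relevant line in /etc/default/grub ends up looking something like this (your GRUB_CMDLINE_LINUX_DEFAULT line may already carry other parameters; just add nomodeset to the list). Note that on Ubuntu you'll also need to run update-grub so the change is written into the actual GRUB configuration:

# /etc/default/grub (excerpt)
GRUB_CMDLINE_LINUX_DEFAULT="nomodeset"

# Regenerate the GRUB configuration, then reboot
sudo update-grub
sudo reboot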

Thus far, this has fixed the intermittent boot issue on every KVM guest I’ve tried, so I’m relatively comfortable recommending it as a potential change you should explore if you experience the same problem/symptoms. I can’t guarantee it will work, but it has worked for me so far.

Good luck!


This is part 15 of my Learning NSX blog series, in which I will spend some time diving a bit deeper into some of the components involved in the logical routing process I described in part 14. Specifically, I’ll be taking a deeper look at gateway appliances, gateway services, and logical routers, and the relationships among these various components.

If you haven’t read any of the prior posts in this series, it would be ideal to read all of them before continuing; you can find links on my Learning NVP/NSX page. In particular, I’d suggest reading part 6 (on adding a gateway appliance), part 9 (on adding a gateway service), and part 14 (on logical routing and logical routers).

Just for the sake of completeness and to reinforce what was introduced in those posts I referenced, let’s start with some terminology:

  • Gateway (or gateway appliance): When I use the terms gateway or gateway appliance, I’m referring to the NSX software gateway that acts as the “on-ramp/off-ramp” to and from logical networks. What makes this confusing is that we also use the term “gateway” (in particular, “IP gateway” or “default gateway”) to refer to a Layer 3 router that acts as the next hop for a system. I’ll do my best to clearly distinguish between these two uses.
  • Gateway service: A gateway service is a logical construct within NSX that allows you to group together multiple gateway appliances. For example, in an L2 gateway service, you can combine two gateway appliances so that you have redundancy in providing L2 bridging functionality between a logical network and a physical network. In an L3 gateway service, you can combine up to 10 gateway appliances together for redundancy and scale-out performance.
  • Logical router: As you might recall from part 14, a logical router is a logical construct within NSX that provides Layer 3 routing functionality, typically (but not always) on a per-tenant basis.

I have a few more terms I’ll introduce in this post, but that should be enough for now.

This diagram contains the bulk of what I’d like to discuss in this post—the relationship between gateway services, gateway appliances, and logical routers:

As I walk you through the details of this diagram, hopefully I’ll clarify the relationships between these components.

  • In this example, there are four gateway appliances combined into a single Layer 3 gateway service. As illustrated in the diagram, gateway services can contain more than one gateway appliance (the minimum recommended is two, for reasons to be explained shortly). Gateway services may be either Layer 2 (bridging/switching) or Layer 3 (routing), but not both.
  • A gateway appliance may be a member of only one gateway service at a time; therefore, a gateway appliance is either L2 or L3, but not both.
  • When adding a gateway appliance to a gateway service, the administrator or operator has the ability to specify a failure zone ID. The idea behind the failure zone ID is to help model fault domains within a single gateway service. For example, if GW Appliance 1 sits in a different fault domain—say, a different rack—then the administrator or operator could assign it a different failure zone ID to reflect that. The significance of this functionality will be made clear in a moment.
  • Note that gateway services, gateway appliances, and failure zone IDs are not visible to tenants. Further, the configuration or management of these entities is handled through NSX (via API or NSX Manager), and isn’t tenant-specific. The CMP—OpenStack, for example—doesn’t get involved here.
  • The example diagram shows four different logical routers spread across three tenants. Each of these logical routers acts as an IP gateway (default gateway/default route) for the associated (or connected) logical network(s). Thus, a logical router is visible to a tenant.
  • Creating, managing, and configuring logical routers is handled by the CMP. With OpenStack, for example, you’d use the OpenStack Dashboard or the Neutron command-line client.
  • For redundancy, you’ll note that each logical router is instantiated on two different gateway appliances within the gateway service (which is why a minimum of two gateway appliances per gateway service is recommended). This is completely invisible to the tenant and is handled automatically by NSX. If failure zone IDs—indicating different fault domains—are configured on the gateway appliances, then NSX will instantiate the logical router on gateway appliances in different failure zones. This is an attempt to minimize downtime by spreading the logical router across fault domains.

So far, everything I’ve shared with you has been true for centralized logical routers. For distributed logical routers, things are only slightly different. Distributed logical routers are normally instantiated on the hypervisors; a gateway service and its associated gateway appliances only get involved when you set the uplink for the distributed logical router (using the “Set Gateway” button in OpenStack Dashboard, for example). If you never set an uplink for the logical router, it will remain instantiated only on the hypervisors, and not on the gateway service/gateway appliances.

I hope this information helps in understanding the routing aspects of VMware NSX. Feel free to post any questions, clarifications, or thoughts in the comments below. Any input on other topics you’d like to see in the Learning NSX blog series is welcome as well!


In April of this year, we started a series of articles at Network Heresy on the topic of policy in the data center. The first of these articles, which I mentioned in this post, focused on the problem of policy in the data center. This was a great introduction to the need for policy and the challenges with the current ways of addressing policy in the data center.

A short while ago, we published the second of our series on policy, titled “On Policy in the Data Center: The solution space”. This post describes the key features/functionality that a policy system must have to address the challenges identified in part 1 of the series. In a nutshell (I highly recommend you go read the full article), these key areas include:

  • The sources from which policy is derived
  • The language(s) used to express policy
  • The way policy systems interact with data center services
  • The actions a policy system can take

I really liked this statement from the article (this is in reference to how a policy system interacts with other services in the data center):

A policy system by itself is useless; to have value, the policy system must interact and integrate with other data center or cloud services.

The relationship between a policy system and the ecosystem of data center services with which it interacts is so critical. Having a policy system is great, but if the policy system can’t be integrated with other data center or cloud services, then it’s not very useful, is it?

Go have a look at the second post in the series on policy in the data center and feel free to join in the conversation. You can leave comments here or at the Network Heresy site.


Welcome to part 14 of the Learning NSX blog series, in which I discuss the ability of VMware NSX to do Layer 3 routing in logical networks. This post also includes a look at a very cool feature within VMware NSX known as distributed logical routing, examined within the context of an OpenStack environment that’s been integrated with VMware NSX. (Although NSX isn’t necessarily tied to OpenStack, I’ll assume you’re using OpenStack just to simplify the discussion.)

If you’re new to this series, you can find links to all the articles on my Learning NVP/NSX page. Ideally, I’d recommend you read all the articles, but if you’re just interested in some of the high-level concepts you probably don’t need to do that. For those interested in the deep technical details, I’d suggest catching up on the series before proceeding.

Overview of Logical Routing

One of the features of VMware NSX that can be useful, depending on customer requirements, is the ability to create complex network topologies. For example, creating a multi-tier network topology like the one shown below is easily accomplished via VMware NSX:

Sample network topology

Note that this topology has two tenant-specific routing entities—these are logical routers. A logical router is an abstraction created and maintained by VMware NSX on behalf of your cloud management platform (like OpenStack, which I’ll assume you’re using here). These logical entities perform the routing process just like a physical router would (forwarding traffic based on a routing table, changing the source and destination MAC address, maintaining an ARP cache of MAC addresses, decrementing the TTL, etc.). Of course, they are not exactly the same as physical routers; you can’t, for example, connect two logical routers directly to each other.

Logical routers also act as the logical boundary between one or more logical networks and an external network. Logical routers can be connected to multiple logical networks (each logical network with its own logical router interface), but can only be connected to a single external network. Thus, you can’t use a logical router as a transit path between two external networks (two VLANs, for example).

Now that you have a good understanding of logical routing, let’s take a closer look at the various components inside VMware NSX.

Components of Logical Routing

The components are pretty straightforward. In addition to the logical router abstraction that I’ve discussed already, you also have logical router ports (naturally, these are the ports on a logical router that connect it to a logical network or an external network), network address translation (NAT) rules (for handling address translation tasks), and a routing table (for…well, routing).

You can see all of these components in NSX Manager. Once you’re logged into NSX Manager, select Network Components > Logical Layer > Logical Routers, then click on a specific logical router from the list. This will display the screen shown below:

Logical router detail in NSX Manager

A few things to note here:

  • You’ll note that the logical router has a port whose attachment is listed as “L3GW”. This denotes an attachment to a Layer 3 Gateway Service, an entity I described in part 9 of the series. This Layer 3 Gateway Service is itself comprised of two NSX gateway appliances; part 6 in the series discussed how to add a gateway appliance to your installation. The relationship between logical router, Layer 3 Gateway Service, and gateway appliance can be confusing for some; I plan to discuss that in more detail in the next post.
  • This particular logical router is not configured as a distributed logical router. This means that the actual routing function resides on a Layer 3 Gateway Service. The routing functionality is instantiated in a highly available configuration on two different gateway appliances within the Layer 3 Gateway Service.
  • NAT Synchronization is set to on; this refers to keeping NAT state synchronized between the active and standby routing functions instantiated on the gateway appliances.
  • As noted under Replication Mode, this router uses an NSX service node (refer to part 10 for more details on service nodes) for packet replication/BUM traffic.
  • You might notice that one of the logical router ports is assigned the IP address 169.254.169.253 (and you’ll also note a corresponding “no NAT” rule and routing table entries for that same network). Astute readers recognize this as the network for Automatic Private IP Addressing (APIPA), also known as IPv4 Link-Local Addresses per RFC 3927. This exists to support an OpenStack-specific feature known as the metadata service, and is created automatically by OpenStack. (I’ll talk more about OpenStack later in this post.)

All of these components and settings are accessible via the NSX API, and since NSX Manager is completely an API client (it merely consumes NSX APIs and does not provide standalone functionality outside of some logging features), you could create, modify, and delete any of the logical routing components directly within NSX Manager. (Or, if you were so inclined, you could make the API calls yourself to do these tasks.) Typically, though, these tasks would be handled via integration between NSX and your cloud management platform, like OpenStack.

One key component of NSX’s logical routing functionality that you can’t see in NSX Manager is how the routing is actually implemented in the data plane. As with most features in NSX, the actual data plane implementation is handled via Open vSwitch (OVS) and a set of flow rules pushed down by the NSX controllers. These flow rules control the flow of traffic within and between logical networks (logical switches in NSX). You can see some of the flow rules in OVS using the ovs-dpctl dump-flows command, which will produce output something like what’s shown in this screenshot (note that the addresses are highlighted because I used grep to show only the flows matching a certain IP address):

List of flows in OVS

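For reference, the command I used to produce that sort of filtered output looks something like this (the IP address is just an example):

sudo ovs-dpctl dump-flows | grep 192.168.1.100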

These flow rules include actions like re-writing source and destination MAC addresses and decrementing the TTL, both tasks carried out by “normal” routers when routing traffic between networks. These flow rules also provide some insight into the differences between a logical router and a distributed logical router. While both are logical entities, the way in which the data plane is implemented is different for each:

  • For a logical router, the flow rules will direct traffic to the appropriate gateway appliance in the Layer 3 Gateway Service. The logical router is actually instantiated on a gateway appliance, so all routed traffic must go to the logical router, get “routed” (routing table consulted, source and destination MAC re-written, TTL decremented, NAT rules applied, etc.), then get sent on to the final destination (which might be a VM on a hypervisor in NSX or might be a physical network outside of NSX).
  • For a distributed logical router, the flow rules will direct traffic either to the appropriate gateway appliance in the Layer 3 Gateway Service or to the destination hypervisor directly. Why the “either/or”? If the traffic is north/south traffic—that is, traffic being routed out of a logical network onto the physical network—then it must go to the gateway appliance (which, as I have mentioned before, is where traffic is unencapsulated and placed onto the physical network). However, if the traffic is east/west traffic—traffic that is moving from one server on a logical network to another server on a logical network—then the traffic is “routed” directly on the source hypervisor and then sent across an encapsulated connection to the hypervisor where the destination VM resides.

In both cases, there is only one logical router. For a non-distributed logical router, the data plane is instantiated on a gateway appliance only. For a distributed logical router, the data plane is instantiated both on the local hypervisors as well as on a gateway appliance. (This is assuming you’ve set an uplink on the logical router, meaning you have a north/south connection. If you haven’t set an uplink, then the routing functionality is instantiated on the hypervisors only.)

This should provide a good overview of how logical routing is implemented in VMware NSX, but there’s one more aspect I want to cover: logical routers in OpenStack with NSX.

Logical Routers in OpenStack

As you work with OpenStack Networking—Neutron, as it’s commonly called—you’ll find that the abstractions Neutron uses map really well to the abstractions that NSX uses. So, to create a logical router in NSX, you just create a logical router in OpenStack. Attaching an OpenStack logical router to a logical network tells NSX to create the logical switch port, create the logical router port, and connect the two ports together.

In OpenStack, there are a number of different ways to create a logical router:

  • OpenStack Dashboard (Horizon)
  • Command-line interface (CLI)
  • OpenStack Orchestration (Heat) template
  • API calls directly

When using the web-based Dashboard user interface, you can only create centralized logical routers, not distributed logical routers. The Dashboard UI also doesn’t provide any way of knowing if a logical router is distributed or not; for that, you’ll need the CLI (the command is provided shortly).

On a system with the neutron CLI client installed, you can create a logical router like this:

neutron router-create <router name>

This creates a centralized logical router. If you want to create a distributed logical router, it’s as simple as this:

neutron router-create <router name> --distributed True

The neutron router-show command will return output about the specified logical router; that output will tell you if it is a distributed logical router.

The neutron CLI client also offers commands to update a logical router’s routing table (to add or remove static routes, for example), or to connect a logical router to an external network (to set an uplink, in other words).
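For example, commands along these lines (the names and addresses are placeholders) will attach a logical router to a tenant logical network, set its uplink to an external network, and add a static route:

# attach the router to a logical network (by subnet)
neutron router-interface-add <router name> <subnet name or ID>

# set the router's uplink (external gateway)
neutron router-gateway-set <router name> <external network name or ID>

# add a static route to the router's routing table
neutron router-update <router name> --routes type=dict list=true destination=192.168.100.0/24,nexthop=10.1.1.1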

If you want to create a logical router as part of a stack created via OpenStack Orchestration (Heat), you could use a YAML snippet along these lines in a HOT-formatted template to create a distributed logical router:
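heat_template_version: 2013-05-23

resources:
  router0:
    type: OS::Neutron::Router
    properties:
      name: router0
      # value_specs passes extra, plugin-specific attributes straight
      # through to Neutron; here it requests a distributed router
      value_specs:
        distributed: true

(The resource name here is just an example; the interesting part is the distributed attribute passed through via value_specs.)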

OpenStack Heat also offers resource types for setting the router’s external gateway and creating router interfaces (logical router ports). If you aren’t familiar with OpenStack Heat, you might find this introduction useful.

That wraps up this post on logical routing with VMware NSX. As always, I welcome your courteous feedback, so feel free to speak up in the comments below. In the next post, I’ll spend a bit of time discussing logical routers, gateway services, and gateway appliances. See you next time!

