OpenStack

Welcome to Technology Short Take #44, the latest in my irregularly-published series of articles, links, ideas, and thoughts about various data center-related technologies. Enjoy!

Networking

  • One of the original problems with the VXLAN IETF specification was that it (deliberately) didn’t include any control plane information; as a result, the process of mapping MAC addresses to VTEPs (VXLAN Tunnel Endpoints) wasn’t defined, and the early implementations relied on multicast to handle this. To help resolve the issue, Cumulus Networks (and possibly Metacloud; I’m not sure of their involvement yet) has released an open source project called vxfld. As described in this Metacloud blog post, vxfld is designed to “handle VXLAN traffic from any operating system or hardware platform that adheres to the IETF Internet-Draft for VXLAN”.
  • Nir Yechiel recently posted part 1 of a discussion on the need for network overlays. This first post is more of a discussion of why VLANs and VLAN-based derivatives aren’t sufficient, and why we should be looking to routing (layer 3) constructs instead. I’m looking forward to part 2 of the series.
  • One ongoing discussion in the network industry these days—or so it seems—concerns the interaction between network overlays and the underlying transport network. Some argue that tight integration is required; others point to streaming video services and VoIP running across the Internet and insist that no integration or interaction is needed. In this post, Scott Jensen argues in favor of the former—that SDN solutions shouldn’t just manage network overlays, but should also manage the configuration of the physical transport network. I’d love to hear from more networking pros (please disclose company affiliations) about their thoughts on this matter.
  • I like the distinction made here between network automation and SDN.
  • Need to get a better grasp on OpenFlow? Check out OpenFlow basics and OpenFlow deep-dive.
  • Here’s a write-up on connecting Docker containers using VXLAN. I think there’s a great deal of promise for OVS in containerized environments, but what’s needed is better/tighter integration between OVS and container solutions like Docker.

Servers/Hardware

  • Is Intel having second thoughts about software-defined infrastructure? That’s the core question in this blog post, which explores the future of Intel in a software-defined world and the increasing interest in non-x86 platforms like ARM.
  • On the flip side, proponents who claim that platforms like ARM and others are necessary in order to move forward with SDN and NFV initiatives should probably read this article on 80 Gbps performance from an off-the-shelf x86 server. Impressive.

Security

  • It’s good to see that work on OpenStack Barbican is progressing nicely; see this article for a quick overview of the project and an update on its status.

Cloud Computing/Cloud Management

  • SDN Central has a nice write-up on the need for open efforts in the policy space, which includes the Congress project.
  • The use of public cloud offerings as disaster recovery targets is on the rise; note this article from Microsoft on how to migrate on-premises workloads to Azure using Azure Site Recovery. VMware has a similar offering via the VMware vCloud Hybrid Service recovery-as-a-service offering.
  • The folks at eNovance have a write-up on multi-tenant Docker with OpenStack Heat. It’s an interesting write-up, but not for the faint of heart—to make their example work, you’ll need the latest builds of Heat and the Docker plugin (it doesn’t work with the stable branch of Heat).
  • Preston Bannister took a look at cloud application backup in OpenStack. His observations are, I think, rational and fair, and I’m glad to see someone paying attention to this topic (which, thus far, I think has been somewhat ignored).
  • Interested in Docker and Kubernetes on Azure? See here and here for more details.
  • This article takes a look at Heat-Translator, an effort designed to provide some interoperability between TOSCA and OpenStack HOT documents for application deployment and orchestration. The portability of orchestration resources is one of several aspects you’ll want to examine as you progress down the route of fully embracing a cloud computing operational model.

Operating Systems/Applications

  • Looks like we have another convert to Markdown—Anthony Burke recently talked about how he uses Markdown. Regular readers of this site know that I do almost all of my content generation using MultiMarkdown (a variation of Markdown with some expanded syntax options). Here’s a post I recently published on some useful Markdown tools for OS X.
  • Good to see that Ivan Pepelnjak thinks infrastructure as code makes sense. I guess that means the time I’ve spent with Puppet (you can browse Puppet-related posts here) wasn’t a waste.
  • I don’t know if I’ve mentioned this before (sorry if that’s the case), but I’m liking this “NIX4NetEng” series going on over at Nick Buraglio’s site (part 1, part 2, and part 3).
  • Mike Foley has a blog post on how to go from zero to Windows domain controller in only 4 reboots. Handy.

Storage

Virtualization

  • Running Hyper-V with Linux VMs? Ben Armstrong details what versions of Linux support the various Hyper-V features in this post.
  • Here’s a quick write-up on running VMs with VirtualBox 4.3 on a headless Ubuntu 14.04 LTS server.
  • Nested OS X guest on top of nested ESXi on top of VMware Fusion? Must be something William Lam’s tried. Go have a look at his write-up.
  • Here’s a quick update on Nova-Docker, the effort in OpenStack to allow users to deploy Docker containers via Nova. I’m not yet convinced that treating Docker as a hypervisor in Nova is the right path, but we’ll see how things develop.
  • This post is a nice write-up on the different ways to connect a Docker container to a local network.
  • Weren’t able to attend VMworld US in San Francisco last week? No worries. If you have access to the recorded VMworld sessions, check out Jason Boche’s list of the top 10 sessions for a priority list of what recordings to check out. Or need a recap of the week? See here (one of many recap posts, I’m sure).

That’s it this time around; hopefully I was able to include something useful for you. As always, all courteous comments are welcome, so feel free to speak up in the comments. In particular, if there is a technology area that I’m not covering (or not covering well), please let me know—and suggestions for more content sources are certainly welcome!

In this post, I’ll share a simple template for deploying Docker containers in an OpenStack environment using Heat. Given that Docker is targeted at application deployment, I felt that using Heat was a more appropriate way of leveraging Docker in an OpenStack environment than treating Docker as a form of hypervisor. Later in this post, I’ll compare this approach to using a more container-aware solution such as fleet.

I assume you’re already familiar with OpenStack Heat and Docker. If you aren’t, take a look at these articles first:

Prerequisites

Before you can actually use Heat to orchestrate Docker containers, there are some prerequisites you’ll need to address first:

  1. You’ll need to have the Docker plugin for Heat installed. This can be tricky; see here for some instructions that worked for me. To verify that the Docker plugin is working as expected, run heat resource-type-list and check the output for “DockerInc::Docker::Container”. If that resource type is included in the output, then the Docker plugin is working as expected.
  2. Any Docker hosts you’re running must have Docker configured to listen on a network-accessible socket. I was running CoreOS in my environment, so I followed the instructions to make Docker on CoreOS listen on a TCP socket. (In case the link doesn’t take you to the right section, see the section titled “Enable the Remote API on a New Socket.”) In my case, I selected TCP port 2345. Make note of whatever port you select, as you’ll need it in your template. (An example cloud-config snippet follows this list.)
  3. Any Docker hosts that will be orchestrated by Heat must have an IP address assigned that is reachable from the server where Heat is running (typically the cloud controller). In my case, I use Neutron with NSX, so I had to assign floating IPs to the instances with which Heat would be communicating.
  4. You’ll need to be sure that the TCP port you select for Docker to use (I used TCP port 2345) is accessible, so modify any security groups assigned to the instances to allow inbound TCP traffic on that port from the appropriate sources.
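
As a point of reference for item 2, here is a rough sketch of the kind of cloud-config snippet the CoreOS documentation describes for enabling the remote API on a TCP socket. The port matches the 2345 I used; treat the unit contents as an approximation and check the current CoreOS docs before relying on it:

```yaml
#cloud-config
coreos:
  units:
    # Systemd socket unit that makes the Docker daemon listen on TCP port 2345
    - name: docker-tcp.socket
      command: start
      enable: true
      content: |
        [Unit]
        Description=Docker socket for the remote API

        [Socket]
        ListenStream=2345
        BindIPv6Only=both
        Service=docker.service

        [Install]
        WantedBy=sockets.target
```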

Once these prerequisites are addressed—Docker plugin installed and working, Docker listening on a TCP port, instance reachable from cloud controller on selected TCP port—then you’re ready to go.

Template for Docker Orchestration

Here is a sample template that will create a Docker container on an existing instance:
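
A minimal version might look something like this; the resource name is arbitrary, and the IP address and port in docker_endpoint are placeholders for the floating IP and TCP port from your own environment:

```yaml
heat_template_version: 2013-05-23

description: >
  Launch a single Docker container (running Nginx) on an
  existing Docker host via the Heat Docker plugin.

resources:
  nginx_container:
    type: DockerInc::Docker::Container
    properties:
      # Docker image to run
      image: nginx
      # Protocol, IP address (floating IP of the Docker host), and
      # TCP port on which the Docker daemon is listening
      docker_endpoint: 'tcp://192.168.1.200:2345'
```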

As I said, this is pretty simple. The image property is the name of the Docker image you want to use; in this case, I’m using an image containing the popular Nginx web server. The docker_endpoint property should be a URL that specifies the protocol (TCP), IP address (in my case, a floating IP address assigned to the instance), and the port number on which the Docker daemon is listening. Note that the format for this property isn’t documented anywhere I’ve found.

In the “stable/icehouse” branch of the Docker plugin (required if you’re using distro packages for your OpenStack installation, as I am), there are some additional properties available as well. Unfortunately, without any documentation on what these properties should look like, I was unable to get the template to work with any of them included. In particular, the port_specs property, which controls how ports in a Docker container are exposed to the outside world, would have been very useful—but I couldn’t make it work either. If anyone has information on the exact syntax and format for the port_specs property in the “stable/icehouse” branch of the plugin, please speak up in the comments.

Naturally, you could embed this portion of YAML code into a larger HOT-formatted template that also launched instances, created Neutron networks, attached the instances to Neutron networks, created a logical router, and mapped a floating IP address to the instance. I leave the creation of such a template as an exercise for the reader, but I will point out that I’ve already shared with you almost all the pieces necessary to do exactly that. (See the blog posts I provided earlier.)

Summary

I mentioned at the start of this post that I’d provide some comparison to other methods for deploying containers in an automated fashion. With that in mind, here are a few points you’ll want to consider:

  • There is no container scheduling in this solution. Containers are statically mapped to a container host (the VM instance, in this case, although this could be a bare metal host running Docker as well). Other solutions, like fleet, at least let you just point to a cluster of systems instead of a specific system. (See this write-up on fleet for more information.)
  • Docker must be listening on a TCP socket. This isn’t Docker’s default configuration, so this is an additional change that must be incorporated into the environment. Fleet doesn’t have this requirement, although other solutions such as Mesos might (I haven’t tested any other solutions—yet.)
  • There is very little documentation available right now. Note that this may be true for other solutions as well (this entire space is relatively new and growing/evolving rapidly). Regardless, until someone can at least figure out how to expose Docker containers to the network via a Heat template, this isn’t very useful.

My initial assessment is that OpenStack needs container scheduling, not static assignment, in order for Docker integration into OpenStack to be truly useful. Proponents of the Nova-Docker approach (treating Docker as a hypervisor and Docker images as Glance images) point to their approach as superior because of the integration of Nova’s scheduling functionality. It will be interesting to see how things develop on this front.

If you have any questions, have more information to share, or have corrections or clarifications to any of the information presented here, please speak up in the comments.

In this post, I’m going to illustrate one way to deploy CoreOS on OpenStack using Heat. By no means is this intended to be seen as the only way to use Heat to deploy CoreOS, but rather as one way of using Heat to deploy CoreOS. I’m publishing this in the hopes that others will be able to use this as a building block for their own deployments.

If you aren’t already familiar with OpenStack Heat or CoreOS, you might want to take a moment and refer to these introductory posts for some foundational information:

Moving forward, OpenStack Heat is trying to standardize on OpenStack resource types (like OS::Nova::Server) and the HOT format (using YAML). Therefore, the Heat template I’m presenting here will use OpenStack resource types and YAML. Note that it’s certainly possible to do this using CloudFormation (CFN) resource types and JSON formatting. I’ll leave the conversion of the template found here into CFN/JSON as an exercise for the readers.

Here’s the example Heat template you can use to deploy and customize CoreOS on OpenStack:
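
The shape of the template is roughly as follows. Note that the line numbers called out in the walkthrough below won’t line up exactly with this condensed sketch (the corresponding values are marked with comments instead), and the bracketed values, the flavor, and the instance name are all placeholders for your environment:

```yaml
heat_template_version: 2013-05-23

description: Deploy a single CoreOS instance via Heat

resources:
  instance0_port0:
    type: OS::Neutron::Port
    properties:
      network_id: <Neutron network ID>              # the "line 9" value
      security_groups:
        - <security group ID>                       # the "line 22" value

  instance0:
    type: OS::Nova::Server
    properties:
      name: heat-coreos-1
      image: <CoreOS Glance image ID>               # the "line 14" value
      flavor: m1.small
      key_name: <SSH key name>                      # the "line 31" value
      networks:
        - port: { get_resource: instance0_port0 }   # the "line 30" value
      user_data_format: RAW
      user_data: |
        #cloud-config
        coreos:
          etcd:
            # etcd discovery URL containing the cluster token (the "line 37" value)
            discovery: <etcd discovery URL>
            addr: $private_ipv4:4001
            peer-addr: $private_ipv4:7001
          units:
            - name: etcd.service
              command: start
            - name: fleet.service
              command: start
```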

Let’s walk through this template real quick:

  • On line 9, you’ll need to provide the ID for the Neutron network to which the new CoreOS instance(s) should connect. You can get this a couple of different ways; running neutron net-list is one way.
  • On line 14, you’ll need to supply the ID for the CoreOS image you’ve uploaded into Glance. Again, there are multiple ways to obtain this; running glance image-list is one way of getting that information.
  • On line 22, replace the text (including the “<” and “>” symbols) with the ID of the security group you want applied to the CoreOS instance(s) being deployed. The neutron security-group-list command can give you the information you need to put here.
  • On line 31, supply the name of the SSH key you want to inject into the instance(s).
  • On line 37, you’ll need to generate a unique cluster ID to place here for the configuration of etcd within the CoreOS instance(s). You can generate a new ID (also called a token) by visiting https://discovery.etcd.io/new. That will return another URL that contains the new etcd cluster token. Supply that token here to create a new etcd cluster out of the CoreOS instance(s) you’re deploying with this template. (A quick example of generating a token follows this list.)
  • This template only deploys a single CoreOS instance. To deploy multiple CoreOS instances, you’ll need a separate OS::Neutron::Port and OS::Nova::Server resource for each instance. For each Neutron port, you can reference the same security group ID and network ID. For each instance, you can reference the same Glance image ID, same SSH key, and same etcd cluster token; the only thing that would change with each instance is line 30. Line 30 should point to a unique Neutron port resource created for each instance (something like instance1_port0, instance2_port0, etc.).
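
Generating the etcd token is just a single HTTP request; the URL that comes back (the token shown here is made up) is what you drop into the template:

```sh
$ curl https://discovery.etcd.io/new
https://discovery.etcd.io/4bd9b5a6c5b8c7a2e0f1d3a9b8c7d6e5
```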

Now, there are obviously lots of other things you could do here—you could create your own Neutron network to host these CoreOS instances, you could create a logical router to provide external connectivity (which is required, by the way, in order for the etcd cluster token discovery to work correctly), and you could create and assign floating IPs to the instances. Examples of some of these tasks are in the articles I provided earlier; others are left as an exercise for the reader. (Or I’ll write up something later. We’ll see.)

Once you have your template, you can deploy the stack using Heat, and then—after your CoreOS cluster is up and running—begin to deploy applications to the cluster using tools like fleet. That, my friends, is another story for another day.

Any questions? Corrections? Clarifications? Feel free to start (or join) the discussion below. All courteous comments are welcome.

A new startup emerged from stealth today, a company called Platform9. Platform9 was launched by a group of VMware veterans with the goal of making it easy for companies to consume their existing infrastructure in an agile, cloud-like fashion. Platform9 seeks to accomplish this by offering a cloud management platform that is itself provided as a cloud-based service—hence the name of this post, “cloud-hosted cloud management.”

It’s an interesting approach, and it certainly helps eliminate some of the complexity that organizations face when implementing their own cloud management platform. For now, at least, that is especially true for OpenStack, which newcomers to the popular open source cloud management environment can find notoriously difficult to deploy and operate. With Platform9 offering an OpenStack API-compatible service, organizations that want a more “public cloud-like” experience can get it without all the added hassle.

The announcements for Platform9 talk about support for KVM, vSphere, and Docker, though the product will only GA with KVM support (support for vSphere and Docker are on the roadmap). Networking support is also limited; in the initial release, Platform9 will look for Linux bridges with matching names in order to stitch together networks. However, customers will get an easy, non-disruptive setup with a nice set of dashboards to help show how their capacity is being utilized and allocated.

It will be interesting to see how things progress for Platform9. The idea of providing cloud management via a SaaS model (makes me think of “cloud inception”) is an interesting one that does sidestep many adoption hurdles, though questions of security, privacy, confidentiality, etc., may still hinder adoption in some environments.

Thoughts on Platform9? Feel free to speak up in the comments below. All courteous comments are welcome!

Welcome to Technology Short Take #43, another episode in my irregularly-published series of articles, links, and thoughts from around the web, focusing on data center technologies like networking, virtualization, storage, and cloud computing. Here’s hoping you find something useful.

Networking

  • Jason Edelman recently took a look at Docker networking. While Docker is receiving a great deal of attention, I have to say that I feel Docker networking is a key area that hasn’t received the amount of attention that it probably needs. It would be great to see Docker get support for connecting containers directly to Open vSwitch (OVS), which is generally considered the de facto standard for networking on Linux hosts.
  • Ivan Pepelnjak asks the question, “Is OpenFlow the best tool for overlay virtual networks?” While so many folks see OpenFlow as the answer regardless of the question, Ivan takes a solid look at whether there are better ways of building overlay virtual networks. I especially liked one of the last statements in Ivan’s post: “Wouldn’t it be better to keep things simple instead of introducing yet-another less-than-perfect abstraction layer?”
  • Ed Henry tackles the idea of abstraction vs. automation in a fairly recent post. It’s funny—I think Ed’s post might actually be a response to a Twitter discussion that I started about the value of the abstractions that are being implemented in Group-based Policy (GBP) in OpenStack Neutron. Specifically, I was asking if there was value in creating an entirely new set of abstractions when it seemed like automation might be a better approach. Regardless, Ed’s post is a good one—the decision isn’t about one versus the other, but rather recognizing, in Ed’s words, “abstraction will ultimately lead to easier automation.” I’d agree with that, with one change: the right abstraction will lead to easier automation.
  • Jason Horn provides an example of how to script NSX security groups.
  • Interested in setting up overlays using Open vSwitch (OVS)? Then check out this article from the ever-helpful Brent Salisbury on setting up overlays on OVS.
  • Another series on VMware NSX has popped up, this time from Jon Langemak. Only two posts so far (but very thorough posts), one on setting up VMware NSX and another on logical networking with VMware NSX.

Servers/Hardware

Nothing this time around, but I’ll keep my eyes open for more content to include next time.

Security

  • Someone mentioned I should consider using pfctl and its ability to automatically block remote hosts exceeding certain connection rate limits. See here for details.
  • Bromium published some details on an Android security flaw that’s worth reviewing.

Cloud Computing/Cloud Management

  • Want to add some Docker to your vCAC environment? This post provides more details on how it is done. Kind of cool, if you ask me.
  • I am rapidly being pulled “higher” up the stack to look at tools and systems for working with distributed applications across clusters of servers. You can expect to see some content here soon on topics like fleet, Kubernetes, Mesos, and others. Hang on tight, this will be an interesting ride!

Operating Systems/Applications

  • A fact that I think is sometimes overlooked when discussing Docker is access to the Docker daemon (which, by default, is accessible only via UNIX socket—and therefore accessible locally only). This post by Adam Stankiewicz tackles configuring remote TLS access to Docker, which addresses that problem.
  • CoreOS is a pretty cool project that takes a new look at how Linux distributions should be constructed. I’m kind of bullish on CoreOS, though I haven’t had nearly the time I’d like to work with it. There’s a lot of potential, but also some gotchas (especially right now, before a stable product has been released). The fact that CoreOS takes a new approach to things means that you might need to look at things a bit differently than you had in the past; this post tackles one such item (pushing logs to a remote destination).
  • Speaking of CoreOS: here’s how to test drive CoreOS from your Mac.
  • I think I may have mentioned this before; if so, I apologize. It seems like a lot of folks are saying that Docker eliminates the need for configuration management tools like Puppet or Chef. Perhaps (or perhaps not), but in the event you need or want to combine Puppet with Docker, a good place to start is this article by James Turnbull (formerly of Puppet, now with Docker) on building Puppet-based applications inside Docker.
  • Here’s a tutorial for running Docker on CloudSigma.

Storage

  • It’s interesting to watch the storage industry go through the same sort of discussion around what “software-defined” means as the networking industry has gone through (or, depending on your perspective, is still going through). A few articles highlight this discussion: this one by John Griffith (Project Technical Lead [PTL] for OpenStack Cinder), this response by Chad Sakac, this response by the late Jim Ruddy, this reply by Kenneth Hui, and finally John’s response in part 2.

Virtualization

  • The ability to run nested hypervisors is the primary reason I still use VMware Fusion on my laptop instead of switching to VirtualBox. In this post Cody Bunch talks about how to use Vagrant to configure nested KVM on VMware Fusion for using things like DevStack.
  • A few different folks in the VMware space have pointed out the VMware OS Optimization Tool, a tool designed to help optimize Windows 7/8/2008/2012 systems for use with VMware Horizon View. Might be worth checking out.
  • The VMware PowerCLI blog has a nice three part series on working with Customization Specifications in PowerCLI (part 1, part 2, and part 3).
  • Jason Boche has a great collection of information regarding vSphere HA and PDL. Definitely be sure to give this a look.

That’s it for this time around. Feel free to speak up in the comments and share any thoughts, clarifications, corrections, or other ideas. Thanks for reading!

It’s that time again—time for community voting on sessions for the fall OpenStack Summit, being held in Paris this year in early November. I wanted to take a moment and share some of the sessions in which I’m involved and/or that I think might be useful. It would be great if you could take a moment to add your votes for the sessions.

My Sessions

I have a total of four session proposals submitted this year:

Congress Sessions

You may also be aware that I am involved with a project called Congress, which aims to bring an overarching policy service to OpenStack. Here are some sessions pertaining to Congress:

VMware Sessions

Arvind Soni, one of the product managers for OpenStack at VMware, kindly pulled together this list of VMware-related sessions, so feel free to have a look at any of these and vote on what sounds appealing to you.

Other Sessions

There are way too many sessions to list all the interesting ones, but here are a few that caught my eye:

There are a bunch more that looked interesting to me, but I’ll skip listing them all here—just hop over to the OpenStack site and vote for the sessions you want to see.

In this post, I’m going to provide a very quick introduction to CoreOS. CoreOS, in case you haven’t heard of it, is a highly streamlined Linux distribution designed with containers, massive server deployments, and distributed systems/applications in mind.

CoreOS is built around a number of key concepts/technologies:

  1. The OS is updated as a whole, not package-by-package. CoreOS uses the Omaha protocol—initially engineered by Google for updating things like the Chrome browser and Chrome OS—to stay up-to-date with new versions. CoreOS also employs an active/passive dual root partition scheme. This dual root partition scheme allows CoreOS to run off one root partition while updating the other; the system then reboots onto the updated partition once an update is complete. If the system fails to boot from the updated partition, then reboot it again and it will revert to the known-good installation on the first partition.
  2. All applications run in containers. CoreOS provides out-of-the-box support for Docker containers. In fact, all applications on CoreOS run in containers. This enables separation of applications from the underlying OS and further streamlines the CoreOS update process (because applications are essentially self-contained).
  3. CoreOS leverages systemd. systemd is not unique to CoreOS; it is the new standard system and service manager for Linux. (Debian has elected to use systemd; Ubuntu will adopt systemd with 14.10, if I understand correctly; and Red Hat and related distributions already use systemd.) In CoreOS, systemd unit files are used not only for system services, but also for running Docker containers (an example unit file follows this list).
  4. CoreOS has a distributed key-value data store called etcd. The etcd distributed key-value data store can be used for shared configuration and service discovery. etcd uses a simple REST API (HTTP+JSON) and leverages the Raft consensus protocol. Docker containers on CoreOS are able to access etcd via the loopback interface, and thus can use etcd to do dynamic service registration or discovery, for example. etcd is also configurable via cloud-init, which means it’s friendly to deployment on many cloud platforms including OpenStack. More information on etcd is available via the etcd GitHub site.
  5. CoreOS supports deploying containers across a cluster using fleet. Fleet is another open source project that leverages both etcd and systemd to deploy Docker containers (written as systemd unit files) across a cluster of CoreOS systems. See this page for more information on clustering with CoreOS and fleet.
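
To make items 3 and 5 a bit more concrete, here’s a rough sketch of a systemd unit that runs an Nginx container; the unit and container names are arbitrary, and a unit like this is also the sort of thing you’d hand to fleet to schedule across a cluster:

```ini
[Unit]
Description=Nginx web server running in a Docker container
After=docker.service
Requires=docker.service

[Service]
# Pull the image, clean up any stale container, then run in the foreground
ExecStartPre=/usr/bin/docker pull nginx
ExecStartPre=-/usr/bin/docker rm -f nginx1
ExecStart=/usr/bin/docker run --name nginx1 -p 80:80 nginx
ExecStop=/usr/bin/docker stop nginx1
```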

Taken individually—the use of a minimal Linux distribution, systemd support, the distributed key-value data store, Docker support, dual root partition w/ recoverable system updates, fleet—these technologies are interesting, but not all that revolutionary. Put them all together, however, and you have (in my opinion) a very interesting solution.

I’m quite intrigued with CoreOS and do plan on spending more time with it in the near future, so stay tuned for additional posts. In the meantime, if you’d like to see something specific about CoreOS or any related technologies, please speak up in the comments. I’ll do my best to satisfy your requests!

In this post, I’ll share with you how I installed the Docker plugin for OpenStack Heat, so that Heat is able to orchestrate the creation of Docker containers in an OpenStack environment. I’m publishing this because I found the default instructions to be a bit too vague to be helpful. By sharing my experience, I hope that others interested in using Docker in their OpenStack environment will benefit.

Here are the steps I used to make the Docker plugin work with Heat. These steps assume you are using Ubuntu and already have OpenStack Heat installed and working correctly (a consolidated command-line version of these steps follows the list):

  1. If you are using the packaged version of Heat (in other words, you are installing Heat via a method like apt-get install on Ubuntu), then you’ll want to use the version of the Docker plugin contained in the “stable/icehouse” branch. In this case, you don’t want to use master—it won’t work (either the plugin won’t load or the Heat engine service won’t start). Download a ZIP copy of the correct branch of Heat from GitHub (for “stable/icehouse”, see here).
  2. Extract the contrib/docker folder from the downloaded ZIP copy of Heat.
  3. Delete the contrib/docker/docker/tests directory; in my testing, the plugin failed to load if this directory was left in place.
  4. Copy the contrib/docker folder to your OpenStack controller somewhere. On my controller, I chose to put it into an existing /var/lib/heat directory. When you’re done, you should have a docker directory in your chosen destination, and that directory should contain another subdirectory named docker. For example, on my system, the full path to the plugin was /var/lib/heat/docker/docker. Make note of the full path.
  5. In the top-level docker folder, run pip install -r requirements.txt. Note that you might need to do an apt-get install python-pip first. This will install the docker-py Python module, which is required by the Docker plugin.
  6. Modify your Heat configuration file (typically found at /etc/heat/heat.conf) and add the full path of the Docker plugin to the plugin_dirs setting. If you used /var/lib/heat as the base directory for the plugin, then the full path should be /var/lib/heat/docker/docker.
  7. Restart the Heat engine (via something like sudo service heat-engine restart or similar).
  8. Run heat resource-type-list and verify that DockerInc::Docker::Container is listed in the results. If not, verify that you have the correct path to the plugin specified in the Heat configuration file, and verify that you used the correct branch of the Docker plugin (“stable/icehouse” if you are using packaged versions of OpenStack). Review the Heat log files for any errors if the resource type still isn’t listed.
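
Pulled together, the steps above look roughly like this on an Ubuntu controller; the download URL, the extracted directory name, and the /var/lib/heat destination are illustrative and may need adjusting for your environment:

```sh
# Steps 1-2: grab the stable/icehouse branch of Heat and extract the plugin
wget -O heat-icehouse.zip https://github.com/openstack/heat/archive/stable/icehouse.zip
unzip heat-icehouse.zip

# Steps 3-4: remove the tests directory and copy the plugin into place
rm -rf heat-stable-icehouse/contrib/docker/docker/tests
sudo cp -r heat-stable-icehouse/contrib/docker /var/lib/heat/docker

# Step 5: install the plugin's Python dependencies (docker-py)
sudo apt-get install -y python-pip
sudo pip install -r /var/lib/heat/docker/requirements.txt

# Step 6: add the plugin path to /etc/heat/heat.conf, for example:
#   plugin_dirs = /var/lib/heat/docker/docker

# Steps 7-8: restart the Heat engine and verify the resource type is present
sudo service heat-engine restart
heat resource-type-list | grep DockerInc::Docker::Container
```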

Assuming you were successful, then you are ready to start deploying Docker containers via Heat. Stay tuned for an example Heat template that shows how to deploy a Docker container. Until then, feel free to share any corrections, clarifications, or questions in the comments below.

This is part 15 of my Learning NSX blog series, in which I will spend some time diving a bit deeper into some of the components involved in the logical routing process I described in part 14. Specifically, I’ll be taking a deeper look at gateway appliances, gateway services, and logical routers, and the relationships among these various components.

If you haven’t read any of the prior posts in this series, it would be ideal to read all of them before continuing; you can find links on my Learning NVP/NSX page. In particular, I’d suggest reading part 6 (on adding a gateway appliance), part 9 (on adding a gateway service), and part 14 (on logical routing and logical routers).

Just for the sake of completeness and to reinforce what was introduced in those posts I referenced, let’s start with some terminology:

  • Gateway (or gateway appliance): When I use the terms gateway or gateway appliance, I’m referring to the NSX software gateway that acts as the “on-ramp/off-ramp” to and from logical networks. What makes this confusing is that we also use the term “gateway” (in particular, “IP gateway” or “default gateway”) to refer to a Layer 3 router that acts as the next hop for a system. I’ll do my best to make sure that I’m clearly distinguishing between these ambiguous uses.
  • Gateway service: A gateway service is a logical construct within NSX that allows you to group together multiple gateway appliances. For example, in an L2 gateway service, you can combine two gateway appliances so that you have redundancy in providing L2 bridging functionality between a logical network and a physical network. In an L3 gateway service, you can combine up to 10 gateway appliances together for redundancy and scale-out performance.
  • Logical router: As you might recall from part 14, a logical router is a logical construct within NSX that provides Layer 3 routing functionality, typically (but not always) on a per-tenant basis.

I have a few more terms I’ll introduce in this post, but that should be enough for now.

This diagram contains the bulk of what I’d like to discuss in this post—the relationship between gateway services, gateway appliances, and logical routers:

As I walk you through the details of this diagram, hopefully I’ll clarify the relationships between these components.

  • In this example, there are four gateway appliances combined into a single Layer 3 gateway service. As illustrated in the diagram, gateway services can contain more than one gateway appliance (the minimum recommended is two, for reasons to be explained shortly). Gateway services may be either Layer 2 (bridging/switching) or Layer 3 (routing), but not both.
  • A gateway appliance may be a member of only one gateway service at a time; therefore, a gateway appliance is either L2 or L3, but not both.
  • When adding a gateway appliance to a gateway service, the administrator or operator has the ability to specify a failure zone ID. The idea behind the failure zone ID is to help model fault domains within a single gateway service. For example, if GW Appliance 1 is in a different fault domain—say, a different rack—then the administrator or operator could assign a different failure zone ID to GW Appliance 1, indicating that GW Appliance 1 is in a different fault domain. The significance of this functionality will be made clear in a moment.
  • Note that gateway services, gateway appliances, and failure zone IDs are not visible to tenants. Further, the configuration or management of these entities is handled through NSX (via API or NSX Manager), and isn’t tenant-specific. The CMP—OpenStack, for example—doesn’t get involved here.
  • The example diagram shows four different logical routers spread across three tenants. Each of these logical routers acts as an IP gateway (default gateway/default route) for the associated (or connected) logical network(s). Thus, a logical router is visible to a tenant.
  • Creating, managing, and configuring logical routers is handled by the CMP. With OpenStack, for example, you’d use the OpenStack Dashboard or the Neutron command-line client.
  • For redundancy, you’ll note that each logical router is instantiated on two different gateway appliances within the gateway service (hence the recommendation for a minimum of two gateway appliances per gateway service). This is completely invisible to the tenant and is handled automatically by NSX. If failure zone IDs—indicating different fault domains—are configured on the gateway appliances, then NSX will instantiate the logical router on gateway appliances in different failure zones. This is an attempt to minimize downtime by spreading the logical router across fault domains.

So far, everything I’ve shared with you has been true for centralized logical routers. For distributed logical routers, things are only slightly different. Distributed logical routers are normally instantiated on the hypervisors; a gateway service and its associated gateway appliances only get involved when you set the uplink for the distributed logical router (using the “Set Gateway” button in OpenStack Dashboard, for example). If you never set an uplink for the logical router, it will remain instantiated only on the hypervisors, and not on the gateway service/gateway appliances.
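
For reference, creating a logical router and setting its uplink with the Neutron command-line client looks roughly like this (the router, subnet, and external network names are placeholders):

```sh
# Create a tenant logical router and attach a tenant subnet to it
neutron router-create tenant1-router
neutron router-interface-add tenant1-router tenant1-subnet

# Set the uplink (external gateway); for a distributed logical router,
# this is the step that pulls the gateway service/appliances into play
neutron router-gateway-set tenant1-router ext-net
```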

I hope this information helps in understanding the routing aspects of VMware NSX. Feel free to post any questions, clarifications, or thoughts in the comments below. Any input on other topics you’d like to see in the Learning NSX blog series is welcome as well!

In April of this year, we started a series of articles at Network Heresy on the topic of policy in the data center. The first of these articles, which I mentioned in this post, focused on the problem of policy in the data center. This was a great introduction to the need for policy and the challenges with the current ways of addressing policy in the data center.

A short while ago, we published the second of our series on policy, titled “On Policy in the Data Center: The solution space”. This post describes the key features/functionality that a policy system must have to address the challenges identified in part 1 of the series. In a nutshell (I highly recommend you go read the full article), these key areas include:

  • The sources from which policy is derived
  • The language(s) used to express policy
  • The way policy systems interact with data center services
  • The actions a policy system can take

I really liked this statement from the article (this is in reference to how a policy system interacts with other services in the data center):

A policy system by itself is useless; to have value, the policy system must interact and integrate with other data center or cloud services.

The relationship between a policy system and the ecosystem of data center services with which it interacts is so critical. Having a policy system is great, but if the policy system can’t be integrated with other data center or cloud services, then it’s not very useful, is it?

Go have a look at the second post in the series on policy in the data center and feel free to join in the conversation. You can leave comments here or at the Network Heresy site.
