Welcome to Technology Short Take #41, the latest in my series of random thoughts, articles, and links from around the Internet. Here’s hoping you find something useful!

Networking

  • Network Functions Virtualization (NFV) is a networking topic that is starting to get more and more attention (some may equate “attention” with “hype”; I’ll allow you to draw your own conclusion there). In any case, I liked how this article really hit upon what I personally feel is something many people are overlooking in NFV. Many vendors are simply rushing to provide virtualized versions of their solution without addressing the orchestration and automation side of the house. I’m looking forward to part 2 on this topic, in which the author plans to share more technical details.
  • Rob Sherwood, CTO of Big Switch, recently published a reasonably in-depth look at “modern OpenFlow” implementations and how they can leverage multiple tables in hardware. Some good information in here, especially on OpenFlow basics (good for those of you who aren’t familiar with OpenFlow).
  • Connecting Docker containers to Open vSwitch is one thing, but what about using Docker containers to run Open vSwitch in userspace? Read this.
  • Ivan knocks centralized SDN control planes in this post. It sounds like Ivan favors scale-out architectures over scale-up architectures (which are typical of centralized control plane deployments).
  • Looking for more VMware NSX content? Anthony Burke has started a new series focusing on VMware NSX in pure vSphere environments. As far as I can tell, Anthony is up to 4 posts in the series so far. Check them out here: part 1, part 2, part 3, and part 4. Enjoy!

Servers/Hardware

  • Good friend Simon Seagrave is back to the online world again with this heads-up on a potential NIC issue with an HP ProLiant firmware update. The post also contains a link to a fix for the issue. Glad to see you back again, Simon!
  • Tom Howarth asks, “Is the x86 blade server dead?” (OK, so he didn’t use those words specifically. I’m paraphrasing for dramatic effect.) The basic premise of Tom’s position is that new technologies like server-side caching and VSAN/Ceph/Sanbolic (turning direct-attached storage into shared storage) will dramatically change the landscape of the data center. I would generally agree, although I’m not sure that I agree with Tom’s statement that “complexity is reduced” with these technologies. I think we’re just shifting the complexity to a different place, although it’s a place where I think we can better manage the complexity (and perhaps mask it). What do you think?

Security

Cloud Computing/Cloud Management

  • Juan Manuel Rey has launched a series of blog posts on deploying OpenStack with KVM and VMware NSX. He has three parts published so far; all good stuff. See part 1, part 2, and part 3.
  • Kyle Mestery brought to my attention (via Twitter) this list of the “best newly-available OpenStack guides and how-to’s”. It was good to see a couple of Cody Bunch’s articles on the list; Cody’s been producing some really useful OpenStack content recently.
  • I haven’t had the opportunity to use SaltStack yet, but I’m hearing good things about it. It’s always helpful (to me, at least) to be able to look at products in the context of solving a real-world problem, which is why seeing this post with details on using SaltStack to automate OpenStack deployment was helpful.
  • Here’s a heads-up on a potential issue with the vCAC 6.0.1.1 upgrade—the upgrade apparently changes some configuration files. The linked blog post provides more details on which files get changed. If you’re looking at doing this upgrade, read this to make sure you aren’t adversely affected.
  • Here’s a post with some additional information on OpenStack live migration that you might find useful.

Operating Systems/Applications

  • RHEL7, Docker, and Puppet together? Here’s a post on just such a use case (oh, I forgot to mention OpenStack’s involved, too).
  • Have you ever walked through a spider web because you didn’t see it ahead of time? (Not very fun.) Sometimes I feel that way with certain technologies or projects—like there are connections there with other technologies, projects, trends, etc., that aren’t quite “visible” just yet. That’s where I am right now with the recent hype around containers and how they are going to replace VMs. I’m not so sure I agree with that just yet…but I have more noodling to do on the topic.

Storage

  • “Server SAN” seems to be the name that is emerging to describe various technologies and architectures that create pools of storage from direct-attached storage (DAS). This would include products like VMware VSAN as well as projects like Ceph and others. Stu Miniman has a nice write-up on Server SAN over at Wikibon; if you’re not familiar with some of the architectures involved, that might be a good place to start. Also at Wikibon, David Floyer has a write-up on the rise of Server SAN that goes into a bit more detail on business and technology drivers, friction to adoption, and some recommendations.
  • Red Hat recently announced they were acquiring Inktank, the company behind the open source scale-out Ceph project. Jon Benedict, aka “Captain KVM,” weighs in with his thoughts on the matter. Of course, there’s no shortage of thoughts on the acquisition—a quick web search will prove that—but I find it interesting that none of the “big names” in storage social media had anything to say (not that I could find, anyway). Howard? Stephen? Chris? Martin? Bueller?

Virtualization

  • Doug Youd pulled together a nice summary of some of the issues and facts around routed vMotion (vMotion across layer 3 boundaries, such as across a Clos fabric/leaf-spine topology). It’s definitely worth a read (and not just because I get mentioned in the article, either—although that doesn’t hurt).
  • I’ve talked before—although it’s been a while—about Hyper-V’s choice to rely on host-level NIC teaming in order to provide network link redundancy to virtual machines. Ben Armstrong talks about another option, guest-level NIC teaming, in this post. I’m not so sure that using guest-level teaming is any better than relying on host-level NIC teaming; what’s really needed is a more full-featured virtual networking layer.
  • Want to run nested ESXi on vCHS? Well, it’s not supported…but William Lam shows you how anyway. Gotta love it!
  • Brian Graf shows you how to remove IP pools using PowerCLI.

Well, that’s it for this time around. As always, I welcome all courteous comments, so feel free to share your thoughts, ideas, rants, links, or feedback in the comments below.


In an earlier post, I provided an introduction to OpenStack Heat, and provided an example Heat template that launched two instances with a logical network and a logical router. Here I am going to provide another view of a Heat template that does the same thing, but uses YAML and the HOT format instead of JSON and the CFN format.

Here’s the full template (click here if the code box below isn’t showing up):
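If neither the code box nor the click-through works for you, here’s a minimal sketch of what the YAML/HOT version looks like; the UUIDs, image ID, and resource names below are placeholders rather than the actual values from my environment:

    heat_template_version: 2013-05-23

    description: Two instances on a logical network behind a logical router

    resources:
      logical_net:
        type: OS::Neutron::Net

      logical_subnet:
        type: OS::Neutron::Subnet
        properties:
          network_id: { get_resource: logical_net }
          cidr: 10.10.10.0/24
          dns_nameservers: [ 8.8.8.8 ]
          enable_dhcp: true
          gateway_ip: 10.10.10.1

      logical_router:
        type: OS::Neutron::Router

      router_gateway:
        type: OS::Neutron::RouterGateway
        properties:
          router_id: { get_resource: logical_router }
          network_id: "<UUID-of-provider-network>"

      router_interface:
        type: OS::Neutron::RouterInterface
        properties:
          router_id: { get_resource: logical_router }
          subnet_id: { get_resource: logical_subnet }

      port_1:
        type: OS::Neutron::Port
        properties:
          network_id: { get_resource: logical_net }
          security_groups: [ default ]

      port_2:
        type: OS::Neutron::Port
        properties:
          network_id: { get_resource: logical_net }
          security_groups: [ default ]

      instance_1:
        type: OS::Nova::Server
        properties:
          image: "<UUID-of-Glance-image>"
          flavor: m1.xsmall
          networks:
            - port: { get_resource: port_1 }

      instance_2:
        type: OS::Nova::Server
        properties:
          image: "<UUID-of-Glance-image>"
          flavor: m1.xsmall
          networks:
            - port: { get_resource: port_2 }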

I won’t walk through the whole template again, but rather just talk briefly about a couple of the differences between this YAML-encoded template and the earlier JSON-encoded template:

  • You’ll note the syntax is much simpler. JSON can trip you up on commas and such if you’re not careful; YAML is simpler and cleaner.
  • You’ll note the built-in functions are different, as I pointed out in my first Heat post. Instead of using Ref to refer to an object defined elsewhere in the template, HOT uses get_resource.

Aside from these differences, you’ll note that the resource types and properties match between the two; this is because resource types are separate and independent from the template format.

Feel free to post any questions, corrections, or clarifications in the comments below. Thanks for reading!


In this post, I’m going to provide a quick introduction to OpenStack Heat, the orchestration service that allows you to spin up multiple instances, logical networks, and other cloud services in an automated fashion. Note that this is only an introductory post—I’m not an expert on Heat, but I did want to share at least some basic information to help others get started as well.

Let’s start with some terminology, so that there is no confusion about the terms later when we start using them in specific examples:

  • Stack: In Heat parlance, a stack is the collection of objects—or resources—that will be created by Heat. This might include instances (VMs), networks, subnets, routers, ports, router interfaces, security groups, security group rules, auto-scaling rules, etc.
  • Template: Heat uses the idea of a template to define a stack. If you wanted to have a stack that created two instances connected by a private network, then your template would contain the definitions for two instances, a network, a subnet, and two network ports. Since templates are central to how Heat operates, I’ll show you examples of templates in this post.
  • Parameters: A Heat template has three major sections, and one of those sections defines the template’s parameters. These are tidbits of information—like a specific image ID, or a particular network ID—that are passed to the Heat template by the user. This allows users to create more generic templates that could potentially use different resources.
  • Resources: Resources are the specific objects that Heat will create and/or modify as part of its operation, and the second of the three major sections in a Heat template.
  • Output: The third and last major section of a Heat template is the output, which is information that is passed to the user, either via OpenStack Dashboard or via the heat stack-list and heat stack-show commands.
  • HOT: Short for Heat Orchestration Template, HOT is one of two template formats used by Heat. HOT is not backwards-compatible with AWS CloudFormation templates and can only be used with OpenStack. Templates in HOT format are typically—but not necessarily required to be—expressed as YAML (more information on YAML here). (I’ll do my best to avoid saying “HOT template,” as that would be redundant, wouldn’t it?)
  • CFN: Short for AWS CloudFormation, this is the second template format that is supported by Heat. CFN-formatted templates are typically expressed in JSON (see here and see my non-programmer’s introduction to JSON for more information on JSON specifically).
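
To make those three major sections—parameters, resources, and output—concrete before we dig in, here’s a minimal skeleton of a HOT-format template; everything in it is purely illustrative:

    heat_template_version: 2013-05-23

    parameters:
      image_id:
        type: string
        description: ID of the Glance image to boot

    resources:
      my_instance:
        type: OS::Nova::Server
        properties:
          image: { get_param: image_id }
          flavor: m1.small

    outputs:
      instance_ip:
        description: The first IP address assigned to the instance
        value: { get_attr: [ my_instance, first_address ] }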

OK, that should be enough to get us going. (BTW, the OpenStack Heat documentation actually has a really good glossary. Please note that this link might break as OpenStack development continues.)

Architecturally, Heat has a few major components:

  • The heat-api component implements an OpenStack-native RESTful API. This component processes API requests by sending them to the Heat engine via AMQP.
  • The heat-api-cfn component provides an API compatible with AWS CloudFormation, and also forwards API requests to the Heat engine over AMQP.
  • The heat-engine component provides the main orchestration functionality.

All of these components would typically be installed on an OpenStack “controller” node that also housed the API servers for Nova, Glance, Neutron, etc. As far as I know, though, there is nothing that requires them to be installed on the same system. Like most of the rest of the OpenStack services, Heat uses a back-end database for maintaining state information.

Now that you have an idea about Heat’s architecture, I’ll walk you through an example template that I created and tested on my own OpenStack implementation (running OpenStack Havana on Ubuntu 12.04 with KVM and VMware NSX). Here’s the full template:

(Can’t see the code above? Click here.)
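
If the click-through isn’t an option either, here’s a minimal sketch of the template’s shape; the UUIDs, image ID, and resource names below are placeholders, not the actual values from my environment:

    {
      "AWSTemplateFormatVersion": "2010-09-09",
      "Resources": {
        "logical_net": {
          "Type": "OS::Neutron::Net"
        },
        "logical_subnet": {
          "Type": "OS::Neutron::Subnet",
          "Properties": {
            "network_id": { "Ref": "logical_net" },
            "cidr": "10.10.10.0/24",
            "dns_nameservers": ["8.8.8.8"],
            "enable_dhcp": true,
            "gateway_ip": "10.10.10.1"
          }
        },
        "logical_router": {
          "Type": "OS::Neutron::Router"
        },
        "router_gateway": {
          "Type": "OS::Neutron::RouterGateway",
          "Properties": {
            "router_id": { "Ref": "logical_router" },
            "network_id": "<UUID-of-provider-network>"
          }
        },
        "router_interface": {
          "Type": "OS::Neutron::RouterInterface",
          "Properties": {
            "router_id": { "Ref": "logical_router" },
            "subnet_id": { "Ref": "logical_subnet" }
          }
        },
        "port_1": {
          "Type": "OS::Neutron::Port",
          "Properties": {
            "network_id": { "Ref": "logical_net" },
            "security_groups": ["default"]
          }
        },
        "port_2": {
          "Type": "OS::Neutron::Port",
          "Properties": {
            "network_id": { "Ref": "logical_net" },
            "security_groups": ["default"]
          }
        },
        "instance_1": {
          "Type": "OS::Nova::Server",
          "Properties": {
            "image": "<UUID-of-Glance-image>",
            "flavor": "m1.xsmall",
            "networks": [ { "port": { "Ref": "port_1" } } ]
          }
        },
        "instance_2": {
          "Type": "OS::Nova::Server",
          "Properties": {
            "image": "<UUID-of-Glance-image>",
            "flavor": "m1.xsmall",
            "networks": [ { "port": { "Ref": "port_2" } } ]
          }
        }
      }
    }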

Let’s walk through this template real quick:

  • First, note that I’ve specified the template version as “AWSTemplateFormatVersion”. One thing that confused me at first was the relationship between the template format (CFN vs. HOT) and resource types. It turns out these are independent of one another; you can—as I have done here—use HOT resource types (like OS::Neutron::Net) in a CFN template. Obviously, if you use HOT resources you’re not fully compatible with AWS. Also, as I stated earlier, CFN templates are typically expressed in JSON (as mine is). Heat does support YAML for CFN templates, although again you’d be sacrificing AWS compatibility.
  • You’ll note that my template skips any use of parameters and goes straight to resources. This is perfectly acceptable, although it means that some values (like the shared public provider network to which the logical router uplinks and the security group) have to be hard-coded in the template.
  • One thing that the template format does control is some of the syntax. So, for example, you’ll note the template uses “Resources”, “Type”, and “Properties.” In some of the other template formats, these could be specified lowercase.
  • The first resource defined is a logical network, defined as type OS::Neutron::Net.
  • The next resource is a subnet (of type OS::Neutron::Subnet), which is associated with the previously-defined logical network through the use of the Ref built-in function. Built-in functions are another thing controlled by the template format, so when you want to refer to another object in a CFN template, you’ll use the Ref function as I did here. This associates the “network_id” property of the subnet with the logical network defined just prior. You’ll also note that the subnet resource has a number of properties associated with it—CIDR, DNS name servers, DHCP, and gateway IP address.
  • The third resource defined is a logical router.
  • After the logical router is defined, the template links the logical router to a pre-existing provider network via the OS::Neutron::RouterGateway type. (This was deprecated in Icehouse in favor of an external_gateway_info property on the logical router.) The UUID listed there is the UUID of a pre-existing provider network. Note the use of the Ref function again to link this resource back to the logical router.
  • Next up the template creates an interface on the logical router, using two Ref instances to link this router interface back to the logical router and the subnet created earlier. This means we are adding an interface to the referenced logical router on the specified subnet (and that interface will assume the IP address specified by the “gateway_ip” property on the subnet).
  • Next the template creates two Neutron ports, and links them to the default security group. Note that if you don’t specify a security group when creating the Neutron port, it will have none—and no traffic will pass.
  • Finally, the Heat template creates two instances (type OS::Nova::Server), using the “m1.xsmall” flavor and a hard-coded Glance image ID. These instances are connected to the Neutron ports created earlier using the Ref function once more.

(In case it wasn’t obvious already, you can’t just copy-and-paste this Heat template and use it in your own environment, as it references UUIDs for objects in my environment that won’t be the same.)

If you are going to use JSON (as I have here), then I’d recommend bookmarking a JSON validation site, such as jsonlint.com.
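
You can also do a quick local syntax check with Python’s built-in json.tool module, which pretty-prints the document on success and reports a parse error otherwise (the filename here is illustrative):

    python -m json.tool heat-template.json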

Once you have your Heat template defined, you can then use this template to create a stack, either via the heat CLI client or via the OpenStack Dashboard.
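
For the CLI route, a command along these lines creates a stack from a template file (the filename and stack name here are illustrative):

    heat stack-create -f heat-template.json demo-stack

I’ll attach a screenshot from a stack that I deployed via the Dashboard so that you can see what it looks like (click the image for a larger version):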

A deployed Heat stack in OpenStack Dashboard

Kinda nifty, don’t you think? Anyway, I hope this brief introduction to OpenStack Heat has proven useful. I do plan on covering some additional topics with OpenStack Heat in the near future, so stay tuned. In the meantime, if you have any questions, corrections, or clarifications, I invite you to add them to the comments below.


Reader Brian Markussen—with whom I had the pleasure to speak at the Danish VMUG in Copenhagen earlier this month—brought to my attention an issue between VMware vSphere’s health check feature and Cisco UCS when using Cisco’s VIC cards. His findings, confirmed by VMware support and documented in this KB article, show that the health check feature doesn’t work properly with Cisco UCS and the VIC cards.

Here’s a quote from the KB article:

The distributed switch network health check, including the VLAN, MTU, and teaming policy check can not function properly when there are hardware virtual NICs on the server platform. Examples of this include but are not limited to Broadcom Flex10 systems and Cisco UCS systems.

(Ignore the fact that “UCS systems” is redundant.)

According to Brian, a fix for this issue will be available in a future update to vSphere. In the meantime, there doesn’t appear to be any workaround, so plan accordingly.


Welcome to part 13 of the Learning NSX blog series, in which I revisit the idea of logical networking with VMware NSX. This is a topic I first discussed in part 5 of this series, but I want to go back and look at it again, this time from a more practical perspective of what it looks like to use VMware NSX for logical networking in an OpenStack environment.

If you haven’t been keeping up with the Learning NVP/NSX series, you’ll probably want to go back and catch up. Links to all the articles are found on my Learning NVP/NSX page. You’ll particularly want to be sure that you’ve read part 11 and part 12, which cover the OpenStack integration I’ll be leveraging in this post.

To start things off, let’s first do a quick recap of what it looks like to manually create a logical network in VMware NSX (all of this is described in part 5 of the series):

  1. Create a logical switch.
  2. Add logical switch ports to the newly-created logical switch.
  3. Edit the attachment of the logical switch ports to connect a VM’s virtual network interface card (NIC).

These three steps will establish a simple logical network within VMware NSX. Of course, this logical network won’t have any Dynamic Host Configuration Protocol (DHCP) services, but it will still work (you could manually assign IP addresses to VMs attached to this logical network).

Now that we have VMware NSX integrated with OpenStack, let’s revisit this process to see what it looks like. (I’ll assume that you’re logged into the OpenStack dashboard and have the necessary permissions to create networks, launch instances, etc.)

First, you’d need to create a network in OpenStack. To do this, it’s as simple as selecting Networks > Create Network, then providing a name for the new network (you could also use the neutron net-create command):

Creating a logical network in OpenStack

To exactly mirror the process I showed you in part 5—which did not include DHCP services—you’d need to also go to the Subnet tab and uncheck “Create Subnet” as well as go to the Subnet Detail tab and uncheck “Enable DHCP.” Once you unselect those options and click Create, then OpenStack will (through the Neutron plugin for NSX) create a logical switch in NSX. You can pop into NSX Manager to see this:

New logical switch in NSX Manager

As I pointed out in part 12, the UUID and os_tid tag on this object in NSX will provide the necessary ties back to the corresponding object in OpenStack.

Now go spin up a new instance and attach that instance to the logical network you just created. What you’ll find is that OpenStack will automatically handle the creation of the logical switch ports as well as the attachment of the VM’s virtual NIC to the logical switch. This helps underscore how VMware NSX was designed to be used in conjunction with a cloud management/orchestration system like OpenStack. (You can verify that the logical switch port is automatically created using NSX Manager and comparing the number of logical switch ports both before and after launching the new instance.)

Now that we have OpenStack up and running, though, we can create a logical network that does have DHCP services:

  1. Use the neutron net-create command to create a new logical network:

     neutron net-create logical-net-02

  2. Use neutron subnet-create to create a subnet for the new network:

     neutron subnet-create --name logical-subnet-02 logical-net-02 10.1.1.0/24

If you log into NSX Manager, you’ll see that a new logical switch (whose name matches the name you gave the logical network above) has been created, and you’ll also note that 1 logical switch port is already in use—even though you haven’t launched any instances yet! The easiest way to find out what is attached to that port is via the OpenStack dashboard. Once logged into the dashboard, select Networks, then click on the network you just created, and scroll down to the list of Ports. You’ll see there that OpenStack has automatically created a logical switch port for the DHCP services associated with the subnet you created above:

Ports on a logical network

If you’re a command-line freak, you could also get this information from the CLI:

  1. Find the subnet associated with the logical network you just created:

     SUBNET_ID=$(neutron subnet-list | awk '/\ logical-net-02\ / {print $2}')

  2. List all the ports on that subnet:

     neutron port-list | grep $SUBNET_ID

  3. In this case, there is only one port on that subnet, so you can capture the ID of that port in order to get more information about it:

     PORT_ID=$(neutron port-list | grep $SUBNET_ID | awk '{print $2}')

  4. List the information associated with that specific port, paying particular attention to the device_owner attribute (which should show “network:dhcp”):

     neutron port-show $PORT_ID

If you have been reading along diligently, you’ll probably be able to put 2 and 2 together here to realize that the “network:dhcp” port is actually a port on OVS on the network node (which, if you’ll recall, is registered as a hypervisor in VMware NSX). If you’ve really been following my stuff closely, you’ll probably also know that the OVS port is connected to a veth pair, which in turn connects to a network namespace where an instance of dnsmasq is running. (Want to learn more about network namespaces? See here.)
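
If you want to poke at this yourself on the network node, the namespace is easy to find; Neutron names DHCP namespaces with a qdhcp- prefix followed by the UUID of the logical network (the UUID below is a placeholder):

    # List the network namespaces on the network node
    ip netns list
    # Look inside the namespace for our logical network; you should see the
    # interface dnsmasq listens on, with an address from 10.1.1.0/24
    ip netns exec qdhcp-<network-uuid> ip addr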

At this point, you should have a fairly clear understanding of how logical networking functions within an OpenStack environment with VMware NSX. I wanted to take the time to revisit this topic because future posts are going to assume that you understand these basic concepts and interactions as we explore more advanced functionality and more complex networking topologies.

Thanks for reading, and feel free to post any corrections, clarifications, or questions in the comments below.


Welcome to part 12 of the Learning NSX blog series, in which I continue the discussion around integration between OpenStack and VMware NSX, and in which I’ll provide more details about how exactly to integrate them.

If you are just now joining the series, I encourage you to visit the Learning NVP/NSX page, where you can find links to all the posts in the series. While you’ll want to be caught up on all the posts (they do build on one another in various ways), in particular you’ll want to make sure you’ve read part 11. Part 11 covers the basics of VMware NSX-OpenStack integration, and explains how the various components of OpenStack Neutron and VMware NSX will interact.

Once you understand how the different components of Neutron and NSX will interact, getting NSX integrated into OpenStack Neutron isn’t too terribly difficult. The basic steps look like this:

  1. Install the VMware NSX plugin for Neutron.
  2. Configure VMware NSX for Neutron.
  3. Configure Neutron for VMware NSX.

Let’s take a more in-depth look at each of these steps.

Installing the VMware NSX plugin for Neutron

VMware distributes a set of compiled binary packages for OpenStack Neutron plus the VMware NSX plugin from the VMware NSX support portal (available to VMware NSX customers). Source code is also available, if you’d prefer that. These builds provided by VMware represent the latest fixes to both Neutron and NSX based off the “official” OpenStack Neutron releases. A single download contains all the different components of Neutron that you need (it’s a tarred and gzipped file that you just unpack).

Once you have the packages (I’ll assume you’re using Ubuntu and therefore have downloaded and will use the Debian packages), then you can just use dpkg -i to install the appropriate package(s) on the appropriate node(s). Recall from part 11 that when implementing Neutron with VMware NSX, you’ll need both a Neutron server as well as a network node running the DHCP and metadata agents. Here’s a breakdown of which packages need to be installed on which nodes:

  • On the Neutron server, you’d install neutron-common, neutron-server, python-neutron, and neutron-plugin-nicira.
  • On the Neutron network node, you’d install (at a minimum) neutron-common, neutron-dhcp-agent, and neutron-metadata-agent. If you wanted LBaaS support, you’d also install neutron-lbaas-agent. You could optionally install the Python client with python-neutron as well.
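
Concretely, the installs boil down to something like this sketch (the exact filenames will vary with the build you downloaded):

    # On the Neutron server
    dpkg -i neutron-common_*.deb neutron-server_*.deb \
      python-neutron_*.deb neutron-plugin-nicira_*.deb

    # On the network node
    dpkg -i neutron-common_*.deb neutron-dhcp-agent_*.deb \
      neutron-metadata-agent_*.deb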

From here, you would proceed with setting up OpenStack Neutron as outlined in a variety of places, including the official OpenStack docs. If you do choose to use the official docs to get Neutron configured, here’s how the sections of those instructions map to the setup you’d need to build for use with VMware NSX:

  • The “Installing networking support on a dedicated controller node” section contains information for setting up Neutron on an OpenStack controller that does not run any of the underlying agents. Typically, this system would also run the API servers for some of the other OpenStack services (like Nova, Cinder, or Glance).
  • The “Installing networking support on a dedicated network node” section contains the information for setting up a network node that would run the DHCP and metadata agents. Recall that you don’t need the L3 agent, since that is handled by NSX. It might include the LBaaS agent, if you need that functionality.
  • The “Installing networking support on a dedicated compute node” section has the information for setting up your OpenStack compute nodes to interact with Neutron appropriately. Note that you don’t need to install an agent on the compute nodes; adding the compute nodes to NSX (as described in part 4) establishes the necessary communication between the NSX controllers and the compute nodes.

This helps you get Neutron up and running; in the “Configuring Neutron for VMware NSX” section below, I’ll provide additional specifics around how to configure Neutron to communicate with VMware NSX. For now, let’s make sure that VMware NSX is ready for Neutron.

Configuring VMware NSX for Neutron

VMware NSX was designed to be cloud platform-agnostic, so there isn’t a whole lot that needs to be done here. There are, however, a few tasks you’ll want to make sure you’ve done inside VMware NSX:

  1. You’ll want to ensure that you’ve added at least one NSX gateway appliance to your installation. (Part 6 describes how to add an NVP/NSX gateway appliance.)
  2. You’ll want to ensure that you’ve added an L3 gateway service, as described in part 9 of the series. The L3 gateway service replaces the L3 agent in OpenStack Neutron and is therefore necessary to provide routed/NAT’d connectivity into or out of logical (tenant) networks. Use NSX Manager to get the UUID of the L3 gateway service you’ve added; we’ll need that when we configure Neutron.
  3. You’ll need to make sure you’ve already created a transport zone, as described in part 4 and explained in greater detail in part 5. Use NSX Manager to get the UUID of the transport zone that you want Neutron to use when creating overlay networks; you’ll need that when configuring the NSX plugin for Neutron.

Configuring Neutron for VMware NSX

Now, let’s get into some nitty gritty specifics on how we configure OpenStack Neutron to interact with VMware NSX.

Most of the configuration is done within the NSX-specific configuration files, but there are two settings in neutron.conf on the controller node (where the Neutron API server is running) that you’ll want to set:

  • You’ll absolutely want to set the core_plugin value to neutron.plugins.nicira.NeutronPlugin.NvpPluginV2. (Note that in future releases of the NSX plugin, the name may change from “nicira” and “Nvp” to “vmware” and “Nsx”.)
  • You’ll probably also want to set allow_overlapping_ips to True so that Nova metadata works as you would expect. (I’ll have more on that in a moment.)
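
In neutron.conf form, those two settings are just a couple of lines (a sketch showing only the relevant entries):

    [DEFAULT]
    core_plugin = neutron.plugins.nicira.NeutronPlugin.NvpPluginV2
    allow_overlapping_ips = True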

The bulk of the rest of the configuration is found in nvp.ini, which is typically found in the /etc/neutron/plugins/nicira directory. Here are the relevant settings that you’ll want to configure:

  • You’ll want to set nvp_user and nvp_password appropriately for your VMware NSX installation.
  • Populate the nvp_controllers line with the addresses of the NSX controllers in your environment, in the form “W.X.Y.Z:443”. Separate the controllers’ IP addresses with commas.
  • Place the UUID of the transport zone that you want Neutron to use when creating overlay networks as the value for default_tz_uuid.
  • Place the UUID of the L3 gateway service that you want Neutron to use when creating logical routers with external gateways as the value for the default_l3_gw_service_uuid entry.
  • In the [database] section, make sure there is an appropriate MySQL connection entry for the Neutron database (assuming you are using MySQL). An example connection entry might look like “mysql://neutron:<password>@<controller-ip>/neutron” (or similar).
  • There are a couple different ways to provide Nova metadata to the instances; I prefer using a special metadata access network (I’ll likely talk more about that in a future post). To use this configuration, set metadata_mode to “access_network” and set enable_metadata_access_network to True. (You may also need to set metadata_dhcp_host_route to False.)
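
Pulling all of that together, a minimal nvp.ini might look something like this sketch. Every value below is a placeholder, and the exact section placement of some options has shifted between plugin releases, so cross-check against the sample nvp.ini shipped with the plugin:

    [DEFAULT]
    nvp_user = admin
    nvp_password = <password>
    nvp_controllers = 192.168.1.21:443,192.168.1.22:443,192.168.1.23:443
    default_tz_uuid = <UUID-of-transport-zone>
    default_l3_gw_service_uuid = <UUID-of-L3-gateway-service>

    [database]
    # older releases may call this sql_connection
    connection = mysql://neutron:<password>@<controller-ip>/neutron

    [nvp]
    metadata_mode = access_network
    enable_metadata_access_network = True
    # optional, per the note above
    metadata_dhcp_host_route = False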

That should be all the settings you need on the controller node. However, you’ll also need to slightly configure the DHCP and metadata agents on the network node:

  • In the dhcp_agent.ini file, set enable_isolated_metadata and enable_metadata_network to True. If your Linux distribution supports network namespaces (Ubuntu does), then also set use_namespaces to True.
  • The metadata agent does not require any special configuration above and beyond what is needed to get Neutron running.
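
For reference, the relevant portion of dhcp_agent.ini would look something like this sketch:

    [DEFAULT]
    enable_isolated_metadata = True
    enable_metadata_network = True
    use_namespaces = True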

Once you restart all relevant services so that they pick up the new settings, you should have Neutron talking to VMware NSX correctly. To test if everything is working correctly, use the Neutron CLI to create a logical network:

neutron net-create test-network

In my environment (running OpenStack Havana and NSX 4.0.0), that produced output that looked like this:

Neutron CLI output

If all is working as expected, then you should see a matching logical switch listed in NSX Manager:

Logical switches in NSX Manager

(Click the image above for a larger version.)

It may be obvious, but you’ll note that the ID returned by the neutron net-create command matches the UUID listed in NSX Manager. You’ll also note that the os_tid tag assigned to the logical switch in NSX Manager matches the tenant ID of the tenant who owns the logical switch in OpenStack. Finally, you’ll note that the bound transport zone’s UUID will match the UUID you specified in nvp.ini as I outlined earlier.

That’s it—you now have VMware NSX integrated with OpenStack Neutron!

In the next post, I’ll revisit the topic of logical networking and logical switches within VMware NSX, something I first discussed fairly early in the series. Once I’ve reviewed some concepts and established a firm foundation, future posts will take a look at how to take advantage of a very cool feature within VMware NSX: the distributed logical router.

In the meantime, feel free to post any questions, clarifications, or thoughts in the comments below. Please include any vendor affiliations, where applicable; otherwise, all courteous comments are welcome!


Technology and Travel

Cody Bunch recently posted a quick round-up of what he carries when traveling, and just for fun I thought I’d do the same. Like Cody, I don’t know that I would consider myself a road warrior, but I have traveled a pretty fair amount. Here’s what I’m currently carrying when I hit the road:

  • Light laptop and tablet: After years of carrying around a 15″ MacBook Pro, then going down to a 13″ MacBook Pro, I have to say I’m pretty happy with the 13″ MacBook Air that I’m carrying now. Weight really does make a difference. I’m still toting the full-size iPad, but will probably switch to an iPad mini later in the year to save a bit more weight.
  • Bag: I settled on the Timbuk2 Commute messenger bag (see my write-up) and I’m quite pleased with it. A good bag makes a big difference when you’re mobile.
  • Backup battery: I’m carrying the NewTrent PowerPak 10.0 (NT100H). It may not be the best product out there, but it’s worked pretty well for me. It’s not too heavy and not too big, and will charge both phones and tablets.
  • Noise-canceling earphones: The Bose QC20 earphones (in-ear) are awesome. Naturally they let in a bit more noise than the bigger on-ear QC15 headphones, but the added noise is worth the tremendous decrease in size and weight.

On the software side, I’ll definitely echo Cody’s recommendation of Little Snitch; it’s an excellent product that I’ve used for years. You might also consider enabling the built-in firewall (see this write-up for enabling pf on OS X Mountain Lion; I haven’t tried it on Mavericks yet) for an added layer of network protection.

What about you, other road warriors out there? What are you carrying these days?

Update: Thanks to Ivan Pepelnjak, who pointed out that I had inadvertently swapped out the product names for the Bose earphones and headphones. That’s been corrected!


Most IT vendors agree that more extensive use of automation and orchestration in today’s data centers are beneficial to customers. The vendors may vary in their approach to providing this automation and orchestration—some may prefer to do it in software (VMware would be one of these, along with other software companies like Microsoft and Red Hat), while others want to do it in hardware. There are advantages and disadvantages to each approach, naturally, and customers need to evaluate the various solutions against their own requirements to find the best fit.

However, the oft-overlooked problem that more extensive use of automation and orchestration creates is one of control—specifically, how customers can control this automation and orchestration according to their own specific policy. A recent post on the Network Heresy site discusses the need for policy in fully automated IT environments:

However, fully automated IT management is a double-edged sword. While having people on the critical path for IT management was time-consuming, it provided an opportunity to ensure that those resources were managed sensibly and in a way that was consistent with how the business said they ought to be managed. In other words, having people on the critical path enabled IT resources to be managed according to business policy. We cannot simply remove those people without also adding a way of ensuring that IT resources obey business policy.

VMware, along with a number of other companies, has launched an open source effort to address this challenge: finding a way to enable customers to manage their resources according to their business policy, and do so in a cloud-agnostic way. This effort is called Congress, and it has received some attention from those who think it’s a critical project. I’m really excited to be involved in this project, and equally excited to be working with some extremely well-respected individuals across a number of different companies (this is most definitely not a VMware-only project). I believe that creating an open source solution to the policy problem will further the cause of cloud computing and the transformation of our industry. I strongly urge you to read this first post, titled “On Policy in the Data Center: The policy problem”, and stay tuned for future blog posts that will dive into even greater detail. Exciting times are ahead!


Crossing the Threshold

Last week while attending the CloudStack Collaboration Conference in my home city of Denver, I had a bit of a realization. I wanted to share it here in the hopes that it might serve as an encouragement for others out there.

Long-time readers know that one of my projects over the last couple of years has been to become more fluent in Linux (refer back to my 2012 project list and my 2013 project list). I gave myself a B+ for my efforts last year, feeling that I had made good progress over the course of the year. Even so, I felt like there was still so much that I needed to learn. As so many of us are inclined to do, I was more focused on what I still hadn’t learned instead of taking a look at what I had learned.

This is where last week comes in. Before the conference started, I participated in a couple of “mini boot camps” focused on CloudStack and related tools/clients/libraries. (You may have seen some of my tweets about tools like cloudmonkey, Apache libcloud, and awscli/ec2stack.) As I worked through the boot camps, I could hear the questions that other attendees were asking as well as the tasks with which others were struggling. Folks were wrestling with what I thought were pretty simple tasks; these were not, after all, very complex exercises. So what if the lab guide wasn’t complete or correct? You should be able to figure it out, right?

Then it hit me. I’m a Linux guy now.

That’s right—I had crossed the threshold between “working on being a Linux guy” and “being a Linux guy.” It’s not that I know everything there is to know (far from it!), but that the base level of knowledge had finally accrued to a level where—upon closer inspection—I realized that I was fluent enough that I could perform most common tasks without a great deal of effort. I knew enough to know what to do when something didn’t work, or wasn’t configured properly, and the general direction in which to look when trying to determine exactly what was going on.

At this point you might be wondering, “What does that have to do with encouraging me?” That’s a fair question.

As IT professionals—especially those on the individual contributor (IC) track instead of the management track—we are tasked with having to constantly learn new products, new technologies, and new methodologies. Because the learning never stops (and that isn’t a bad thing, in my humble opinion), we tend to focus on what we haven’t mastered. We forget to look at what we have learned, at the progress that we have made. Maybe, like me, you’re on a journey of learning and education to move from being a specialist in one type of technology to a practitioner of another type. If that’s the case, perhaps it’s time you stop saying “I will be a <new technology> person” and say “I am a <new technology> person.” Perhaps it’s time for you to cross the threshold.


Welcome to Technology Short Take #40. The content is a bit light this time around; I thought I’d give you, my readers, a little break. Hopefully there’s still some useful and interesting stuff here. Enjoy!

Networking

  • Bob McCouch has a nice write-up on options for VPNs to AWS. If you’re needing to build out such a solution, you might want to read his post for some additional perspectives.
  • Matthew Brender touches on a networking issue present in VMware ESXi with regard to VMkernel multi-homing. This is something others have touched on before (including myself, back in 2008—not 2006 as I tweeted one day), but Matt’s write-up is concise and to the point. You’ll definitely want to keep this consideration in mind for your designs. Another thing to consider: vSphere 5.5 introduces the idea of multiple TCP/IP stacks, each with its own routing table. As the ability to use multiple TCP/IP stacks extends throughout vSphere, it’s entirely possible this limitation will go away entirely.
  • YAOFC (Yet Another OpenFlow Controller), interesting only because it focuses on issues of scale (tens of thousands of switches with hundreds of thousands of endpoints). See here for details.

Servers/Hardware

  • Intel recently announced a refresh of the E5 CPU line; Kevin Houston has more details here.

Security

  • This one slipped past me in the last Technology Short Take, so I wanted to be sure to include it here. Mike Foley—whom I’m sure many of you know—recently published an ESXi security whitepaper. His blog post provides more details, as well as a link to download the whitepaper.
  • The OpenSSL “Heartbleed” vulnerability has captured a great deal of attention (justifiably so). Here’s a quick article on how to assess if your Linux-based server is affected.

Cloud Computing/Cloud Management

  • I recently built a Windows Server 2008 R2 image for use in my OpenStack home lab. This isn’t as straightforward as building a Linux image (no surprises there), but I did find a few good articles that helped along the way. If you find yourself needing to build a Windows image for OpenStack, check out creating a Windows image on OpenStack (via Gridcentric) and building a Windows image for OpenStack (via Brent Salisbury). You might also check out Cloudbase.it, which offers a version of cloud-init for Windows as well as some prebuilt evaluation images. (Note: I was unable to get the prebuilt images to download, but YMMV.)
  • Speaking of building OpenStack images, here’s a “how to” guide on building a Debian 7 cloud image for OpenStack.
  • Sean Roberts recently launched a series of blog posts about various OpenStack projects that he feels are important. The first project he highlights is Congress, a policy management project that has recently gotten a fair bit of attention (see a reference to Congress at the end of this recent article on the mixed messages from Cisco on OpFlex). In my opinion, Congress is a big deal, and I’m really looking forward to seeing how it evolves.
  • I have a related item below under Virtualization, but I wanted to point this out here: work is being done on a VIF driver to connect Docker containers to Open vSwitch (and thus to OpenStack Neutron). Very cool. See here for details.
  • I love that Cody Bunch thinks a lot like I do, like this quote from a recent post sharing some links on OpenStack Heat: “That generally means I’ve got way too many browser tabs open at the moment and need to shut some down. Thus, here comes a huge list of OpenStack links and resources.” Classic! Anyway, check out the list of Heat resources, you’re bound to find something useful there.

Operating Systems/Applications

  • A short while back I had a Twitter conversation about spinning up a Minecraft server for my kids in my OpenStack home lab. That led to a few other discussions, one of which was how cool it would be if you could use Heat autoscaling to scale Minecraft. Then someone sends me this.
  • Per the Microsoft Windows Server Team’s blog post, the Windows Server 2012 R2 Update is now generally available (there’s also a corresponding update for Windows 8.1).

Storage

  • Did you see that EMC released a virtual edition of VPLEX? It’s being called the “data plane” for software-defined storage. VPLEX is an interesting product, no doubt, and the introduction of a virtual edition is intriguing (but not entirely unexpected). I did find it unusual that the release of the virtual edition signaled the addition of a new feature called “MetroPoint”, which allows two sites to replicate back to a single site. See Chad Sakac’s blog post for more details.
  • This discussion on MPIO and in-guest iSCSI is a great reminder that designing solutions in a virtualized data center (or, dare I say it—a software-defined data center?) isn’t the same as designing solutions in a non-virtualized environment.

Virtualization

  • Ben Armstrong talks briefly about Hyper-V protected networks, which is a way to protect a VM against network outage by migrating the VM to a different host if a link failure occurs. This is kind of handy, but requires Windows Server clustering in order to function (since live migration in Hyper-V requires Windows Server clustering). A question for readers: is Windows Server clustering still much the same as it was in years past? It was a great solution then, but now it seems outdated.
  • At the same time, though, Microsoft is making some useful networking features easily accessible in Hyper-V. Two more of Ben’s articles show off the DHCP Guard and Router Guard features available in Hyper-V on Windows Server 2012.
  • There have been a pretty fair number of posts talking about nested ESXi (ESXi running as a VM on another hypervisor), either on top of ESXi or on top of VMware Fusion/VMware Workstation. What I hadn’t seen—until now—was how to get that working with OpenStack. Here’s how Mathias Ewald made it work.
  • And while we’re talking nested hypervisors, be sure to check out William Lam’s post on running a nested Xen hypervisor with VMware Tools on ESXi.
  • Check out this potential way to connect Docker containers with Open vSwitch (which then in turn opens up all kinds of other possibilities).
  • Jason Boche regales us with a tale of a vCenter 5.5 Update 1 upgrade that results in missing storage providers. Along the way, he also shares some useful information about Profile-Driven Storage in general.
  • Eric Gray shares information on how to prepare an ESXi ISO for PXE booting.
  • PowerCLI 5.5 R2 has some nice new features. Skip over to Alan Renouf’s blog to read up on what is included in this latest release.

I should close things out now, but I do have one final link to share. I really enjoyed Nick Marshall’s recent post about the power of a tweet. In the post, Nick shares how three tweets—one with Duncan Epping, one with Cody Bunch, and one with me—have dramatically altered his life and his career. It’s pretty cool, if you think about it.

Anyway, enough is enough. I hope that you found something useful here. I encourage readers to contribute to the discussion in the comments below. All courteous comments are welcome.

