In just a few weeks, I’ll be participating in a book sprint to create a book that provides architecture and design guidance for building OpenStack-based clouds. For those of you who aren’t familiar with the idea of a book sprint, it’s been described to me like this:

  1. Take a group of people—a mix of technical experts and experienced writers—and lock them in a room.
  2. Don’t let them out until they’ve written a book.

(That really is how people have described it to me. I’ll share my experiences after it’s done.)

The architecture/design guide that results from this book sprint will join the results of previous book sprints: the operations guide and the security guide. (Are there others?)

This is my first book sprint, and I’m really excited to be able to participate. It will be great to have the opportunity to work with some very talented folks within the OpenStack community (some I’ve met/already know, others I will be meeting for the very first time). VMware has generously offered to host the book sprint, so we’ll be spending the week locked in a room in Palo Alto—but I’m hoping we might “need” to get some inspiration and spend some time outside on the campus!

Stay tuned for more details, as well as a post-mortem after the book sprint has wrapped up. Writing a book in five days is going to be a challenge, but I’m looking forward to the opportunity!


Welcome to Technology Short Take #42, another installment in my ongoing series of irregularly published collections of news, items, thoughts, rants, raves, and tidbits from around the Internet, with a focus on data center-related technologies. Here’s hoping you find something useful!

Networking

  • Anthony Burke’s series on VMware NSX continues with part 5.
  • Aaron Rosen, a Neutron contributor, recently published a post about a Neutron extension called Allowed-Address-Pairs and how you can use it to create high availability instances using VRRP (via keepalived). Very cool stuff, in my opinion.
  • Bob McCouch has a post over at Network Computing (where I’ve recently started blogging as well—see my first post) discussing his view on how software-defined networking (SDN) will trickle down to small and mid-sized businesses. He makes comparisons among server virtualization, 10 Gigabit Ethernet, and SDN, and feels that in order for SDN to really hit this market it needs to be “not a user-facing feature, but rather a means to an end” (his words). I tend to agree—focusing on SDN is focusing on the mechanism, rather than focusing on the problems the mechanism can address.
  • Want or need to use multiple external networks in your OpenStack deployment? Lars Kellogg-Stedman shows you how in this post on multiple external networks with a single L3 agent.

Servers/Hardware

  • There was some noise this past week about Cisco UCS moving into the top x86 blade server spot for North America in Q1 2014. Kevin Houston takes a moment to explore some ideas why Cisco was so successful in this post. I agree that Cisco had some innovative ideas in UCS—integrated management and server profiles come to mind—but my biggest beef with UCS right now is that it is still primarily a north/south (server-to-client) architecture in a world where east/west (server-to-server) traffic is becoming increasingly critical. Can UCS hold on in the face of a fundamental shift like that? I don’t know.

Security

  • Need to scramble some data on a block device? Check out this command; a rough sketch of the general approach appears just below this list. (I love the commandlinefu.com site. It reminds me that I still have so much yet to learn.)
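For reference, the general approach looks something like the following (a rough sketch only, not necessarily the exact command from the linked post; /dev/sdX is a placeholder for a real device):

    # Overwrite an entire block device with pseudo-random data.
    # WARNING: this destroys all data on the device.
    sudo dd if=/dev/urandom of=/dev/sdX bs=1M

    # Alternatively, shred can perform multiple overwrite passes:
    sudo shred -v -n 3 /dev/sdX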

Cloud Computing/Cloud Management

  • Want to play around with OpenDaylight and OpenStack? Brent Salisbury has a write-up on how to run OpenStack Icehouse (deployed via DevStack) together with OpenDaylight.
  • Puppet Labs has released a module that allows users to programmatically (via Puppet) provision and configure Google Compute Engine (GCE) instances. More details are available in the Puppet Labs blog post.
  • I love how developers come up with these themes around certain projects. Case in point: “Heat” is the name of the project for orchestrating resources in OpenStack, HOT is the name for the format of Heat templates, and Flame is the name of a new project to automatically generate Heat templates.

Operating Systems/Applications

  • I can’t imagine that anyone has been immune to the onslaught of information on Docker, but here’s an article that might be helpful if you’re still looking for a quick and practical introduction.
  • Many of you are probably familiar with Razor, the project that former co-workers Nick Weaver and Tom McSweeney created when they were at EMC. Tom has since moved on to CSC (via the vCHS team at VMware) and has launched a “next-generation” version of Razor called Hanlon. Read more about Hanlon and why this is a new/separate project in Tom’s blog post here.
  • Looking for a bit of clarity around CoreOS and Project Atomic? I found this post by Major Hayden to be extremely helpful and informative. Both of these projects are on my radar, though I’ll probably focus on CoreOS first as the (currently) more mature solution.
  • Linux Journal has a nice multi-page write-up on Docker containers that might be useful if you are still looking to understand Docker’s basic building blocks.
  • I really enjoyed Donnie Berkholz’ piece on microservices and the migrating Unix philosophy. It was a great view into how composability can (and does) shift over time. Good stuff, I highly recommend reading it.
  • cURL is an incredibly useful utility, especially in today’s age of HTTP-based REST APIs. Here’s a list of 9 uses for cURL that are worth knowing. This article on testing REST APIs with cURL is handy, too; a couple of generic examples appear just below this list.
  • And for something entirely different…I know that folks love to beat up AppleScript, but it’s cross-application tasks like this that make it useful.
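As promised above, here are a couple of generic cURL examples for poking at an HTTP-based REST API (the endpoints and payload here are hypothetical, just to show the shape of the commands):

    # Simple GET, including the response headers in the output
    curl -i http://api.example.com/v1/widgets

    # POST a JSON payload
    curl -X POST -H "Content-Type: application/json" \
      -d '{"name": "test-widget"}' \
      http://api.example.com/v1/widgets

    # Pass an authentication token in a request header
    curl -H "X-Auth-Token: $TOKEN" http://api.example.com/v1/widgets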

Storage

  • Someone recently brought the open source Open vStorage project to my attention. Open vStorage compares itself to VMware VSAN, but with support for multiple storage backends and multiple hypervisors. Like a lot of other solutions, it’s implemented as a VM that presents NFS back to the hypervisors. If anyone out there has used it, I’d love to hear your feedback.
  • Erik Smith at EMC has published a series of articles on “virtual storage networks.” There’s some interesting content there—I haven’t finished reading all of the posts yet, as I want to be sure to take the time to digest them properly. If you’re interested, I suggest starting out with his introductory post (which, strangely enough, wasn’t the first post in the series), then moving on to part 1, part 2, and part 3.

Virtualization

  • Did you happen to see this write-up on migrating a VMware Fusion VM to VMware’s vCloud Hybrid Service? For now—I believe there are game-changing technologies out there that will alter this landscape—one of the very tangible benefits of vCHS is its strong interoperability with your existing vSphere (and Fusion!) workloads.
  • Need a listing of the IP addresses in use by the VMs on a given Hyper-V host? Ben Armstrong shares a bit of PowerShell code that produces just such a listing. As Ben points out, this can be pretty handy when you’re trying to track down a particular VM.
  • vCenter Log Insight 2.0 was recently announced; Vladan Seget has a decent write-up. I’m thinking of putting this into my home lab soon for gathering event information from VMware NSX, OpenStack, and the underlying hypervisors. I just need more than 24 hours in a day…
  • William Lam has an article on lldpnetmap, a little-known utility for mapping ESXi interfaces to physical switches. As the name implies, this relies on LLDP, so switches that don’t support LLDP or that don’t have LLDP enabled won’t work correctly. Still, a useful utility to have in your toolbox.
  • Technology previews of the next versions of Fusion (Fusion 7) and Workstation (Workstation 11) are available; see Eric Sloof’s articles (here and here for Fusion and Workstation, respectively) for more details.
  • vSphere 4 (and its associated pieces) is no longer under general support. Sad face, but time stops for no man (or product).
  • Having some problems with VMware Fusion’s networking? Cody Bunch channels his inner Chuck Norris to kick VMware Fusion networking in the teeth.
  • Want to preview OS X Yosemite? Check out William Lam’s guide to using Fusion or vSphere to preview the new OS X beta release.

I’d better wrap this up now, or it’s going to turn into one of Chad’s posts. (Just kidding, Chad!) Thanks for taking the time to read this far!


Today marked the start of Dockercon, and Docker, Inc., the commercial entity behind the incredibly popular open source Docker project, is taking this opportunity to make its move. Already the focus of quite a bit of attention (some might say a disproportionate amount of attention), Docker is making several announcements today that indicate an intent to broaden its reach:

  • Docker (also referred to as Docker Engine) 1.0 is released today, signaling the project’s readiness to be officially used in production environments (although many firms were/are already using pre–1.0 releases in production)
  • Docker is introducing a new Enterprise Support program to give enterprises the warm cuddlies that using Docker in production environments is safe. The new program aims to provide (in Docker’s words) “training, expertise, and support necessary to stand-up mission-critical workloads built on the Docker platform.”
  • Docker is announcing a set of ten systems integrator partners to help drive commercial adoption.
  • Docker is also unveiling Docker Hub, a cloud-based service (naturally!) for users, content, and workflows. Docker Hub will offer a registry for a comprehensive list of “Dockerized” applications (taking the place of the Docker Index, I assume), along with a management console, user authentication, an automated build service, the Docker Hub API, and collaboration tools to help users manage and share applications.
  • A key part of Docker Hub is Official Repositories; these are “Dockerized” applications that are maintained, supported, and available to all Docker Hub users. This will initially include applications like MongoDB, MySQL, Nginx, Redis, and WordPress, but it is open to any community group or ISV. (Obviously, there are some requirements around committing to maintenance of the application on an ongoing basis.)

I’m certainly a fan of using Docker (or other container technologies) where it makes sense. I fear, though, that the intense focus (some might say hype) around Docker will lead more than a few organizations down a path of trying to make Docker containers their “be all end all” solution. If that trend grows too large, that could be as damaging (if not more damaging) to Docker than being ignored. Time will tell. In the meantime, I’d strongly recommend getting a grasp on the basics of Docker so that you can better understand how it might fit into your overall solution. (My introductory post on Docker, while a bit dated, might prove useful in that regard.)
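If you want to kick the tires yourself, a minimal first session with the Docker CLI looks something like this (a hedged sketch; the image name is just an example):

    # Pull a base image from the public registry
    docker pull ubuntu

    # Launch an interactive container from that image
    docker run -i -t ubuntu /bin/bash

    # In another terminal: list running containers
    docker ps

    # List all containers, including ones that have exited
    docker ps -a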


As IT pros at the “cutting edge” of technology and industry change, I think sometimes we forget that not everyone has the same mindset toward learning, growth, and career evolution. That’s especially true, I think, for those of us who are bloggers, because it’s our passion for the technology that not only drives us to write about it but also drives us to constantly explore new trends, new areas, and new concepts. It is that passion that drives us to seek out new ways the technology could be applied to our jobs. That passion sustained us over the years, as we progressed from Windows admins to VMware admins and now to virtualization and cloud architects. That passion led us to “bring our work home” and build home labs. We’ve had years of actively seeking out new layers of knowledge to build on top of what we already knew.

This isn’t a bad thing; not by any stretch. But my point is this—we must consider that not everyone is like us. Not everyone is driven by a passion for the technology. Not everyone seeks out new technologies and explores new ways to put those technologies to work. Some IT pros like to leave their work at work. And that’s OK, too. However, knowing that there are folks out there who don’t have that same passion and don’t have years of layering pieces of information on top of one another, it’s our job not to berate them about change but rather to encourage and educate them about why change is needed, how that change will affect them, and what they can do about it.

There have been times when I’ve seen some IT pros lecture others about how they aren’t embracing change, how they aren’t growing fast enough, how they aren’t headed in the right direction, and how technology will leave them behind. (Shoot, I’ve probably done it as well—none of us are perfect, that’s for sure.) I think we can all agree that career evolution is a necessity, but rather than jumping on the “You’d better change or else” bandwagon, wouldn’t we be better served by asking these simple questions:

  • What can I do to help others understand the changes that are coming?
  • Are there things I can do to help others formulate a plan to cope with change?
  • What can I do to help others get the information they need?
  • How can I help others know in what ways this information applies to them?

I don’t know, perhaps I’m overly optimistic, overly idealistic, or overly naive (or all three). I just think that maybe if we spent less time preaching about how career evolution has to occur and instead focused on helping others succeed at career evolution, we’d probably all be a little bit better off. This aligns really well, too, with some thinking I’ve been doing about my own personal “mission statement” and purpose, which centers around helping others.

Feel free to tell me what you think in the comments below—courteous comments are always welcome.


I don’t think that it is any secret that I’ve become a big fan of Markdown (MultiMarkdown to be precise; more on that in a moment) over the last couple of years. Likewise, it’s not really a secret that I’m an OS X user, although my “fanboi” status for OS X has waned a bit since the introduction of Lion and Mountain Lion (neither of which is as good, in my opinion, as Snow Leopard, but that’s an entirely different discussion—the jury is still out on Mavericks). In this post, I want to share with you a few Markdown tools for OS X that I’ve been using and that (hopefully) you’ll find helpful as well.

BBEdit

Because Markdown is just plain text, a good text editor is essential. For a long time, I was a diehard TextMate fan. It was lean, it was fast, and the syntax highlighting for Markdown was really good. Over time, though, as TextMate faded, I found myself needing a new text editor. (Yes, I looked at the TextMate 2.0 alpha releases. I didn’t really like where the project was headed, though I’m sure that it’s perfectly fine for many folks.) I played with a variety of text editors, and finally settled on BBEdit, from Bare Bones Software.

One of the key things that attracted me to BBEdit was its extensibility. (This included fairly extensive AppleScript support.) For example, once I’d wrapped my head around how BBEdit works (which meant unwrapping my head from how TextMate worked), it was relatively trivial to incorporate additional AppleScripts and even UNIX scripts into my workflow (here’s a non-Markdown example). I’m still uncovering tips and tricks for maximizing my use of BBEdit, but if you’re in the market for a good text editor to use with Markdown, I’d recommend giving BBEdit a serious look.

MultiMarkdown

It didn’t take me too long to stop relying on the built-in Markdown processors in my text editors and switch to the standalone MultiMarkdown processor from MultiMarkdown’s creator, Fletcher Penney. In addition to a few syntax extensions, MultiMarkdown (MMD) adds more output options—not just HTML, but also OPML and ODF. (ODF can then be easily transformed into RTF or DOC/DOCX using a built-in OS X tool named textutil.)
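As a rough illustration (filenames here are placeholders, and flag syntax may vary a bit between MultiMarkdown versions), invoking the standalone processor from the command line looks something like this:

    # Convert a MultiMarkdown document to HTML (the default output)
    multimarkdown article.md > article.html

    # Convert to OPML or to flat ODF instead
    multimarkdown -t opml article.md > article.opml
    multimarkdown -t odf article.md > article.fodt

    # textutil, built into OS X, handles conversions between rich-text
    # formats; for example, HTML to RTF:
    textutil -convert rtf article.html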

Because of BBEdit’s extensibility—specifically, because of the ability to integrate UNIX scripts into the application—it was a piece of cake to use BBEdit for writing the Markdown, then leverage the standalone MMD processor to convert to the final output format.

One other thing I really like about MultiMarkdown is the ability to add metadata to the document. This allows you to embed keywords, author name, document title, project, affiliation, dates, etc., into the document itself. This has two benefits:

  1. It makes it easier to find the right original documents, since each document now carries additional information that you can use in Spotlight.
  2. When converting to HTML, this information gets embedded into the HTML header.

To make embedding the Markdown metadata easier, I use a Typinator snippet.
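In case it helps, here’s a hedged sketch of what a MultiMarkdown metadata block looks like. The key/value pairs sit at the very top of the file, one per line, followed by a blank line before the document body; the values below are obviously just examples:

    Title:    Sample Document
    Author:   Your Name Here
    Date:     2014-06-20
    Keywords: markdown, multimarkdown, example

    # Introduction

    The document body starts here.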

QuickLook Generator

Available here, this QuickLook generator allows you to preview Markdown/MultiMarkdown files using OS X’s QuickLook functionality. This might not seem like a big deal, but I’ve actually found it quite useful.

Pandoc

Pandoc, from John MacFarlane, is a treasure trove of document conversion functionality. It supports an impressive list of both input and output formats. While the standalone MultiMarkdown processor handles most of what I need, having Pandoc in my pocket when I need some additional conversion options is very powerful. For example, I can (and sometimes do) use Pandoc to handle automatic conversions from Markdown/MultiMarkdown directly to Rich Text Format (RTF) or DOC/DOCX. Pandoc also allows you to specify custom templates, so that you could create custom RTF templates that would control how the final RTF document is formatted. All in all, a very powerful tool.
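For instance (a generic sketch; filenames and the reference document are hypothetical), a few common Pandoc conversions look like this:

    # Markdown (or MultiMarkdown) straight to DOCX
    pandoc -s notes.md -o notes.docx

    # Markdown to RTF
    pandoc -s notes.md -o notes.rtf

    # Use a reference DOCX to control the styling of the generated document
    pandoc -s notes.md --reference-docx=my-styles.docx -o notes.docx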

Marked

Marked is a relatively new addition to my Markdown toolset, although it’s been around for a while. The premise behind Marked is simple: preview any Markdown (or MultiMarkdown) document. It turns out this functionality is quite useful (more so than I thought it would be, to be honest). Brett Terpstra, the developer behind Marked, has added features like readability statistics, keyword highlighting, and repetitive word visualization to make Marked even more useful. Another feature I’m finding very handy is the ability to easily export from Marked to PDF, something that is possible using other tools but not nearly as easy. Now, if only I could control Marked via AppleScript, so I could automate that process…that would be ideal.

I hope this information is useful to someone. I’ve found that using Markdown via some of these tools has really helped my productivity; perhaps you will find the same thing. As always, feel free to post any courteous comments, corrections, or clarifications below.


Welcome to Technology Short Take #41, the latest in my series of random thoughts, articles, and links from around the Internet. Here’s hoping you find something useful!

Networking

  • Network Functions Virtualization (NFV) is a networking topic that is starting to get more and more attention (some may equate “attention” with “hype”; I’ll allow you to draw your own conclusion there). In any case, I liked how this article really hit upon what I personally feel is something many people are overlooking in NFV. Many vendors are simply rushing to provide virtualized versions of their solution without addressing the orchestration and automation side of the house. I’m looking forward to part 2 on this topic, in which the author plans to share more technical details.
  • Rob Sherwood, CTO of Big Switch, recently published a reasonably in-depth look at “modern OpenFlow” implementations and how they can leverage multiple tables in hardware. Some good information in here, especially on OpenFlow basics (good for those of you who aren’t familiar with OpenFlow).
  • Connecting Docker containers to Open vSwitch is one thing, but what about using Docker containers to run Open vSwitch in userspace? Read this.
  • Ivan knocks centralized SDN control planes in this post. It sounds like Ivan favors scale-out architectures over the scale-up architectures typically seen in centralized control plane deployments.
  • Looking for more VMware NSX content? Anthony Burke has started a new series focusing on VMware NSX in pure vSphere environments. As far as I can tell, Anthony is up to 4 posts in the series so far. Check them out here: part 1, part 2, part 3, and part 4. Enjoy!

Servers/Hardware

  • Good friend Simon Seagrave is back to the online world again with this heads-up on a potential NIC issue with an HP ProLiant firmware update. The post also contains a link to a fix for the issue. Glad to see you back again, Simon!
  • Tom Howarth asks, “Is the x86 blade server dead?” (OK, so he didn’t use those words specifically. I’m paraphrasing for dramatic effect.) The basic premise of Tom’s position is that new technologies like server-side caching and VSAN/Ceph/Sanbolic (turning direct-attached storage into shared storage) will dramatically change the landscape of the data center. I would generally agree, although I’m not sure that I agree with Tom’s statement that “complexity is reduced” with these technologies. I think we’re just shifting the complexity to a different place, although it’s a place where I think we can better manage the complexity (and perhaps mask it). What do you think?

Security

Cloud Computing/Cloud Management

  • Juan Manuel Rey has launched a series of blog posts on deploying OpenStack with KVM and VMware NSX. He has three parts published so far; all good stuff. See part 1, part 2, and part 3.
  • Kyle Mestery brought to my attention (via Twitter) this list of the “best newly-available OpenStack guides and how-to’s”. It was good to see a couple of Cody Bunch’s articles on the list; Cody’s been producing some really useful OpenStack content recently.
  • I haven’t had the opportunity to use SaltStack yet, but I’m hearing good things about it. It’s always helpful (to me, at least) to be able to look at products in the context of solving a real-world problem, which is why seeing this post with details on using SaltStack to automate OpenStack deployment was helpful.
  • Here’s a heads-up on a potential issue with the vCAC 6.0.1.1 upgrade—the upgrade apparently changes some configuration files. The linked blog post provides more details on which files get changed. If you’re looking at doing this upgrade, read this to make sure you aren’t adversely affected.
  • Here’s a post with some additional information on OpenStack live migration that you might find useful.

Operating Systems/Applications

  • RHEL7, Docker, and Puppet together? Here’s a post on just such a use case (oh, I forgot to mention OpenStack’s involved, too).
  • Have you ever walked through a spider web because you didn’t see it ahead of time? (Not very fun.) Sometimes I feel that way with certain technologies or projects—like there are connections there with other technologies, projects, trends, etc., that aren’t quite “visible” just yet. That’s where I am right now with the recent hype around containers and how they are going to replace VMs. I’m not so sure I agree with that just yet…but I have more noodling to do on the topic.

Storage

  • “Server SAN” seems to be the name that is emerging to describe various technologies and architectures that create pools of storage from direct-attached storage (DAS). This would include products like VMware VSAN as well as projects like Ceph and others. Stu Miniman has a nice write-up on Server SAN over at Wikibon; if you’re not familiar with some of the architectures involved, that might be a good place to start. Also at Wikibon, David Floyer has a write-up on the rise of Server SAN that goes into a bit more detail on business and technology drivers, friction to adoption, and some recommendations.
  • Red Hat recently announced they were acquiring Inktank, the company behind the open source scale-out Ceph project. Jon Benedict, aka “Captain KVM,” weighs in with his thoughts on the matter. Of course, there’s no shortage of thoughts on the acquisition—a quick web search will prove that—but I find it interesting that none of the “big names” in storage social media had anything to say (not that I could find, anyway). Howard? Stephen? Chris? Martin? Bueller?

Virtualization

  • Doug Youd pulled together a nice summary of some of the issues and facts around routed vMotion (vMotion across layer 3 boundaries, such as across a Clos fabric/leaf-spine topology). It’s definitely worth a read (and not just because I get mentioned in the article, either—although that doesn’t hurt).
  • I’ve talked before—although it’s been a while—about Hyper-V’s choice to rely on host-level NIC teaming in order to provide network link redundancy to virtual machines. Ben Armstrong talks about another option, guest-level NIC teaming, in this post. I’m not so sure that using guest-level teaming is any better than relying on host-level NIC teaming; what’s really needed is a more full-featured virtual networking layer.
  • Want to run nested ESXi on vCHS? Well, it’s not supported…but William Lam shows you how anyway. Gotta love it!
  • Brian Graf shows you how to remove IP pools using PowerCLI.

Well, that’s it for this time around. As always, I welcome all courteous comments, so feel free to share your thoughts, ideas, rants, links, or feedback in the comments below.


In an earlier post, I provided an introduction to OpenStack Heat, and provided an example Heat template that launched two instances with a logical network and a logical router. Here I am going to provide another view of a Heat template that does the same thing, but uses YAML and the HOT format instead of JSON and the CFN format.

Here’s the full template (click here if the code box below isn’t showing up):
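(If the code box doesn’t render for you, here is a rough sketch of what a HOT-format YAML template along these lines might look like. This is a hedged reconstruction, not necessarily the exact template from the post, and the provider network UUID, image ID, and flavor are placeholders you’d replace with values from your own environment.)

    heat_template_version: 2013-05-23

    description: Two instances on a logical network behind a logical router

    resources:
      app_net:
        type: OS::Neutron::Net
        properties:
          name: app-net

      app_subnet:
        type: OS::Neutron::Subnet
        properties:
          network_id: { get_resource: app_net }
          cidr: 10.10.10.0/24
          dns_nameservers: [ "8.8.8.8" ]
          enable_dhcp: true
          gateway_ip: 10.10.10.254

      app_router:
        type: OS::Neutron::Router

      app_router_gateway:
        type: OS::Neutron::RouterGateway
        properties:
          router_id: { get_resource: app_router }
          network_id: "<provider-network-uuid>"    # placeholder for your provider network

      app_router_interface:
        type: OS::Neutron::RouterInterface
        properties:
          router_id: { get_resource: app_router }
          subnet_id: { get_resource: app_subnet }

      instance_port_1:
        type: OS::Neutron::Port
        properties:
          network_id: { get_resource: app_net }
          security_groups: [ default ]

      instance_port_2:
        type: OS::Neutron::Port
        properties:
          network_id: { get_resource: app_net }
          security_groups: [ default ]

      instance_1:
        type: OS::Nova::Server
        properties:
          image: "<glance-image-uuid>"             # placeholder
          flavor: m1.small                         # flavor name from your environment
          networks:
            - port: { get_resource: instance_port_1 }

      instance_2:
        type: OS::Nova::Server
        properties:
          image: "<glance-image-uuid>"             # placeholder
          flavor: m1.small                         # flavor name from your environment
          networks:
            - port: { get_resource: instance_port_2 }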

I won’t walk through the whole template again, but rather just talk briefly about a couple of the differences between this YAML-encoded template and the earlier JSON-encoded template:

  • You’ll note the syntax is much simpler. JSON can trip you up on commas and such if you’re not careful; YAML is simpler and cleaner.
  • You’ll note the built-in functions are different, as I pointed out in my first Heat post. Instead of using Ref to refer to an object defined elsewhere in the template, HOT uses get_resource.

Aside from these differences, you’ll note that the resource types and properties match between the two; this is because resource types are separate and independent from the template format.

Feel free to post any questions, corrections, or clarifications in the comments below. Thanks for reading!


In this post, I’m going to provide a quick introduction to OpenStack Heat, the orchestration service that allows you to spin up multiple instances, logical networks, and other cloud services in an automated fashion. Note that this is only an introductory post—I’m not an expert on Heat, but I did want to share at least some basic information to help others get started as well.

Let’s start with some terminology, so that there is no confusion about the terms later when we start using them in specific examples:

  • Stack: In Heat parlance, a stack is the collection of objects—or resources—that will be created by Heat. This might include instances (VMs), networks, subnets, routers, ports, router interfaces, security groups, security group rules, auto-scaling rules, etc.
  • Template: Heat uses the idea of a template to define a stack. If you wanted to have a stack that created two instances connected by a private network, then your template would contain the definitions for two instances, a network, a subnet, and two network ports. Since templates are central to how Heat operates, I’ll show you examples of templates in this post.
  • Parameters: A Heat template has three major sections, and one of those sections defines the template’s parameters. These are tidbits of information—like a specific image ID, or a particular network ID—that are passed to the Heat template by the user. This allows users to create more generic templates that could potentially use different resources.
  • Resources: Resources are the specific objects that Heat will create and/or modify as part of its operation, and the second of the three major sections in a Heat template.
  • Output: The third and last major section of a Heat template is the output, which is information that is passed to the user, either via OpenStack Dashboard or via the heat stack-list and heat stack-show commands.
  • HOT: Short for Heat Orchestration Template, HOT is one of two template formats used by Heat. HOT is not backwards-compatible with AWS CloudFormation templates and can only be used with OpenStack. Templates in HOT format are typically—but not necessarily required to be—expressed as YAML (more information on YAML here). (I’ll do my best to avoid saying “HOT template,” as that would be redundant, wouldn’t it?)
  • CFN: Short for AWS CloudFormation, this is the second template format that is supported by Heat. CFN-formatted templates are typically expressed in JSON (see here and see my non-programmer’s introduction to JSON for more information on JSON specifically).

OK, that should be enough to get us going. (BTW, the OpenStack Heat documentation actually has a really good glossary. Please note that this link might break as OpenStack development continues.)

Architecturally, Heat has a few major components:

  • The heat-api component implements an OpenStack-native RESTful API. This component processes API requests by sending them to the Heat engine via AMQP.
  • The heat-api-cfn component provides an API compatible with AWS CloudFormation, and also forwards API requests to the Heat engine over AMQP.
  • The heat-engine component provides the main orchestration functionality.

All of these components would typically be installed on an OpenStack “controller” node that also housed the API servers for Nova, Glance, Neutron, etc. As far as I know, though, there is nothing that requires them to be installed on the same system. Like most of the rest of the OpenStack services, Heat uses a back-end database for maintaining state information.

Now that you have an idea about Heat’s architecture, I’ll walk you through an example template that I created and tested on my own OpenStack implementation (running OpenStack Havana on Ubuntu 12.04 with KVM and VMware NSX). Here’s the full template:

(Can’t see the code above? Click here.)
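(As a fallback in case the embedded code box doesn’t display, here is a hedged, partial sketch, covering just the first few resources, of what a CFN-formatted JSON template along these lines might look like. It is not the exact template from the post, so the line-number references in the walkthrough below refer to the original embed rather than to this excerpt.)

    {
      "AWSTemplateFormatVersion": "2010-09-09",
      "Resources": {
        "app_net": {
          "Type": "OS::Neutron::Net",
          "Properties": {
            "name": "app-net"
          }
        },
        "app_subnet": {
          "Type": "OS::Neutron::Subnet",
          "Properties": {
            "network_id": { "Ref": "app_net" },
            "cidr": "10.10.10.0/24",
            "dns_nameservers": ["8.8.8.8"],
            "enable_dhcp": true,
            "gateway_ip": "10.10.10.254"
          }
        },
        "app_router": {
          "Type": "OS::Neutron::Router"
        }
      }
    }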

Let’s walk through this template real quick:

  • First, note that I’ve specified the template version as “AWSTemplateFormatVersion”. One thing that confused me at first was the relationship between the template format (CFN vs. HOT) and resource types. It turns out these are independent of one another; you can—as I have done here—use HOT resource types (like OS::Neutron::Net) in a CFN template. Obviously, if you use HOT resources you’re not fully compatible with AWS. Also, as I stated earlier, CFN templates are typically expressed in JSON (as mine is). Heat does support YAML for CFN templates, although again you’d be sacrificing AWS compatibility.
  • You’ll note that my template skips any use of parameters and goes straight to resources. This is perfectly acceptable, although it means that some values (like the shared public provider network to which the logical router uplinks and the security group) have to be hard-coded in the template.
  • One thing that the template format does control is some of the syntax. So, for example, you’ll note the template uses “Resources”, “Type”, and “Properties.” In some of the other template formats, these keywords can be specified in lowercase.
  • The first resource defined is a logical network, defined as type OS::Neutron::Net.
  • The next resource is a subnet (of type OS::Neutron::Subnet), which is associated with the previously-defined logical network through the use of the Ref built-in function on line 20. Built-in functions are another thing controlled by the template format, so when you want to refer to another object in a CFN template, you’ll use the Ref function as I did here. This associates the “network_id” property of the subnet with the logical network defined just prior. You’ll also note that the subnet resource has a number of properties associated with it—CIDR, DNS name servers, DHCP, and gateway IP address.
  • The third resource defined is a logical router.
  • After the logical router is defined, the template links the logical router to a pre-existing provider network via the OS::Neutron::RouterGateway type. (This was deprecated in Icehouse in favor of an external_gateway_info property on the logical router.) The UUID listed there is the UUID of a pre-existing provider network. Note the use of the Ref function again to link this resource back to the logical router.
  • Next up the template creates an interface on the logical router, using two Ref instances to link this router interface back to the logical router and the subnet created earlier. This means we are adding an interface to the referenced logical router on the specified subnet (and that interface will assume the IP address specified by the “gateway_ip” property on the subnet).
  • Next the template creates two Neutron ports, and links them to the default security group. Note that if you don’t specify a security group when creating the Neutron port, it will have none—and no traffic will pass.
  • Finally, the Heat template creates two instances (type OS::Nova::Server), using the “m1.xsmall” flavor and a hard-coded Glance image ID. These instances are connected to the Neutron ports created earlier using the Ref function once more.

(In case it wasn’t obvious already, you can’t just copy-and-paste this Heat template and use it in your own environment, as it references UUIDs for objects in my environment that won’t be the same.)

If you are going to use JSON (as I have here), then I’d recommend bookmarking a JSON validation site, such as jsonlint.com.

Once you have your Heat template defined, you can then use this template to create a stack, either via the heat CLI client or via the OpenStack Dashboard. I’ll attach a screenshot from a stack that I deployed via the Dashboard so that you can see what it looks like (click the image for a larger version):

A deployed Heat stack in OpenStack Dashboard
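If you’d rather use the heat CLI client mentioned above, creating and inspecting a stack from a template file looks roughly like this (the template filename and stack name are hypothetical, and option syntax may vary slightly between Heat releases):

    # Create a stack from a template file
    heat stack-create -f two-tier.json two-tier-stack

    # Check the status of your stacks
    heat stack-list

    # Show the details (including outputs) of a specific stack
    heat stack-show two-tier-stack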

Kinda nifty, don’t you think? Anyway, I hope this brief introduction to OpenStack Heat has proven useful. I do plan on covering some additional topics with OpenStack Heat in the near future, so stay tuned. In the meantime, if you have any questions, corrections, or clarifications, I invite you to add them to the comments below.


Reader Brian Markussen—with whom I had the pleasure to speak at the Danish VMUG in Copenhagen earlier this month—brought to my attention an issue between VMware vSphere’s health check feature and Cisco UCS when using Cisco’s VIC cards. His findings, confirmed by VMware support and documented in this KB article, show that the health check feature doesn’t work properly with Cisco UCS and the VIC cards.

Here’s a quote from the KB article:

The distributed switch network health check, including the VLAN, MTU, and teaming policy check can not function properly when there are hardware virtual NICs on the server platform. Examples of this include but are not limited to Broadcom Flex10 systems and Cisco UCS systems.

(Ignore the fact that “UCS systems” is redundant.)

According to Brian, a fix for this issue will be available in a future update to vSphere. In the meantime, there doesn’t appear to be any workaround, so plan accordingly.


Welcome to part 13 of the Learning NSX blog series, in which I revisit the idea of logical networking with VMware NSX. This is a topic I first discussed in part 5 of this series, but I want to go back and look at it again, this time from a more practical perspective of what it looks like to use VMware NSX for logical networking in an OpenStack environment.

If you haven’t been keeping up with the Learning NVP/NSX series, you’ll probably want to go back and catch up. Links to all the articles are found on my Learning NVP/NSX page. You’ll particularly want to be sure that you’ve read part 11 and part 12, which cover the OpenStack integration I’ll be leveraging in this post.

To start things off, let’s first do a quick recap of what it looks like to manually create a logical network in VMware NSX (all of this is described in part 5 of the series):

  1. Create a logical switch.
  2. Add logical switch ports to the newly-created logical switch.
  3. Edit the attachment of the logical switch ports to connect a VM’s virtual network interface card (NIC).

These three steps will establish a simple logical network within VMware NSX. Of course, this logical network won’t have any Dynamic Host Configuration Protocol (DHCP) services, but it will still work (you could manually assign IP addresses to VMs attached to this logical network).

Now that we have VMware NSX integrated with OpenStack, let’s revisit this process to see what it looks like. (I’ll assume that you’re logged into the OpenStack dashboard and have the necessary permissions to create networks, launch instances, etc.)

First, you’d need to create a network in OpenStack. To do this, it’s as simple as selecting Networks > Create Network, then providing a name for the new network (you could also use the neutron net-create command):

Creating a logical network in OpenStack

To exactly mirror the process I showed you in part 5—which did not include DHCP services—you’d need to also go to the Subnet tab and uncheck “Create Subnet” as well as go to the Subnet Detail tab and uncheck “Enable DHCP.” Once you unselect those options and click Create, then OpenStack will (through the Neutron plugin for NSX) create a logical switch in NSX. You can pop into NSX Manager to see this:

New logical switch in NSX Manager

As I pointed out in part 12, the UUID and os_tid tag on this object in NSX will provide the necessary ties back to the corresponding object in OpenStack.

Now go spin up a new instance and attach that instance to the logical network you just created. What you’ll find is that OpenStack will automatically handle the creation of the logical switch ports as well as the attachment of the VM’s virtual NIC to the logical switch. This helps underscore how VMware NSX was designed to be used in conjunction with a cloud management/orchestration system like OpenStack. (You can verify that the logical switch port is automatically created using NSX Manager and comparing the number of logical switch ports both before and after launching the new instance.)

Now that we have OpenStack up and running, though, we can create a logical network that does have DHCP services:

  1. Use the neutron net-create command to create a new logical network:

       neutron net-create logical-net-02

  2. Use neutron subnet-create to create a subnet for the new network:

       neutron subnet-create --name logical-subnet-02 logical-net-02 10.1.1.0/24

If you log into NSX Manager, you’ll see that a new logical switch (whose name matches the name you gave the logical network above) has been created, and you’ll also note that 1 logical switch port is already in use—even though you haven’t launched any instances yet! The easiest way to find out what is attached to that port is via the OpenStack dashboard. Once logged into the dashboard, select Networks, then click on the network you just created, and scroll down to the list of Ports. You’ll see there that OpenStack has automatically created a logical switch port for the DHCP services associated with the subnet you created above:

Ports on a logical network

If you’re a command-line freak, you could also get this information from the CLI:

  1. Find the subnet associated with the logical network you just created:

       SUBNET_ID=$(neutron subnet-list | awk '/\ logical-net-02\ / {print $2}')

  2. List all the ports on that subnet:

       neutron port-list | grep $SUBNET_ID

  3. In this case, there is only one port on that subnet, so you can capture the ID of that port in order to get more information about the port:

       PORT_ID=$(neutron port-list | grep $SUBNET_ID | awk '{print $2}')

  4. List the information associated with that specific port, paying particular attention to the device_owner attribute (which should show “network:dhcp”):

       neutron port-show $PORT_ID

If you have been reading along diligently, you’ll probably be able to put 2 and 2 together here to realize that the “network:dhcp” port is actually a port on OVS on the network node (which, if you’ll recall, is registered as a hypervisor in VMware NSX). If you’ve really been following my stuff closely, you’ll probably also know that the OVS port is connected to a veth pair, which in turn connects to a network namespace where an instance of dnsmasq is running. (Want to learn more about network namespaces? See here.)
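If you’d like to poke around that namespace yourself on the network node, something like the following should do the trick (a hedged sketch; Neutron names its DHCP namespaces with a qdhcp- prefix followed by the UUID of the logical network, so substitute your own network’s UUID):

    # List the network namespaces present on the network node
    sudo ip netns list

    # Look at the interfaces inside the DHCP namespace for your network
    sudo ip netns exec qdhcp-<network-uuid> ip addr

    # Verify that dnsmasq is running for that namespace
    ps aux | grep dnsmasq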

At this point, you should have a fairly clear understanding of how logical networking functions within an OpenStack environment with VMware NSX. I wanted to take the time to revisit this topic because future posts are going to assume that you understand these basic concepts and interactions as we explore more advanced functionality and more complex networking topologies.

Thanks for reading, and feel free to post any corrections, clarifications, or questions in the comments below.

