
This is part 17 of the Learning NSX blog series. In this post, I’ll show you how to add layer 2 (L2) connectivity to your NSX environment, and how to leverage that L2 connectivity in an NSX-powered OpenStack implementation. This will allow you, as an operator of an NSX-powered OpenStack cloud, to offer L2/bridged connectivity to your tenants as an additional option.

As you might expect, this post does build on content from previous posts in the series. Links to all the posts in the series are available on the Learning NVP/NSX page; in particular, this post will leverage content from part 6. Additionally, I’ll be discussing using NSX in the context of OpenStack, so reviewing part 11 and part 12 might also be helpful.

There are 4 basic steps to adding L2 connectivity to your NSX-powered OpenStack environment:

  1. Add at least one NSX gateway appliance to your NSX implementation. (Ideally, you would add two NSX gateway appliances for redundancy.)
  2. Create an NSX L2 gateway service.
  3. Configure OpenStack for L2 connectivity by configuring Neutron to use the L2 gateway service you just created.
  4. Add L2 connectivity to a Neutron logical network by attaching to the L2 gateway service.

Let’s take a look at each of these steps. (By the way, if the concept of “L2 connectivity” doesn’t make sense to you, please review part 1 of my “Introduction to Networking” series.)

Adding an NSX Gateway Appliance

I described the process for adding an NSX gateway appliance in part 6 of the series, so refer back to that article for details on how to add an NSX gateway appliance. The process for adding a gateway appliance is the same regardless of whether you’ll use that gateway appliance for L2 (bridged) or L3 (routed) connectivity.

A few things to note:

  • Generally, your gateway appliance will have at least three (3) network interfaces. One of those interfaces will be used for management traffic, one for transport (overlay) traffic, and one for external traffic. You’ll need to assign IP addresses to the management and transport interfaces, but the external interface does not require an IP address.
  • If you are going to use the gateway appliance to provide L2 connectivity to multiple VLANs, you’ll want to ensure that all appropriate VLANs are trunked to the external interface of the gateway appliances. If you are deploying redundant gateway appliances, make sure all the VLANs are trunked to all appliances.

Once you have the gateway appliance built and added to NSX using the instructions in part 6, you’re ready to proceed to the next step.

Creating an NSX L2 Gateway Service

After your gateway appliances (I’ll assume you’re using two appliances for redundancy) are built and added to NSX, you’re ready to create the L2 gateway service that will provide the L2 connectivity in and out of an NSX-backed logical network. This process is similar to the process described in part 9 of the series, which showed you how to add an L3 gateway service to NSX. (If you’re unclear on the difference between a gateway appliance and a gateway service, check out part 15 for a more detailed explanation.)

Before we walk through creating an L2 gateway service, keep in mind that a single broadcast domain on the physical network may be connected to either an L3 gateway service or an L2 gateway service, but not both. Let’s say you connect an L3 gateway service to VLAN 100 (perhaps using multiple VLANs as described in part 16). You can’t then also connect an L2 gateway service to VLAN 100; you’d need to use a different VLAN on the outside of the L2 gateway service. Be sure to take this fact into account in your designs.

To create an L2 gateway service, follow these steps from within NSX Manager:

  1. From the menu across the top of the NSX Manager page, select Network Components > Services > Gateway Services. This will take you to a page titled “Network Components Query Results,” where NSX Manager has precreated and executed a query for the list of gateway services. Your list may or may not be empty, depending on whether you’ve created other gateway services. Any gateway services that you’ve already created will be listed here.

  2. Click the Add button. This will open the Create Gateway Service dialog.

  3. Select “L2 Gateway Service” from the list. Other options in this list include “L3 Gateway Service” (you saw this in part 9) and “VTEP L2 Gateway Service” (to integrate a third-party top-of-rack [ToR] switch into NSX; you’ll use this in a future post). Click Next, or click on the “2. Basics” button on the left.

  4. Provide a display name for the new L2 gateway service, then click Next (or click on “3. Transport Nodes” on the left). You can optionally add tags here as well, in case you wanted to associate additional metadata with this logical object in NSX.

  5. On the Transport Nodes screen, click Add Gateway to select a gateway appliance (which is classified as a transport node within NSX; hypervisors are also transport nodes) to host this L2 gateway service.

  6. From the Edit Gateway dialog box that pops up, you’ll need to select a transport node and a device ID. The first option, the transport node, is pretty straightforward; this is a gateway appliance on which to host this gateway service. The device ID is the bridge (recall that NSX gateway appliances, by default, create OVS bridges to map to their interfaces) connected to the external network.

  7. Once you’ve added two (2) gateway appliances as transport nodes for your gateway service, click Save to create the gateway service and return to NSX Manager. You can create a gateway service with only a single gateway appliance, but it won’t be redundant or protected against the failure of that appliance.

NSX is now ready to provide L2 (bridged) connectivity between NSX-backed logical networks and external networks connected to the gateway appliances in the L2 gateway service. Before we can leverage this option inside OpenStack, though, we’ll need to first configure OpenStack to recognize and use this new L2 gateway service.

Configuring OpenStack for L2 Connectivity

Configuring OpenStack for L2 connectivity using NSX builds upon the specific details presented in part 12 of this series. I highly recommend reviewing that post if you haven’t already read it.

To configure OpenStack to recognize the L2 gateway service you just created, you’ll need to edit the configuration file for the NSX plugin on the Neutron server. In earlier versions of the plugin, this file was called nvp.ini and was found in the /etc/neutron/plugins/nicira directory. (In fact, this is the information I shared with you in part 12.) Newer versions of the plugin, however, use a configuration file named nsx.ini located in the /etc/neutron/plugins/vmware directory. I’ll assume you are using a newer version of the plugin.

Only a single change is needed to nsx.ini in order to configure OpenStack to recognize/use the new L2 gateway service. Simply add the UUID of the L2 gateway service (easily obtained via NSX Manager) to the nsx.ini file as the value for the default_l2_gw_service_uuid setting. (You followed a similar procedure in part 12 as part of the OpenStack integration, but for L3 connectivity that time.) Then restart the Neutron server, and you should be ready to go!
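As an illustration, the relevant portion of nsx.ini would look something like this. The UUID shown is a placeholder (substitute the UUID of your own L2 gateway service), and the section placement is my assumption based on the plugin’s defaults, so verify it against your own configuration file:

```ini
# /etc/neutron/plugins/vmware/nsx.ini (excerpt)
[DEFAULT]
# UUID of the L2 gateway service created in NSX Manager.
# Placeholder value; use the UUID shown for your own gateway service.
default_l2_gw_service_uuid = f5e1c2d3-0000-0000-0000-000000000001
```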

Neutron recognizes L2 gateway services as network gateways, so all the related Neutron commands use the term net-gateway. You can verify that the L2 gateway service is recognized by OpenStack Neutron by running the following command with admin permissions:

neutron net-gateway-list

You should see a single entry in the list, with a description that reads something like “default L2 gateway service” or similar. As long as you see that entry, you’re ready to proceed! If you don’t see that entry, it’s time to check in NSX Manager and/or double-check your typing.
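If you’d like to script this verification, you can pull the gateway service’s UUID out of the listing with awk. The table below is a mocked-up sample of the client’s output; the UUID and the exact column layout are assumptions, so adjust the pattern to match what your client actually prints:

```shell
# Mocked-up sample of `neutron net-gateway-list` output (illustrative only).
sample='+--------------------------------------+----------------------------+
| id                                   | name                       |
+--------------------------------------+----------------------------+
| f5e1c2d3-0000-0000-0000-000000000001 | default L2 gateway service |
+--------------------------------------+----------------------------+'

# Print the UUID from the row whose name contains "default".
GWID=$(printf '%s\n' "$sample" | awk '/ default / {print $2}')
echo "$GWID"   # -> f5e1c2d3-0000-0000-0000-000000000001
```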

Adding L2 Connectivity to a Neutron Logical Network

With the NSX gateway appliances installed, the L2 gateway service created, and OpenStack Neutron configured appropriately, you’re now in a position to add L2 connectivity to a Neutron logical network. However, there are a few limitations that you’ll want to consider:

  • A given Neutron logical network may be connected to either a logical router (hosted on gateway appliances that are part of an L3 gateway service) or a network gateway (an L2 gateway service), but not both. In other words, you can provide L3 (routed) or L2 (bridged) connectivity into and out of logical networks, but not both simultaneously.
  • Each Neutron logical network may be associated with exactly one broadcast domain on the physical network. Similarly, each broadcast domain on the physical network may be associated with exactly one Neutron logical network. For example, you can’t associate VLAN 100 with both logical network A as well as logical network B.
  • Finally, by default, network gateway operations are restricted to users with administrative credentials. There is a model whereby tenants can have their own network gateways, but for the purposes of this article we’ll assume the default model of provider-supplied gateways.

With these considerations in mind, let’s walk through what’s required to add L2 connectivity to a Neutron logical network.

  1. If you don’t already have a logical network, create one using the neutron net-create command. This can be done with standard tenant credentials.

  2. If you had to create the logical network, create a subnet as well using the neutron subnet-create command. You can leave DHCP enabled on this Neutron subnet, as the Neutron DHCP server (which is an instance of dnsmasq running in a network namespace on a Neutron network node) won’t provide addresses to systems on the physical network. However, the logical network and the physical network are going to be sharing an IP address space, so it would probably be a good idea to control the range of addresses using the --allocation-pool parameter when creating the subnet. As with creating the network, standard tenant credentials are all that are needed here.

  3. You’ll need to get the UUID of the network gateway, which you can do with this command: neutron net-gateway-list | awk '/\ default\ / {print $2}'. (You can also assign this to an environment variable for use later, if that helps you.) You’ll also need the UUID of the logical network, which you can also store into an environment variable. This command and all subsequent commands require administrative credentials.

  4. Attach the logical network to the network gateway using the neutron net-gateway-connect command. Assuming that you’ve stored the UUID of the network gateway in $GWID and the UUID for the logical network in $NID, then the command you’d use would be neutron net-gateway-connect $GWID $NID --segmentation_type=flat. This command must be done by someone with administrative credentials.

  5. If you are using multiple VLANs on the outside of the network gateway, replace --segmentation_type=flat with --segmentation_type=vlan and add another parameter, --segmentation_id=, set to the appropriate VLAN ID. For example, to bridge the logical network to VLAN 200, you’d use --segmentation_type=vlan and --segmentation_id=200.

  6. That’s it! You now have your Neutron logical network bridged out to a broadcast domain on the physical network.
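Putting steps 1 through 5 together, the whole sequence looks something like the sketch below. The stub function at the top fakes the neutron client’s output so the plumbing can be dry-run outside of a live cloud; remove it (and use real tenant/admin credentials) in an actual NSX-backed OpenStack environment. The network name, CIDR, allocation pool, VLAN ID, and UUIDs are all illustrative values, not taken from the article:

```shell
#!/bin/sh
# Stub standing in for python-neutronclient; for dry-run purposes only.
neutron() {
  case "$1" in
    net-gateway-list) printf '| f5e1c2d3-0000-0000-0000-000000000001 | default L2 gateway service |\n' ;;
    net-list)         printf '| a1b2c3d4-0000-0000-0000-000000000002 | demo-net |\n' ;;
    *)                echo "neutron $*" ;;
  esac
}

# 1. Create the logical network (standard tenant credentials suffice).
neutron net-create demo-net

# 2. Create its subnet, constraining the DHCP range so the logical and
#    physical networks can share the IP space without address conflicts.
neutron subnet-create demo-net 192.168.100.0/24 \
  --allocation-pool start=192.168.100.50,end=192.168.100.150

# 3. Capture the UUIDs of the network gateway and the logical network
#    (administrative credentials are required from here on).
GWID=$(neutron net-gateway-list | awk '/ default / {print $2}')
NID=$(neutron net-list | awk '/ demo-net / {print $2}')

# 4. Bridge the logical network onto VLAN 200 of the physical network.
neutron net-gateway-connect "$GWID" "$NID" \
  --segmentation_type=vlan --segmentation_id=200
```

For a flat (untagged) external network, the last command would instead use --segmentation_type=flat and omit --segmentation_id.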

If you need to change the mapping between a broadcast domain on the physical network and a Neutron logical network, simply use neutron net-gateway-disconnect to disconnect the existing logical network, and then use neutron net-gateway-connect to connect a different logical network to the physical network segment.

I hope you’ve found this post to be useful. The use of L2 gateways offers administrators and operators a new option for network connectivity for tenants in addition to L3 routing. I’ll explore additional options for network connectivity in future posts, so stay tuned. In the meantime, feel free to share any comments, thoughts, or corrections in the comments below.


One of the great things about this site is the interaction I enjoy with readers. It’s always great to get comments from readers about how an article was informative, answered a question, or helped solve a problem. Knowing that what I’ve written here is helpful to others is a very large part of why I’ve been writing here for over 9 years.

Until today, I’ve left comments (and sometimes trackbacks) open on very old blog posts. Just the other day I received a comment on a 4-year-old article where a reader was sharing another way to solve the same problem. Unfortunately, that has to change. Comment spam on the site has grown considerably over the last few months, despite the use of a number of plugins to help address the issue. It’s no longer just an annoyance; it’s now a problem.

As a result, starting today, all blog posts more than 3 years old will automatically have their comments and trackbacks closed. I hate to do it—really I do—but I don’t see any other solution to the increasing blog spam.

I hope that this does not adversely impact my readers’ ability to interact with me, but it is a necessary step.

Thanks to all who continue to read this site. I do sincerely appreciate your time and attention, and I hope that I can continue to provide useful and relevant content to help make peoples’ lives better.


In just a few weeks, I’ll be participating in a book sprint to create a book that provides architecture and design guidance for building OpenStack-based clouds. For those of you who aren’t familiar with the idea of a book sprint, it’s been described to me like this:

  1. Take a group of people—a mix of technical experts and experienced writers—and lock them in a room.
  2. Don’t let them out until they’ve written a book.

(That really is how people have described it to me. I’ll share my experiences after it’s done.)

The architecture/design guide that results from this book sprint will join the results of previous book sprints: the operations guide and the security guide. (Are there others?)

This is my first book sprint, and I’m really excited to be able to participate. It will be great to have the opportunity to work with some very talented folks within the OpenStack community (some I’ve met/already know, others I will be meeting for the very first time). VMware has generously offered to host the book sprint, so we’ll be spending the week locked in a room in Palo Alto—but I’m hoping we might “need” to get some inspiration and spend some time outside on the campus!

Stay tuned for more details, as well as a post-mortem after the book sprint has wrapped up. Writing a book in five days is going to be a challenge, but I’m looking forward to the opportunity!


I recently had the opportunity to conduct an e-mail interview with Jesse Proudman, founder and CEO of Blue Box. The interview is posted below. While it gets a bit biased toward Blue Box at times (he started the company, after all), there are some interesting points raised.

[Scott Lowe] Tell the readers here a little bit about yourself and Blue Box.

[Jesse Proudman] My name is Jesse Proudman. I love the Internet’s “plumbing”. I started working in the infrastructure space in 1997 to capitalize on my “gear head fascination” with the real-time nature of server infrastructure. In 2003, I founded Blue Box from my college dorm room to be a managed hosting company focused on overcoming the complexities of highly customized open source infrastructure running high traffic web applications. Unlike many hosting and cloud startups that evolved to become focused solely on selling raw infrastructure, Blue Box subscribes to the belief that many businesses demand fully rounded solutions vs. raw infrastructure that they must assemble.

In 2007, Blue Box developed proprietary container-based cloud technology for both our public and private cloud offerings. Blue Box customers combine bare metal infrastructure with on-demand cloud containers for a hybrid deployment coupled with fully managed support including 24×7 monitoring. In Q3 of 2013, Blue Box launched OpenStack On-Demand, a hosted, single-tenant private cloud offering. Capitalizing on our 10 years of infrastructure experience, this single-tenant hosted private cloud delivers on all six tenets today’s IT teams require as they evolve their cloud strategy.

Outside of Blue Box, I have an amazing wife and daughter, and I have a son due in February. I am a fanatical sports car racer and also am actively involved in the Seattle entrepreneurial community, guiding the next generation of young entrepreneurs through the University of Puget Sound Business Leadership, 9Mile Labs and University of Washington’s Entrepreneurial mentorship programs.

[SL] Can you tell me a bit more about why you see the continuing OpenStack API debate to be irrelevant?

[JP] First, I want to be specific that when I say irrelevant, I don’t mean unhealthy. This debate is a healthy one to be having. The sharing of ideas and opinions is the cornerstone of the open source philosophy.

But I believe the debate may be premature.

Imagine a true IaaS stack as a tree. Strong trees must have a strong trunk to support their many branches. For IaaS technology, the trunk is built of essential cloud core services: compute, networking and storage. In OpenStack, these equate to Nova, Neutron, Cinder and Swift (or Ceph). The branches then consist of everything else that evolves the offering and makes it more compelling and easier to use: everything that builds upon the strong foundation of the trunk. In OpenStack, these branches include services like Ceilometer, Heat, Trove and Marconi.

I consider API compatibility a branch.

Without a robust, reliable, sturdy trunk, the branches become irrelevant, as there isn’t a strong supporting foundation to hold them up. And if neither the trunk nor the branches are reliable, then the API to talk to them certainly isn’t relevant.

It is my belief that OpenStack needs to concentrate on strengthening the trunk before putting significant emphasis on the possibilities that exist in the upper reaches of the canopy.

OpenStack’s core is quite close. Grizzly was the first release many would define as “stable,” and Havana is the first release where that stability could convert into operational simplicity. But there is still room for improvement (particularly with projects like Neutron), so my argument is to focus on strengthening the core before exploring new projects.

Once the core is strong, the challenge becomes the development of the service catalogue. Amazon has over one hundred different services that can be integrated together into a powerful ecosystem. OpenStack’s service catalogue is still very young and evolving rapidly. Focus here is required to ensure this evolution is effective.

Long term, I certainly believe API compatibility with AWS (or Azure, or GCE) can bring value to the OpenStack ecosystem. Early cloud adopters who took to AWS before alternatives existed have technology stacks written to interface directly with Amazon’s APIs. Being able to provide compatibility for those prospects means preventing them from having to rewrite large sections of their tooling to work with OpenStack.

API compatibility provided via a higher-level proxy would allow for the breakout of maintenance to a specific group of engineers focused on that requirement (and remove that burden from the individual service teams). It’s important to remember that chasing external APIs will always be a moving target.

In the short run, I believe it wise to rally the community around a common goal: strengthen the trunk and intelligently engineer the branches.

[SL] What are your thoughts on public vs. private OpenStack?

[JP] For many, OpenStack draws much of its appeal from the availability of public, hosted private, and on-premise private implementations. While “cloud bursting” still lives more in the realm of fantasy than reality, the power of a unified API and service stack across multiple consumption models enables incredible possibilities.

Conceptually, public cloud is generally better defined and understood than private cloud. Private cloud is a relatively new phenomenon, and for many has really meant advanced virtualization. While it’s true private clouds have traditionally meant on-premise implementations, hosted private cloud technologies are empowering a new wave of companies who recognize the power of elastic capabilities, and the value that single-tenant implementations can deliver. These organizations are deploying applications into hosted private clouds, seeing the value proposition that can bring.

A single-sourced vendor or technology won’t dominate this world. OpenStack delivers flexibility through its multiple consumption models, and that only benefits the customer. Customers can use that flexibility to deploy workloads to the most appropriate venue, and that only will ensure further levels of adoption.

[SL] There’s quite a bit of discussion that private cloud is strictly a transitional state. Can you share your thoughts on that topic?

[JP] In 2012, we began speaking with IT managers across our customer base, and beyond. Through those interviews, we confirmed what we now call the “six tenets of private cloud.” Our customers and prospects are all evolving their cloud strategies in real time, and are looking for solutions that satisfy these requirements:

  1. Ease of use – new solutions should be intuitively simple. Engineers should be able to use existing tooling, and ops staff shouldn’t have to go learn an entirely new operational environment.

  2. Deliver IaaS and PaaS – IaaS has become a ubiquitous requirement, but we repeatedly heard requests for an environment that would also support PaaS deployments.

  3. Elastic capabilities – the ability to grow and contract private environments much as one could in a public cloud.

  4. Integration with existing IT infrastructure – businesses have significant investments in existing data center infrastructure: load balancers, IDS/IPS, SAN, database infrastructure, etc. From our conversations, integration of those devices into a hosted cloud environment brought significant value to their cloud strategy.

  5. Security policy control – greater compliance pressures mean a physical “air gap” around their cloud infrastructure can help ensure compliance and provide peace of mind.

  6. Cost predictability and control – Customers didn’t want to need a PhD to understand how much they’ll owe at the end of the month. Budgets are projected a year in advance, and they needed to know they could project their budgeted dollars into specific capacity.

Public cloud deployments can certainly solve a number of these tenets, but we quickly discovered that no offering on the market today was solving all six in a compelling way.

This isn’t a zero sum game. Private cloud, whether it be on-premise or in a hosted environment, is here to stay. It will be treated as an additional tool in the toolbox. As buyers reallocate the more than $400 billion that’s spent annually on IT deployments, I believe we’ll see a whole new wave of adoption, especially when private cloud offerings address the six tenets of private cloud.

[SL] Thanks for your time, Jesse!

If anyone has any thoughts to share about some of the points raised in the interview, feel free to speak up in the comments. As Jesse points out, debate can be healthy, so I invite you to post your (courteous and professional) thoughts, ideas, or responses below. All feedback is welcome!


No Man is an Island

The phrase “No man is an island” is attributed to John Donne, an English poet who lived in the late 1500s and early 1600s. The phrase comes from his Meditation XVII, and was borrowed later by Thomas Merton to become the title of a book he published in 1955. In both cases, the phrase is used to discuss the interconnected nature of humanity and mankind. (Side note: the phrase “for whom the bell tolls” comes from the same origin.)

What does this have to do with IT? That’s a good question. As I was preparing to start the day today, I took some time to reflect upon my career; specifically, the individuals that have been placed in my life and career. I think all people are prone to overlook the contributions that others have played in their own successes, but I think that IT professionals may be a bit more affected in this way. (I freely admit that, having spent my entire career as an IT professional, my view may be skewed.) So, in the spirit of recognizing that no man is an island—meaning that who we are and what we accomplish are intricately intertwined with those around us—I wanted to take a moment and express my thanks and appreciation for a few folks who have helped contribute to my success.

So, who has helped contribute to my achievements? The full list is too long to publish, but here are a few notables that I wanted to call out (in no particular order):

  • Chad Sakac took the opportunity to write the book that would become Mastering VMware vSphere 4 and gave it to me instead. (If you aren’t familiar with that story, read this.)
  • My wife, Crystal, is just awesome—she has enabled and empowered me in many, many ways. ‘Nuff said.
  • Forbes Guthrie allowed me to join him in writing VMware vSphere Design (as well as the 2nd edition), has been a great contributor to the Mastering VMware vSphere series, and has been a fabulous co-presenter at the last couple VMworld conferences.
  • Chris McCain (who recently joined VMware and has some great stuff in store—stay tuned!) wrote Mastering VMware Infrastructure 3, the book that I would revise to become Mastering VMware vSphere 4.
  • Andy Sholomon, formerly with Cisco and now with VCE, was kind enough to provide some infrastructure for me to use when writing Mastering VMware vSphere 5. Without it, writing the book would have been much more difficult.
  • Rick Scherer, Duncan Epping, and Jason Boche all served as technical editors for various books that I’ve written; their contributions and efforts helped make those books better.

To all of you: thank you.

The list could go on and on and on; if I didn’t expressly call your name out, please don’t feel bad. My point, though, is this: have you taken the time recently to thank others in your life that have contributed to your success?


Divorcing Google

The time has come; all good things must come to an end. So it is with my relationship with Google and the majority of their online services. As of right now, I’m in the midst of separating myself from the majority of Google’s services. I’ve mentioned this several times on Twitter, and a number of people asked me to write about the process. So, here are the details so far.

The first question that usually comes up is, “Why leave Google?” That’s a fair question. There is no one reason, but rather a number of different factors that contributed to my decision:

  • Google kills off services seemingly on a whim. What if a service I’ve come to use quite heavily is no longer valuable to Google? That was the case with Google Reader, a service for which I still haven’t found a reasonable alternative. (Feedly is close.)
  • Google is closing off their ecosystem. Everything ties back to Google+, even if you don’t want anything to do with Google+. Google Talk federation with external XMPP-based services no longer works, which means you can’t use Google Talk to communicate with users of other XMPP services (only with other Google Talk users).
  • Support for XMPP clients will stop working in May 2014 (which, in turn, will cause a number of other things to stop working). One thing that will be affected is the ability to use an Obihai device to connect to Google Voice, which will no longer work after this change.
  • The quality and reliability of their free service tiers isn’t so great (in my experience), and their paid service tiers aren’t price competitive in my opinion.
  • Google’s non-standard IMAP implementation is horribly, awfully slow.
  • Finally, Google is now doing things they said they’d never do (like putting banner ads in search results). What’s next?

Based on these factors, I made the decision to switch to other services instead of using Google. Here are the services that I’ve settled on so far:

  • For search, I’m using a combination of DuckDuckGo (for general searching) and Bing Images (for image searches). Bing Image Search is actually quite nice; it allows you to search according to license (so that you can find images that you are legally allowed to re-use).
  • For e-mail, I’m using Fastmail. Their IMAP service rocks and is noticeably faster than anything I’ve ever seen from Google. The same goes for their web-based interface, which is also screaming fast (and quite pleasant to use). The spam protection isn’t quite as good as Google’s, but I’m still in the process of training my Bayes database. I anticipate that it will improve over time.
  • For IM, I’m using Hosted.IM and Fastmail, both of which are XMPP-based. I’ll use Hosted.IM for one domain where my username contains a dot character; this isn’t supported on Fastmail. All other domains will run on a Fastmail XMPP server.
  • For contact and calendar syncing, I’m using Fruux. Fruux supports CardDAV and CalDAV, both of which are also supported natively on OS X and iOS (among other systems). Support for CardDAV/CalDAV on Android is also available inexpensively.

That frees me up from GMail, Google Calendar, Google Talk, and Google Contacts. I’ve never liked or extensively used Google Drive (Dropbox is miles ahead of Google Drive, in my humble opinion) or Google Docs, so I don’t really have to worry about those.

There are a couple of services for which I haven’t yet found a suitable replacement; for example, I haven’t yet found a replacement for Google Voice. I’m looking at SIP providers for my home line, but haven’t made any firm decisions yet. I also haven’t found a replacement for FeedBurner yet.

Also, I won’t be able to completely stop using Google services; since I own an Android phone, I have to use Google Play Store and Google Wallet. Since I don’t have a replacement (yet) for Google Voice, I have a single Google account that I use for these services as well as for IM to Google Talk contacts (since I can’t use XMPP to communicate with them). Once Google Voice is replaced, I’ll be down to using only Google Play, Google Wallet, and Google Talk.

So, that’s where things stand. I’m open to questions, thoughts, or suggestions for other services I should investigate. Just speak up in the comments below. All courteous comments are welcome!


Next Monday, May 20, the OpenStack Denver meetup group will gather jointly with the inaugural meeting of the Infracoders Denver meetup group for a talk titled “Infrastructure as Code with Chef and OpenStack.” The joint meeting will be held at Innovation Pavilion in Centennial/Englewood (location information here). The event will start at 7PM.

Giving the presentation will be none other than Joshua Timberman of OpsCode (@jtimberman on Twitter). Joshua will be speaking on Chef, a system integration framework that is commonly used in “infrastructure as code” environments and in a number of OpenStack deployments. Joshua will discuss the basic principles of Chef, the primitives it provides, and how you can use it to drive your infrastructure toward full automation.

For more information, or to RSVP for the meetup event, you can visit either the OpenStack Denver meetup group event page or the Infracoders Denver meetup group event page. We do ask that you RSVP so that we can plan food and drinks for the event, but please only RSVP in one of the two meetup groups (not both).

<aside>Also, if you are interested in presenting at the OpenStack Denver meetup group or the Infracoders Denver meetup group, please let me know. We are actively seeking co-organizers as well as speakers/presenters for future events.</aside>

If you live in the South Denver metro area and are interested in either OpenStack or infrastructure as code, this is an event you won’t want to miss!


Regular readers of this site know that my wife, Crystal, runs something called Spousetivities. Spousetivities originated out of boredom, essentially—Crystal was traveling with me to VMworld and wanted to find someone to hang out with while I was at the conference. That was VMworld 2008, and since that time she’s had activities at VMworld 2009, VMworld 2010 (including VMworld Europe 2010), VMworld 2011 (both US and Europe), and VMworld 2012 (US and Europe). She’s also had activities at EMC World (2011 and 2012), HP Discover EMEA, and Dell Storage Forum in Boston. This year, she’s added another conference: IBM Edge 2013 in Las Vegas!

IBM Edge 2013 (conference site here) runs from June 10–14 at Mandalay Bay in Las Vegas. If you are attending IBM Edge 2013 this year, I’d encourage you to consider bringing your spouse or significant other with you and getting them involved in Spousetivities. As is always the case, Crystal has a great line-up of activities planned for participants, including:

  • The ever-popular “Getting to Know You” breakfast on Monday, June 10
  • A “Culinary Mystery Tour” of famous restaurants along the Strip
  • A tour of Red Rock Canyon Conservation area and highlights of the famous Vegas strip
  • “Cooking at the Ranch,” where you’ll get to meet Chef Philip Dell of Sin City Chefs and the Food Network’s show “Chopped” (More details here.)
  • A Grand Canyon tour
  • A Hoover Dam tour
  • A wide variety of spa services from THE Bathhouse, including facials, manicures, massages, and pedicures

All in all, it looks like a great week of activities. Conference attendees gain the benefit of being able to spend time with their partners in the evenings without having to worry about them during the day (leaving them free to focus on the conference). Partners traveling with attendees don’t have to worry about being alone, finding their way around town, or bothering their partner at the conference. It is truly a “win-win” for everyone involved.

All these activities have been discounted, thanks to IBM’s sponsorship of Spousetivities, so I encourage you to visit the registration page and get signed up as soon as possible.


I’m very excited to announce the inaugural OpenStack Denver meetup, scheduled for 7 PM on Wednesday, January 9, 2013—only 6 days away!

If you haven’t already joined the meetup group, please head over to the group page on Meetup.com and join, then RSVP for the inaugural OpenStack Denver meeting. Cisco Systems was kind enough to sponsor the event, both by hosting us at their Englewood office (near Park Meadows Mall) and by supplying food (pizza) and drinks (soda/water).

At the inaugural meeting, we’ll first provide an OpenStack primer, so those who aren’t familiar with OpenStack will get an idea of what it’s all about and what’s included. Next, co-organizer Shannon McFarland will talk about what Cisco’s been doing with OpenStack, and then we’ll wrap up the first meetup with a discussion of desired future topics, speakers, and other logistical items.

This is going to be a great opportunity to meet other folks in the Denver area who are also interested in or working with OpenStack, so I highly encourage you to do your best to make it. See you there!


Today I had the opportunity to speak at the Midwest Regional/Kansas City VMUG User Conference in Overland Park, KS. Below is the presentation I delivered, as hosted by SpeakerDeck.

If you’d like a PDF version of the deck for direct download, it is available here.

As always, courteous comments are both welcomed and encouraged! Feel free to speak up below.

