Collaboration


For non-programmers, making a meaningful contribution to an open source project can be difficult; this is as true for OpenStack as for other open source projects. Documentation is a way to contribute, but in the case of OpenStack there is a non-trivial setup required in order to be able to contribute to the OpenStack documentation. In this post, I’m going to share how to set up the tools to contribute to OpenStack documentation in the hopes that it will help others get past the “barrier to entry” that currently exists.

I’ve long wanted to be more involved in supporting the OpenStack community, beyond my unofficial support via advocacy and blogging about OpenStack. I felt that documentation might be a way to achieve that goal. After all, I’ve written books and have been blogging for 9 years, so I should be able to add some value via documentation contributions. However, the toolchain that the OpenStack documentation uses requires a certain level of familiarity with development-focused tools, and the “how to” guides were less than ideal because of assumptions made regarding the knowledge level of new contributors. For these reasons, I felt that sharing how I (a non-programmer) set up the tools for contributing to OpenStack documentation might encourage other non-programmers to do the same, and thus get more people involved in the project.

This post is not intended to replace any official guides (like this one or this one), but rather to supplement such guides.

Using a Linux Environment

It’s no secret that I use OS X, and the toolchain that the OpenStack documentation team uses is—like the rest of OpenStack—pretty heavily biased toward Linux. One of the deterring factors for me was the difficulty of getting these tools running on OS X. While it is possible to use the toolchain on OS X directly, it’s not necessarily simple or straightforward. It wasn’t until just recently that I realized: why not just use a Linux VM? Using a native Linux environment sidesteps all the messiness of trying to get a Linux-centric toolchain running on OS X (or Windows).

I initially started with a locally-hosted Linux VM on my MacBook Air, but after some consideration I decided to alter that approach and build my environment in a cloud VM running Ubuntu 12.04 on Amazon EC2 (via Ravello Systems).

The use of a cloud VM has some advantages and disadvantages:

  • Because the toolchain is heavily biased toward Linux, using a Linux-based cloud VM does simplify some things.
  • Cloud VMs will typically have much greater bandwidth, making it easier to install packages and pull down Git repositories. This was especially helpful this past week while I was traveling to the OpenStack Summit, where—due to slow conference wireless access—it took 2 hours to clone the GitHub repository for the OpenStack documentation (this was while I was trying to use a locally-hosted VM on my laptop).
  • It provides a clean separation between my local workstation and the development environment, meaning I can access my development environment from any system via SSH. This prevents me from having to keep multiple development environments in sync, which is something I would have had to do when working from my home office (where I use my MacBook Air as well as a Mac Pro).
  • The most significant drawback is that I cannot work on documentation patches when I have no network connectivity. For example, I’m writing this post as I’m traveling back from Paris. If I had a local development environment, I could have assigned a few bugs to myself and worked on them while flying home. With a cloud VM-based environment, I cannot. I could have used a VM hosted on my local laptop (via VirtualBox or VMware Fusion), but that would have negated some of the other advantages.

In the end, you’ll have to evaluate for yourself what approach works best for you, your working style, and your existing obligations. However, the use of a Linux VM—local or via a cloud provider—can simplify the process of setting up the toolchain and getting started contributing, and it’s an approach I’d recommend for most anyone.

The information presented here assumes you are using an Ubuntu-based VM, either hosted with a cloud provider or on your local system. I’ll leave it to the readers to worry about the details of how to turn up the Ubuntu VM via their preferred mechanism.

Setting Up the Toolchain

Once you’ve established an Ubuntu Linux instance using the provider of your choice, use these steps to set up the documentation toolchain (note that, depending on the configuration of your Ubuntu Linux instance, you might need to preface these commands with sudo where appropriate):

  1. Update your system using apt-get update and apt-get -y upgrade. Depending on the updates that are installed, you might need to reboot.
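
  For example, on a freshly built Ubuntu instance the commands would look something like this (prefix them with sudo if you aren’t logged in as root):

    sudo apt-get update
    sudo apt-get -y upgrade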

  2. Generate an SSH keypair using ssh-keygen. Keep track of this keypair (and make note of the associated password, if you assigned one), as you’ll need it later. You can also re-use an existing keypair, if you’d like; I preferred to generate a keypair specifically for this purpose. If you are going to re-use an existing keypair, you’ll need to transfer the keys to this Linux instance via scp or your preferred mechanism.
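
  A command along these lines is all you need; ssh-keygen will prompt for a file location (the default of ~/.ssh/id_rsa is fine) and an optional passphrase:

    ssh-keygen -t rsa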

  3. Install the necessary prerequisite packages (I’ve line wrapped the command below, denoted by a backslash, for readability):

    apt-get install maven git git-review python-pip libxml2-dev \
    libxslt1-dev python-dev gcc gettext zlib1g-dev
  4. Configure Git using git config to set your name and e-mail address. The e-mail address, in particular, should match what you’re using with the OpenStack Foundation. The commands to do this are git config --global user.name and git config --global user.email.
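
  For example (these values are just an illustration; substitute your own name and the e-mail address associated with your OpenStack Foundation profile):

    git config --global user.name "John Doe"
    git config --global user.email "jdoe@example.com"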

  5. Clone the GitHub repository where the documentation is found:

    git clone https://github.com/openstack/openstack-manuals.git
  6. Install the components required to run tests against the documentation sources (this command assumes that you cloned the repository into a subdirectory named “openstack-manuals”, which would be the default result of using the command above to clone the repository):

    pip install -r openstack-manuals/test-requirements.txt
  7. If you haven’t already established a Launchpad account, you’ll need one. Go create one (probably best done outside of the Ubuntu Linux environment) and make note of the username. Use your Launchpad account to verify that you can log in to both bugs.launchpad.net as well as review.openstack.org.

  8. While logged into review.openstack.org, go to the settings for your account and paste in the contents of the public key (not the private key) you generated earlier (or that you are re-using).
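
  If you need to grab the contents of the public key from the Ubuntu instance, you can simply cat the file (this assumes the default key location from step 2; adjust the path if you stored the key somewhere else):

    cat ~/.ssh/id_rsa.pub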

  9. Back in the Ubuntu Linux instance, set your Launchpad username:

    git config --global --add gitreview.username "<Launchpad username>"
  10. Change into the directory where you cloned the “openstack-manuals” repository and run git review -s. Assuming you already added the “gitreview.username” configuration parameter and that you’ve already uploaded the public key to your account on review.openstack.org, this should work without any issues. If you forgot to set “gitreview.username”, it should prompt you for your Launchpad username. If you forgot to upload the public SSH key, then it will probably error out.
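
  In other words, something like this (again assuming the repository was cloned into a subdirectory named “openstack-manuals”):

    cd openstack-manuals
    git review -s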

  11. If git review -s completes without any obvious errors, you can double-check that a Gerrit upstream for the repository was added by using the command git remote -v. If you don’t see an upstream repo labeled gerrit, then something didn’t work. If you do see the upstream repo labeled gerrit, then you should be fine.
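
  For reference, git remote -v lists each remote twice (once for fetch, once for push); the exact URL will vary based on your Launchpad username, but the gerrit entries should look roughly like this:

    gerrit  ssh://<Launchpad username>@review.openstack.org:29418/openstack/openstack-manuals.git (fetch)
    gerrit  ssh://<Launchpad username>@review.openstack.org:29418/openstack/openstack-manuals.git (push)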

Once you’ve completed these steps, then you’re ready to start contributing to OpenStack documentation. I have a separate post planned that describes the process for actually contributing; stay tuned for that soon.

In the meantime, if anyone has any questions, corrections, or clarifications about the information in this post, please speak up in the comments below.


One of the great things about this site is the interaction I enjoy with readers. It’s always great to get comments from readers about how an article was informative, answered a question, or helped solve a problem. Knowing that what I’ve written here is helpful to others is a very large part of why I’ve been writing here for over 9 years.

Until today, I’ve left comments (and sometimes trackbacks) open on very old blog posts. Just the other day I received a comment on a 4 year old article where a reader was sharing another way to solve the same problem. Unfortunately, that has to change. Comment spam on the site has grown considerably over the last few months, despite the use of a number of plugins to help address the issue. It’s no longer just an annoyance; it’s now a problem.

As a result, starting today, all blog posts more than 3 years old will automatically have their comments and trackbacks closed. I hate to do it—really I do—but I don’t see any other solution to the increasing blog spam.

I hope that this does not adversely impact my readers’ ability to interact with me, but it is a necessary step.

Thanks to all who continue to read this site. I do sincerely appreciate your time and attention, and I hope that I can continue to provide useful and relevant content to help make people’s lives better.


In just a few weeks, I’ll be participating in a book sprint to create a book that provides architecture and design guidance for building OpenStack-based clouds. For those of you who aren’t familiar with the idea of a book sprint, it’s been described to me like this:

  1. Take a group of people—a mix of technical experts and experienced writers—and lock them in a room.
  2. Don’t let them out until they’ve written a book.

(That really is how people have described it to me. I’ll share my experiences after it’s done.)

The architecture/design guide that results from this book sprint will join the results of previous book sprints: the operations guide and the security guide. (Are there others?)

This is my first book sprint, and I’m really excited to be able to participate. It will be great to have the opportunity to work with some very talented folks within the OpenStack community (some I’ve met/already know, others I will be meeting for the very first time). VMware has generously offered to host the book sprint, so we’ll be spending the week locked in a room in Palo Alto—but I’m hoping we might “need” to get some inspiration and spend some time outside on the campus!

Stay tuned for more details, as well as a post-mortem after the book sprint has wrapped up. Writing a book in five days is going to be a challenge, but I’m looking forward to the opportunity!


I recently had the opportunity to conduct an e-mail interview with Jesse Proudman, founder and CEO of Blue Box. The interview is posted below. While it gets a bit biased toward Blue Box at times (he started the company, after all), there are some interesting points raised.

[Scott Lowe] Tell the readers here a little bit about yourself and Blue Box.

[Jesse Proudman] My name is Jesse Proudman. I love the Internet’s “plumbing”. I started working in the infrastructure space in 1997 to capitalize on my “gear head fascination” with the real-time nature of server infrastructure. In 2003, I founded Blue Box from my college dorm room to be a managed hosting company focused on overcoming the complexities of highly customized open source infrastructure running high traffic web applications. Unlike many hosting and cloud startups that evolved to become focused solely on selling raw infrastructure, Blue Box subscribes to the belief that many businesses demand fully rounded solutions vs. raw infrastructure that they must assemble.

In 2007, Blue Box developed proprietary container-based cloud technology for both our public and private cloud offerings. Blue Box customers combine bare metal infrastructure with on-demand cloud containers for a hybrid deployment coupled with fully managed support including 24×7 monitoring. In Q3 of 2013, Blue Box launched OpenStack On-Demand, a hosted, single tenant private cloud offering. Capitalizing on our 10 years of infrastructure experience, this single-tenant hosted private cloud delivers on all six tenets today’s IT teams require as they evolve their cloud strategy.

Outside of Blue Box, I have an amazing wife and daughter, and I have a son due in February. I am a fanatical sports car racer and also am actively involved in the Seattle entrepreneurial community, guiding the next generation of young entrepreneurs through the University of Puget Sound Business Leadership, 9Mile Labs and University of Washington’s Entrepreneurial mentorship programs.

[SL] Can you tell me a bit more about why you see the continuing OpenStack API debate to be irrelevant?

[JP] First, I want to be specific that when I say irrelevant, I don’t mean unhealthy. This debate is a healthy one to be having. The sharing of ideas and opinions is the cornerstone of the open source philosophy.

But I believe the debate may be premature.

Imagine a true IaaS stack as a tree. Strong trees must have a strong trunk to support their many branches. For IaaS technology, the trunk is built of essential cloud core services: compute, networking and storage. In OpenStack, these equate to Nova, Neutron, Cinder and Swift (or Ceph). The branches then consist of everything else that evolves the offering and makes it more compelling and easier to use: everything that builds upon the strong foundation of the trunk. In OpenStack, these branches include services like Ceilometer, Heat, Trove and Marconi.

I consider API compatibility a branch.

Without a robust, reliable, sturdy trunk, the branches become irrelevant, as there isn’t a strong supporting foundation to hold them up. And if neither the trunk nor the branches are reliable, then the API to talk to them certainly isn’t relevant.

It is my belief that OpenStack needs to concentrate on strengthening the trunk before putting significant emphasis into the possibilities that exist in the upper reaches of the canopy.

OpenStack’s core is quite close. Grizzly was the first release many would define as “stable,” and Havana is the first release where that stability could convert into operational simplicity. But there is still room for improvement (particularly with projects like Neutron), so my argument is to focus on strengthening the core before exploring new projects.

Once the core is strong, the challenge becomes the development of the service catalogue. Amazon has over one hundred different services that can be integrated together into a powerful ecosystem. OpenStack’s service catalogue is still very young and evolving rapidly. Focus here is required to ensure this evolution is effective.

Long term, I certainly believe API compatibility with AWS (or Azure, or GCE) can bring value to the OpenStack ecosystem. Early cloud adopters who took to AWS before alternatives existed have technology stacks written to interface directly with Amazon’s APIs. Being able to provide compatibility for those prospects means preventing them from having to rewrite large sections of their tooling to work with OpenStack.

API compatibility provided via a higher-level proxy would allow for the breakout of maintenance to a specific group of engineers focused on that requirement (and remove that burden from the individual service teams). It’s important to remember that chasing external APIs will always be a moving target.

In the short run, I believe it wise to rally the community around a common goal: strengthen the trunk and intelligently engineer the branches.

[SL] What are your thoughts on public vs. private OpenStack?

[JP] For many, OpenStack draws much of its appeal from the availability of public, hosted private, and on-premise private implementations. While “cloud bursting” still lives more in the realms of fantasy than reality, the power of a unified API and service stack across multiple consumption models enables incredible possibilities.

Conceptually, public cloud is generally better defined and understood than private cloud. Private cloud is a relatively new phenomenon, and for many has really meant advanced virtualization. While it’s true private clouds have traditionally meant on-premise implementations, hosted private cloud technologies are empowering a new wave of companies who recognize the power of elastic capabilities, and the value that single-tenant implementations can deliver. These organizations are deploying applications into hosted private clouds, seeing the value proposition that approach can bring.

A single-sourced vendor or technology won’t dominate this world. OpenStack delivers flexibility through its multiple consumption models, and that only benefits the customer. Customers can use that flexibility to deploy workloads to the most appropriate venue, and that only will ensure further levels of adoption.

[SL] There’s quite a bit of discussion that private cloud is strictly a transitional state. Can you share your thoughts on that topic?

[JP] In 2012, we began speaking with IT managers across our customer base, and beyond. Through those interviews, we confirmed what we now call the “six tenets of private cloud.” Our customers and prospects are all evolving their cloud strategies in real time, and are looking for solutions that satisfy these requirements:

  1. Ease of use – new solutions should be intuitively simple. Engineers should be able to use existing tooling, and ops staff shouldn’t have to go learn an entirely new operational environment.

  2. Deliver IaaS and PaaS – IaaS has become a ubiquitous requirement, but we repeatedly heard requests for an environment that would also support PaaS deployments.

  3. Elastic capabilities – the ability to grow and contract private environments much in the same way they could in a public cloud.

  4. Integration with existing IT infrastructure – businesses have significant investments in existing data center infrastructure: load balancers, IDS/IPS, SAN, database infrastructure, etc. From our conversations, integration of those devices into a hosted cloud environment brought significant value to their cloud strategy.

  5. Security policy control – greater compliance pressures mean a physical “air gap” around their cloud infrastructure can help ensure compliance and offer peace of mind.

  6. Cost predictability and control – Customers didn’t want to need a PhD to understand how much they’d owe at the end of the month. Budgets are projected a year in advance, and they needed to know they could translate their budgeted dollars into specific capacity.

Public cloud deployments can certainly solve a number of these tenets, but we quickly discovered that no offering on the market today was solving all six in a compelling way.

This isn’t a zero sum game. Private cloud, whether it be on-premise or in a hosted environment, is here to stay. It will be treated as an additional tool in the toolbox. As buyers reallocate the more than $400 billion that’s spent annually on IT deployments, I believe we’ll see a whole new wave of adoption, especially when private cloud offerings address the six tenets of private cloud.

[SL] Thanks for your time, Jesse!

If anyone has any thoughts to share about some of the points raised in the interview, feel free to speak up in the comments. As Jesse points out, debate can be healthy, so I invite you to post your (courteous and professional) thoughts, ideas, or responses below. All feedback is welcome!


No Man is an Island

The phrase “No man is an island” is attributed to John Donne, an English poet who lived in the late 1500s and early 1600s. The phrase comes from his Meditation XVII, and was borrowed later by Thomas Merton to become the title of a book he published in 1955. In both cases, the phrase is used to discuss the interconnected nature of humanity and mankind. (Side note: the phrase “for whom the bell tolls” comes from the same origin.)

What does this have to do with IT? That’s a good question. As I was preparing to start the day today, I took some time to reflect upon my career; specifically, the individuals that have been placed in my life and career. I think all people are prone to overlook the contributions that others have made to their own successes, but I think that IT professionals may be a bit more affected in this way. (I freely admit that, having spent my entire career as an IT professional, my view may be skewed.) So, in the spirit of recognizing that no man is an island—meaning that who we are and what we accomplish are intricately intertwined with those around us—I wanted to take a moment and express my thanks and appreciation for a few folks who have helped contribute to my success.

So, who has helped contribute to my achievements? The full list is too long to publish, but here are a few notables that I wanted to call out (in no particular order):

  • Chad Sakac took the opportunity to write the book that would become Mastering VMware vSphere 4 and gave it to me instead. (If you aren’t familiar with that story, read this.)
  • My wife, Crystal, is just awesome—she has enabled and empowered me in many, many ways. ‘Nuff said.
  • Forbes Guthrie allowed me to join him in writing VMware vSphere Design (as well as the 2nd edition), has been a great contributor to the Mastering VMware vSphere series, and has been a fabulous co-presenter at the last couple VMworld conferences.
  • Chris McCain (who recently joined VMware and has some great stuff in store—stay tuned!) wrote Mastering VMware Infrastructure 3, the book that I would revise to become Mastering VMware vSphere 4.
  • Andy Sholomon, formerly with Cisco and now with VCE, was kind enough to provide some infrastructure for me to use when writing Mastering VMware vSphere 5. Without it, writing the book would have been much more difficult.
  • Rick Scherer, Duncan Epping, and Jason Boche all served as technical editors for various books that I’ve written; their contributions and efforts helped make those books better.

To all of you: thank you.

The list could go on and on and on; if I didn’t expressly call your name out, please don’t feel bad. My point, though, is this: have you taken the time recently to thank others in your life that have contributed to your success?


Divorcing Google

The time has come; all good things must come to an end. So it is with my relationship with Google and the majority of their online services. As of right now, I’m in the midst of separating myself from the majority of Google’s services. I’ve mentioned this several times on Twitter, and a number of people asked me to write about the process. So, here are the details so far.

The first question that usually comes up is, “Why leave Google?” That’s a fair question. There is no one reason, but rather a number of different factors that contributed to my decision:

  • Google kills off services seemingly on a whim. What if a service I’ve come to use quite heavily is no longer valuable to Google? That was the case with Google Reader, a service for which I still haven’t found a reasonable alternative. (Feedly is close.)
  • Google is closing off their ecosystem. Everything ties back to Google+, even if you don’t want anything to do with Google+. Communication between Google Talk and external XMPP-based services no longer works, which means you can’t use Google Talk to communicate with other XMPP users (only with other Google Talk users).
  • Support for XMPP clients will stop working in May 2014 (which, in turn, will cause a number of other things to stop working). One thing that will be affected is the ability to use an Obihai device to connect to Google Voice, which will no longer work after this change.
  • The quality and reliability of their free service tiers isn’t so great (in my experience), and their paid service tiers aren’t price competitive in my opinion.
  • Google’s non-standard IMAP implementation is horribly, awfully slow.
  • Finally, Google is now doing things they said they’d never do (like putting banner ads in search results). What’s next?

Based on these factors, I made the decision to switch to other services instead of using Google. Here are the services that I’ve settled on so far:

  • For search, I’m using a combination of DuckDuckGo (for general searching) and Bing Images (for image searches). Bing Image Search is actually quite nice; it allows you to search according to license (so that you can find images that you are legally allowed to re-use).
  • For e-mail, I’m using Fastmail. Their IMAP service rocks and is noticeably faster than anything I’ve ever seen from Google. The same goes for their web-based interface, which is also screaming fast (and quite pleasant to use). The spam protection isn’t quite as good as Google’s, but I’m still in the process of training my Bayes database. I anticipate that it will improve over time.
  • For IM, I’m using Hosted.IM and Fastmail, both of which are XMPP-based. I’ll use Hosted.IM for one domain where my username contains a dot character; this isn’t supported on Fastmail. All other domains will run on a Fastmail XMPP server.
  • For contact and calendar syncing, I’m using Fruux. Fruux supports CardDAV and CalDAV, both of which are also supported natively on OS X and iOS (among other systems). Support for CardDAV/CalDAV on Android is also available inexpensively.

That frees me up from GMail, Google Calendar, Google Talk, and Google Contacts. I’ve never liked or extensively used Google Drive (Dropbox is miles ahead of Google Drive, in my humble opinion) or Google Docs, so I don’t really have to worry about those.

There are a couple of services for which I haven’t yet found a suitable replacement; for example, I haven’t yet found a replacement for Google Voice. I’m looking at SIP providers for my home line, but haven’t made any firm decisions yet. I also haven’t found a replacement for FeedBurner yet.

Also, I won’t be able to completely stop using Google services; since I own an Android phone, I have to use Google Play Store and Google Wallet. Since I don’t have a replacement (yet) for Google Voice, I have a single Google account that I use for these services as well as for IM to Google Talk contacts (since I can’t use XMPP to communicate with them). Once Google Voice is replaced, I’ll be down to using only Google Play, Google Wallet, and Google Talk.

So, that’s where things stand. I’m open to questions, thoughts, or suggestions for other services I should investigate. Just speak up in the comments below. All courteous comments are welcome!


As most of you probably know, I visit quite a few VMUG User Conferences around the United States and around the world. I’d probably do even more if my calendar allowed, because it’s truly an honor for me to have the opportunity to help educate the VMware user community. I know I’m not alone in that regard; there are numerous VMware “rock stars” (not that I consider myself a “rock star”) out there who also work tirelessly to support the VMware community. One need not look very far to see some examples of these types of individuals: Mike Laverick, William Lam, Duncan Epping, Josh Atwell, Nick Weaver, Alan Renouf, Chris Colotti, Cody Bunch, or Cormac Hogan are all great examples. (And I’m sure there are many, many more I’ve forgotten!)

However, one thing that has consistently been a topic of discussion among those of us who frequent VMUGs has been this question: “How do we get users more engaged in VMUG?” VMUG is, after all, the VMware User Group. And while all of us are more than happy to help support VMUG (at least, I know I am), we’d also like to see more user engagement—more customers speaking about their use cases, their challenges, the things they’ve learned, and the things they want to learn. We want to see users get connected with other users, to share information and build a community-based body of knowledge. So how can we do that?

As I see it, there is a variety of reasons why users don’t volunteer to speak:

  • They might be afraid of public speaking, or aren’t sure how good they’ll be.
  • They feel like the information they could share won’t be helpful or useful to others.
  • They aren’t sure how to structure their presentation to make it informative yet engaging.

We (meaning a group of us that support a lot of these events) have tossed around a few ideas, but nothing has ever really materialized. Today I hope to change all that. Today, I’m announcing that I will personally help mentor 5 different VMware users who are willing to step up and volunteer to speak for the first time at a local VMUG meeting in the near future.

So what does this mean?

  • I will help you select a topic on which to speak (in coordination with your local VMUG leader).
  • I will provide guidance and feedback on gathering your content.
  • I will review and provide feedback and suggestions for improving your presentation.
  • If desired, I will provide tips and tricks for public speaking.

And I’m calling on others within the VMUG community who are frequent speakers to do the same. I think that Mike Laverick might have already done something like this; perhaps the others have as well. If so, that’s awesome. If not, I challenge you, as someone viewed in a technical leadership role within the VMware and VMUG communities, to use that leadership role in a way that I hope will reinvigorate and renew user involvement and participation in the VMware/VMUG community.

If you’re one of the 5 people willing to take me up on this offer, the first step is to contact me and your local VMUG leader and express your interest. Don’t have my e-mail address? Here’s your first challenge: it’s somewhere on this site.

If you’re already a frequent speaker at VMUGs and you, too, want to help mentor other speakers, you can either post a comment here to that effect (and provide people with a way of getting in touch with you), or—if you have your own blog—I encourage you to make the same offer via your own site. Where possible, I’ll try to update this (or you can use trackbacks) so that readers have a good idea of who out there is willing to provide assistance to help them become the next VMUG “rock star” presenter.

Good luck, and I look forward to hearing from you!

UPDATE: A few folks have noted that all the names I listed above are VMware employees, so I’ve added a couple others who are not. Don’t read too much into that; it was all VMware employees because I work at VMware, too, and they’re the ones I communicate with frequently. There are lots of passionate and dedicated VMUG supporters out there—you know who you are!

Also, be sure to check the comments; a number of folks are volunteering to also mentor new speakers.


Next Monday, May 20, the OpenStack Denver meetup group will gather jointly with the inaugural meeting of the Infracoders Denver meetup group for a talk titled “Infrastructure as Code with Chef and OpenStack.” The joint meeting will be held at Innovation Pavilion in Centennial/Englewood (location information here). The event will start at 7PM.

Giving the presentation will be none other than Joshua Timberman of OpsCode (@jtimberman on Twitter). Joshua will be speaking on Chef, a system integration framework that is commonly used in “infrastructure as code” environments and in a number of OpenStack deployments. Joshua will discuss the basic principles of Chef, the primitives it provides, and how you can use it to drive your infrastructure toward full automation.

For more information, or to RSVP for the meetup event, you can visit either the OpenStack Denver meetup group event page or the Infracoders Denver meetup group event page. We do ask that you RSVP so that we can plan food and drinks for the event, but please only RSVP in one of the two meetup groups (not both).

Also, if you are interested in presenting at the OpenStack Denver meetup group or the Infracoders Denver meetup group, please let me know. We are actively seeking co-organizers as well as speakers/presenters for future events.

If you live in the South Denver metro area and are interested in either OpenStack or infrastructure as code, this is an event you won’t want to miss!


Regular readers of this site know that my wife, Crystal, runs something called Spousetivities. Spousetivities originated out of boredom, essentially—Crystal was traveling with me to VMworld and wanted to find someone to hang out with while I was at the conference. That was VMworld 2008, and since that time she’s had activities at VMworld 2009, VMworld 2010 (including VMworld Europe 2010), VMworld 2011 (both US and Europe), and VMworld 2012 (US and Europe). She’s also had activities at EMC World (2011 and 2012), HP Discover EMEA, and Dell Storage Forum in Boston. This year, she’s added another conference: IBM Edge 2013 in Las Vegas!

IBM Edge 2013 (conference site here) runs from June 10–14 at Mandalay Bay in Las Vegas. If you are attending IBM Edge 2013 this year, I’d encourage you to consider bringing your spouse or significant other with you and getting them involved in Spousetivities. As is always the case, Crystal has a great line-up of activities planned for participants, including:

  • The ever-popular “Getting to Know You” breakfast on Monday, June 10
  • A “Culinary Mystery Tour” of famous restaurants along the Strip
  • A tour of Red Rock Canyon Conservation area and highlights of the famous Vegas strip
  • “Cooking at the Ranch,” where you’ll get to meet Chef Philip Dell of Sin City Chefs and the Food Network’s show “Chopped” (More details here.)
  • A Grand Canyon tour
  • A Hoover Dam tour
  • A wide variety of spa services from THE Bathhouse, including facials, manicures, massages, and pedicures

All in all, it looks like a great week of activities. For the conference attendee, you gain the benefit of being able to spend time with your partner in the evenings without having to worry about them during the day (leaving you to be able to focus on the conference). For the partner traveling with the attendee, you don’t have to worry about being alone, finding your way around town, or bothering your partner at the conference. It is truly a “win-win” for everyone involved.

All these activities have been discounted, thanks to IBM’s sponsorship of Spousetivities, so I encourage you to visit the registration page and get signed up as soon as possible.


I’m very excited to announce the inaugural OpenStack Denver meetup, scheduled for 7 PM on Wednesday, January 9, 2013—only 6 days away!

If you haven’t already joined the meetup group, please head over to the group page on Meetup.com and join, then RSVP for the inaugural OpenStack Denver meeting. Cisco Systems was kind enough to sponsor the event, both by hosting us at their Englewood office (near Park Meadows Mall) as well as by supplying food (pizza) and drinks (soda/water).

At the inaugural meeting, we’ll first provide an OpenStack primer, so those who aren’t familiar with OpenStack will get an idea of what it’s all about and what’s included. Next, co-organizer Shannon McFarland will talk about what Cisco’s been doing with OpenStack, and then we’ll wrap up the first meetup with a discussion of desired future topics, speakers, and other logistical items.

This is going to be a great opportunity to meet other folks in the Denver area who are also interested in or working with OpenStack, so I highly encourage you to do your best to make it. See you there!

