In this article, I’ll show you how to install Ubuntu packages from a specific repository. It’s not that this is a terribly difficult process, but it also isn’t necessarily intuitive for those who haven’t had to do it before. I ran into this while trying to install an alpha release of LXC 1.0.0 for my recent post on automatically connecting LXC to Open vSwitch (OVS).

Normally, you’d install a package using apt-get like this:

apt-get install package-name

However, when you want to install a package from a specific repository, the command syntax shifts slightly so that it looks like this instead:

apt-get install package-name/repository

In my particular case, I was trying to install LXC; specifically, I needed the alpha LXC 1.0.0 package from the precise-backports repository. The command to do that looked like this:

apt-get install lxc/precise-backports
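
By the way, if you want to see beforehand which versions of a package are available from which repositories, apt-cache policy is quite handy. And apt-get's -t (target release) flag offers a similar effect to the slash syntax:

# List available versions of lxc and the repositories offering them
apt-cache policy lxc

# Similar effect to "apt-get install lxc/precise-backports"
apt-get install -t precise-backports lxc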

Are there other apt-get tidbits like this that readers might find particularly useful? Feel free to share in the comments below. Thanks for reading!

Welcome to Technology Short Take #38, another installment in my irregularly-published series that collects links and thoughts on data center-related technologies from around the web. But enough with the introduction, let’s get on to the content already!

Networking

  • Jason Edelman does some experimenting with the Python APIs on a Cisco Nexus 3000. In the process, he muses about the value of configuration management tool chains such as Chef and Puppet in a world of “open switch” platforms such as Cumulus Linux.
  • Speaking of Cumulus Linux…did you see the announcement that Dell has signed a reseller agreement with Cumulus Networks? I’m pretty excited about this announcement, and I hope that Cumulus sees great success as a result. There are a variety of write-ups about the announcement; some good, many not so good. The not-so-good variety typically refers to Cumulus’ product as an SDN product when technically it isn’t. This article on Barron’s by Tiernan Ray is a pretty good summary of the announcement and some of its implications.
  • Pete Welcher has launched a series of articles discussing “practical SDN,” focusing on the key leaders in the market: NSX, DFA, and the yet-to-be-launched ACI. In the initial installment of the series, he does a good job of providing some basics around each of the products, although (as would be expected of a product that hasn’t launched yet) he has to do some guessing when it comes to ACI. The series continues with a discussion of L2 forwarding and L3 forwarding across the various products. Definitely worth reading, in my opinion.
  • Nick Buraglio takes away all your reasons for not collecting flow-based data from your environment with his write-up on installing nfsen and nfdump for NetFlow and/or sFlow collection.
  • Terry Slattery has a nice write-up on new network designs that are ideally suited for SDN. If you are looking for a primer on “next-generation” network designs, this is worth reviewing.
  • Need some Debian packages for Open vSwitch 2.0? Here’s another article from Nick Buraglio—he has some information to help you out.

Servers/Hardware

Nothing this time, but check back next time.

Security

Nothing from my end. Maybe you have something you’d like to share in the comments?

Cloud Computing/Cloud Management

  • Christian Elsen (who works in Integration Engineering at VMware) has a nice series of articles on using OpenStack with vSphere and NSX. The series starts here, but follow the links at the bottom of that article for the rest of the posts. This is really good stuff—he includes the use of the NSX vSwitch with vSphere 5.5, and talks about vSphere OpenStack Virtual Appliance (VOVA) as well. All in all, well worth a read in my opinion.
  • Maish Saidel-Keesing (one of my co-authors on the first edition of VMware vSphere Design and also a super-sharp guy) recently wrote an article on how adoption of OpenStack will slow the adoption of SDN. While I agree that widespread adoption of OpenStack could potentially retard the evolution of enterprise IT, I’m not necessarily convinced that it will slow the adoption of SDN and network virtualization solutions. Why? Because, in part, I believe that the full benefits of something like OpenStack need a good network virtualization solution in order to be realized. Yes, some vendors are writing plugins for Neutron that manipulate physical switches. But for developers to get true isolation, application portability, the ability to re-create production environments in development—all that is going to require network virtualization.
  • Here’s a useful OpenStack CLI cheat sheet for some commonly-used commands.

Operating Systems/Applications

  • I haven’t had a chance to use Ansible yet (though I’m watching it closely), but I came across this article on an upcoming change to the SSH transport that Ansible uses. This change, referred to as “ssh_alt,” promises a significant performance increase for Ansible. Good stuff.
  • I don’t think I’ve mentioned this before, but Forbes Guthrie (my co-author on the VMware vSphere Design books and an all-around great guy) has a series on using Linux as a domain controller for a vSphere-based lab. The series is up to four parts now: part 1, part 2, part 3, and part 4.
  • Need (or want) to increase the SCSI timeout for a KVM guest? See these instructions.
  • I’ve been recommending that IT pros get more familiar with Linux, as I think its influence in the data center will continue to grow. The problem I sometimes face, though, is that experienced folks tend to share “super commands” that ordinary folks have a hard time decomposing. This site should make that easier. I’ve tried it—it’s actually pretty handy.

Storage

  • Jim Ruddy (an EMCer, former co-worker of mine, and an overall great guy) has a pretty cool series of articles discussing the use of EMC ViPR in conjunction with OpenStack. Want to use OpenStack Glance with EMC ViPR using ViPR’s Swift API support? See here. Want a multi-node Cinder setup with ViPR? Read how here. Multi-node Glance with ViPR? He’s got it. If you’re new to ViPR (who outside of EMC isn’t?), you might also find his articles on deploying EMC ViPR, setting up back-end storage for ViPR, or deploying object services with ViPR to be helpful as well.
  • Speaking of ViPR, EMC has apparently decided to release it for free for non-commercial use. See here.
  • Looking for more information on VSAN? Look no further than Cormac Hogan’s extensive VSAN series (up to Part 14 at last check!). The best way to find this stuff is to check articles tagged VSAN on Cormac’s site. The official VMware vSphere blog also has a series of articles running; check out part 1 and part 2.

Virtualization

  • Did you happen to see this news about Microsoft Hyper-V Recovery Manager (HRM)? This is an Azure-hosted service that can be roughly compared to VMware’s Site Recovery Manager (SRM). However, unlike SRM (which is hosted on-premise), HRM is hosted by Microsoft Azure. As the article points out, it’s important to understand that this doesn’t mean your VMs are replicated to Azure—it’s just the orchestration portion of HRM that is running in Azure.
  • Oh, and speaking of Hyper-V…in early January Microsoft released version 3.5 of their Linux Integration Services, which primarily appears to be focused on adding Linux distribution support (CentOS/RHEL 6.5 is now supported).
  • Gregory Gee has a write-up on installing the Cisco CSR 1000V in VirtualBox. (I’m a recent VirtualBox convert myself; I find the vboxmanage command just so very handy.) Note that I haven’t tried this myself, as I don’t have a Cisco login to get the CSR 1000V code. If any readers have tried it, I’d love to hear your feedback. Gregory also has a few other interesting posts I’m planning to review in the next few weeks as well.
  • Sunny Dua, who works with VMware PSO in India, has a series of blog posts on architecting vSphere environments. It’s currently up to five parts; I don’t know how many more (if any) are planned. Here are the links: part 1 (clusters), part 2 (vCenter SSO), part 3 (storage), part 4 (design process), and part 5 (networking).

It’s time to wrap up now before this gets any longer. If you have any thoughts or tidbits you’d like to share, I welcome any and all courteous comments. Join (or start) the conversation!

I’ve discussed using Open vSwitch (OVS) with Linux Containers (LXC) in a couple of previous posts (here and here). In this post, I’m going to show you one way to have your containers automatically connected to OVS on startup without having to use libvirt.

I tested this configuration using Ubuntu 12.04 with the Linux 3.8.0 kernel and an alpha release of LXC 1.0.0 from the precise-backports repository. The version of LXC in the 12.04 repositories (version 0.7.5, if I recall correctly) isn’t new enough to support the specific feature I’m describing here, so plan accordingly.

If you aren’t familiar with LXC, I’d suggest you first read my LXC introductory post. You’ll probably also find the post on using LXC, OVS, and GRE tunnels useful, as some of the information there is applicable here also.

Ready? Let’s get started.

Configuring the Container

You can create your container using the standard LXC tools:

lxc-create -n cn-01 -t ubuntu

As you may already know, this will create a container named “cn-01” based on the Ubuntu template. The configuration for this container will be found, by default, at /var/lib/lxc/cn-01/config. Unless you’ve changed the configuration of your system, this container will be configured to use virtual Ethernet network interfaces and be attached to the default LXC bridge.

The changes required to make the container connect to OVS are, fortunately, quite minimal:

  1. First, remove or comment out the lxc.network.link line. This is the configuration parameter that causes the container to attach to the default LXC bridge (normally called “lxcbr0”).
  2. Add a configuration line to run a script after creating the network interfaces. In my examples here, I’ll assume the script is called “ovsup” and is stored in the /etc/lxc/ directory. The configuration parameter should look something like this:
lxc.network.script.up = /etc/lxc/ovsup

(Note that there is also a corresponding lxc.network.script.down configuration parameter, but I won’t be using it in this example.)
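
For reference, after these changes the network-related portion of the container’s configuration file might look something like this (a sketch; the type and flags lines are whatever your template generated):

lxc.network.type = veth
lxc.network.flags = up
# lxc.network.link = lxcbr0
lxc.network.script.up = /etc/lxc/ovsup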

Once you’ve made these changes to the container’s configuration, then you’re ready to create the actual script.

Creating the Network Attachment Script

Your script—the one referenced by the lxc.network.script.up parameter in the container’s configuration file—should look something like this:

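(What follows is a minimal sketch of the idea; the bridge name ovsbr0 is an assumption, so substitute whatever OVS bridge you’re actually using.)

#!/bin/bash
# LXC passes the name of the host-side veth interface
# as the fifth argument.
BRIDGE="ovsbr0"

# Make sure the OVS bridge exists, creating it if necessary
ovs-vsctl --may-exist add-br "$BRIDGE"

# Remove any stale port entry for this interface, then add it back
ovs-vsctl --if-exists del-port "$BRIDGE" "$5"
ovs-vsctl --may-exist add-port "$BRIDGE" "$5"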

LXC passes five parameters to the script when it is called:

  1. The name of the container
  2. The configuration section of the container’s configuration (“net” in this case)
  3. Either “up” or “down”, depending on which configuration option is calling the script (lxc.network.script.up passes “up”, lxc.network.script.down passes “down”)
  4. The type of networking (“veth” in this case)
  5. The name of the interface (randomly generated unless you have included lxc.network.veth.peer in the container’s configuration)

This simple script doesn’t really need anything other than the interface name, so it only uses parameter 5 (the $5 in the script). The script first ensures that the appropriate OVS bridge exists (creating it if necessary), then deletes the interface from the OVS bridge (if it exists) and adds it back to the OVS bridge.

(Note: If you are using the lxc.network.script.down configuration parameter, you could eliminate the line to delete the port from the OVS bridge and place it in the down script instead. Or, you could write logic into the script to see if “down” is being called and delete the port. There are a variety of ways to approach the situation.)

Using this configuration, when you start the container the host-side virtual Ethernet interface created by LXC will be automatically added to OVS, and your container will have whatever network connectivity is dictated by the OVS configuration. This could include tunneled connectivity (as described here) or bridged connectivity.

If you have any questions, feedback, or corrections, please feel free to speak up in the comments below. I encourage reader interaction!

In this post, I’m going to show you a workaround to running Synergy on OS X Mavericks. If you visit the official Synergy page, you’ll note that the site indicates that full Mavericks support is still pending. However, if you’re willing to “get your hands dirty,” you can run Synergy on OS X Mavericks right now.

If you’re unfamiliar with Synergy, read this write-up (from 2 years ago) on how I use Synergy in my home office setup. The basic gist behind Synergy is that one computer will run the Synergy server; other computers will run the Synergy client and connect to the Synergy server. You’ll be able to use the keyboard and mouse attached to the Synergy server to control the Synergy clients.

Here’s how to get Synergy support running on OS X Mavericks now:

  1. Download the latest 10.8 Synergy build from the website. (I didn’t include a link here because the link changes as the version changes, so the link would become stale rather quickly.) This downloads as a .DMG file to your computer.
  2. Double-click the .DMG to open and mount it on your desktop. Inside the .DMG, you’ll see the Synergy app icon.
  3. Right-click (or Ctrl-click) on the Synergy app and select “Show Package Contents.”
  4. Double-click on Contents, then MacOS.
  5. In the MacOS folder, copy the synergys and synergyc files to a different location. It doesn’t really matter where; just make note of the location.
  6. Close all the windows and eject (unmount) the downloaded .DMG file.

For your Synergy server, you’ll need an appropriate configuration file. You can check my previously-mentioned Synergy post for an example configuration file, or you can peruse the official wiki. Either way, create an appropriate configuration file, and make note of its name and location.
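
As an illustration, a minimal configuration for one server and one client sitting to its right might look something like this (a sketch; “server-host” and “client-host” are placeholders for your actual computer names):

section: screens
	server-host:
	client-host:
end

section: links
	server-host:
		right = client-host
	client-host:
		left = server-host
end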

When you’re ready, just launch the Synergy server from the OS X Terminal, like this (I’m assuming that synergys and its configuration file—creatively named synergy.conf—are stored in your home directory):

~/synergys -c ~/synergy.conf

Using whatever method you prefer, copy the previously-extracted synergyc file to your Synergy client(s). As before, it doesn’t really matter too much where you put the file, just make a note of the location. Then, using the OS X Terminal, run this (as before, I’m assuming synergyc is in your home directory):

~/synergyc <Name of Synergy server>

That’s it! You should now be able to use the keyboard and mouse on the Synergy server to control the Synergy client. I can verify that current builds of the Synergy client (synergyc) work just fine on OS X Mavericks, and I would imagine that the Synergy server would work fine as well (I just haven’t had time to test it). If anyone has tested it and would like to provide feedback in the comments, I’m sure other readers would appreciate it.

Enjoy! (By the way, if you do find Synergy to be useful, I’d recommend donating to the project.)

For the last couple of years, I’ve been sharing my annual “projects list” and then grading myself on the progress (or lack thereof) on the projects at the end of the year. For example, I shared my 2012 project list in early January 2012, then gave myself grades on my progress in early January 2013.

In this post, I’m going to grade myself on my 2013 project list. Here’s the project list I posted just under a year ago:

  1. Continue to learn German.
  2. Reinforce base Linux knowledge.
  3. Continue using Puppet for automation.
  4. Reinforce data center networking fundamentals.

So, how did I do? Here’s my assessment of my progress:

  1. Continue to learn German: I have made some progress here, though certainly not the progress that I wanted to make. I’ve incorporated the use of Memrise, which has been helpful, but I still haven’t made the progress I’d like. If anyone has any other suggestions for additional tools, I’m open to your feedback. Grade: D (below average)

  2. Reinforce base Linux knowledge: I’ve been suggesting to VMUG attendees that they needed to learn Linux, as it’s popping up all over the place in all sorts of roles. In my original 2013 project list, I said that I was going to focus on RHEL and RHEL variants, but over the course of the year ended up focusing more on Debian and Ubuntu instead (due to more up-to-date packages and closer alignment with OpenStack). Despite that shift in focus, I think I’ve made decent progress here. There’s always room to grow, of course. Grade: B (above average)

  3. Continue using Puppet for automation: I’ve made reasonable progress here, expanding my use of Puppet to include managing Debian/Ubuntu software repositories (see here and here for examples), managing SSH keys, managing Open vSwitch (OVS) via a third-party module, and—most recently—exploring the use of Puppet with OpenStack (no blog posts—yet). There’s still quite a bit I need to learn (some of my manifests don’t work quite as well as I’d like), but I did make progress here. Grade: C (average)

  4. Reinforce data center networking fundamentals: Naturally, my role at VMware has me spending a great deal of time on how network virtualization affects DC networking, and this translated into some progress on this project. While I gained solid high-level knowledge on a number of DC networking topics, I had originally thought I needed more low-level “in the weeds” knowledge. In that regard, I don’t feel like I did well; on the flip side, though, I’m not sure whether I really needed more low-level “in the weeds” knowledge. This highlights a key struggle for me personally: how to balance the deep, “in the weeds” knowledge with the high-level knowledge. Suggestions on how others have overcome this challenge are welcome. Grade: C (average)

In summary: not bad, but could have been better!

What’s not reflected in this project list is the progress I made with understanding OpenStack, or my deepened level of knowledge of OVS (just browse articles tagged OVS for an idea of what I’ve been doing in that area).

Over the next week or two, I’ll be reflecting on my progress with my 2013 projects and thinking about what projects I should be taking in 2014. In the meantime, I would love to hear any feedback, suggestions, or thoughts on projects I should consider, technologies that should be incorporated, or learning techniques I should leverage. Feel free to speak up in the comments below.

I’m back with another “Reducing the Friction” blog post, this time to talk about training an e-mail spam filter. As you may recall if you read one of the earlier posts (may I suggest this one and this one?), I use the phrase “reducing the friction” to talk about streamlining and simplifying commonly-performed tasks so as to allow you to improve your own personal efficiency and/or adopt more efficient habits or workflows.

I recently moved my e-mail services off Google and onto Fastmail. (I described the reasons why I made this move in this post.) Fastmail has been great—I find their e-mail service to be much faster than what I was seeing with Google. The one drawback, though, has been an increase in spam. Not a huge increase, mind you, but enough to notice. Fastmail recommends that you help train your personal spam filter by moving messages into a folder you designate, and then telling their systems to consider everything in that folder to be spam. While that’s not hard, it’s also not very streamlined, so I took up the task of making it even easier and faster.

(Note that, as a Mac user, most of my tips focus on Mac applications. If you’re a user of another platform, I do apologize—but I can only speak about what I use myself.)

To help make this easier, I came up with this bit of AppleScript:

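(The following is a sketch of the approach; the two property values are assumptions you’d change for your own account and folder names.)

-- Change these two property declarations to match your setup
property spamAccount : "Fastmail"
property spamFolder : "Learn Spam"

-- Designed to be run from within Apple Mail; moves the
-- currently-selected messages into the spam training folder
tell application "Mail"
	set selectedMessages to selection
	repeat with theMessage in selectedMessages
		set mailbox of theMessage to mailbox spamFolder of account spamAccount
	end repeat
end tell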

To make this work on your system, all you need to do is change the two property declarations at the top, setting them to the correct values for your system.

As you can tell by the comments in the code, this script was designed to be run from within Apple’s Mail app itself. To make that easy, I use a fantastic tool called FastScripts (highly recommended!). Using FastScripts, I can easily designate an application-specific shortcut key (I use Ctrl-Cmd-S) to invoke the script from within Apple Mail. Boom! Just like that, you now have a super-easy way to both speed up processing your e-mail and help train your personal spam filter. (Note: if you are also a FastMail customer, refer to the FastMail help screens while logged in to get more details on marking a folder for spam learning.)

I hope this helps someone out there!

In this post, I’m going to show you how to manage Open vSwitch (OVS) using the popular open source configuration management tool Puppet. This is not the first time I’ve written about this topic; in the past I showed you how to automate OVS configuration with Puppet via a hack utilizing some RHEL-OVS integrations. This post, however, focuses on the use of an actual Puppet module that will manage the configuration of OVS, a much cleaner solution—in my view, at least—than leveraging the file-based integrations I discussed earlier.

The Puppet module I’ll be using and discussing in this post is the L23Network module (found here on GitHub). This is an extremely flexible and useful module, capable not only of configuring and managing network interfaces but also of managing the configuration of OVS. The latter functionality—managing the configuration of OVS—will be the primary focus of this article (with one exception).

The L23Network module is pretty well-documented, so I won’t bother regurgitating the documentation here. Instead, I’ll just try to provide some specific examples, and tie those examples back to some of the various OVS configurations I’ve shown you in earlier posts.

First, let’s get “the one exception” I mentioned earlier out of the way. In OVS environments, you’ll often need to bring up a physical interface without assigning that interface an IP address. For example, consider a physical interface that is providing bridged connectivity to guest domains (VMs) on an OVS bridge. You’ll want the interface to be up, but the interface does not need an IP address. Using the L23Network module, you can accomplish that with this piece of code in your manifest:

l23network::l3::ifconfig {'eth1': ipaddr => 'none'}

Now that eth1 is up, you could create a bridge to which to attach it with this code:

l23network::l2::bridge {'br-ex': }

And then you could actually attach eth1 like this:

l23network::l2::port {'eth1': bridge => 'br-ex'}

You could then provide multi-VLAN bridged connectivity to guest domains via libvirt as I explained in my post on using VLANs with libvirt and OVS. (Or, if you are using LXC with libvirt and OVS, you could provide multi-VLAN bridged connectivity to containers.)

The L23Network module can also work with other types of interfaces, not just physical interfaces. Want to create an internal interface, perhaps to use as a tunnel endpoint for a GRE tunnel as I described here? Use this snippet of Puppet code:

l23network::l2::port {'tep0': bridge => 'br-tun', type => 'internal'}

You could then assign the newly-created tep0 interface an IP address on your transport network like this:

l23network::l3::ifconfig {'tep0': ipaddr => '10.1.1.1/24'}

(In theory, you could also use the L23Network module to create an internal interface so as to run host management through OVS, but then you could run into issues communicating with the Puppet server over the very interfaces Puppet is configuring.)

I haven’t yet used L23Network to create/manage patch ports or GRE ports, but the documentation indicates the module is capable of doing so. This is an area that I plan to explore in a bit more detail in the near future (in my copious free time).

Based on the snippets I’ve given you above, it should be pretty straightforward to combine these various pieces to fully configure and manage OVS instances across a large number of systems. For example, a single manifest that brings up both a bridged uplink and a GRE tunnel endpoint might look something like this (a sketch assembled from the snippets above):
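
l23network::l2::bridge {'br-ex': }
l23network::l2::port {'eth1': bridge => 'br-ex'}
l23network::l3::ifconfig {'eth1': ipaddr => 'none'}

l23network::l2::bridge {'br-tun': }
l23network::l2::port {'tep0': bridge => 'br-tun', type => 'internal'}
l23network::l3::ifconfig {'tep0': ipaddr => '10.1.1.1/24'}

If you have any questions, feel free to post them in the comments below. I also welcome all other courteous feedback; you are encouraged to start (or join) the conversation.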

Some time ago, I showed you how to use Puppet to add Ubuntu Cloud Archive support to your Ubuntu installation. Since that time, OpenStack has had a new release (the Havana release) and the Ubuntu Cloud Archive repository has been updated with new packages to support the Havana release. In this post, I’ll show you an updated snippet of code to take advantage of these newer packages in the Ubuntu Cloud Archive repository.

For reference, here’s the original Puppet code I posted in the first article:

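It used the apt::source type from the puppetlabs-apt module, and looked something like this (a sketch; the parameter names assume the version of the module available at the time):

apt::source { 'ubuntu-cloud-archive':
  location          => 'http://ubuntu-cloud.archive.canonical.com/ubuntu',
  release           => 'precise-updates/grizzly',
  repos             => 'main',
  required_packages => 'ubuntu-cloud-keyring',
}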

That points your Ubuntu installation to the Grizzly packages.

Here’s updated code that will point your installation to the appropriate packages to support OpenStack’s Havana release:

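Again, this is a sketch along the same lines as the original:

apt::source { 'ubuntu-cloud-archive':
  location          => 'http://ubuntu-cloud.archive.canonical.com/ubuntu',
  release           => 'precise-updates/havana',
  repos             => 'main',
  required_packages => 'ubuntu-cloud-keyring',
}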

As you can see, there is only one small change between the two code snippets: changing “precise-updates/grizzly” in the first to “precise-updates/havana” in the second. (Naturally, this assumes you’re using Ubuntu 12.04, the latest LTS release as of this writing.) I know this seems like a pretty simple thing to post, but I wanted to include it here for the sake of completeness and the benefit of future readers.

Feel free to speak up in the comments with any questions, suggestions, or corrections.

I recently had the opportunity to conduct an e-mail interview with Jesse Proudman, founder and CEO of Blue Box. The interview is posted below. While it gets a bit biased toward Blue Box at times (he started the company, after all), there are some interesting points raised.

[Scott Lowe] Tell the readers here a little bit about yourself and Blue Box.

[Jesse Proudman] My name is Jesse Proudman. I love the Internet’s “plumbing”. I started working in the infrastructure space in 1997 to capitalize on my “gear head fascination” with the real-time nature of server infrastructure. In 2003, I founded Blue Box from my college dorm room to be a managed hosting company focused on overcoming the complexities of highly customized open source infrastructure running high traffic web applications. Unlike many hosting and cloud startups that evolved to become focused solely on selling raw infrastructure, Blue Box subscribes to the belief that many businesses demand fully rounded solutions vs. raw infrastructure that they must assemble.

In 2007, Blue Box developed proprietary container-based cloud technology for both our public and private cloud offerings. Blue Box customers combine bare metal infrastructure with on-demand cloud containers for a hybrid deployment coupled with fully managed support including 24×7 monitoring. In Q3 of 2013, Blue Box launched OpenStack On-Demand, a hosted, single tenant private cloud offering. Capitalizing on our 10 years of infrastructure experience, this single-tenant hosted private cloud delivers on all six tenets today’s IT teams require as they evolve their cloud strategy.

Outside of Blue Box, I have an amazing wife and daughter, and I have a son due in February. I am a fanatical sports car racer and also am actively involved in the Seattle entrepreneurial community, guiding the next generation of young entrepreneurs through the University of Puget Sound Business Leadership, 9Mile Labs and University of Washington’s Entrepreneurial mentorship programs.

[SL] Can you tell me a bit more about why you see the continuing OpenStack API debate to be irrelevant?

[JP] First, I want to be specific that when I say irrelevant, I don’t mean unhealthy. This debate is a healthy one to be having. The sharing of ideas and opinions is the cornerstone of the open source philosophy.

But I believe the debate may be premature.

Imagine a true IaaS stack as a tree. Strong trees must have a strong trunk to support their many branches. For IaaS technology, the trunk is built of essential cloud core services: compute, networking and storage. In OpenStack, these equate to Nova, Neutron, Cinder and Swift (or Ceph). The branches then consist of everything else that evolves the offering and makes it more compelling and easier to use: everything that builds upon the strong foundation of the trunk. In OpenStack, these branches include services like Ceilometer, Heat, Trove and Marconi.

I consider API compatibility a branch.

Without a robust, reliable, sturdy trunk, the branches become irrelevant, as there isn’t a strong supporting foundation to hold them up. And if neither the trunk nor the branches are reliable, then the API to talk to them certainly isn’t relevant.

It is my belief that OpenStack needs to concentrate on strengthening the trunk before putting significant emphasis on the possibilities that exist in the upper reaches of the canopy.

OpenStack’s core is quite close. Grizzly was the first release many would define as “stable” and Havana is the first release where that stability could convert into operational simplicity. But there still is room for improvement (particularly with projects like Neutron), so I argue for focusing on strengthening the core before exploring new projects.

Once the core is strong, the challenge becomes the development of the service catalogue. Amazon has over one hundred different services that can be integrated together into a powerful ecosystem. OpenStack’s service catalogue is still very young and evolving rapidly. Focus here is required to ensure this evolution is effective.

Long term, I certainly believe API compatibility with AWS (or Azure, or GCE) can bring value to the OpenStack ecosystem. Early cloud adopters who took to AWS before alternatives existed have technology stacks written to interface directly with Amazon’s APIs. Being able to provide compatibility for those prospects means preventing them from having to rewrite large sections of their tooling to work with OpenStack.

API compatibility provided via a higher-level proxy would allow for the breakout of maintenance to a specific group of engineers focused on that requirement (and remove that burden from the individual service teams). It’s important to remember that chasing external APIs will always be a moving target.

In the short run, I believe it wise to rally the community around a common goal: strengthen the trunk and intelligently engineer the branches.

[SL] What are your thoughts on public vs. private OpenStack?

[JP] For many, OpenStack draws much of its appeal from the availability of public, hosted private, and on-premise private implementations. While “cloud bursting” still lives more in the realms of fantasy than reality, the power of a unified API and service stack across multiple consumption models enables incredible possibilities.

Conceptually, public cloud is generally better defined and understood than private cloud. Private cloud is a relatively new phenomenon, and for many has really meant advanced virtualization. While it’s true private clouds have traditionally meant on-premise implementations, hosted private cloud technologies are empowering a new wave of companies who recognize the power of elastic capabilities, and the value that single-tenant implementations can deliver. These organizations are deploying applications into hosted private clouds, seeing the value proposition that can bring.

A single-sourced vendor or technology won’t dominate this world. OpenStack delivers flexibility through its multiple consumption models, and that only benefits the customer. Customers can use that flexibility to deploy workloads to the most appropriate venue, and that only will ensure further levels of adoption.

[SL] There’s quite a bit of discussion that private cloud is strictly a transitional state. Can you share your thoughts on that topic?

[JP] In 2012, we began speaking with IT managers across our customer base, and beyond. Through those interviews, we confirmed what we now call the “six tenets of private cloud.” Our customers and prospects are all evolving their cloud strategies in real time, and are looking for solutions that satisfy these requirements:

  1. Ease of use – new solutions should be intuitively simple. Engineers should be able to use existing tooling, and ops staff shouldn’t have to go learn an entirely new operational environment.

  2. Deliver IaaS and PaaS – IaaS has become a ubiquitous requirement, but we repeatedly heard requests for an environment that would also support PaaS deployments.

  3. Elastic capabilities – the ability to grow and contract private environments much in the same way they could in a public cloud.

  4. Integration with existing IT infrastructure – businesses have significant investments in existing data center infrastructure: load balancers, IDS/IPS, SAN, database infrastructure, etc. From our conversations, integration of those devices into a hosted cloud environment brought significant value to their cloud strategy.

  5. Security policy control – greater compliance pressures mean a physical “air gap” around their cloud infrastructure can help ensure compliance and offer peace of mind.

  6. Cost predictability and control – Customers didn’t want to need a PhD to understand how much they’ll owe at the end of the month. Budgets are projected a year in advance, and they needed to know they could project their budgeted dollars into specific capacity.

Public cloud deployments can certainly solve a number of these tenets, but we quickly discovered that no offering on the market today was solving all six in a compelling way.

This isn’t a zero sum game. Private cloud, whether it be on-premise or in a hosted environment, is here to stay. It will be treated as an additional tool in the toolbox. As buyers reallocate the more than $400 billion that’s spent annually on IT deployments, I believe we’ll see a whole new wave of adoption, especially when private cloud offerings address the six tenets of private cloud.

[SL] Thanks for your time, Jesse!

If anyone has any thoughts to share about some of the points raised in the interview, feel free to speak up in the comments. As Jesse points out, debate can be healthy, so I invite you to post your (courteous and professional) thoughts, ideas, or responses below. All feedback is welcome!

No Man is an Island

The phrase “No man is an island” is attributed to John Donne, an English poet who lived in the late 1500s and early 1600s. The phrase comes from his Meditation XVII, and was borrowed later by Thomas Merton to become the title of a book he published in 1955. In both cases, the phrase is used to discuss the interconnected nature of humanity and mankind. (Side note: the phrase “for whom the bell tolls” comes from the same origin.)

What does this have to do with IT? That’s a good question. As I was preparing to start the day today, I took some time to reflect upon my career; specifically, the individuals that have been placed in my life and career. I think all people are prone to overlook the contributions that others have made to their own successes, but I think that IT professionals may be a bit more prone to this than most. (I freely admit that, having spent my entire career as an IT professional, my view may be skewed.) So, in the spirit of recognizing that no man is an island—meaning that who we are and what we accomplish are intricately intertwined with those around us—I wanted to take a moment and express my thanks and appreciation for a few folks who have helped contribute to my success.

So, who has helped contribute to my achievements? The full list is too long to publish, but here are a few notables that I wanted to call out (in no particular order):

  • Chad Sakac had the opportunity to write the book that would become Mastering VMware vSphere 4 and gave it to me instead. (If you aren’t familiar with that story, read this.)
  • My wife, Crystal, is just awesome—she has enabled and empowered me in many, many ways. ‘Nuff said.
  • Forbes Guthrie allowed me to join him in writing VMware vSphere Design (as well as the 2nd edition), has been a great contributor to the Mastering VMware vSphere series, and has been a fabulous co-presenter at the last couple VMworld conferences.
  • Chris McCain (who recently joined VMware and has some great stuff in store—stay tuned!) wrote Mastering VMware Infrastructure 3, the book that I would revise to become Mastering VMware vSphere 4.
  • Andy Sholomon, formerly with Cisco and now with VCE, was kind enough to provide some infrastructure for me to use when writing Mastering VMware vSphere 5. Without it, writing the book would have been much more difficult.
  • Rick Scherer, Duncan Epping, and Jason Boche all served as technical editors for various books that I’ve written; their contributions and efforts helped make those books better.

To all of you: thank you.

The list could go on and on and on; if I didn’t expressly call your name out, please don’t feel bad. My point, though, is this: have you taken the time recently to thank others in your life that have contributed to your success?
