General


Crossing the Threshold

Last week while attending the CloudStack Collaboration Conference in my home city of Denver, I had a bit of a realization. I wanted to share it here in the hopes that it might serve as an encouragement for others out there.

Long-time readers know that one of my projects over the last couple of years has been to become more fluent in Linux (refer back to my 2012 project list and my 2013 project list). I gave myself a B+ for my efforts last year, feeling that I had made good progress over the course of the year. Even so, I still felt like there was so much that I needed to learn. As so many of us are inclined to do, I was more focused on what I still hadn’t learned instead of taking a look at what I had learned.

This is where last week comes in. Before the conference started, I participated in a couple of “mini boot camps” focused on CloudStack and related tools/clients/libraries. (You may have seen some of my tweets about tools like cloudmonkey, Apache libcloud, and awscli/ec2stack.) As I worked through the boot camps, I could hear the questions that other attendees were asking as well as the tasks with which others were struggling. Folks were wrestling with what I thought were pretty simple tasks; these were not, after all, very complex exercises. So what if the lab guide wasn’t complete or correct? You should be able to figure it out, right?

Then it hit me. I’m a Linux guy now.

That’s right—I had crossed the threshold between “working on being a Linux guy” and “being a Linux guy.” It’s not that I know everything there is to know (far from it!), but that the base level of knowledge had finally accrued to a point where—upon closer inspection—I realized that I was fluent enough to perform most common tasks without a great deal of effort. I knew enough to know what to do when something didn’t work or wasn’t configured properly, and the general direction in which to look when trying to determine exactly what was going on.

At this point you might be wondering, “What does that have to do with encouraging me?” That’s a fair question.

As IT professionals—especially those on the individual contributor (IC) track instead of the management track—we are tasked with constantly learning new products, new technologies, and new methodologies. Because the learning never stops (and that isn’t a bad thing, in my humble opinion), we tend to focus on what we haven’t mastered. We forget to look at what we have learned, at the progress that we have made. Maybe, like me, you’re on a journey of learning and education to move from being a specialist in one type of technology to a practitioner of another type. If that’s the case, perhaps it’s time you stop saying “I will be a <new technology> person” and say “I am a <new technology> person.” Perhaps it’s time for you to cross the threshold.


I recently had the opportunity to conduct an e-mail interview with Jesse Proudman, founder and CEO of Blue Box. The interview is posted below. While it gets a bit biased toward Blue Box at times (he started the company, after all), there are some interesting points raised.

[Scott Lowe] Tell the readers here a little bit about yourself and Blue Box.

[Jesse Proudman] My name is Jesse Proudman. I love the Internet’s “plumbing”. I started working in the infrastructure space in 1997 to capitalize on my “gear head fascination” with the real-time nature of server infrastructure. In 2003, I founded Blue Box from my college dorm room to be a managed hosting company focused on overcoming the complexities of highly customized open source infrastructure running high traffic web applications. Unlike many hosting and cloud startups that evolved to become focused solely on selling raw infrastructure, Blue Box subscribes to the belief that many businesses demand fully rounded solutions vs. raw infrastructure that they must assemble.

In 2007, Blue Box developed proprietary container-based cloud technology for both our public and private cloud offerings. Blue Box customers combine bare metal infrastructure with on-demand cloud containers for a hybrid deployment coupled with fully managed support including 24×7 monitoring. In Q3 of 2013, Blue Box launched OpenStack On-Demand, a hosted, single-tenant private cloud offering. Capitalizing on our 10 years of infrastructure experience, this single-tenant hosted private cloud delivers on all six tenets today’s IT teams require as they evolve their cloud strategy.

Outside of Blue Box, I have an amazing wife and daughter, and I have a son due in February. I am a fanatical sports car racer and also am actively involved in the Seattle entrepreneurial community, guiding the next generation of young entrepreneurs through the University of Puget Sound Business Leadership, 9Mile Labs and University of Washington’s Entrepreneurial mentorship programs.

[SL] Can you tell me a bit more about why you see the continuing OpenStack API debate to be irrelevant?

[JP] First, I want to be specific that when I say irrelevant, I don’t mean unhealthy. This debate is a healthy one to be having. The sharing of ideas and opinions is the cornerstone of the open source philosophy.

But I believe the debate may be premature.

Imagine a true IaaS stack as a tree. Strong trees must have a strong trunk to support their many branches. For IaaS technology, the trunk is built of essential cloud core services: compute, networking and storage. In OpenStack, these equate to Nova, Neutron, Cinder and Swift (or Ceph). The branches then consist of everything else that evolves the offering and makes it more compelling and easier to use: everything that builds upon the strong foundation of the trunk. In OpenStack, these branches include services like Ceilometer, Heat, Trove and Marconi.

I consider API compatibility a branch.

Without a robust, reliable, sturdy trunk, the branches become irrelevant, as there isn’t a strong supporting foundation to hold them up. And if neither the trunk nor the branches are reliable, then the API to talk to them certainly isn’t relevant.

It is my belief that OpenStack needs to concentrate on strengthening the trunk before putting significant emphasis on the possibilities that exist in the upper reaches of the canopy.

OpenStack’s core is quite close to that point. Grizzly was the first release many would define as “stable,” and Havana is the first release where that stability could convert into operational simplicity. But there is still room for improvement (particularly with projects like Neutron), so my argument is to focus on strengthening the core before exploring new projects.

Once the core is strong then the challenge becomes the development of the service catalogue. Amazon has over one hundred different services that can be integrated together into a powerful ecosystem. OpenStack’s service catalogue is still very young and evolving rapidly. Focus here is required to ensure this evolution is effective.

Long term, I certainly believe API compatibility with AWS (or Azure, or GCE) can bring value to the OpenStack ecosystem. Early cloud adopters who took to AWS before alternatives existed have technology stacks written to interface directly with Amazon’s APIs. Being able to provide compatibility for those prospects means preventing them from having to rewrite large sections of their tooling to work with OpenStack.

API compatibility provided via a higher-level proxy would allow for the breakout of maintenance to a specific group of engineers focused on that requirement (and remove that burden from the individual service teams). It’s important to remember that chasing external APIs will always be a moving target.

In the short run, I believe it wise to rally the community around a common goal: strengthen the trunk and intelligently engineer the branches.

[SL] What are your thoughts on public vs. private OpenStack?

[JP] For many, OpenStack draws much of its appeal from the availability of public, hosted private, and on-premise private implementations. While “cloud bursting” still lives more in the realms of fantasy than reality, the power of a unified API and service stack across multiple consumption models enables incredible possibilities.

Conceptually, public cloud is generally better defined and understood than private cloud. Private cloud is a relatively new phenomenon, and for many has really meant advanced virtualization. While it’s true private clouds have traditionally meant on-premise implementations, hosted private cloud technologies are empowering a new wave of companies who recognize the power of elastic capabilities, and the value that single-tenant implementations can deliver. These organizations are deploying applications into hosted private clouds, seeing the value proposition it can bring.

A single-sourced vendor or technology won’t dominate this world. OpenStack delivers flexibility through its multiple consumption models, and that only benefits the customer. Customers can use that flexibility to deploy workloads to the most appropriate venue, and that will only drive further adoption.

[SL] There’s quite a bit of discussion that private cloud is strictly a transitional state. Can you share your thoughts on that topic?

[JP] In 2012, we began speaking with IT managers across our customer base, and beyond. Through those interviews, we confirmed what we now call the “six tenets of private cloud.” Our customers and prospects are all evolving their cloud strategies in real time, and are looking for solutions that satisfy these requirements:

  1. Ease of use – new solutions should be intuitively simple. Engineers should be able to use existing tooling, and ops staff shouldn’t have to go learn an entirely new operational environment.

  2. Deliver IaaS and PaaS – IaaS has become a ubiquitous requirement, but we repeatedly heard requests for an environment that would also support PaaS deployments.

  3. Elastic capabilities – the desire for the ability to grow and contract private environments in much the same way they could in a public cloud.

  4. Integration with existing IT infrastructure – businesses have significant investments in existing data center infrastructure: load balancers, IDS/IPS, SAN, database infrastructure, etc. From our conversations, integration of those devices into a hosted cloud environment brought significant value to their cloud strategy.

  5. Security policy control – greater compliance pressures mean a physical “air gap” around their cloud infrastructure can help ensure compliance and provide peace of mind.

  6. Cost predictability and control – Customers didn’t want to need a PhD to understand how much they’ll owe at the end of the month. Budgets are projected a year in advance, and they needed to know they could project their budgeted dollars into specific capacity.

Public cloud deployments can certainly address a number of these tenets, but we quickly discovered that no offering on the market today was solving all six in a compelling way.

This isn’t a zero sum game. Private cloud, whether it be on-premise or in a hosted environment, is here to stay. It will be treated as an additional tool in the toolbox. As buyers reallocate the more than $400 billion that’s spent annually on IT deployments, I believe we’ll see a whole new wave of adoption, especially when private cloud offerings address the six tenets of private cloud.

[SL] Thanks for your time, Jesse!

If anyone has any thoughts to share about some of the points raised in the interview, feel free to speak up in the comments. As Jesse points out, debate can be healthy, so I invite you to post your (courteous and professional) thoughts, ideas, or responses below. All feedback is welcome!


No Man is an Island

The phrase “No man is an island” is attributed to John Donne, an English poet who lived in the late 1500s and early 1600s. The phrase comes from his Meditation XVII, and was borrowed later by Thomas Merton to become the title of a book he published in 1955. In both cases, the phrase is used to discuss the interconnected nature of humanity and mankind. (Side note: the phrase “for whom the bell tolls” comes from the same origin.)

What does this have to do with IT? That’s a good question. As I was preparing to start the day today, I took some time to reflect upon my career; specifically, the individuals who have been placed in my life and career. I think all people are prone to overlook the contributions that others have made to their own successes, but I think that IT professionals may be a bit more affected in this way. (I freely admit that, having spent my entire career as an IT professional, my view may be skewed.) So, in the spirit of recognizing that no man is an island—meaning that who we are and what we accomplish are intricately intertwined with those around us—I wanted to take a moment and express my thanks and appreciation for a few folks who have helped contribute to my success.

So, who has helped contribute to my achievements? The full list is too long to publish, but here are a few notables that I wanted to call out (in no particular order):

  • Chad Sakac took the opportunity to write the book that would become Mastering VMware vSphere 4 and gave it to me instead. (If you aren’t familiar with that story, read this.)
  • My wife, Crystal, is just awesome—she has enabled and empowered me in many, many ways. ‘Nuff said.
  • Forbes Guthrie allowed me to join him in writing VMware vSphere Design (as well as the 2nd edition), has been a great contributor to the Mastering VMware vSphere series, and has been a fabulous co-presenter at the last couple VMworld conferences.
  • Chris McCain (who recently joined VMware and has some great stuff in store—stay tuned!) wrote Mastering VMware Infrastructure 3, the book that I would revise to become Mastering VMware vSphere 4.
  • Andy Sholomon, formerly with Cisco and now with VCE, was kind enough to provide some infrastructure for me to use when writing Mastering VMware vSphere 5. Without it, writing the book would have been much more difficult.
  • Rick Scherer, Duncan Epping, and Jason Boche all served as technical editors for various books that I’ve written; their contributions and efforts helped make those books better.

To all of you: thank you.

The list could go on and on and on; if I didn’t expressly call your name out, please don’t feel bad. My point, though, is this: have you taken the time recently to thank others in your life who have contributed to your success?


In this post I’m going to show you how to make JSON (JavaScript Object Notation) output more readable using a BBEdit Text Filter. This post comes out of some recent work I’ve done in learning how to interact with various REST APIs. My initial REST API explorations have focused on the NVP/NSX API, but I plan to soon expand my explorations to include other APIs, like OpenStack.

(Aside: You might be wondering why I’m exploring REST APIs and stuff like JSON. I believe that having a better understanding of the APIs these products use will help drive a deeper and more complete understanding of the underlying products. I could be wrong…time will tell.)

BBEdit Text Filters, as you may already know, simply take the current text (or selected text) in BBEdit, do something to it, and then output the result. The “do something to it” is, of course, the magic. You can, for example—and this is something that I do—use the MultiMarkdown command-line executable to transform a (Multi)Markdown document in BBEdit to HTML. All that is required is to place the script (or a link to the script) in the ~/Library/Application Support/BBEdit/Text Filters directory. The script just needs to accept input on STDIN, transform it in whatever way you want, and spit out the results on STDOUT. BBEdit does the rest.

In this case, you’re going to use an extremely simple Bash shell script containing a single Python command to transform JSON-serialized output into a more human-readable format.

First, let’s take a look at some JSON-serialized output. Here’s the output from an API call to NVP/NSX to list the logical switches:

(To view the information if the code block isn’t available, click here.)
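To give a rough idea, a response along those lines might look something like this (the names and UUIDs here are made up for illustration, not actual output from my lab, and the exact fields will vary):

    {"result_count": 2, "results": [{"display_name": "logical-switch-1", "type": "LogicalSwitchConfig", "uuid": "11111111-2222-3333-4444-555555555555"}, {"display_name": "logical-switch-2", "type": "LogicalSwitchConfig", "uuid": "66666666-7777-8888-9999-000000000000"}]}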

It is human-readable, but just barely. How can we make this a bit easier for humans to read and parse? Well, it turns out that OS X (and probably most recent flavors of Linux) comes with a version of Python pre-installed, and the pre-installed version of Python comes with the ability to “prettify” (make more human readable) JSON text. (In the case of OS X 10.8 “Mountain Lion”, the pre-installed version of Python is version 2.7.2.) With grateful thanks to the folks on Twitter who introduced me to this trick, the command you would use in this instance is as follows:

python -m json.tool
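For example, you can test it right from a terminal by piping a small snippet of JSON (made-up data) through it:

    echo '{"result_count": 1, "results": [{"display_name": "test"}]}' | python -m json.tool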

Very simple, right? To turn this into a BBEdit Text Filter, we need only wrap this into a very simple shell script, such as this:

(If you aren’t able to see the code block above, please click here.)
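As a rough sketch, the filter script need be nothing more than this (it assumes python is in your PATH, and you’ll probably need to mark the script executable with chmod +x):

    #!/usr/bin/env bash
    # BBEdit hands the current text (or selection) to this script on STDIN and
    # replaces it with whatever comes back on STDOUT, so all the filter has to
    # do is pass the text through Python's built-in json.tool module.
    python -m json.tool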

Place this script (or a link to this script) in the ~/Library/Application Support/BBEdit/Text Filters directory, restart BBEdit, and you should be good to go. Now you can copy and paste the output from an API call like the output above, run it through this text filter, and get output that looks like this:

(Click here if the code block above isn’t visible.)
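Using that same made-up snippet from earlier, the prettified result would look something like this (the copy of json.tool in Python 2.7 sorts the keys as well):

    {
        "result_count": 2,
        "results": [
            {
                "display_name": "logical-switch-1",
                "type": "LogicalSwitchConfig",
                "uuid": "11111111-2222-3333-4444-555555555555"
            },
            {
                "display_name": "logical-switch-2",
                "type": "LogicalSwitchConfig",
                "uuid": "66666666-7777-8888-9999-000000000000"
            }
        ]
    }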

Given that I’m new to a lot of this stuff, I’m sure that I have probably overlooked something along the way. There might be better and/or more efficient ways of handling this, or better tools to use. If you have any suggestions on how to improve any of this—or just suggestions on how I might do better in my API explorations—feel free to speak out in the comments below.


As part of some work I’ve been doing to stretch myself and my boundaries, I’ve recently started diving a bit deeper into working with REST APIs. As I started this exploration, one thing that kept coming up again and again was JSON. In this post, I’m going to try to provide an introduction to JSON for non-programmers (like me).

Let’s start with the acronym: “JSON” stands for “JavaScript Object Notation”. It’s a lightweight, text-based format, and is frequently used in conjunction with REST APIs and web-based services. You can find more details on the specifics of the JSON format at the JSON web site.

The basic structures of JSON are:

  • A set of name/value pairs
  • An ordered list of values

Now, that sounds simple enough, but let’s look at some examples to really bring this home. The examples that I’ll use are taken from API responses in my virtualized NVP/NSX lab using the NVP/NSX API.

First, here’s an example of a set of name/value pairs (I’ve taken the liberty of making the raw output from the API calls more readable for clarity’s sake; raw JSON data typically wouldn’t have line returns or whitespace):

(Click here if you don’t see a code block above.)
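A representative example looks something like this (the field names and values are made up for illustration rather than copied from my lab, but the shape matches what I describe below):

    {
        "result_count": 3,
        "results": [
            {
                "display_name": "ls-web",
                "type": "LogicalSwitchConfig",
                "uuid": "11111111-aaaa-bbbb-cccc-dddddddddddd"
            },
            {
                "display_name": "ls-app",
                "type": "LogicalSwitchConfig",
                "uuid": "22222222-aaaa-bbbb-cccc-dddddddddddd"
            },
            {
                "display_name": "ls-db",
                "type": "LogicalSwitchConfig",
                "uuid": "33333333-aaaa-bbbb-cccc-dddddddddddd"
            }
        ]
    }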

Let’s break that down a bit:

  • Each object is surrounded by curly braces (referred to just as braces by some). The entire JSON response is itself an object—at least this is how I view it—so it is surrounded by braces. It contains three objects, which are part of the “results” array (more on that in just a second).
  • Each object may have multiple name/value pairs separated by a comma. Name/value pairs may represent a single value (as with “result_count”) or multiple values in an array (as with “results”). So, in this example, there are two name/value pairs: one named “result_count” and one named “results”. Note the use of the colon separating the name from the associated value(s).
  • The second item (object, if you will) in the API response is named “results”, but note that its value isn’t a single value; rather, it’s an array of values. Arrays are surrounded by brackets, and each element/item in the array is separated by a comma. In this particular case—and this will vary from API to API, as far as I know—note that the “result_count” value tells you exactly how many items are in the “results” array, making it incredibly easy to iterate through the items in the array.
  • In the “results” array, there are three items (or objects). Each of these items—each surrounded by braces—has three name/value pairs, separated by commas, with a colon separating the name from the value.

As you can see, JSON has a very simple structure and format, once you’re able to break it down.

There are numerous other examples and breakdowns of JSON around the web; here are a few that I found helpful in my education (which is still ongoing):

  • JSON Basics: What You Need to Know
  • JSON: What It Is, How It Works, & How to Use It (This one gets a bit deep for non-programmers, but you might find it helpful nevertheless.)
  • JSON Tutorial

You may also see the term “JSON-serialized”; this generally refers to data that has been formatted as JSON. To JSON-serialize data means to put it into JSON format; to deserialize JSON data means to parse (or deconstruct) the JSON output into some other format.
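A quick Python sketch may make the distinction clearer (the data here is made up, and Python is what we’ve already been leaning on for the json.tool trick anyway):

    import json

    # A native Python structure: a dictionary containing a list of dictionaries.
    data = {"result_count": 1, "results": [{"display_name": "ls-web"}]}

    # Serializing turns the Python structure into a JSON-formatted string...
    serialized = json.dumps(data)

    # ...and deserializing parses that JSON text back into a Python structure.
    parsed = json.loads(serialized)

    print(serialized)
    print(parsed["results"][0]["display_name"])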

I’m sure there’s a great deal more that could (and perhaps should) be said about JSON, but I did say this would be a non-programmer’s introduction to JSON. If you have any questions, thoughts, suggestions, or clarifications, please feel free to speak up in the comments below.

UPDATE: I’ve edited the text above based on some feedback in the comments. Thanks for your feedback; the post is better for it!


In this post, I’ll share my thoughts on the Timbuk2 Commute messenger bag. It was about two months ago that I tweeted that I bought a new bag:

@scott_lowe: I picked up a new @timbuk2 messenger bag yesterday. Looking forward to seeing how well it works on my next business trip.

The bag I ended up purchasing was the Timbuk2 Commute in black. Since I bought it in early September (just after returning from San Francisco for VMworld), I’ve traveled with it to various points in the US, to Canada, and most recently to Barcelona for VMworld EMEA. Here are my thoughts on the bag now that I’ve logged a decent amount of travel with it:

  • Although it’s a “small” bag—the smallest size offered in the Commute line—I’ve found that it has plenty of space to carry the stuff that I regularly need. I regularly carry my 13″ MacBook Air, my full-size iPad, my Bose QC15 headphones, all my various power adapters/chargers/cables, a small notebook, and I still have some room left over. (If you have a larger laptop, you’ll need a larger bag; the small Commute only accommodates up to a 13″ laptop.)
  • The default shoulder pad that comes with the bag is woefully inadequate. I strongly recommend getting the Deluxe Strap Pad. My first couple of trips were with the default pad, and after a few hours the bag’s presence was noticeable. With the Deluxe Strap Pad, carrying my bag for a few hours is a breeze, and carrying it for 12 hours a day during VMworld EMEA was bearable (I can’t imagine doing it with the default shoulder pad.)
  • The TSA-friendly “lie flat” design doesn’t necessarily lie very flat, especially if the main compartment is full. This can make it a little challenging in the security line, but this is a very minor nit overall. The design does, however, make it super easy to get to my laptop (or anything else in that compartment).
  • While getting to my laptop is easy, getting to stuff in the bag isn’t quite so easy. (This is probably by design.) If you have smaller items in your bag that you’re going to need to get out and put back in frequently, the clips+velcro on the Commute’s flap make this a bit more work. Again, this is probably by design (to prevent other people from being able to easily get into your bag).
  • The zip-open rear compartment has a space on one side for the laptop; here my 13" MacBook Air (with a Speck case) fits very well. On the opposite side is a pair of slightly smaller compartments separated by a velcro divider. These smaller compartments are just a tad too small to hold a full-size iPad, though I suspect an iPad mini (or similarly-sized tablet) would fit quite well there.
  • A full-size iPad does fit well, however, in the pocket on the inside of the main compartment.
  • The complement of pockets and organizers inside the main compartment makes it easy to store (and subsequently find) all the small things you often need when traveling. In my case, the pockets and organizers easily keep my chargers and charging cables, pens, business cards, and such neatly organized and accessible.

Overall, I’m pretty happy with the bag, and I would recommend it to anyone who travels relatively light and is looking for a messenger-style bag. This bag wouldn’t have worked in years past when I was doing implementations/installations at customer sites (you invariably end up having to carry a ton of cables, tools, software, connectors, etc. in situations like that), but now that I’m able to focus on core essentials—laptop, tablet, notebook, and limited accessories—this bag is perfect.

If you have any additional feedback on the Timbuk2 Commute bag you’d like to share, I’d love to hear it (and I’m sure other readers would as well). Feel free to add your thoughts in the comments below.


Shortly after I published Technology Short Take #27, a reader asked me on Twitter how I managed the information that goes into each Technology Short Take (TST) article. Although I’ve blogged about my productivity setup before, that post is now over two years old and horribly out of date. Given that I need to post a more current description of my tools and workflow, I thought I’d take a brief moment to answer the reader’s question. Here’s how I go about building the Technology Short Take articles.

I’ve mentioned before that I have three “layers” of tools: a consumption layer, an organization layer, and a creation layer. I’ll break down the process according to these layers.

The Consumption Layer

This is where I go about actually finding the content that gets pulled into a TST post. There’s nothing terribly unique here; I have a collection of RSS feeds to which I subscribe, and I get content from people I follow on Twitter. I will also visit various Usenet newsgroups and certain IRC channels (only on irc.freenode.net) from time to time.

If you’re interested in seeing the RSS feeds to which I subscribe, here’s an OPML list of all my subscriptions.

The majority of the content I consume is web-based, so when I find an article that I want to use in a TST post, I’ll save that as a Safari web archive. I wish there was a more platform-independent method, but as yet I haven’t found a good solution. Once I’ve saved a page for future use, then we move into the organization layer.

The Organization Layer

As content is discovered via any of the various consumption layer tools, I then need to get that content “sucked” into my primary organization layer tool. I use a really, really fancy tool—it’s called the file system.

When I save a web page that I’m planning on including in a TST article, I generally save it, by default, to the Desktop. I have a program named Hazel that watches the Desktop and Downloads folders for web archive files, and automatically moves them to a WebArchives folder. From there, I use a couple of saved Spotlight searches to identify newly-created web archives that don’t have a source URL or don’t have any OpenMeta tags assigned. For these newly-created web archives, I use the Spotlight comments field to store the source URL, and I use an application named Tagger to assign OpenMeta tags.
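Conceptually, the Hazel rule boils down to something like the following shell command (just a rough sketch to show the idea—Hazel itself watches the folders and applies the rule automatically, and the destination path here is an assumption on my part):

    # Sweep saved Safari web archives out of Desktop and Downloads and into a
    # central WebArchives folder (the exact destination path is assumed).
    mv ~/Desktop/*.webarchive ~/Downloads/*.webarchive ~/Documents/WebArchives/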

Once a web archive has its source URL and OpenMeta tags assigned, I have a group of saved Spotlight searches that group files together by topic: virtualization, storage, OpenStack, Open vSwitch, etc. This makes it super easy for me to find web archives—or other files—related to a particular topic. All these saved searches are built on queries involving the OpenMeta tags.
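If you’d rather work from a terminal than from saved searches, the same sort of queries can be run with mdfind; this is a sketch that assumes OpenMeta exposes its tags to Spotlight as kMDItemOMUserTags (which is my understanding), and the tag and URL fragment are just examples:

    # Find everything tagged "OpenStack" via the OpenMeta Spotlight attribute.
    mdfind 'kMDItemOMUserTags == "OpenStack"'

    # The source URL lives in the Spotlight comments, indexed as kMDItemFinderComment.
    mdfind 'kMDItemFinderComment == "*openstack.org*"'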

Content will remain here until either a) I use it in a TST article and no longer need it; or b) I use it in a TST article but feel it’s worth keeping for future reference. I might keep content for quite a long time before I use it. Case in point: the Q-tools stuff from Dave Gray that eventually found its way into some of my VMUG presentations was something I found in 2009 (it was published in 2008).

The Creation Layer

After I’ve been collecting content for a while, a scheduled, recurring OmniFocus action pops up reminding me to write the next TST post. At this point, I go back to my organization layer tools (saved Spotlight searches and content folders) to pull out the various pieces of information that I want to include. I write the post in Markdown using TextMate, building off a skeleton template that has all the content headers already in place.

Using the saved searches I described above, I’ll search through my content to see what I want to include in the TST post. When an item is included in a TST blog post, I’ll write my thoughts about the article or post, then grab the source URL from the Spotlight comments to make a hyperlink to the content. If the content is useful and informative, I might keep it around; otherwise, I’ll generally delete the saved web archive or bookmark file. I repeat this process, going through all my saved content, until I feel that the TST post is sufficiently full of content.

Then, because it’s all written in Markdown, I convert the post to HTML and actually publish it to the site using the excellent MarsEdit application. TextMate makes this incredibly easy with just a few keystrokes.
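For the curious, the same conversion is a one-liner from the command line with the MultiMarkdown executable (the file names below are just placeholders):

    # Convert the finished Markdown draft to HTML (MultiMarkdown writes to STDOUT).
    multimarkdown tech-short-take.md > tech-short-take.html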

And that’s it! That’s the “mystery” behind the Technology Short Take articles. Feel free to post any questions or thoughts you have about my workflow and tools in the comments below. Courteous comments are always welcome.


Every now and then, it’s kind of fun to look back at the content that I’ve generated in my 7 years of blogging here (soon to be 8 years). With that in mind, here are some “posts from the past” for early December.

4 Years Ago (Early December 2008)

Installing the VI Power Documenter
Continuing the FCoE Discussion

3 Years Ago (Early December 2009)

What is SR-IOV?
Snow Leopard, Time Machine, and Iomega ix4-200d

2 Years Ago (Early December 2010)

VLAN Trunking Between Nexus 5010 and Dell PowerConnect Switches
Using Device Aliases on a Cisco MDS

1 Year Ago (Early December 2011)

Some Initial MPLS Reading
Examining VXLAN
Revisiting VXLAN and Layer 3 Connectivity


Choosing the Right Tool

Kyle Mestery of Cisco recently shared a link on Twitter about Marco Arment’s choice to move back to a dual Mac setup. The article started me thinking along two parallel lines:

  1. First, it got me to thinking about my own Mac setup.
  2. Second, I wondered if there was a similar parallel about the choice of tools in data centers today.

Allow me to explain. In his article, Marco talks about how he abandoned his dual Mac setup—in which he was using a 13″ MacBook Air and a Mac Pro—for a single Mac setup using a really beefed up 15″ MacBook Pro. He had hoped to reduce the overhead of managing multiple Macs by consolidating to a single Mac that would bring together the best attributes of both. What he found a year later was that his attempt to use one tool (the 15″ MacBook Pro) ended up being less beneficial, not more. Instead of getting the best of both worlds, he inherited the drawbacks of both. To that end, he’s now moving back to a dual Mac setup.

(Aside: Just as a point of clarification for those who might be unfamiliar with Apple’s product lines: the MacBook Air is a highly portable ultraslim notebook with limited expandability; the MacBook Pro is Apple’s more expandable and more powerful notebook; and the Mac Pro is a workstation-class desktop computer with oodles of CPU cores and gobs of RAM.)

The first thought process that occurred to me regarded my own Mac setup. I recently purchased a Mac Pro because I needed a computer that could provide more raw compute capacity than my laptop possessed. Reading Marco’s article validated (in a way) my thinking. However, it also challenged me to consider the type and configuration of laptop that I use. I migrated from a 15″ MacBook Pro to a smaller 13″ MacBook Pro last year, but I still insisted on a MacBook Pro with 8GB of RAM instead of a lightweight MacBook Air with only 4GB of RAM. Don’t get me wrong; I’m extremely happy with my MBP. It does lead me to believe, though, that my next laptop needs to be one that is optimized for its role. What do I mean by that? I chose a Mac Pro because I needed CPU power and lots of RAM. The Mac Pro was the right tool for that job. Similarly, when I select my next laptop, I need to prioritize what I need out of a laptop (weight, size, mobility) and choose the right tool for the job. Instead of choosing a laptop based on how expandable it is, perhaps I should be looking at how suited it is for a highly mobile worker.

There’s a second train of thought here as well, and this train of thought pertains to the tools that we, as IT professionals, choose for our data centers. We also need to make sure that we are selecting the right tool for the job. Marco’s experience shows that using a single tool to perform multiple functions doesn’t always work as well as one might think. He made his initial decision in an effort to reduce complexity—an admirable goal, I’d say. I believe that many IT professionals also strive to reduce complexity in their data centers, and many IT professionals probably do that by reducing the number of technologies, products, or vendors in their data center. But are we sacrificing the functionality of our data centers as a result? When we are choosing on the basis of reducing complexity instead of choosing the right tool for the job, what are we losing? What are we giving up? Instead of trying to shoehorn a solution we already have into a role for which it really isn’t suited simply to “reduce complexity,” shouldn’t we focus instead on choosing the right tool for the job? Shouldn’t we select tools that are optimized to perform the function we need them to perform? Yes, we do need standards and guidelines, and I’m not saying that we shouldn’t strive to keep complexity from overwhelming the data center. But what should be our primary driver for tool selection—the reduction of complexity via the re-use of a less-than-optimal tool, or the selection of a tool optimized for the function it needs to perform?

I’d love to hear your thoughts in the comments.


In case you hadn’t heard, Spousetivities—the spouse activities that had their genesis at VMworld 2008 in Las Vegas—has expanded once again! Last year, Spousetivities expanded to include EMC World, and just last week Crystal concluded her second year of activities at EMC World 2012 in Las Vegas. This year, Spousetivities expands to include not only VMworld and EMC World, but also Dell Storage Forum 2012, located this year in Boston, MA.

This is a great opportunity for Dell Storage Forum conference attendees to bring along their families to Boston and have confidence that their families will be well cared for and offered a great set of activities. Here’s a quick peek at what Spousetivities has planned for Dell Storage Forum:

  • A whale watching tour (get to know Zeppelin, Regulus, Ember, Eden, Tear, and the other whales!)
  • Private trolley and walking tour of MIT
  • A sight-seeing trolley tour
  • Trolley and walking wine tour
  • Boston Freedom Trail sight-seeing tour
  • Private lunch cruise on a paddle wheel boat
  • Private trolley and walking tour of Harvard

If you ask me, that’s a pretty impressive line-up of activities. Dell Storage Forum 2012 starts in only a couple of weeks, so hurry and visit the registration page to sign up for your activities. Also, help spread the word to anyone you might know who is also attending Dell Storage Forum. Given that Crystal and Spousetivities are new to the Dell community, I know that she could use all the help she can get in spreading the word about Spousetivities at Dell Storage Forum this year.

If you’re headed to Boston for Dell Storage Forum and considering taking your family/spouse/partner with you, you definitely need to take a look at the list of activities. Your family/spouse/partner is worth it!

