General

This category contains general posts about blogging or this site.

Crossing the Threshold

Last week while attending the CloudStack Collaboration Conference in my home city of Denver, I had a bit of a realization. I wanted to share it here in the hopes that it might serve as an encouragement for others out there.

Long-time readers know that one of my projects over the last couple of years has been to become more fluent in Linux (refer back to my 2012 project list and my 2013 project list). I gave myself a B+ for my efforts last year, feeling that I had made good progress over the course of the year. Even so, I felt like there was still so much that I needed to learn. As so many of us are inclined to do, I was more focused on what I hadn’t yet learned instead of taking a look at what I had learned.

This is where last week comes in. Before the conference started, I participated in a couple of “mini boot camps” focused on CloudStack and related tools/clients/libraries. (You may have seen some of my tweets about tools like cloudmonkey, Apache libcloud, and awscli/ec2stack.) As I worked through the boot camps, I could hear the questions that other attendees were asking as well as the tasks with which others were struggling. Folks were wrestling with what I thought were pretty simple tasks; these were not, after all, very complex exercises. So what if the lab guide wasn’t complete or correct? You should be able to figure it out, right?

Then it hit me. I’m a Linux guy now.

That’s right—I had crossed the threshold between “working on being a Linux guy” and “being a Linux guy.” It’s not that I know everything there is to know (far from it!), but that my base of knowledge had finally accrued to the point where—upon closer inspection—I realized I was fluent enough to perform most common tasks without a great deal of effort. I knew enough to know what to do when something didn’t work or wasn’t configured properly, and the general direction in which to look when trying to determine exactly what was going on.

At this point you might be wondering, “What does that have to do with encouraging me?” That’s a fair question.

As IT professionals—especially those on the individual contributor (IC) track instead of the management track—we are tasked with having to constantly learn new products, new technologies, and new methodologies. Because the learning never stops (and that isn’t a bad thing, in my humble opinion), we tend to focus on what we haven’t mastered. We forget to look at what we have learned, at the progress that we have made. Maybe, like me, you’re on a journey of learning and education to move from being a specialist in one type of technology to a practitioner of another type. If that’s the case, perhaps it’s time you stop saying “I will be a <new technology> person” and say “I am a <new technology> person.” Perhaps it’s time for you to cross the threshold.

For the last couple of years, I’ve been sharing my annual “projects list” and then grading myself on the progress (or lack thereof) on the projects at the end of the year. For example, I shared my 2012 project list in early January 2012, then gave myself grades on my progress in early January 2013.

In this post, I’m going to grade myself on my 2013 project list. Here’s the project list I posted just under a year ago:

  1. Continue to learn German.
  2. Reinforce base Linux knowledge.
  3. Continue using Puppet for automation.
  4. Reinforce data center networking fundamentals.

So, how did I do? Here’s my assessment of my progress:

  1. Continue to learn German: I have made some progress here, though certainly not the progress that I wanted to make. I’ve incorporated the use of Memrise, which has been helpful, but I still haven’t made the progress I’d like. If anyone has suggestions for additional tools, I’m open to your feedback. Grade: D (below average)

  2. Reinforce base Linux knowledge: I’ve been suggesting to VMUG attendees that they needed to learn Linux, as it’s popping up all over the place in all sorts of roles. In my original 2013 project list, I said that I was going to focus on RHEL and RHEL variants, but over the course of the year ended up focusing more on Debian and Ubuntu instead (due to more up-to-date packages and closer alignment with OpenStack). Despite that shift in focus, I think I’ve made decent progress here. There’s always room to grow, of course. Grade: B (above average)

  3. Continue using Puppet for automation: I’ve made reasonable progress here, expanding my use of Puppet to include managing Debian/Ubuntu software repositories (see here and here for examples), managing SSH keys, managing Open vSwitch (OVS) via a third-party module, and—most recently—exploring the use of Puppet with OpenStack (no blog posts—yet). There’s still quite a bit I need to learn (some of my manifests don’t work quite as well as I’d like), but I did make progress here. Grade: C (average)

  4. Reinforce data center networking fundamentals: Naturally, my role at VMware has me spending a great deal of time on how network virtualization affects DC networking, and this translated into some progress on this project. While I gained solid high-level knowledge on a number of DC networking topics, I think I was originally thinking I needed more low-level “in the weeds” knowledge. In that regard, I don’t feel like I did well; on the flip side, though, I’m not sure whether I really needed more low-level “in the weeds” knowledge. This highlights a key struggle for me personally: how to balance the deep, “in the weeds” knowledge with the high-level knowledge. Suggestions on how others have overcome this challenge are welcome. Grade: C (average)

In summary: not bad, but could have been better!

What’s not reflected in this project list is the progress I made with understanding OpenStack, or my deepened level of knowledge of OVS (just browse articles tagged OVS for an idea of what I’ve been doing in that area).

Over the next week or two, I’ll be reflecting on my progress with my 2013 projects and thinking about what projects I should be taking on in 2014. In the meantime, I would love to hear any feedback, suggestions, or thoughts on projects I should consider, technologies that should be incorporated, or learning techniques I should leverage. Feel free to speak up in the comments below.

I’m back with another “Reducing the Friction” blog post, this time to talk about training an e-mail spam filter. As you may recall if you read one of the earlier posts (may I suggest this one and this one?), I use the phrase “reducing the friction” to talk about streamlining and simplifying commonly-performed tasks so as to allow you to improve your own personal efficiency and/or adopt more efficient habits or workflows.

I recently moved my e-mail services off Google and onto Fastmail. (I described the reasons why I made this move in this post.) Fastmail has been great—I find their e-mail service to be much faster than what I was seeing with Google. The one drawback, though, has been an increase in spam. Not a huge increase, mind you, but enough to notice. Fastmail recommends that you help train your personal spam filter by moving messages into a folder you designate, and then telling their systems to consider everything in that folder to be spam. While that’s not hard, it’s also not very streamlined, so I took up the task of making it even easier and faster.

(Note that, as a Mac user, most of my tips focus on Mac applications. If you’re a user of another platform, I do apologize—but I can only speak about what I use myself.)

To help make this easier, I came up with this bit of AppleScript:

(Click here if you don’t see a code block above this paragraph.)
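In case the embedded code block isn’t visible here, the following is a minimal sketch of what such a script might look like; the mailbox and account names are placeholders (and the original script may differ), but the overall shape is the same:

-- Move the currently selected message(s) in Apple Mail into the folder
-- that Fastmail has been told to treat as spam for learning purposes.
-- The two property values below are placeholders; adjust them for your setup.
property spamMailboxName : "Learn Spam"
property accountName : "Fastmail"

tell application "Mail"
	set selectedMessages to selection
	repeat with theMessage in selectedMessages
		-- Move each selected message into the designated spam-learning folder
		move theMessage to mailbox spamMailboxName of account accountName
	end repeat
end tell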

To make this work on your system, all you need to do is change the two property declarations at the top to the correct values for your setup.

As you can tell by the comments in the code, this script was designed to be run from within Apple’s Mail app itself. To make that easy, I use a fantastic tool called FastScripts (highly recommended!). Using FastScripts, I can easily designate an application-specific shortcut key (I use Ctrl-Cmd-S) to invoke the script from within Apple Mail. Boom! Just like that, you now have a super-easy way to both speed up processing your e-mail and help train your personal spam filter. (Note: if you are also a Fastmail customer, refer to the Fastmail help screens while logged in to get more details on marking a folder for spam learning.)

I hope this helps someone out there!

I recently had the opportunity to conduct an e-mail interview with Jesse Proudman, founder and CEO of Blue Box. The interview is posted below. While it gets a bit biased toward Blue Box at times (he started the company, after all), there are some interesting points raised.

[Scott Lowe] Tell the readers here a little bit about yourself and Blue Box.

[Jesse Proudman] My name is Jesse Proudman. I love the Internet’s “plumbing”. I started working in the infrastructure space in 1997 to capitalize on my “gear head fascination” with the real-time nature of server infrastructure. In 2003, I founded Blue Box from my college dorm room to be a managed hosting company focused on overcoming the complexities of highly customized open source infrastructure running high traffic web applications. Unlike many hosting and cloud startups that evolved to become focused solely on selling raw infrastructure, Blue Box subscribes to the belief that many businesses demand fully rounded solutions vs. raw infrastructure that they must assemble.

In 2007, Blue Box developed proprietary container-based cloud technology for both our public and private cloud offerings. Blue Box customers combine bare metal infrastructure with on-demand cloud containers for a hybrid deployment coupled with fully managed support, including 24×7 monitoring. In Q3 of 2013, Blue Box launched OpenStack On-Demand, a hosted, single tenant private cloud offering. Capitalizing on our 10 years of infrastructure experience, this single-tenant hosted private cloud delivers on all six tenets today’s IT teams require as they evolve their cloud strategy.

Outside of Blue Box, I have an amazing wife and daughter, and I have a son due in February. I am a fanatical sports car racer and also am actively involved in the Seattle entrepreneurial community, guiding the next generation of young entrepreneurs through the University of Puget Sound Business Leadership, 9Mile Labs and University of Washington’s Entrepreneurial mentorship programs.

[SL] Can you tell me a bit more about why you see the continuing OpenStack API debate to be irrelevant?

[JP] First, I want to be specific that when I say irrelevant, I don’t mean unhealthy. This debate is a healthy one to be having. The sharing of ideas and opinions is the cornerstone of the open source philosophy.

But I believe the debate may be premature.

Imagine a true IaaS stack as a tree. Strong trees must have a strong trunk to support their many branches. For IaaS technology, the trunk is built of essential cloud core services: compute, networking and storage. In OpenStack, these equate to Nova, Neutron, Cinder and Swift (or Ceph). The branches then consist of everything else that evolves the offering and makes it more compelling and easier to use: everything that builds upon the strong foundation of the trunk. In OpenStack, these branches include services like Ceilometer, Heat, Trove and Marconi.

I consider API compatibility a branch.

Without a robust, reliable sturdy trunk, the branches become irrelevant, as there isn’t a strong supporting foundation to hold them up. And if neither the trunk, nor the branches are reliable, then the API to talk to them certainly isn’t relevant.

It is my belief that OpenStack needs to concentrate on strengthening the trunk before putting significant emphasis into the possibilities that exist in the upper reaches of the canopy.

OpenStack’s core is quite close. Grizzly was the first release many would define as “stable,” and Havana is the first release where that stability could convert into operational simplicity. But there still is room for improvement (particularly with projects like Neutron), so my argument is to focus on strengthening the core before exploring new projects.

Once the core is strong, the challenge becomes the development of the service catalogue. Amazon has over one hundred different services that can be integrated together into a powerful ecosystem. OpenStack’s service catalogue is still very young and evolving rapidly. Focus here is required to ensure this evolution is effective.

Long term, I certainly believe API compatibility with AWS (or Azure, or GCE) can bring value to the OpenStack ecosystem. Early cloud adopters who took to AWS before alternatives existed have technology stacks written to interface directly with Amazon’s APIs. Being able to provide compatibility for those prospects means preventing them from having to rewrite large sections of their tooling to work with OpenStack.

API compatibility provided via a higher-level proxy would allow for the breakout of maintenance to a specific group of engineers focused on that requirement (and remove that burden from the individual service teams). It’s important to remember that chasing external APIs will always be a moving target.

In the short run, I believe it wise to rally the community around a common goal: strengthen the trunk and intelligently engineer the branches.

[SL] What are your thoughts on public vs. private OpenStack?

[JP] For many, OpenStack draws much of its appeal from the availability of public, hosted private, and on-premise private implementations. While “cloud bursting” still lives more in the realms of fantasy than reality, the power of a unified API and service stack across multiple consumption models enables incredible possibilities.

Conceptually, public cloud is generally better defined and understood than private cloud. Private cloud is a relatively new phenomenon, and for many has really meant advanced virtualization. While it’s true private clouds have traditionally meant on-premise implementations, hosted private cloud technologies are empowering a new wave of companies who recognize the power of elastic capabilities, and the value that single-tenant implementations can deliver. These organizations are deploying applications into hosted private clouds, seeing the value proposition that it can bring.

A single-sourced vendor or technology won’t dominate this world. OpenStack delivers flexibility through its multiple consumption models, and that only benefits the customer. Customers can use that flexibility to deploy workloads to the most appropriate venue, and that only will ensure further levels of adoption.

[SL] There’s quite a bit of discussion that private cloud is strictly a transitional state. Can you share your thoughts on that topic?

[JP] In 2012, we began speaking with IT managers across our customer base, and beyond. Through those interviews, we confirmed what we now call the “six tenets of private cloud.” Our customers and prospects are all evolving their cloud strategies in real time, and are looking for solutions that satisfy these requirements:

  1. Ease of use – new solutions should be intuitively simple. Engineers should be able to use existing tooling, and ops staff shouldn’t have to go learn an entirely new operational environment.

  2. Deliver IaaS and PaaS – IaaS has become a ubiquitous requirement, but we repeatedly heard requests for an environment that would also support PaaS deployments.

  3. Elastic capabilities – the desire for the ability to grow and contract private environments much the same way they could in a public cloud.

  4. Integration with existing IT infrastructure – businesses have significant investments in existing data center infrastructure: load balancers, IDS/IPS, SAN, database infrastructure, etc. From our conversations, integration of those devices into a hosted cloud environment brought significant value to their cloud strategy.

  5. Security policy control – greater compliance pressures mean a physical “air gap” around their cloud infrastructure can help ensure compliance and provide peace of mind.

  6. Cost predictability and control – Customers didn’t want to need a PhD to understand how much they’ll owe at the end of the month. Budgets are projected a year in advance, and they needed to know they could project their budgeted dollars into specific capacity.

Public cloud deployments can certainly solve a number of these tenets, but we quickly discovered that no offering on the market today was solving all six in a compelling way.

This isn’t a zero sum game. Private cloud, whether it be on-premise or in a hosted environment, is here to stay. It will be treated as an additional tool in the toolbox. As buyers reallocate the more than $400 billion that’s spent annually on IT deployments, I believe we’ll see a whole new wave of adoption, especially when private cloud offerings address the six tenets of private cloud.

[SL] Thanks for your time, Jesse!

If anyone has any thoughts to share about some of the points raised in the interview, feel free to speak up in the comments. As Jesse points out, debate can be healthy, so I invite you to post your (courteous and professional) thoughts, ideas, or responses below. All feedback is welcome!

No Man is an Island

The phrase “No man is an island” is attributed to John Donne, an English poet who lived in the late 1500s and early 1600s. The phrase comes from his Meditation XVII, and was borrowed later by Thomas Merton to become the title of a book he published in 1955. In both cases, the phrase is used to discuss the interconnected nature of humanity and mankind. (Side note: the phrase “for whom the bell tolls” comes from the same origin.)

What does this have to do with IT? That’s a good question. As I was preparing to start the day today, I took some time to reflect upon my career; specifically, the individuals that have been placed in my life and career. I think all people are prone to overlook the contributions that others have made to their successes, but I think that IT professionals may be a bit more affected in this way. (I freely admit that, having spent my entire career as an IT professional, my view may be skewed.) So, in the spirit of recognizing that no man is an island—meaning that who we are and what we accomplish are intricately intertwined with those around us—I wanted to take a moment and express my thanks and appreciation for a few folks who have helped contribute to my success.

So, who has helped contribute to my achievements? The full list is too long to publish, but here are a few notables that I wanted to call out (in no particular order):

  • Chad Sakac had the opportunity to write the book that would become Mastering VMware vSphere 4, but gave it to me instead. (If you aren’t familiar with that story, read this.)
  • My wife, Crystal, is just awesome—she has enabled and empowered me in many, many ways. ‘Nuff said.
  • Forbes Guthrie allowed me to join him in writing VMware vSphere Design (as well as the 2nd edition), has been a great contributor to the Mastering VMware vSphere series, and has been a fabulous co-presenter at the last couple VMworld conferences.
  • Chris McCain (who recently joined VMware and has some great stuff in store—stay tuned!) wrote Mastering VMware Infrastructure 3, the book that I would revise to become Mastering VMware vSphere 4.
  • Andy Sholomon, formerly with Cisco and now with VCE, was kind enough to provide some infrastructure for me to use when writing Mastering VMware vSphere 5. Without it, writing the book would have been much more difficult.
  • Rick Scherer, Duncan Epping, and Jason Boche all served as technical editors for various books that I’ve written; their contributions and efforts helped make those books better.

To all of you: thank you.

The list could go on and on and on; if I didn’t expressly call your name out, please don’t feel bad. My point, though, is this: have you taken the time recently to thank others in your life that have contributed to your success?

Happy Thanksgiving 2013

In the United States, today is Thanksgiving—a day to give thanks for the many things we have in our lives. I just wanted to take a moment today to give thanks for a few of the most important things in my life:

  • My Lord and Savior, Jesus Christ
  • My beautiful and loving wife, Crystal
  • My awesome kids (Summer, Johnny, Elizabeth, Michael, Rhys, Sean, and Cameron)
  • My sons-in-law (Matt and Christopher)
  • My exchange student, Tim
  • My granddaughter, Lexi
  • My health
  • My career

If you celebrate Thanksgiving, I encourage you to take a moment and consider the important things in your life for which you should be thankful.

If you don’t celebrate Thanksgiving, that’s OK too—it might still be beneficial to reflect upon the good things in your life.

Happy Thanksgiving!

In this post I’m going to show you how to make JSON (JavaScript Object Notation) output more readable using a BBEdit Text Filter. This post comes out of some recent work I’ve done in learning how to interact with various REST APIs. My initial REST API explorations have focused on the NVP/NSX API, but I plan to soon expand my explorations to include other APIs, like OpenStack.

(As an aside, you might be wondering why I’m exploring REST APIs and stuff like JSON. I believe that having a better understanding of the APIs these products use will help drive a deeper and more complete understanding of the underlying products. I could be wrong…time will tell.)

BBEdit Text Filters, as you may already know, simply take the current text (or selected text) in BBEdit, do something to it, and then output the result. The “do something to it” is, of course, the magic. You can, for example—and this is something that I do—use the MultiMarkdown command-line executable to transform a (Multi)Markdown document in BBEdit to HTML. All that is required is to place the script (or a link to the script) in the ~/Library/Application Support/BBEdit/Text Filters directory. The script just needs to accept input on STDIN, transform it in whatever way you want, and spit out the results on STDOUT. BBEdit does the rest.

In this case, you’re going to use an extremely simple Bash shell script containing a single Python command to transform JSON-serialized output into a more human-readable format.

First, let’s take a look at some JSON-serialized output. Here’s the output from an API call to NVP/NSX to list the logical switches:

(To view the information if the code block isn’t available, click here.)
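The original code block isn’t reproduced here, but a compact, single-line response along these lines (the field names and values are illustrative placeholders, not the exact NVP/NSX output) gives the flavor:

{"result_count": 2, "results": [{"_href": "/ws.v1/lswitch/<uuid-1>", "display_name": "web-tier", "type": "LogicalSwitchConfig"}, {"_href": "/ws.v1/lswitch/<uuid-2>", "display_name": "app-tier", "type": "LogicalSwitchConfig"}]}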

It is human-readable, but just barely. How can we make this a bit easier for humans to read and parse? Well, it turns out that OS X (and probably most recent flavors of Linux) comes with a version of Python pre-installed, and the pre-installed version of Python comes with the ability to “prettify” (make more human-readable) JSON text. (In the case of OS X 10.8 “Mountain Lion”, the pre-installed version of Python is version 2.7.2.) With grateful thanks to the folks on Twitter who introduced me to this trick, the command you would use in this instance is as follows:

python -m json.tool

Very simple, right? To turn this into a BBEdit Text Filter, we need only wrap this into a very simple shell script, such as this:

(If you aren’t able to see the code block above, please click here.)
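If the code block doesn’t come through, the wrapper can be as simple as the following sketch; BBEdit feeds the document (or selection) to the script on STDIN and replaces it with whatever comes back on STDOUT:

#!/usr/bin/env bash
# Pretty-print JSON read from STDIN and write the result to STDOUT.
python -m json.tool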

Place this script (or a link to this script) in the ~/Library/Application Support/BBEdit/Text Filters directory, restart BBEdit, and you should be good to go. Now you can copy and paste the output from an API call like the output above, run it through this text filter, and get output that looks like this:

(Click here if the code block above isn’t visible.)
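For comparison, the compact illustrative response shown earlier would come back from the filter looking something like this:

{
    "result_count": 2,
    "results": [
        {
            "_href": "/ws.v1/lswitch/<uuid-1>",
            "display_name": "web-tier",
            "type": "LogicalSwitchConfig"
        },
        {
            "_href": "/ws.v1/lswitch/<uuid-2>",
            "display_name": "app-tier",
            "type": "LogicalSwitchConfig"
        }
    ]
}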

Given that I’m new to a lot of this stuff, I’m sure that I have probably overlooked something along the way. There might be better and/or more efficient ways of handling this, or better tools to use. If you have any suggestions on how to improve any of this—or just suggestions on how I might do better in my API explorations—feel free to speak out in the comments below.

As part of some work I’ve been doing to stretch myself and my boundaries, I’ve recently started diving a bit deeper into working with REST APIs. As I started this exploration, one thing that kept coming up again and again was JSON. In this post, I’m going to try to provide an introduction to JSON for non-programmers (like me).

Let’s start with the acronym: “JSON” stands for “JavaScript Object Notation”. It’s a lightweight, text-based format, and is frequently used in conjunction with REST APIs and web-based services. You can find more details on the specifics of the JSON format at the JSON web site.

The basic structures of JSON are:

  • A set of name/value pairs
  • An ordered list of values

Now, that sounds simple enough, but let’s look at some examples to really bring this home. The examples that I’ll use are taken from API responses in my virtualized NVP/NSX lab using the NVP/NSX API.

First, here’s an example of a set of name/value pairs (I’ve taken the liberty of making the raw output from the API calls more readable for clarity’s sake; raw JSON data typically wouldn’t have line returns or whitespace):

(Click here if you don’t see a code block above.)
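Since the code block may not be visible, here is an illustrative response shaped like the one described below; the field names are examples rather than the exact NVP/NSX output:

{
    "result_count": 3,
    "results": [
        {
            "_href": "/ws.v1/lswitch/<uuid-1>",
            "_schema": "/ws.v1/schema/LogicalSwitchConfig",
            "type": "LogicalSwitchConfig"
        },
        {
            "_href": "/ws.v1/lswitch/<uuid-2>",
            "_schema": "/ws.v1/schema/LogicalSwitchConfig",
            "type": "LogicalSwitchConfig"
        },
        {
            "_href": "/ws.v1/lswitch/<uuid-3>",
            "_schema": "/ws.v1/schema/LogicalSwitchConfig",
            "type": "LogicalSwitchConfig"
        }
    ]
}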

Let’s break that down a bit:

  • Each object is surrounded by curly braces (referred to just as braces by some). The entire JSON response is itself an object—at least this is how I view it—so it is surrounded by braces. It contains three objects, which are part of the “results” array (more on that in just a second).
  • Each object may have multiple name/value pairs separated by a comma. Name/value pairs may represent a single value (as with “result_count”) or multiple values in an array (as with “results”). So, in this example, there are two name/value pairs: one named “result_count” and one named “results”. Note the use of the colon separating the name from the associated value(s).
  • The second item (object, if you will) in the API response is named “results”, but note that its value isn’t a single value; rather, it’s an array of values. Arrays are surrounded by brackets, and each element/item in the array is separated by a comma. In this particular case—and this will vary from API to API, as far as I know—note that the “result_count” value tells you exactly how many items are in the “results” array, making it incredibly easy to iterate through the items in the array.
  • In the “results” array, there are three items (or objects). Each of these items—each surrounded by braces—has three name/value pairs, separated by commas, with a colon separating the name from the value.

As you can see, JSON has a very simple structure and format, once you’re able to break it down.

There are numerous other examples and breakdowns of JSON around the web; here are a few that I found helpful in my education (which is still ongoing):

JSON Basics: What You Need to Know
JSON: What It Is, How It Works, & How to Use It (This one gets a bit deep for non-programmers, but you might find it helpful nevertheless.)
JSON Tutorial

You may also see the term “JSON-serialized”; this generally refers to data that has been formatted as JSON. To JSON-serialize data means to put it into JSON format; to deserialize JSON data means to parse (or deconstruct) the JSON output into some other format.
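To make that a bit more concrete, here’s a tiny sketch in Python (chosen only because it ships with OS X; most languages have an equivalent library):

import json

data = {"result_count": 1, "results": [{"display_name": "example"}]}

# Serialize: turn the native data structure into a JSON-formatted string.
text = json.dumps(data)

# Deserialize: parse the JSON string back into a native data structure.
parsed = json.loads(text)

print(text)
print(parsed["results"][0]["display_name"])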

I’m sure there’s a great deal more that could (and perhaps should) be said about JSON, but I did say this would be a non-programmer’s introduction to JSON. If you have any questions, thoughts, suggestions, or clarifications, please feel free to speak up in the comments below.

UPDATE: I’ve edited the text above based on some feedback in the comments. Thanks for your feedback; the post is better for it!

In this post, I’ll share my thoughts on the Timbuk2 Commute messenger bag. It was about two months ago that I tweeted that I bought a new bag:

@scott_lowe: I picked up a new @timbuk2 messenger bag yesterday. Looking forward to seeing how well it works on my next business trip.

The bag I ended up purchasing was the Timbuk2 Commute in black. Since I bought it in early September (just after returning from San Francisco for VMworld), I’ve traveled with it to various points in the US, to Canada, and most recently to Barcelona for VMworld EMEA. Here are my thoughts on the bag now that I’ve logged a decent amount of travel with it:

  • Although it’s a “small” bag—the smallest size offered in the Commute line—I’ve found that it has plenty of space to carry the stuff that I regularly need. I regularly carry my 13″ MacBook Air, my full-size iPad, my Bose QC15 headphones, all my various power adapters/chargers/cables, a small notebook, and I still have some room left over. (If you have a larger laptop, you’ll need a larger bag; the small Commute only accommodates up to a 13″ laptop.)
  • The default shoulder pad that comes with the bag is woefully inadequate. I strongly recommend getting the Deluxe Strap Pad. My first couple of trips were with the default pad, and after a few hours the bag’s presence was noticeable. With the Deluxe Strap Pad, carrying my bag for a few hours is a breeze, and carrying it for 12 hours a day during VMworld EMEA was bearable (I can’t imagine doing it with the default shoulder pad).
  • The TSA-friendly “lie flat” design doesn’t necessarily lie very flat, especially if the main compartment is full. This can make it a little challenging in the security line, but this is a very minor nit overall. The design does, however, make it super easy to get to my laptop (or anything else in that compartment).
  • While getting to my laptop is easy, getting to stuff in the bag isn’t quite so easy. (This is probably by design.) If you have smaller items in your bag that you’re going to need to get out and put back in frequently, the clips+velcro on the Commute’s flap make this a bit more work. Again, this is probably by design (to prevent other people from being able to easily get into your bag).
  • The zip-open rear compartment has a space on one side for the laptop; here my 13" MacBook Air (with a Speck case) fits very well. On the opposite side is a pair of slightly smaller compartments separated by a velcro divider. These smaller compartments are just a tad too small to hold a full-size iPad, though I suspect an iPad mini (or similarly-sized tablet) would fit quite well there.
  • A full-size iPad does fit well, however, in the pocket on the inside of the main compartment.
  • The complement of pockets and organizers inside the main compartment makes it easy to store (and subsequently find) all the small things you often need when traveling. In my case, the pockets and organizers easily keep my chargers and charging cables, pens, business cards, and such neatly organized and accessible.

Overall, I’m pretty happy with the bag, and I would recommend it to anyone who travels relatively light and is looking for a messenger-style bag. This bag wouldn’t have worked in years past when I was doing implementations/installations at customer sites (you invariably end up having to carry a ton of cables, tools, software, connectors, etc. in situations like that), but now that I’m able to focus on core essentials—laptop, tablet, notebook, and limited accessories—this bag is perfect.

If you have any additional feedback on the Timbuk2 Commute bag you’d like to share, I’d love to hear it (and I’m sure other readers would as well). Feel free to add your thoughts in the comments below.

IDF 2013 Summary and Thoughts

I’m back home in Denver after spending a few days in San Francisco at Intel Developer Forum (IDF) 2013, so I thought I’d take a few minutes to sit down and share a summary of the event and my thoughts.

First, here are links to all the liveblogging I did while at the conference:

IDF 2013 Keynote, Day 1:
http://blog.scottlowe.org/2013/09/10/idf-2013-keynote-day-1/

Enhancing OpenStack with Intel Technologies for Public, Private, and Hybrid Cloud:
http://blog.scottlowe.org/2013/09/10/idf-2013-enhancing-openstack-with-intel-technologies/

IDF 2013 Keynote, Day 2:
http://blog.scottlowe.org/2013/09/11/idf-2013-keynote-day-2/

Rack Scale Architecture for Cloud:
http://blog.scottlowe.org/2013/09/11/idf-2013-rack-scale-architecture-for-cloud/

Virtualizing the Network to Enable a Software-Defined Infrastructure (SDI):
http://blog.scottlowe.org/2013/09/11/idf-2013-virtualizing-the-network-to-enable-sdi/

The Future of Software-Defined Networking with the Intel Open Network Platform Switch Reference Design:
http://blog.scottlowe.org/2013/09/12/idf-2013-future-of-sdn-with-the-intel-onp-switch-reference-design/

Enabling Network Function Virtualization and Software Defined Networking with the Intel Open Network Platform Server Reference Architecture Design:
http://blog.scottlowe.org/2013/09/12/idf-2013-enabling-nfvsdn-with-intel-onp-server-reference-design/

Overall, I enjoyed the event and found it quite useful. It appears to me that Intel has a three-prong strategy:

  1. Expand the footprint of IA (Intel Architecture, what everyone else calls x86, x86_64, or x64) CPUs by moving into adjacent markets
  2. Extend Intel’s reach with non-IA hardware (FM6000 series, QuickAssist Server Acceleration Card [QASAC])
  3. Use software, especially open source software, to drive more development toward Intel-based solutions

I’m sure there’s probably more, but those are the ones that really stand out. You can see some evidence of these moves:

  • Intel’s Open Network Platform (ONP) Switch reference design (aka “Seacliff Trail”) helps drive #1 and #2; it contains an IA CPU for programmability and leverages the FM6000 series for high-speed networking functionality
  • Intel’s ONP Server reference design (“Sunrise Trail”) pushes Intel-based servers into markets they haven’t traditionally seen (telco/networking roles), especially in conjunction with strategy #3 above (as shown by the strong use of Intel DPDK to optimize Sunrise Trail for SDN/NFV applications)
  • Intel’s Avoton and newly-announced Quark families push Intel into new markets (micro-servers, tablets, phones, sensors) where they haven’t traditionally been a major player

All in all, it will be very interesting to see how things play out. As others have said, it’s definitely an interesting time to be in technology.

As with other relevant industry conferences (like VMworld, for example), one of the values of IDF is engaging in conversations with other professionals. I had a few of these conversations while at IDF:

  • I spent some time talking with an Intel employee about Intel’s Cache Acceleration Software (CAS), which came out of the acquisition of a company called Nevex. I wasn’t even aware that Intel was doing cache acceleration. Intel CAS operates at the operating system (OS) level, serving as a file-level cache on Windows and a block-level cache (with file awareness) on Linux. It also supports caching to a remote SSD (in a SAN, for example) so that you can still use vMotion in vSphere environments. In the near future, they’re looking at supporting cache clustering with a cache coherence algorithm that would allow you to use SSDs/flash from multiple servers as a single cache.
  • I had a brief conversation with an Intel engineer who specialized in SSDs (didn’t get to hit him up for some free Intel DC S3700s, though). We touched on a number of different areas, but one interesting statistic that came out of the conversation was the reality behind “running out of writes” on an SSD. (This refers to the number of times you can write data to an SSD, which is made out to be a pretty big deal by some folks.) He spoke of a test that wrote 45GB an hour to an SSD; even at that rate, it would have taken multiple decades of use before the SSD would no longer have been able to perform writes.
  • Finally, I spent some time chatting with my friend Brian Johnson, who works in the networking group at Intel. There’s lots of cool stuff going on there, but—unfortunately—I can’t really discuss most of what he shared with me. Sorry folks! We did have an interesting discussion around the user experience, personal data, mobility, and ubiquitous connectivity. Who knows—maybe the next great startup will emerge out of our discussion! :-)

Anyway, that’s it for my IDF 2013 coverage. I hope that some of the information I shared proves useful to you in some way. As usual, courteous comments (with vendor disclosures, where needful) are always welcome.

(Disclosure: I work for VMware, but was invited to attend IDF at Intel’s expense.)
