
IDF 2014 Day 2 Recap

Following on from my IDF 2014 Day 1 recap, here’s a quick recap of day 2.

Data Center Mega-Session

You can read the liveblog here if you want all the gory details. If we boil it down to the essentials, it’s actually pretty simple. First, deliver more computing power in the hardware, either through the addition of FPGAs to existing CPUs or through the continued march of CPU power (via more cores or faster clock speeds or both). Second, make the hardware programmable, through standard interfaces. Third, expand the use of “big data” and analytics.

Technical Sessions

I attended a couple of technical sessions today, but didn’t manage to liveblog any of them. Sorry! I did tweet a few things from the sessions, in case you follow me on Twitter.

Expo Floor

I did have an extremely productive conversation regarding Intel’s rack-scale architecture (RSA) efforts. I pushed the Intel folks on the show floor to really dive into what makes up RSA, and finally got some answers that I’ll share in a dedicated post just as soon as I possibly can.

Also on the expo floor, I got my hands on some of the Intel optical transceivers and cables. The cables are really nice, and practically indestructible. I think this move by Intel will be good for optics in the data center.

Finally, I was also able to join an episode of Intel Chip Chat, a podcast that Intel records regularly, including at events like IDF. It was great fun getting to spend some time talking about VMware NSX and network virtualization.

Closing Thoughts

Overall, another solid day at IDF 2014. Lots of good technical information presented (which, unfortunately, I did not do a very good job capturing), and equally good technical information available on the show floor.

I’ll try to do a better job with the liveblogging tomorrow. Thanks for reading!


IDF 2014: Data Center Mega-Session

This is a liveblog of the Data Center Mega-Session from day 2 of Intel Developer Forum (IDF) 2014 in San Francisco.

Diane Bryant, SVP and GM of the Data Center Group takes the stage promptly at 9:30am to kick off the data center mega-session. Bryant starts the discussion by setting out the key drivers affecting the data center: new devices (and new volumes of devices) and new services (AWS, Netflix, Twitter, etc.). This is the “digital service economy,” and Bryant insists that today’s data centers aren’t prepared to handle the digital service economy.

Bryant posits that in the future (not-so-distant future):

  • Systems will be workload optimized
  • Infrastructure will be software defined
  • Analytics will be pervasive

Per Bryant, when you’re operating at scale, efficiency matters, and that will lead organizations to choose platforms selected specifically for the workload. This leads to a discussion of customized offerings, and Bryant talks about an announcement earlier in the summer that combined a Xeon processor and an FPGA (field-programmable gate array) on the same die.

Bryant then introduces Karl Triebes, EVP and CTO of F5 Networks, who takes the stage to talk about FPGAs in F5 products and how the joint Xeon/FPGA integrated solution from Intel plays into that role. F5’s products use Intel CPUs, but they also leverage FPGAs to selectively enable certain functions in hardware for improved performance. Triebes talks about how F5 and Intel have been working together for about 10 years, and discusses how F5 uses instruction set changes (they write their own microkernel—is that really sustainable moving forward?), new features, etc., which now includes leveraging the integrated FPGA in Intel’s new product.

The discussion now shifts to low-power system-on-chips (SoCs), such as the 64-bit Intel Atom. Bryant announces the third-generation SoC, named Xeon D and based on the Xeon platform. The Xeon D is sampling now. Bryant brings on stage Patty Kummrow, who is Director of Server SoC Development. Bryant and Kummrow talk about how Intel is addressing the need to customize the platform to address critical workloads: software (storage acceleration library, for example); in-package accelerator (FPGA, for example); SoC (potentially incorporating customer IP); and instruction set architectures (like the AES-NI instructions to enhance cryptographic functions). Kummrow shows off a Xeon D SoC and board.

Bryant shifts the discussion to software-defined infrastructure (SDI). The first area of SDI that Bryant focuses upon is storage, where growth is happening rapidly but storage is still siloed. Per Bryant, Intel believes that software-defined storage will address these concerns in three ways:

  • Intel Storage Acceleration Libraries (ISA-L)
  • Open source investments in Ceph and OpenStack Swift
  • Prototype SDS controller providing separation of control plane and data plane

Bryant now turns to software-defined networking (SDN) and network functions virtualization (NFV), and—quite naturally—points to the telcos as the prime example of why SDN/NFV are so important. According to Bryant, NFV originated in October 2011, and now (just three years later) there will be commercial deployments by companies like AT&T, Verizon Wireless, SK Telecom, and China Mobile. Bryant also talks about Intel’s Network Builders program, and talks about Nokia’s recent announcement (which is based on Intel Xeon).

Shifting now to the compute side, Bryant talks about Intel’s rack-scale architecture (RSA) efforts. RSA attempts to provide disaggregated pools of resources, a standard method of exposing hardware to software, and a composable infrastructure that can be assembled based on application resources.

Core to Intel’s RSA efforts is silicon photonics, which is key to enabling high-speed, low-latency connections between the disaggregated resources within an RSA approach. Silicon photonics will enable 100Gbps at greater than 300 meters, at a low cost and with high reliability. Also important, but sometimes overlooked, is that the silicon photonics cabling will be smaller and thinner.

Bryant introduces Andy Bechtolsheim, Founder and Chief Development Officer and Chairman of Arista Networks. Bryant gives Bechtolsheim the opportunity to talk about Arista’s recent launch of 100Gb networking and why 100Gb networking is important and necessary in modern data centers. Bryant states that she believes silicon photonics will be essential in delivering cost-effective 100Gb solutions, and that leads to a discussion of the CLR4 alliance. CLR4 is focused on delivering 100Gb over even greater distances.

Next, Bryant introduces Das Kamhout to talk about the need for an orchestration system in the data center. Kamhout talks about how advanced telemetry can be exposed to the orchestration system, which can make decisions based on that advanced telemetry. This will eventually lead to predictive actions. It boils down to a “watch, act, learn” feedback loop. The foundation is built on Intel technologies like Cache Acceleration, ISA-L, DPDK, QuickAssist, Cache QoS, and power and thermal awareness.
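Kamhout’s “watch, act, learn” loop can be sketched in a few lines of code. This is purely my own toy illustration—none of the names below come from Intel’s session—but it shows the shape of the feedback loop: an orchestrator repeatedly samples telemetry, acts on it, and refines its model for the next pass.

```python
# Toy sketch of the "watch, act, learn" feedback loop described above.
# All names here are my own illustration, not from Intel's session.
def run_loop(watch, act, learn, state, steps):
    for _ in range(steps):
        sample = watch()                        # watch: gather telemetry
        decision = act(sample, state)           # act: react to telemetry
        state = learn(sample, decision, state)  # learn: refine the model
    return state

# Trivial usage: keep a running average of a utilization metric and
# decide whether to "scale up" based on the current model.
readings = iter([10, 30, 50])
final = run_loop(
    watch=lambda: next(readings),
    act=lambda s, st: "scale-up" if s > st else "hold",
    learn=lambda s, d, st: (st + s) / 2,
    state=0,
    steps=3,
)
print(final)  # -> 33.75
```

In a real orchestration system the “learn” step is where the predictive piece Kamhout describes would live; here it’s just a running average.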

This “finally” leads into a discussion of pervasive analytics, which is one of the three key attributes of future data centers. Bryant states that pervasive analytics will help improve cities, discover treatments, reduce costs, and improve products—obviously all through data centers powered by Intel products. Intel’s focus is to enable analytics, and is working closely with the Hadoop community (specifically Cloudera).

According to Bryant, the new Intel E5-2600 v3 more than doubles the performance of Cloudera’s Hadoop distribution. Bryant brings out Mike Olson, Co-Founder and Chief Strategy Officer for Cloudera. Olson states that the consumer Internet “discovered” the idea of big data, but this is now taking off in all kinds of industries. Olson gives examples of hospitals instrumenting neonatal care units and cities gathering data on air quality more frequently and more comprehensively. Both Olson and Bryant reinforce the value of open source to “amplify” the effect of certain efforts. Olson again conflates big data and the Internet of Things (IoT), indicating that he believes that the two efforts are naturally coupled and will drive each other. Bryant next gives Olson the opportunity to talk about Cloudera Hadoop 5.2, which is optimized for Intel architectures to provide more performance and more security, which in turn will lead to accelerated adoption of Hadoop. Bryant reinforces the link between IoT/wearables and big data, mentioning again the “A-wear” program discussed yesterday in the keynote.

At this point Bryant wraps up the keynote and the session ends.


IDF 2014 Day 1 Recap

In case you hadn’t noticed, I’m at Intel Developer Forum (IDF) 2014 this week in San Francisco. Here’s a quick recap of day 1 (I should have published this last night—sorry for not getting it out sooner).

Day 1 Keynote

Here’s a liveblog of the IDF 2014 day 1 keynote.

The IDF keynotes are always a bit interesting for me. Intel has a very large consumer presence: PCs, ultrabooks, tablets, phones, 2-in-1/convertibles, all-in-one devices. Naturally, this is a big part of the keynote. I don’t track or get involved in the consumer space; my focus is on the data center. It is kind of fun to see all the stuff going on in the consumer space, though. There were no major data center-centric announcements yesterday (day 1), but I suspect there will be some today (day 2) in a mega-session with Diane Bryant (SVP and GM of the Data Center Group at Intel). I’ll be liveblogging that mega-session, so stay tuned for details.

Technical Sessions

I was able to hit two technical sessions yesterday and liveblogged both of them:

Both were good sessions. The first one, on virtualizing the network, did highlight an important development regarding hardware offloads for Geneve, the next-generation network overlay encapsulation protocol. Intel announced yesterday that the new XL710 network adapters (which are 40Gbps adapters) will support Geneve hardware offloads. This is the first hardware offload for Geneve of which I am aware, and it signals increased support for Geneve. (The XL710 also supports offloads for VXLAN and NVGRE.) That’s cool.

The second session was more of an introductory session than anything else, but was useful nevertheless. I was already familiar with all the concepts discussed regarding Docker and containers and virtualization, but I did pick up a few useful analogies from the speaker, Nick Weaver. Nick didn’t share anything specific to containers with regard to work Intel might be doing, but as I was thinking about this after the session I wondered if Intel might do some work around enabling containers to use the x86 privilege rings/protection rings. This would improve container security and move Linux containers closer to the Bromium “microvisor” architecture. Nick was also bullish on Intel SGX, something I’m going to have to explore in a bit more detail (I don’t know anything about SGX yet).

Coffee Chats

One of the nice things about attending IDF is that the Intel folks do a great job of connecting influencers (bloggers, press, analysts) with key folks within Intel to discuss announcements, trends, etc. This year, this took the form of “coffee chats”—informal discussions that were, sadly, lacking coffee.

In any case, the discussions wandered around a bit (as these sorts of things are wont to do). Here are a few thoughts that I gleaned from the discussions or that resulted from the discussions:

  • Intel does have/is working with very large customers on customized silicon; typically these are tweaks to create a custom SKU (like more cores, higher frequencies, different power envelope, etc.). This is interesting, but obviously applicable only to the largest of customers given the cost involved.
  • Intel is working with a few other companies (Dell, Emerson, and HP) on a hardware API specification; early work on the API can be found here.
  • Intel is pushing forward with the idea of rack-scale architecture (RSA); this is something I blogged about last year (see this post). There’s another RSA-related session on Thursday that I’m hoping to be able to attend so I can provide more information. I’m on the fence about RSA; I still don’t see a compelling reason why users/consumers/operators should switch to RSA instead of buying servers. I may publish something else specific about RSA later; I still need to have some discussions with the Intel engineers on the floor and see if I’m missing something.
  • The networking-focused Fulcrum assets that Intel purchased a few years ago are continuing to be leveraged in a variety of ways, some of which are also related to the rack-scale architecture efforts. Personally, I’m less interested in how Intel is using the Fulcrum stuff in RSA, and more interested in work Intel might be doing around making it easier for Linux vendors to “hook into” Intel-based hardware platforms for the purpose of building disaggregated network operating systems. You may already know that I’m pretty bullish on Cumulus Linux, but Cumulus right now is heavily tied to the Broadcom chipsets, and—according to discussions I’ve had with Cumulus—the effort to port over to Intel’s Fulcrum chips is not insignificant. Any work that Intel can do to make that easier/faster/cheaper is all positive in my book. It would be great to see Intel release a DPDK equivalent that is focused on integration into the switching chipsets in their Open Networking Platform (ONP) switch reference architecture (see this post from last year).

Closing Thoughts

Clearly, there’s a lot going on within Intel, as the company works hard—and is being reasonably successful—to differentiate hardware in an environment where abstraction layers like hypervisors and cloud management platforms are trying to homogenize everything. The work that Intel has done (in conjunction with HyTrust) on geofencing is nice and is, I think, an indicator of ways that Intel can continue to innovate beyond just more cores, more efficiency, and faster clock speeds (not that there’s anything wrong with those!).

Stay tuned for more liveblogs from IDF 2014, and I’ll post a day 2 recap as well. Thanks for reading!


This is a live blog of session DATS004, titled “Bare-Metal, Docker Containers, and Virtualization: The Growing Choices for Cloud Applications.” The speaker is Nicholas Weaver (yes, that Nick Weaver, who now works at Intel).

Weaver starts his presentation by talking about “how we got here”, discussing the various technological shifts that have affected the computing landscape over the years. Weaver includes a discussion of the drivers behind virtualization as well as the pros and cons of virtualization.

That, naturally, leads to a discussion of containers. Containers are not all that new—Solaris Zones is a form of containers that existed back in 2004. Naturally, the recent hype associated with Docker has, according to Weaver, rejuvenated interest in the concept of containers.

Before Weaver gets too far into containers, he first provides a background on some of the core containerization pieces. This includes cgroups (the ability to control resource allocation/utilization), which is built into the Linux kernel. Namespace isolation is also important; it provides full process isolation (so that one process can’t see processes in another namespace). Namespace isolation isn’t just for processes; there’s also isolation for network entities, mounts, and users. LXC is a set of user-space tools that attempts to make these constructs easier to use, but until recently it hasn’t been easy to really leverage them.
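To make the namespace idea slightly more concrete, here’s a minimal Python sketch (my own illustration, not from the session): on Linux, each entry under /proc/&lt;pid&gt;/ns is a distinct namespace attached to that process, and two processes are isolated for a given resource exactly when those links point to different namespace objects.

```python
import os

def list_namespaces(pid="self"):
    """List the kernel namespace types attached to a process.

    On Linux, each entry in /proc/<pid>/ns (pid, net, mnt, uts, ipc,
    user, ...) is a handle to a distinct namespace object. Tools like
    LXC and Docker create new namespaces for the processes they launch.
    """
    ns_dir = os.path.join("/proc", str(pid), "ns")
    if not os.path.isdir(ns_dir):  # non-Linux platforms lack /proc/<pid>/ns
        return []
    return sorted(os.listdir(ns_dir))

print(list_namespaces())
```

On a typical Linux host this prints entries like ipc, mnt, net, pid, user, and uts; on non-Linux platforms it simply returns an empty list.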

Weaver next takes this relatively abstract discussion and makes it a bit more concrete with a specific example of how a microservice architecture would look under virtualization (OS instance, microservice libraries, and the microservice itself) as well as under containers (shared OS instance and shared libraries plus the microservice itself). Weaver talks about the “instant start” attribute of a container, but puts that in the context of the lifetime of the workload that’s running in the container. Start-up times don’t really matter for long-lived workloads, but for temporary, ephemeral workloads start-up times do matter. The pattern of “container on VM” is also mentioned by Weaver as another design pattern that some people use.

Next Weaver provides a quick list of pros and cons of containers:

  • Pros: faster lifecycle vs. virtual machines; the host OS retains visibility into what is running inside containers; ideal for homogeneous application stacks on Linux; almost non-existent overhead
  • Cons: very complex to configure (by itself, absent some sort of orchestration system or operating at scale); currently much weaker security isolation than VMs; applications must run on Linux (because Windows doesn’t have the same container technologies)

Next, Weaver transitions the discussion to focus on Docker specifically. Weaver describes Docker as “an easy button for containers,” making the underlying containerization constructs (cgroups, namespaces, etc.) easier to use. Docker is simpler and easier than LXC (where multiple binaries were involved). Weaver believes that Docker images—which he describes as an ordered set of actions to build a container—are the real game-changer. Weaver’s discussion of Docker images leads to a review of a Dockerfile, which is a DSL (domain specific language) for creating Docker images. Docker images are built on a series of layers; underlying layers could be “just” OS images (like Ubuntu or CentOS), but they could also be customized builds that contain applications and/or data.
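For readers who haven’t seen one, a Dockerfile is literally that ordered set of actions. A trivial, hypothetical example (the base image tag and file names here are mine, purely for illustration) might look like this, where each instruction contributes a layer:

```dockerfile
# Base layer: a stock OS image pulled from a registry.
FROM ubuntu:14.04

# Each RUN/COPY instruction adds a new layer on top of the previous one.
RUN apt-get update && apt-get install -y python

# Add the (hypothetical) application in its own layer.
COPY app.py /opt/app/app.py

# Default command executed when a container starts from this image.
CMD ["python", "/opt/app/app.py"]
```

Because layers are cached, rebuilding after changing only app.py reuses the earlier layers, which is a big part of why images feel so lightweight to iterate on.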

Image registries are how users can create images and share images with other users. The public Docker Hub is an example of an image registry.

The discussion now transitions into a quick review of the underlying Docker architecture. There is a Docker daemon that runs on Linux; the Docker client can be run elsewhere. The Docker client communicates with the Docker daemon (although note that by default the daemon listens only on a local Unix socket, so using a Docker client remotely over the network won’t work unless the daemon is configured to listen on the network).

The innovations that Weaver attributes to Docker include: images (like templates for VMs, and the use of copy-on-write makes them behave like code); API and CLI tools for managing container deployments; reduced complexity around deploying and managing containers; and support for namespaces and resource limits.
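The “behave like code” point rests on copy-on-write layering. A rough way to picture it—my own toy model, not Docker’s actual storage format—is a chain of dictionaries, where each layer records only the files it adds or changes and a lookup walks the layers top-down:

```python
from collections import ChainMap

# Toy model of copy-on-write image layers: each layer records only what
# it adds or changes, and a lookup walks the layers top-down, loosely
# like Docker's union filesystem. (Purely illustrative.)
base_layer = {"/bin/sh": "busybox", "/etc/os-release": "ubuntu"}
app_layer = {"/opt/app/app.py": "print('hello')"}

image = ChainMap(app_layer, base_layer)  # top layer listed first

assert image["/bin/sh"] == "busybox"   # resolved from the base layer
assert "/opt/app/app.py" in image      # contributed by the top layer
```

The base layer is never modified; swapping in a different top layer gives you a different “image” while sharing everything underneath, which is why layers diff and compose the way source code does.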

Weaver provides a more concrete example of how Docker can change a developer’s process for creating code. Here Weaver’s DevOps background really starts to show, as he discusses how Docker and containers would help streamline CI/CD operations.

Next up are the gotchas with containers. Trust is one gotcha; can we trust that one container won’t affect other containers? The answer, according to Weaver, is “it depends.” You still need to follow current recommended practices, such as no root access, host-level patches, auditing, and being aware of the default settings (which might be dangerous, if you aren’t aware). One way to address some of these concerns is to use VMs to provide strong security isolation between containers that need a stronger level of isolation than the standard container mechanisms can provide.

Intel, of course, is working on making containers better:

  • Security (Intel AES-NI, Intel TXT/TCP, Intel SGX)
  • Performance/flexibility (Intel VT-x/VT-d/VT-c)

Weaver wraps up the session with a quick summary of the key points from the session and some Q&A.


IDF 2014 Day 1 Keynote

This is a liveblog for the day 1 keynote at Intel Developer Forum (IDF) 2014. The keynote starts with an interesting musical piece that shows how technology can be used to allow a single performer to emulate the sound of a full band, and then kicks off with a “pocket avatar” presentation by Brian Krzanich, CEO of Intel Corporation. Krzanich takes the stage in person a few minutes later.

Krzanich starts with a recap of some of the discussions from last year’s IDF, and he points out some of the results over the last year. Among the accomplishments Krzanich lists, he mentions that Intel was the #2 shipper of tablets last year. (One would assume that Apple is #1.) Krzanich clearly believes that Intel has a bright future; he points out that projections show as many as 50 billion x86-based devices by 2020 (just 6 years away). That’s pretty massive growth; there are only an estimated 2.2 billion x86-based devices today.

The line-up today includes talks from Diane Bryant (data center), Kirk Skaugen (clients), Doug Fisher (software and services), and a live Q&A by Krzanich.

Krzanich starts a discussion of wearables and related devices with a mention of the SMS Audio headphones that also provide heart rate monitoring and other fitness data while listening to music. Next up, Krzanich talks about a fashion bracelet that can retrieve textual information from your cell phone and display it. The bracelet was displayed at the opening ceremony of the NYC fashion show last week.

Greg McKelvey of Fossil takes the stage with Krzanich to discuss wearables and Fossil’s experience as a fashion brand and previous wearable technology efforts. McKelvey talks at length about Fossil, but there is very little discussion of specific technology or technology trends.

Krzanich now switches gears to discuss the Internet of Things (IoT), which he believes to be connected to wearables in some ways (“wearables for things”, he calls them). Krzanich believes that IoT will only be successful if you have full intelligence edge-to-edge. Krzanich points out a couple of IoT partnering efforts; one of which involves attaching sensors to HVAC systems to allow for more efficient servicing of the units and another that involves sensors to allow cities to monitor air quality. However, standards are needed, according to Krzanich; he points to two industry consortia involved in standards for IoT. Those consortia are the Open Interconnect Consortium (OIC) and the Industrial Internet Consortium (IIC). OIC is more consumer-focused; IIC is more industrial-focused (as you can guess by the name). The goal of these consortia is to drive standards and interoperability.

At this point Krzanich hands it off to Diane Bryant, SVP and GM of the Data Center Group at Intel. Bryant is here to talk about the data center side of things, and to help set the scale, she talks about smartphones. There are 1.9 billion smartphones with an average of 26 apps. If each app does 20 transactions daily, that totals up to a trillion transactions. That scale is massive, but it will be eclipsed by wearables, which by 2020 will include 50 billion devices generating 35 zettabytes of data. Bryant seems to focus on “big data” as the key data center driver, from wearables to health records and health research. Bryant mentions work being done with the Michael J. Fox Foundation (for Parkinson’s research), the Broad Institute (for cancer genomics research), the Francis Crick Institute (for bioinformatics training targeting oncologists), and the Knight Cancer Institute (to help create a 1.2PB cloud for genomics research). Bryant lays out a very ambitious goal to target cancer treatment via genomics by 2020.

Bryant next announces an analytics program for developers called “A-wear” (Analytics for Wearables).

Bryant hands it off to Kirk Skaugen, who will talk about client devices and personal computing. Skaugen believes that wires will be eliminated, passwords will be replaced by biometrics, and the personal computing journey will be revolutionized. Skaugen reinforces that Intel is committed to all operating systems, all form factors, and all use cases. Skaugen announces that the Intel Core M, now running on a 14nm process, is in full production and will be available on shelves very quickly. This translates into lighter and thinner tablets with greater processing power and lower power utilization. Skaugen shows off a few platforms built on Core M.

Next Skaugen announces Skylake, the code name for the next-generation processor architecture that is anticipated for arrival in the second half of next year. (Will we see it at next IDF?) He shows off a demo of Skylake as well as a hardware reference platform for Skylake playing full 4K video. The roadmap behind Skylake includes continued shrinking from 14nm to 10nm.

Skaugen announces that Samsung is shipping Intel LTE Advanced (Intel XMM 7260) in the Samsung Galaxy Alpha premium smartphone.

After discussing Intel’s work in communications, Skaugen shifts gears to talk about improving the user experience. This involves getting rid of wires, getting rid of passwords, and improving natural user experiences. Intel has been talking about 3-D interfaces, touch, and voice for the last few years (check the live blogs from previous years), but we have yet to see this vision be realized. They show a demo of setting a laptop (a 2-in-1 convertible) on a desk and having it automatically connect to peripherals and displays. The demo also includes wireless charging.

Skaugen wraps up his portion of the session with a review of the announcements before handling the baton to Doug Fisher to talk about software and services. Fisher talks about how Intel is partnering with a number of other companies to help developers deliver software faster. Fisher announces a new reference platform for Android based on Intel technology.

Next Fisher shows off a demo of 3-D camera technology and the associated software that provides additional information and context about data captured in photos (like taking a photo of a crate, capturing the location and dimensions of the crate, and linking to shipping systems to ship the crate). Fisher reinforces Krzanich’s mention of OIC and Intel’s participation in creating open, royalty-free standards for compatibility and interoperability among the billions of devices in the IoT. Intel is, in fact, delivering open source code (under the Apache 2.0 license) and contributing relevant patents to the OIC.

Fisher now turns it back over to Krzanich, who shows off the first production model of Intel RealSense 3-D camera/imaging technology. This technology captures not only visual data, but also distance, depth, motion effects, etc. Krzanich shows a live demo of RealSense depth awareness and how that information can be used to enable new filtering effects in software (like removing the color from everything except the closest layer). Krzanich brings Michael Dell up on the stage to talk about the device he used during the demo, which is a Dell Venue 8 7000 series tablet—supposedly the world’s thinnest tablet at just 6mm. Dell shows off a few more examples of using Intel’s RealSense technology to measure distance and change focal points of photos.

Dell also makes mention of new PowerEdge servers, including the R720XD, a 2U box with 36 cores, up to 1.5TB of RAM, and up to 100TB of storage.

At this point Krzanich transitions to a Q&A session with Renee James, President of Intel. This is a live Q&A session, which is kind of nice (and not very common for a conference keynote like this). At this point I’m wrapping up the liveblog instead of trying to capture the live Q&A (which can be difficult at times).


I mentioned this on Twitter a few days ago, but wanted to go ahead and formalize some of the details. A blog reader/Twitter follower contacted me with the idea of getting together briefly at VMworld 2014 for some prayer time. I thought it was a great idea (thanks David!), so here are the details.

What: A brief time of prayer
Where: Yerba Buena Gardens, behind Moscone North
When: Monday 8/25 through Wednesday 8/27 at 7:45am (this should give everyone enough time to grab breakfast before the keynotes start at 9am)
Who: All courteous attendees are welcome, but please note that this will be a distinctly Christian-focused and Christ-centric activity. (I encourage believers of other faiths/religions to organize equivalent activities.)
Why: To spend a few minutes in prayer over the day, the conference, and the attendees

There’s no need to RSVP or let me know that you’ll be there (although you are welcome to do so if you’d like, just so I have an idea of how many will be attending). This will be very informal and very casual—it’s just an opportunity for fellow followers of Christ to get together and say a few words of prayer.

I look forward to seeing you there!


(This is a repost of an announcement from the Spousetivities web site. I wanted to include it here for broader coverage. —Scott)

For seven years, Spousetivities has been fortunate to be part of the VMware/VMworld community. Since 2008, we’ve been the only community-focused and community-driven spouse activities program, and it’s been an honor. Spousetivities exists thanks to the support of the community. However, Spousetivities also exists to provide support back to that same community.

Last week, a member of our community was tragically taken from us. Jim Ruddy died in a car accident, leaving behind his wife Stephanie and their children. This is a horrible loss, and the community continues to mourn. (My husband, Scott, worked with Jim at EMC for a number of years, as did many others.) In honor of Jim and to support the family he left behind, I worked with other members of the community to establish the Jim Ruddy Memorial Fund. As of this writing, that fund had raised over $15,000 to help support Stephanie and the kids in this very trying time.

No amount of money can replace Jim. However, this is a difficult time for Stephanie—not only emotionally and physically, but also financially. For that reason, Spousetivities is setting aside 10% of all proceeds raised by activities at VMworld 2014 to be donated to Jim Ruddy’s family via the Jim Ruddy Memorial Fund.

If you haven’t donated to the Jim Ruddy Memorial Fund yet, please consider doing so. If you (or your spouse/partner/significant other) is participating in Spousetivities at VMworld this year, please know that your participation means also helping a family in their time of need.

Being part of the community means giving back to the community.


As IT pros at the “cutting edge” of technology and industry change, I think sometimes we forget that not everyone has the same mindset toward learning, growth, and career evolution. That’s especially true, I think, for those of us who are bloggers, because it’s our passion for the technology that not only drives us to write about it but also drives us to constantly explore new trends, new areas, and new concepts. It is that passion that drives us to seek out new ways the technology could be applied to our jobs. That passion sustained us over the years, as we progressed from Windows admins to VMware admins and now to virtualization and cloud architects. That passion led us to “bring our work home” and build home labs. We’ve had years of actively seeking out new layers of knowledge to build on top of what we already knew.

This isn’t a bad thing; not by any stretch. But my point is this—we must consider that not everyone is like us. Not everyone is driven by a passion for the technology. Not everyone seeks out new technologies and explores new ways to put those technologies to work. Some IT pros like to leave their work at work. And that’s OK, too. However, knowing that there are folks out there who don’t have that same passion and don’t have years of layering pieces of information on top of one another, it’s our job not to berate them about change but rather to encourage and educate them about why change is needed, how that change will affect them, and what they can do about it.

There have been times that I’ve seen some IT pros lecture others about how they aren’t embracing change, aren’t growing fast enough, aren’t headed in the right direction, and how technology will leave them behind. (Shoot, I’ve probably done it as well—none of us are perfect, that’s for sure.) I think we can all agree that career evolution is a necessity, but rather than jumping on the “You’d better change or else” bandwagon, wouldn’t we be better served by asking these simple questions:

  • What can I do to help others understand the changes that are coming?
  • Are there things I can do to help others formulate a plan to cope with change?
  • What can I do to help others get the information they need?
  • How can I help others understand how this information applies to them?

I don’t know, perhaps I’m overly optimistic, overly idealistic, or overly naive (or all three). I just think that maybe if we spent less time preaching about how career evolution has to occur and instead focused on helping others succeed at career evolution, we’d probably all be a little bit better off. This aligns really well, too, with some thinking I’ve been doing about my own personal “mission statement” and purpose, which centers around helping others.

Feel free to tell me what you think in the comments below—courteous comments are always welcome.

Tags: ,

Crossing the Threshold

Last week while attending the CloudStack Collaboration Conference in my home city of Denver, I had a bit of a realization. I wanted to share it here in the hopes that it might serve as an encouragement for others out there.

Long-time readers know that one of my projects over the last couple of years has been to become more fluent in Linux (refer back to my 2012 project list and my 2013 project list). I gave myself a B+ for my efforts last year, feeling that I had made good progress over the course of the year. Even so, I felt like there was still so much that I needed to learn. As so many of us are inclined to do, I was more focused on what I hadn’t yet learned instead of taking a look at what I had learned.

This is where last week comes in. Before the conference started, I participated in a couple of “mini boot camps” focused on CloudStack and related tools/clients/libraries. (You may have seen some of my tweets about tools like cloudmonkey, Apache libcloud, and awscli/ec2stack.) As I worked through the boot camps, I could hear the questions that other attendees were asking and see the tasks with which they were struggling. Folks were wrestling with what I thought were pretty simple tasks; these were not, after all, very complex exercises. So what if the lab guide wasn’t complete or correct? You should be able to figure it out, right?

Then it hit me. I’m a Linux guy now.

That’s right—I had crossed the threshold between “working on being a Linux guy” and “being a Linux guy.” It’s not that I know everything there is to know (far from it!), but that my base level of knowledge had finally accrued to a point where, upon closer inspection, I realized I was fluent enough to perform most common tasks without a great deal of effort. I knew enough to know what to do when something didn’t work or wasn’t configured properly, and the general direction in which to look when trying to determine exactly what was going on.

At this point you might be wondering, “What does that have to do with encouraging me?” That’s a fair question.

As IT professionals—especially those on the individual contributor (IC) track instead of the management track—we are tasked with having to constantly learn new products, new technologies, and new methodologies. Because the learning never stops (and that isn’t a bad thing, in my humble opinion), we tend to focus on what we haven’t mastered. We forget to look at what we have learned, at the progress that we have made. Maybe, like me, you’re on a journey of learning and education to move from being a specialist in one type of technology to a practitioner of another type. If that’s the case, perhaps it’s time you stop saying “I will be a <new technology> person” and say “I am a <new technology> person.” Perhaps it’s time for you to cross the threshold.

Tags: , ,

For the last couple of years, I’ve been sharing my annual “projects list” and then grading myself on the progress (or lack thereof) on the projects at the end of the year. For example, I shared my 2012 project list in early January 2012, then gave myself grades on my progress in early January 2013.

In this post, I’m going to grade myself on my 2013 project list. Here’s the project list I posted just under a year ago:

  1. Continue to learn German.
  2. Reinforce base Linux knowledge.
  3. Continue using Puppet for automation.
  4. Reinforce data center networking fundamentals.

So, how did I do? Here’s my assessment of my progress:

  1. Continue to learn German: I have made some progress here, though certainly not the progress that I wanted to make. I’ve incorporated the use of Memrise, which has been helpful, but I still haven’t made the progress I’d like. If anyone has any other suggestions for additional tools, I’m open to your feedback. Grade: D (below average)

  2. Reinforce base Linux knowledge: I’ve been suggesting to VMUG attendees that they need to learn Linux, as it’s popping up all over the place in all sorts of roles. In my original 2013 project list, I said that I was going to focus on RHEL and RHEL variants, but over the course of the year I ended up focusing more on Debian and Ubuntu instead (due to more up-to-date packages and closer alignment with OpenStack). Despite that shift in focus, I think I’ve made decent progress here. There’s always room to grow, of course. Grade: B (above average)

  3. Continue using Puppet for automation: I’ve made reasonable progress here, expanding my use of Puppet to include managing Debian/Ubuntu software repositories (see here and here for examples), managing SSH keys, managing Open vSwitch (OVS) via a third-party module, and—most recently—exploring the use of Puppet with OpenStack (no blog posts—yet). There’s still quite a bit I need to learn (some of my manifests don’t work quite as well as I’d like), but I did make progress here. Grade: C (average)
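To give a sense of the kinds of tasks described above, here’s a minimal, hypothetical sketch of a Puppet manifest covering two of them. It assumes the puppetlabs-apt module is installed; the repository URL, user name, and key material are placeholders, not taken from my actual manifests:

```puppet
# Hypothetical example; assumes the puppetlabs-apt module is available.
# Repository location, user, and key material below are placeholders.

# Manage a Debian/Ubuntu software repository
apt::source { 'example-repo':
  location => 'http://apt.example.com/ubuntu',
  repos    => 'main',
}

# Manage an SSH public key in a user's authorized_keys file
# (ssh_authorized_key is a built-in Puppet resource type)
ssh_authorized_key { 'admin-workstation':
  ensure => present,
  user   => 'admin',
  type   => 'ssh-rsa',
  key    => 'AAAA...placeholder-key-material...',
}
```

Applying a manifest like this with `puppet apply` (or via a Puppet master) would converge the node to the declared state on each run.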

  4. Reinforce data center networking fundamentals: Naturally, my role at VMware has me spending a great deal of time on how network virtualization affects DC networking, and this translated into some progress on this project. While I gained solid high-level knowledge on a number of DC networking topics, I had originally expected to need more low-level “in the weeds” knowledge. In that regard, I don’t feel like I did well; on the flip side, though, I’m not sure whether I really needed that low-level knowledge after all. This highlights a key struggle for me personally: how to balance deep, “in the weeds” knowledge with high-level knowledge. Suggestions on how others have overcome this challenge are welcome. Grade: C (average)

In summary: not bad, but could have been better!

What’s not reflected in this project list is the progress I made with understanding OpenStack, or my deepened level of knowledge of OVS (just browse articles tagged OVS for an idea of what I’ve been doing in that area).

Over the next week or two, I’ll be reflecting on my progress with my 2013 projects and thinking about what projects I should be taking on in 2014. In the meantime, I would love to hear any feedback, suggestions, or thoughts on projects I should consider, technologies that should be incorporated, or learning techniques I should leverage. Feel free to speak up in the comments below.

Tags: , , , , , , ,
