Scott's Weblog: The weblog of an IT pro specializing in cloud computing, virtualization, and networking, all with an open source view

Technology Short Take 96

Welcome to Technology Short Take 96! Ahead, lying in wait, is a unique collection of links, articles, and thoughts about various data center technologies. Browse if you dare…OK, so I’m being a bit melodramatic. There’s still some good stuff here!


  • Via Matt Oswalt and Michael Bushong, I came across this article on Juniper’s use of P4. Interesting stuff…P4 definitely has the potential to dramatically reshape networking, in my humble opinion.
  • Maxime Lagresle of XING outlines how they went about troubleshooting an unexplained connection timeout on Kubernetes/Docker.
  • Ajay Chenampara outlines how POAP (Power On Auto Provisioning), a feature of Cisco NX-OS, works to streamline provisioning new network switches.
  • Don Schenck has a high-level overview of Istio and service meshes.
  • Daniel Álvarez has a good article describing some OVN profiling and optimizing he recently performed. I believe the patches he mentioned in the post have already been accepted into the OVN codebase.


Nothing this time around; sorry! If you have some articles you feel are worthy of inclusion in the next Tech Short Take, send them my way!


Cloud Computing/Cloud Management

Operating Systems/Applications

  • Adrian Hornsby has a series (two parts so far) on designing a multi-region active-active architecture. The focus of this series seems to be more on application architecture than infrastructure architecture, although there are clear areas of overlap. Part 1 is here; part 2 is here. I highly recommend this series, with one caveat: it does occasionally feel like an AWS infomercial. Granted, Adrian works for AWS, so some of that is to be expected, but the Kool-Aid is a bit strong at times.
  • Kevin Carter explains how to ship journals remotely using systemd-journald.
  • VMware PowerCLI 10.0 is now available.
  • I think I mentioned this before, but here’s an update on Mitogen. Some very interesting possibilities ahead!
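On the journald item above: here’s a minimal sketch of what remote journal shipping can look like using systemd-journal-upload on the sending host (the hostname is a placeholder; the receiving side runs systemd-journal-remote, which listens on port 19532 by default):

```ini
# /etc/systemd/journal-upload.conf on the sending host
# (logs.example.com is a placeholder for your actual collector)
[Upload]
URL=http://logs.example.com:19532
```

You’d then enable the sender with `systemctl enable --now systemd-journal-upload` and enable systemd-journal-remote (via its socket unit) on the receiving host. See Kevin’s article for the full details.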


  • Chris Evans discusses a recent re-org that aligns ScaleIO with hardware sales. So much for software-defined storage?
  • Erik Smith explores the discovery problem with NVMe over Fabrics and how that problem might be resolved.


Career/Soft Skills

That’s all for now! I’ll have another Tech Short Take in a couple weeks. Thanks for reading!

Recent Changes in my "Learning Tools" Repository

A couple years ago, I created a “learning-tools” repository on GitHub with the goal of creating environments/tools that would help others learn new technologies. At first, the contents of the repository were almost exclusively leveraging Vagrant, but over time I’ve extended the environments to also leverage Ansible and to use tools such as Terraform. Over the past month or so, I’ve made a few additional (albeit relatively minor) updates that I also wanted to share.

As I said, the updates are relatively minor:

  • I’ve added environments for running generic versions of Fedora Atomic Host (26 and 27), Ubuntu 16.04, and Debian 9.x. These environments are probably of limited value by themselves, but in the future I may use them as the basis for more complex environments based on these operating systems. Of course, others may leverage them as the basis for projects of their own.
  • I’ve added Libvirt support for a number of the Vagrant-based environments, based on my experience with the Vagrant Libvirt provider. This support is limited to areas where I was able to find Libvirt-formatted Vagrant boxes, so you’ll find Libvirt support for the environments using CentOS Atomic Host, Fedora Atomic Host, and Debian. The generic Ubuntu 16.04 environment also supports Libvirt, but most other Ubuntu-based environments do not (yet).
  • Wherever possible, I’ve removed references to Vagrant boxes that I personally built/maintained (like the slowe/ubuntu-trusty-x64 Vagrant box). I simply don’t have the time/bandwidth to maintain those boxes properly, and feel like users of the repository are best served by me pointing them to boxes that are kept more up-to-date. There are a few exceptions where these boxes are still referenced; just be aware that I’m no longer updating those boxes (I recommend you switch to a box that is maintained).
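As a concrete (if minimal) sketch of the Libvirt support mentioned above, assuming you have a working libvirt daemon and the vagrant-libvirt plugin installed:

```shell
# One-time setup: install the Libvirt provider plugin for Vagrant
vagrant plugin install vagrant-libvirt

# From one of the Libvirt-capable environment directories,
# bring the environment up using the Libvirt provider
# instead of the default (VirtualBox)
vagrant up --provider=libvirt

# Tear the environment back down when finished
vagrant destroy -f
```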

In the coming months, I plan to expand the repository to include more content on Kubernetes and related projects/technologies, so stay tuned for that. Until then, if there’s additional stuff you’d like to see—feel free to fork the repository, contribute your changes, and submit a pull request! Alternatively, you’re welcome to open an issue on the repository with requests for additional technologies (or scenarios/use cases) you’d like to see.

Looking Ahead: My 2018 Projects

For the last six years or so, I’ve been publishing a list of projects/goals for the upcoming year (followed by a year-end review of how I did with those projects/goals). For example, here are my goals for 2017, and here’s my year-end review of my progress in 2017. In this post, I’m going to share with you my list of projects/goals for 2018.

As I’ve done in previous years, I’ll list the projects/goals, along with an optional stretch goal (where it makes sense).

  1. Become extremely fluent with Kubernetes. I’m focusing all my technical skills on Kubernetes this year, with the goal of becoming extremely fluent with the project in all its aspects. There are some aspects—like networking, for example—where some specialization/additional focus will be needed (focusing on particular network architectures/plugins). That means “leaving behind” other technologies, like OpenStack, in order to more fully focus on Kubernetes. (Stretch goal: Pass the Certified Kubernetes Administrator [CKA] exam.)

  2. Learn to code/develop in Go. Given that Kubernetes is written in Go and that Go seems to be the language of choice for many new projects, tools, and utilities, I’m going to learn to code/develop in Go in 2018. Because I learned in years past that one needs to have a reason to write some code, I’ll focus my Go learning efforts around Kubernetes and related projects/tools.

  3. Make three contributions to open source projects. I’ve had this on my list in previous years; last year I removed it because I had a miserable track record. I’m including it here again because I’m going to allow myself to count documentation contributions or code contributions against this goal (it’s probably going to be the former, if I’m honest with myself). To increase the synergy with other goals, I’ll focus efforts around Kubernetes and related projects/technologies. I chose three contributions because I can break that down to one per quarter in the remaining three quarters of 2018. (Stretch goal: Make five contributions.)

  4. Read and review three technical books. This is more about formalizing something I’m already (sort of) doing. What I mean is that I’m already reading technical books; what I’m not currently doing is (typically) finishing them or providing a review of them. The idea of adding the review is to serve as a “forcing function” to be sure that I focus on comprehension and actually finishing the book so that I can provide a review here on the site. As with the previous item, I selected three because I can do one in each of the three remaining quarters of 2018. (Stretch goal: Read and review five technical books.)

  5. Complete a “wildcard project.” As I’ve explained in previous years, the purpose of including this item is to allow for the unknown future. Having this project gives me freedom to explore as new projects, new technologies, and new ideas emerge over the course of the coming year. If I don’t complete a wildcard project, I won’t count that against myself.

So that’s my list of goals/projects for the upcoming year. I’ll try to do a better job this year of providing ongoing updates throughout the year on my progress. Feel free to contact me on Twitter if you have any feedback (positive or negative) or questions.

Technology Short Take 95

Welcome to Technology Short Take 95! This Short Take was a bit more challenging than normal to compile, given that I spent the week leading up to its publication visiting customers in Europe. (My travel schedule in Europe is also why it didn’t get published until Saturday instead of the typical Friday.) Nevertheless, I have persevered in order to deliver you this list of links and articles. I hope it proves useful!


  • Larry Smith Jr. has a nice write-up on Cisco XR stemming from a presentation at NFD 17.
  • VMware recently released a reference design guide for NSX-T; see here for more details.
  • The engineering team at Lyft recently discussed a new overlay-free networking approach they’ve been working on for Kubernetes: IPVLAN-based CNI stack for running within VPCs on AWS. This is pretty cool, but does introduce some potential design considerations for deploying Kubernetes on AWS. (For those that may be unfamiliar: CNI, or Container Network Interface, is the means whereby network mechanisms “plug into” Kubernetes. IPVLAN is a low-latency means of providing IP connectivity to containers. VPCs, or Virtual Private Clouds, are Amazon’s software-defined networking mechanism for workloads running on AWS.)
  • Viktor van den Berg writes on deploying NSX load balancers with vRA.
  • Alen Komljen provides an introductory overview of service meshes in Kubernetes.
  • Matt Klein has a great post on why we need to embrace eventual consistency in (distributed) networking.
  • In the event that you need yet-another-introduction to Kubernetes networking, Mark Betz has a three-part series you may find helpful/useful (part 1, part 2, and part 3).


Nothing this time around, sorry!


  • This doesn’t make me feel very secure.

Cloud Computing/Cloud Management

Operating Systems/Applications


  • I had a chance to take a (very) quick look at Dotmesh, a new solution that enables snapshots for stateful workloads in Kubernetes and Docker. It seems like an interesting and potentially helpful solution, although I haven’t (yet) had the time to actually spend any hands-on time with it.


Career/Soft Skills

Nothing this time around (which surprises me—I almost always find material to include in this section).

OK, that’s all this time around, but I’ve already started gathering material for next time. Look for the next Technology Short Take in about two weeks. Thanks for reading!

Some Tools to Help Learn Kubernetes

Kubernetes is emerging as the clear leader in the container orchestration space. This makes it an important technology to know and understand. However, like other distributed systems, learning something like Kubernetes can be challenging due to the effort involved in getting Kubernetes up and running. It’s not about learning to set up Kubernetes (although that comes in time); at first, it’s about understanding what Kubernetes does and how to use Kubernetes. In this post, I’ll share some tools to help learn what Kubernetes does and how to use Kubernetes.

Note that this post is not intended to be a comprehensive list of learning resources for Kubernetes. Also, this post is not focused on providing resources to help you learn to deploy Kubernetes. Instead, I’m focusing here on tools and services that let you get Kubernetes up and running quickly and easily so that you can focus on using Kubernetes (deploying applications and workloads onto Kubernetes). I’m sure there are many more tools/options than what I have listed here; these are just some that I have used and feel might be useful for others.

I’ll briefly cover the following tools and services:

  • Minikube
  • Kops
  • Kube-aws
  • Azure Container Service (ACS/AKS)

You’ll note I’m focused on command-line tools here, since I’m a CLI junkie. Let’s start with Minikube.

Using Minikube

Minikube is a tool to run a single-node Kubernetes cluster locally on your system, using one of a number of supported hypervisors (VirtualBox, VMware Fusion, Hyper-V, xhyve, or KVM). Because it deploys a single-node Kubernetes cluster, you’re limited in the types of things that you can do with the resulting Kubernetes cluster; however, it does offer the ability to work locally while most of the other tools require connectivity to a cloud provider (not unexpectedly, given the nature of what Kubernetes is and what it’s designed to do). Of course, the trade-off for working locally is that your Kubernetes cluster won’t be able to take advantage of cloud provider functionality, like services defined as type LoadBalancer.

(As an aside, since Minikube leverages libmachine—from Docker Machine—on the back-end, using both Minikube and Docker Machine with KVM on the same system has some interesting side effects. You’ve been warned.)

Despite the limitations, I think there’s probably still some value in using Minikube for certain situations.
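As a quick sketch of the typical Minikube workflow (the driver named here is just one option; substitute whichever supported hypervisor you have installed):

```shell
# Create a single-node cluster using the VirtualBox driver
# (kvm, xhyve, hyperv, or vmwarefusion also work, if installed)
minikube start --vm-driver=virtualbox

# Verify the node is up; Minikube configures kubectl automatically
kubectl get nodes

# Stop the cluster (preserving its state) when finished
minikube stop
```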

Using Kops (Kubernetes Operations)

The kops tool can be used to (relatively) quickly and easily stand up Kubernetes clusters on AWS (support for other platforms is in the works). Typically, kops requires a DNS domain hosted in AWS Route 53 (although there are workarounds), and it does require an S3 bucket in which to store cluster/configuration state (no workaround for this). kops also allows you to create some pretty advanced configurations, such as HA deployments spread across multiple availability zones (AZs) and using SSH bastion hosts. I also like that you can use kops to generate Terraform configurations for setting up Kubernetes clusters.

However, you can also use it to set up quick and simple Kubernetes clusters for learning purposes. Here’s an example command line that could be used to turn up a small Kubernetes cluster using kops (the DNS zone, SSH public key path, and cluster name are placeholder values you’d replace with your own):

kops create cluster \
--node-count 3 \
--zones us-west-2a \
--dns-zone example.com \
--node-size t2.large \
--networking flannel \
--ssh-public-key ~/.ssh/id_rsa.pub \
cluster.example.com
I won’t go into all the details on using kops (though I might do another post on kops at some point in the future). Instead, I’ll refer you to the AWS guide in the kops repository for more information and details on using kops. There are also a number of blog posts out there; here’s one example, and here’s one from AWS themselves.
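Once a cluster is up, a few other kops subcommands are worth knowing about; here’s a minimal sketch, with cluster.example.com standing in as a placeholder cluster name:

```shell
# Check that the cluster and its instance groups are healthy
kops validate cluster

# Re-export a kubeconfig for kubectl, if needed
kops export kubecfg cluster.example.com

# Tear the cluster down when you're done experimenting
kops delete cluster cluster.example.com --yes
```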

Using kube-aws

As the name implies, kube-aws is a command-line tool for deploying a Kubernetes cluster on AWS. Like kops, kube-aws needs a Route 53 domain and an S3 bucket for storing state/configuration information. kube-aws also requires a KMS key. On the back-end, kube-aws is leveraging CloudFormation to spin up the Kubernetes cluster.

As with the other tools, I won’t go into great detail on how to use kube-aws; instead, I’ll direct you to the documentation (which seems really good).

Using Azure Container Service (ACS/AKS)

Using the Azure CLI, it’s pretty easy to spin up a Kubernetes cluster. In fact, there are a couple of different ways to go about it.

You can use ACS (Azure Container Service) and specify Kubernetes as the orchestrator type (ACS supports other types of orchestrators as well):

az acs create --orchestrator-type=kubernetes \
--resource-group=my-grp --name=my-clus \
--generate-ssh-keys

After 5-7 minutes, the command should complete, and you’ll be ready to roll with a Kubernetes cluster running in Azure. Use the az acs kubernetes get-credentials command to configure kubectl to talk to the new cluster.
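For example (using the same placeholder resource group and cluster names as above):

```shell
# Merge credentials for the new cluster into ~/.kube/config
az acs kubernetes get-credentials --resource-group=my-grp --name=my-clus

# Confirm kubectl can reach the cluster
kubectl get nodes
```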

You can also use AKS (Azure Container Service, a newer offering that—as I understand it—supersedes/replaces ACS) to turn up a Kubernetes cluster (AKS only supports Kubernetes):

az aks create --resource-group=my-grp --name=my-clus \
--ssh-key-value /path/to/public/key

As with az acs create, after a few minutes you’ll have a shiny new Kubernetes cluster up and running. From there, you can use az aks upgrade to potentially upgrade your cluster, or az aks get-credentials to pull down a configuration that allows kubectl to interact with your new cluster.

(Note that there was an issue with some versions of the Azure CLI prior to 2.0.25 that caused az aks get-credentials to fail. Upgrading to 2.0.25 or later seems to address the problem. If you need to upgrade the Azure CLI and you installed via pip, I recommend using the --force-reinstall flag to make sure the upgrade completes successfully.)
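A sketch of that upgrade path, assuming the Azure CLI was originally installed via pip:

```shell
# Force a clean reinstall/upgrade of the Azure CLI
pip install --upgrade --force-reinstall azure-cli

# Verify the reported CLI version is 2.0.25 or later
az --version
```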

Sebastien Goasguen also has a brief write-up of ACS/AKS (he also covers Azure Container Instances, ACI).

Why no mention of Google Kubernetes Engine (GKE)?

As a hosted/managed solution, GKE would fall into the same category as ACS/AKS. However, I didn’t include GKE here for the simple reason that I was focusing on CLI-based tools, and (in my opinion) Google makes it way too difficult to get the gcloud tool installed on your system. (Seriously—if you’re going to write it in Python, why not just use pip?) There is a Docker container available, but at nearly 1GB in size (last time I checked) I don’t feel that’s a very attractive option either.

If you don’t mind jumping through hoops to get gcloud installed (or if you’d prefer to use a GUI), then I’m sure GKE is a perfectly acceptable option.

There you have it…a few options for quickly and easily getting Kubernetes up and running (and you don’t need a massive home lab, either). Enjoy!

Recent Posts

Technology Short Take 94

Welcome to Technology Short Take 94! Ready for another round of links, articles, and thoughts on data center technologies? (Who knows, maybe I’ll throw a rant or two in there.) OK, enough rambling…here’s the good stuff!


Running OVS on Fedora Atomic Host

In this post, I’d like to share the results of some testing I’ve been doing to run Open vSwitch (OVS) in containers on a container-optimized Linux distribution such as Atomic Host (Fedora Atomic Host, specifically). I’m still relatively early in my exploration of this topic, but I felt like sharing what I’ve found so far might be helpful to others, and might help spark conversations within the relevant communities about how this experience might be improved.


Using Docker Machine with Azure

I’ve written about using Docker Machine with a number of different providers, such as with AWS, with OpenStack, and even with a local KVM/Libvirt daemon. In this post, I’ll expand that series to show using Docker Machine with Azure. (This is a follow-up to my earlier post on experimenting with Azure.)


An Update on Using Docker Machine with Vagrant

As part of a project on which I’m working, I’ve been spending some time working with Docker Machine and Vagrant over the last few days. You may recall that I first wrote about using these two tools together back in August 2015. As a result of spending some additional time with these tools—which I chose because I felt like they streamlined some work around this project—I’ve uncovered some additional information that I wanted to share with readers.


Technology Short Take 93

Welcome to Technology Short Take 93! Today I have another collection of data center technology links, articles, thoughts, and rants. Here’s hoping you find something useful!


Experimenting with Azure

I’ve been experimenting with Microsoft Azure recently, and I thought it might be useful to share a quick post on using some of my favorite tools with Azure. I’ve found it useful to try to leverage existing tools whenever I can, and so as I’ve been experimenting with Azure I’ve been leveraging familiar tools like Docker Machine and Vagrant.


Issue with VMware-Formatted Cumulus VX Vagrant Box

I recently had a need to revisit the use of Cumulus VX (the Cumulus Networks virtual appliance running Cumulus Linux) in a Vagrant environment, and I wanted to be sure to test what I was doing on multiple virtualization platforms. Via Vagrant Cloud, Cumulus distributes VirtualBox and Libvirt versions of Cumulus VX, and there is a slightly older version that also provides a VMware-formatted box. Unfortunately, there’s a simple error in the VMware-formatted box that prevents it from working. Here’s the fix.


Technology Short Take 92

Welcome to Technology Short Take 92, the first Technology Short Take of 2018. This one was supposed to be the last Tech Short Take of 2017, but I didn’t get it published in time (I decided to spend time with my family instead—some things are just more important). In any case, hopefully the delay of one additional week hasn’t caused any undue stress—let’s jump right in!


Looking Back: 2017 Project Report Card

As has become my custom for the past several years, I wanted to take a look at how well I fared on my 2017 project list. Normally I’d publish this before the end of 2017, but during this past holiday season I decided to more fully “unplug” and focus on the truly important things in life (like my family). So, here’s a look back at my 2017 projects and a report card on my progress (or lack thereof, in some cases).


Installing XMind 8 on Fedora 27

XMind is a well-known cross-platform mind mapping application. Installing the latest version of XMind (version 8) on Linux is, unfortunately, more complicated than it should be. In this post, I’ll show how to get XMind 8 running on Fedora 27.


Installing the VMware Horizon Client on Fedora 27

In this post, I’ll outline the steps necessary to install the VMware Horizon client for Linux on Fedora 27. Although VMware provides an “install bundle,” the bundle does not, unfortunately, address any of the prerequisites that are necessary in order for the Horizon client to work. Fortunately, some other folks shared their experiences, and building on their knowledge I was able to make it work. I hope that this post will, in turn, help others who may find themselves in the same situation.


Using Vagrant with Azure

In this post, I’ll describe how to use Vagrant with Azure. You can consider this article an extension of some of my earlier Vagrant articles; namely, the posts on using Vagrant with AWS and using Vagrant with OpenStack. The theme across all these posts is examining how one might use Vagrant to simplify/streamline the consumption of resources from a provider using the familiar Vagrant workflow.


Technology Short Take 91

Welcome to Technology Short Take 91! It’s been a bit longer than usual since the last Tech Short Take (partly due to the US Thanksgiving holiday, partly due to vacation time, and partly due to business travel), so apologies for that. Still, there’s a great collection of links and articles here for you, so dig in and enjoy.


Installing the Azure CLI on Fedora 27

This post is a follow-up to a post from earlier this year on manually installing the Azure CLI on Fedora 25. I encourage you to refer back to that post for a bit of background. I’m writing this post because the procedure for manually installing the Azure CLI on Fedora 27 is slightly different than the procedure for Fedora 25.


Using Vagrant with Libvirt on Fedora 27

In this post, I’m going to show you how to use Vagrant with Libvirt via the vagrant-libvirt provider when running on Fedora 27. Both Vagrant and Libvirt are topics I’ve covered more than a few times here on this site, but this is the first time I’ve discussed combining the two projects.

