Scott's Weblog The weblog of an IT pro focusing on cloud computing, Kubernetes, Linux, containers, and networking

Running the gcloud CLI in a Docker Container

A few times over the last week or two I’ve had a need to use the gcloud command-line tool to access or interact with Google Cloud Platform (GCP). Because working with GCP is something I don’t do very often, I prefer to not install the Google Cloud SDK; instead, I run it in a Docker container. However, there is a trick to doing this, and so to make it easier for others I’m documenting it here.

The gcloud tool stores some authentication data that it needs every time it runs. As a result, when you run it in a Docker container, you must take care to persist this authentication data outside the container. Most of the tutorials I’ve seen, like this one, suggest storing the data in a named Docker container; for invocations after the first, you’d then use the --volumes-from parameter to access the volumes of that named container.

There’s only one small problem with this approach: what if you’re using another tool that also needs access to these GCP credentials? In my case, I needed to be able to run Packer against GCP as well. If the authentication information is stored inside a named Docker container (and then accessed using the --volumes-from parameter), that information won’t be accessible to commands not running in a Docker container.

The fix for this is to bind mount a host path into the container instead of using a named volume. First, create the ~/.config/gcloud directory on your system. Then you’ll initialize and authenticate with this command:

docker run --rm -ti -v $HOME/.config/gcloud:/root/.config/gcloud \
google/cloud-sdk gcloud init

This will take you through the initialization/authentication process and will store the authentication information outside the container (so that tools like Packer can still access it). From there, just include the bind mount in future invocations of the Docker image. For example, to see a list of your GKE clusters:

docker run --rm -ti -v $HOME/.config/gcloud:/root/.config/gcloud \
google/cloud-sdk gcloud container clusters list

You could then make this easier for yourself with a Bash alias:

alias gcloud="docker run --rm -ti \
-v $HOME/.config/gcloud:/root/.config/gcloud \
google/cloud-sdk gcloud"
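One subtle point about the alias: because it’s defined with double quotes, $HOME is expanded when the alias is defined rather than when it runs (harmless here, but worth knowing). A shell function avoids that, and also makes it easy to mount the current working directory so gcloud can read and write local files. This is just a sketch; the /workspace mount point is an arbitrary choice of mine, not something the google/cloud-sdk image requires:

```shell
# Sketch: a shell function instead of an alias. In addition to the gcloud
# config directory, this mounts the current directory into the container
# (at /workspace, an arbitrary mount point) and makes that the working
# directory, so commands that read or write local files behave as expected.
gcloud() {
  docker run --rm -ti \
    -v "$HOME/.config/gcloud:/root/.config/gcloud" \
    -v "$(pwd):/workspace" \
    -w /workspace \
    google/cloud-sdk gcloud "$@"
}
```

Drop this in your ~/.bashrc (and remove the alias if you defined one, since an alias with the same name would take precedence over the function).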

Nothing terribly new or revolutionary here, but I hope it’s useful to someone nevertheless.

Technology Short Take 104

Welcome to Technology Short Take 104! For many of my readers, VMworld 2018 in Las Vegas has been “front and center” since the last Tech Short Take. Since I wasn’t attending the conference, I won’t try to aggregate information from the event; instead, I’ll focus on including some nuggets you may have missed amidst all the noise.


Servers/Hardware

  • Greg Schulz discusses new Power9-based systems announced by IBM; see his post. Normally I wouldn’t be too interested in non-x86 stuff, as it seems like x86 is ascendant. However, given the rise of all the various speculative execution attacks, and given the recent interest in ARM platforms (can’t recall if they are affected by the speculative execution attacks), is a revival of non-x86 platforms in the works?

Security

Nothing this time around, but I’ll stay alert for items to include next time!

Cloud Computing/Cloud Management

  • This is just awesome.
  • It’s fairly simplistic, but it could be useful nevertheless: the kube-resource-report project offers some insight into the costing behind Kubernetes clusters on AWS.
  • One of the interesting announcements to come out of the recent VMworld 2018 conference in Las Vegas is also described in this post on the AWS News Blog: Amazon RDS on VMware. I suppose it’s a win-win deal for both VMware and AWS (else why bother?), though the long-term implications of having AWS’ policy layer on top may come back to bite VMware. Time will tell.
  • I wouldn’t say nobody noticed, Maish. I called this out when I saw the announcement as well. Combine this with other recent moves (see previous bullet), and it sure seems that AWS is out for the “private cloud” use case (which isn’t really a private cloud, but that’s an entirely different discussion).

Operating Systems/Applications

  • I’ve made no secret of the fact that I’m a Terraform fan, and I recently uncovered a great series by Ned Bellavance called Function of the Day (FotD), in which he discusses a Terraform interpolation function each day. This is a really useful learning resource.
  • Want to become more of a CLI wizard? Start here.
  • If you’re still in need of an introductory article on building your own Docker image, this one may help you out.
  • Here’s a great tutorial (sent in by a reader, thanks Mike!) on building your own Bash completion script.
  • Based partly on this article, I’ve started using fzf and fd on my Fedora laptop. I must admit that these are both pretty cool tools—I’d recommend giving them a look if you spend a lot of time at a prompt. If you’re interested in possibly giving fzf a try, you might have a look here and here as well.
  • Microsoft is adding a third Windows container image, in addition to the existing Nano Server and Windows Server Core images. More details and an explanation of why an additional image is needed are in this blog post.

Storage

  • An 8TB NVMe drive? Holy moly! Chris Evans has more details.

Virtualization

  • Curtis (I don’t have his last name, as it’s not listed anywhere) has a write-up on using cloud images with KVM.
  • I said I wouldn’t try to aggregate the news from VMworld, but I couldn’t resist throwing in at least one “wrap up” from the show. Eric Siebert’s post-conference thoughts are here; he does a pretty good job of capturing the major announcements. I also saw this recap by Rachel Stevens; she’s pretty bullish about the RDS on VMware announcement (as a lot of folks are).

Career/Soft Skills

That’s all for today; it’s a bit shorter than usual but hopefully there are still some useful tidbits in there. Feel free to hit me up on Twitter with comments or feedback!

Kubernetes with Cilium and Containerd using Kubeadm

Now, if that isn’t a title jam-packed with buzzwords, I don’t know what is! In seriousness, though, I wanted to share how to use kubeadm to turn up a Kubernetes cluster using containerd (instead of Docker) and Cilium as the CNI plugin. I’m posting this because I wasn’t able to find a reasonable article that combined all the different threads—some posts talked about using containerd, others talked about using Cilium, and the official Kubernetes docs have examples for using kubeadm. The purpose of this post is to try to pull those threads together.

For structure and context, I’ll build upon the official Kubernetes document outlining creating highly available clusters with kubeadm. You may find it helpful to pull up that article next to this one, as I won’t be duplicating that content here. Instead, I’ll just reference additions/changes to the process in order to accommodate containerd and Cilium.

Before getting started, make sure that your systems will meet the minimum requirements for Cilium. For my testing, I used Ubuntu 16.04 with the latest HWE kernel (4.15.0-33-generic). I used a private fork of Wardroom to build the AWS AMIs with containerd and all the Kubernetes 1.11.2 packages installed, and wrote a custom Ansible script to update the AMIs with the latest HWE kernel. (Side note: I do hope to get my containerd-related changes into Wardroom at some point in the future, so others will be able to leverage Wardroom as well.) Obviously, you can choose to manually build your own images; see the “Additional Resources” section for a link on how to install containerd. You’ll want to do that before trying to turn up the Kubernetes cluster.

Ready? Let’s go! I’ll organize the content below according to the section headings from the official Kubernetes procedure for creating highly available clusters using kubeadm.

Bootstrap the First Stacked Control Plane Node

Up until this point, everything remains the same (no changes are needed to accommodate an alternate container runtime or a specific CNI plugin). Once you get to this point and you’re ready to bootstrap the first control plane node, some additional changes are needed. Specifically, you need to add the following content to the kubeadm configuration file:

  criSocket: /var/run/containerd/containerd.sock

This assumes, naturally, that you’re using the default location for the runtime endpoint. Modify as needed.
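For context, here’s a rough sketch of how that setting fits into a kubeadm configuration file. This assumes the v1alpha2 configuration API that shipped with kubeadm 1.11, where criSocket lives under nodeRegistration; your actual file will contain more than this, and newer kubeadm releases use a different apiVersion and kind:

```yaml
apiVersion: kubeadm.k8s.io/v1alpha2
kind: MasterConfiguration
kubernetesVersion: v1.11.2
nodeRegistration:
  criSocket: /var/run/containerd/containerd.sock
```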

I also had to do the following on my Ubuntu 16.04 nodes, but I take this as more of an issue with my images (you may not need these commands):

modprobe br_netfilter
sysctl -w net.ipv4.ip_forward=1
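If it turns out you do need these settings, you can make them persist across reboots instead of re-running the commands by hand. This is a sketch using the standard systemd-era configuration locations; the file names themselves are arbitrary:

```shell
# Load the br_netfilter module at boot (file name is arbitrary)
echo 'br_netfilter' | sudo tee /etc/modules-load.d/kubernetes.conf

# Enable IPv4 forwarding persistently, then re-apply all sysctl settings
echo 'net.ipv4.ip_forward = 1' | sudo tee /etc/sysctl.d/99-kubernetes.conf
sudo sysctl --system
```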

With this content added, you’re ready to run kubeadm init --config <config-file-name> to bootstrap the first master. kubectl get nodes will report the node as NotReady until you install the CNI plugin, but that’s OK; you can proceed with the next steps in the procedure.

Add the Second Stacked Control Plane Node

The only change here is to (again) add this content to the kubeadm config file:

  criSocket: /var/run/containerd/containerd.sock

You may also need to run the modprobe and sysctl commands outlined in the previous section. I suspect the need to run those commands reflects a problem in my custom AMIs.

Run the rest of the commands in this section as outlined, and your second control plane node should come online.

Add the Third Stacked Control Plane Node

The same change to the kubeadm configuration file is needed here, along with (maybe) the modprobe and sysctl commands.

Once this step is complete, you’ll have a three-node control plane up and running, although they’ll all report NotReady in kubectl get nodes until you install a networking plugin. Fortunately, you’ll tackle that next.

Install a Pod Network

Now you’ll start getting into some Cilium-specific stuff. For the most part, this is pretty straightforward. You can follow the instructions found here, with a few changes. I recommend downloading the Cilium-CRIO YAML file instead of the one linked in the instructions (you can find it here). Then follow the instructions for configuring the ConfigMap for etcd and creating the TLS secrets.

However, before proceeding with deploying Cilium, there are a few additional changes to make to the YAML file (these notes assume you are using v1.2 of the Cilium-CRIO YAML file):

  • On line 120, change --container-runtime=crio to --container-runtime=containerd.
  • Add a line immediately after line 120 with the contents --container-runtime-endpoint=containerd=/var/run/containerd/containerd.sock (follow the formatting/indentation of the previous line). Adjust the location of the runtime endpoint as needed.
  • Change line 221 from crio-socket to containerd-socket.
  • Change line 222 to /var/run/containerd/containerd.sock.
  • Change line 249 from crio-socket to containerd-socket.
  • Change line 251 to /var/run/containerd/containerd.sock.
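To illustrate, the edited sections should end up looking roughly like this. This is a sketch based on my reading of the v1.2 Cilium-CRIO manifest, so match your indentation to the surrounding lines rather than to this snippet:

```yaml
# In the cilium-agent container spec (around line 120):
args:
  - "--container-runtime=containerd"
  - "--container-runtime-endpoint=containerd=/var/run/containerd/containerd.sock"

# In the volume mount/volume definitions (around lines 221-222 and 249-251):
- name: containerd-socket
  hostPath:
    path: /var/run/containerd/containerd.sock
```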

Once you have these changes in place, you can proceed with the instructions in the “Deploying” section of the standard install guide.

Assuming that Cilium deploys properly, your control plane nodes should switch from NotReady to Ready in kubectl get nodes, and you should see some Cilium pods scheduled and running when you run kubectl -n kube-system get pods.

Install Workers

Once the control plane nodes are reporting Ready (and perhaps you’ve done some troubleshooting with Cilium to ensure that it’s working as expected), then you’re ready to add worker nodes.

When you run kubeadm init... on the first stacked control plane node, it will spit out a command line you can use to join worker nodes to the Kubernetes cluster. You can use this command, but you must add one small piece. At the end of the command, add --cri-socket /var/run/containerd/containerd.sock. This tells kubeadm that this node is using containerd as the container runtime, and will enable the join operation to succeed.
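As a sketch, the amended join command ends up looking something like this (the endpoint, token, and hash are placeholders; use the actual values emitted by your kubeadm init run):

```
kubeadm join <control-plane-endpoint>:6443 \
    --token <token> \
    --discovery-token-ca-cert-hash sha256:<hash> \
    --cri-socket /var/run/containerd/containerd.sock
```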

Additional Resources

The information in this post was based on a number of other articles along with my direct hands-on experience. Readers may want to review these other documents for additional information:

I hope this information is useful. If you have questions (or corrections!), please hit me up on Twitter. Thanks!

Book Review: REST API Design Rulebook

REST API Design Rulebook (written by Mark Masse and published by O’Reilly Media; more details here) is an older book, published in late 2011. However, having never attempted to design a REST API before, I found lots of useful information inside that really helped shape my understanding of REST APIs and REST API design.

(In case you’re wondering why I was reading a book about REST API design, this ties into my 2018 project list and the software development project I recently launched.)

Overall, I found the book quite helpful and useful. If I had one complaint about the book, it would be the book’s repeated insistence on referring to WRML (Web Resource Modeling Language), which—as I understand it—is a proposed solution by the book’s author to some of the challenges around REST API design. I get that the author is sold on the value of WRML, but at times the book felt very much like a WRML commercial.

Aside from that one complaint, the book’s organization into a set of “rules” helped make the material reasonably consumable, and I appreciated the review of key terms at the end of each chapter.

I do still have some questions about REST APIs and REST API design; I suppose that’s natural and expected for a newcomer like myself. In particular, the emphasis placed on HATEOAS (Hypermedia As The Engine of Application State) is still a bit unclear to me. Some things make sense—like including links to actions (provided by the API) that can be performed on an object—but there were times in the book when it seemed like HATEOAS was being overemphasized. However, I freely admit my perception could be due to my lack of practical experience in this area.

If you’re looking to learn more about REST API design, there are most certainly worse places to look for some useful and practical information.

Disclaimer: I am an O’Reilly author, but I did not receive any compensation from O’Reilly, any other publisher or reviewer, or the author of the book mentioned in this review. These thoughts are mine alone.

Better XMind-GNOME Integration

In December of 2017 I wrote about how to install XMind 8 on Fedora 27, and at the time of that writing I hadn’t quite figured out how to define a MIME type for XMind files that would allow users to double-click on an XMind file in Nautilus and open that file in XMind. After doing a bit of additional research and testing, I’ve found a solution and would like to share it here.

The solution I’ll describe here has been tested on Fedora 28, but it should work on just about any distribution with the GNOME desktop environment.

First, you’ll want to define the MIME type by creating an XML file in the ~/.local/share/mime/packages directory, as outlined here. I called my file application-vnd-xmind-workbook.xml, but I don’t know if the filename actually matters. (I derived the filename from this list of XMind file types.) The contents of the file should look something like this:

<mime-info xmlns="http://www.freedesktop.org/standards/shared-mime-info">
  <mime-type type="application/vnd.xmind.workbook">
    <comment>XMind Workbook</comment>
    <glob pattern="*.xmind"/>
    <glob pattern="*.XMIND"/>
    <glob pattern="*.XMind"/>
  </mime-type>
</mime-info>

You’ll note that multiple glob patterns are included to help deal with case sensitivity issues. The specific values used in the mime-type element are again taken from the XMind wiki.

Next, you’ll want to adjust the desktop launcher for XMind. Add the MimeType=application/vnd.xmind.workbook line to the desktop launcher file. I’m using a desktop launcher that is user-specific, and found in the ~/.local/share/applications directory. It should look something like this when you’re done:

[Desktop Entry]
Type=Application
Name=XMind
Comment=Create and share mind maps
Exec=/opt/xmind/XMind_amd64/XMind %F
Icon=/home/user/.local/share/icons/xmind.png
MimeType=application/vnd.xmind.workbook

You’ll note that this example points to an icon stored in a user-specific directory; be sure to adjust this as needed to point to the location of the icon you want to use for XMind.

It also appears that the Path= statement in the desktop file is important; it specifies the working directory in which to run the program (see this reference). I saw at least one reference (here’s one example) to needing to switch to the correct directory in order to get XMind to open files specified on the command line.

Finally, update the MIME and application databases, respectively:

update-mime-database ~/.local/share/mime/packages
update-desktop-database ~/.local/share/applications

You can run the gio commands outlined here (look toward the bottom of the page) to verify that the MIME type and application are correctly defined and linked, if you like.

For a custom icon (otherwise GNOME will use the generic text document icon for XMind files) the process is a bit trickier. I use the Numix icon theme, so I ran the following commands:

cd /usr/share/icons/Numix/48/mimetypes
sudo ln -s inode-symlink.svg application-vnd.xmind.workbook.svg
sudo gtk-update-icon-cache /usr/share/icons/Numix

After that, the icons for XMind files immediately changed. For readers using a different theme, you’ll need to substitute the correct paths and the correct filenames; the trick, of course, is to end up with a filename that matches the new MIME type defined earlier.

With these changes in place, you should now be able to double-click an XMind mind map and have it launch XMind (if not already running) and open the selected file (as one would expect). Enjoy!

Recent Posts

Populating New Namespaces Using Heptio Ark

Heptio Ark is a tool designed to backup and restore Kubernetes cluster resources and persistent volumes. As such, it enables users to do a bunch of very useful things like copy cluster resources across cloud providers or replicate environments for development, staging, testing, QA, etc. In this post, I’ll share a slightly different use case for Ark: populating resources into new Kubernetes namespaces.


A Simple Kubernetes Context Switcher

I recently needed to find a simple way of switching between Kubernetes contexts. I already use powerline-go (here’s the GitHub repo), which allows me to display the Kubernetes context in the prompt so I always know which context is the active (current) context. However, switching between contexts using kubectl config set-context <name> isn’t the easiest approach; not to mention it requires merging multiple config files into a single file (which is itself a bit of a task). So, I set out to create a simple Kubernetes context switcher—and here are the initial results of my efforts.


Bootstrapping an etcd Cluster with TLS using Kubeadm

The etcd distributed key-value store is an integral part of Kubernetes. I first wrote about etcd back in 2014 in this post, but haven’t really discussed it in any great detail since then. However, as part of my recent efforts to dive much deeper into Kubernetes, I needed to revisit etcd. In this post, I wanted to share how to bootstrap a new etcd cluster with TLS certificates using kubeadm.


Troubleshooting TLS Certificates

I was recently working on a blog post involving the use of TLS certificates for encryption and authentication, and was running into errors. I’d checked all the “usual suspects”—AWS security groups, host-level firewall rules (via iptables), and the application configuration itself—but still couldn’t get it to work. When I did finally find the error, I figured it was probably worth sharing the commands I used in the event others might find it helpful.


Technology Short Take 103

Welcome to Technology Short Take 103, where I’m back yet again with a collection of links and articles from around the World Wide Web (Ha! Bet you haven’t seen that term used in a while!) on various technology areas. Here’s hoping I’ve managed to include something useful to you!


VMworld 2018 Prayer Time

For the last several years, I’ve organized a brief morning prayer time at VMworld. This year, I won’t be at the conference, but I’d like to help coordinate a time for believers to meet nevertheless. So, if you’re a Christian interested in gathering together with other Christians for a brief time of prayer, here are the details.


Bolstering my Software Development Skills

I recently tweeted that I was about to undertake a new pet project where I was, in my words, “probably going to fall flat on my face”. Later, I asked on Twitter if I should share some of the learning that will occur (is occurring) as a result of this new project, and a number of folks indicated that I should. So, with that in mind, I’m announcing this project I’ve undertaken is a software development project aimed at helping me bolster my software development skills, and that I’ll be blogging about it along the way so that others can benefit from my mistakes…er, learning.


Cloning All Repositories in a GitHub Organization

I’ve recently started playing around with Ballerina, and upon the suggestion of some folks on Twitter wanted to clone down some of the “official” Ballerina GitHub repositories to provide code examples and guides that would assist in my learning. Upon attempting to do so, however, I found myself needing to clone down 39 different repositories (all under a single organization), and so I asked on Twitter if there was an easy way to do this. Here’s what I found.


Spousetivities at VMworld 2018

In case there was any question whether Spousetivities would be present at VMworld 2018, let this settle it for you: Spousetivities will be there! In fact, registration for Spousetivities at VMworld 2018 is already open. If previous years are any indication, there’s a really good possibility these activities will sell out. Better get your tickets sooner rather than later!


Additive Loops with Ansible and Jinja2

I don’t know if “additive” is the right word, but it was the best word I could come up with to describe the sort of configuration I recently needed to address in Ansible. In retrospect, the solution seems pretty straightforward, but I’ll include it here just in case it proves useful to someone else. If nothing else, it will at least show some interesting things that can be done with Ansible and Jinja2 templates.


Technology Short Take 102

Welcome to Technology Short Take 102! I normally try to get these things published biweekly (every other Friday), but this one has taken quite a bit longer to get published. It’s no one’s fault but my own! In any event, I hope that you’re able to find something useful among the links below.


More Handy CLI Tools for JSON

In late 2015 I wrote a post about a command-line tool named jq, which is used for parsing JSON data. Since that time I’ve referenced jq in a number of different blog posts (like this one). However, jq is not the only game in town for parsing JSON data at the command line. In this post, I’ll share a couple more handy CLI tools for working with JSON data.


A Quick Intro to the AWS CLI

This post provides a (very) basic introduction to the AWS CLI (command-line interface) tool. It’s not intended to be a deep dive, nor is it intended to serve as a comprehensive reference guide (the AWS CLI docs nicely fill that need). I also assume that you already have a basic understanding of the key AWS concepts and terminology, so I won’t bore you with defining an instance, VPC, subnet, or security group.


Examining X.509 Certificates Embedded in Kubeconfig Files

While exploring some of the intricacies around the use of X.509v3 certificates in Kubernetes, I found myself wanting to be able to view the details of a certificate embedded in a kubeconfig file. (See this page if you’re unfamiliar with what a kubeconfig file is.) In this post, I’ll share with you the commands I used to accomplish this task.


Using Variables in AWS Tags with Terraform

I’ve been working to deepen my Terraform skills recently, and one avenue I’ve been using to help in this area is expanding my use of Terraform modules. If you’re unfamiliar with the idea of Terraform modules, you can liken them to Ansible roles: a re-usable abstraction/function that is heavily parameterized and can be called/invoked as needed. Recently I wanted to add support for tagging AWS instances in a module I was building, and I found out that you can’t use variable interpolation in the normal way for AWS tags. Here’s a workaround I found in my research and testing.


Older Posts

Find more posts by browsing the post categories, content tags, or site archives pages. Thanks for visiting!