Scott's Weblog
The weblog of an IT pro focusing on cloud computing, Kubernetes, Linux, containers, and networking

Technology Short Take 114

Welcome to Technology Short Take #114! There will be a longer gap than usual before the next Tech Short Take (more details to come on Monday), but in the meantime here are some articles and links to feed your technical appetite. Enjoy!

Networking

Servers/Hardware

Security

Cloud Computing/Cloud Management

  • This article by Bob Killen provides a good foundation of information on understanding Kubernetes authentication (AuthN) and authorization (AuthZ; implemented via RBAC).
  • For folks who might be a bit newer to the Kubernetes community, this InfoQ article provides a fairly thorough introduction to Kubernetes concepts and building blocks. More advanced users won’t find much that’s useful, so if you’ve already mastered the basics I wouldn’t bother spending time on it. This isn’t a knock against the article, just a frank observation about its usefulness to readers at varying experience levels.
  • I included this link on scaling Kubernetes to 2,500 nodes in TST 94, but wanted to share it again here. If you are like me, your knowledge of Kubernetes has grown progressively since last February, and some of the recommendations made here may make more sense now in the context of the additional knowledge you’ve acquired.
  • Andy Chou introduces faast.js, a new project to make serverless functions super-easy to use.
  • Cody De Arkland spent some time with Pulumi and has an article on setting up a VPN between his home lab and AWS using Pulumi.
  • Speaking of Pulumi, Kyle Galbraith wrote up a comparison of Pulumi and Terraform for infrastructure as code.
  • Richard Bejarano rants a bit about container misconceptions.
  • The CNCF blog has a great article written by an Alibaba software engineer (Xingyu Chen) on some performance optimizations for etcd that have been contributed back to the open source community.

Operating Systems/Applications

  • This article has good information on safely using /tmp and /var/tmp on systemd-powered Linux distributions.
  • Cindy Sridharan has a good article on health checks and graceful degradation in distributed systems. This is a slightly older article (August 2018), but an informative one (to me, at least). The article explains a few orchestration concepts (like liveness probes and readiness probes in Kubernetes), but mostly focuses on the need for fine-grained health checks in applications.
  • Microsoft recently introduced some new remote development extensions for Visual Studio Code. These look interesting, but be aware that they are not licensed with an open source license. Additionally, the wording of the license for these extensions has caused some consternation (at least, among the folks I follow on Twitter). I’ll leave it to readers to come to their own conclusions.
  • Sinny Kumari shares some information on running Fedora CoreOS on libvirt.

Storage

Virtualization

  • I don’t normally post a lot of EUC (end-user computing) content, but this time around I have two articles that popped onto my radar. First, there’s Johan Van Amersfoort, who discusses building a Linux VDI solution for deep learning workloads (it looks like this may be the first in a series, so if this is of interest to you, be sure to keep an eye on his site). The second article is by Rob Beekmans, who examines several virtual desktop platforms available to customers/users.

Career/Soft Skills

OK, that’s all for now. As always, feel free to provide any feedback via Twitter. Have a great weekend!

The Linux Migration: Preparing for the Migration

As far back as 2012, I was already thinking about migrating away from Mac OS X (now known as macOS). While the migration didn’t start in earnest until late 2016, a fair amount of work happened in advance of the migration. Since I’ve had a number of folks ask me about migrating to Linux, I thought I’d supplement my Linux migration series with a “prequel” about some of the work that happened to prepare for the migration.

In the end—and I imagine some folks may get upset or offended at this—an operating system (OS) is really just a vehicle to deliver applications to the user. While users like myself have strong preferences about their OS and how their OS works, ultimately it is the ability to “get things done” that really matters. This is why I ended up suspending my Linux migration in August 2017; I didn’t have access to the applications I needed in order to do what I needed to do. (To be fair, part of that was a lack of growth on my part, but that’s a different blog post for a different day.)

To that end, most of the work I did in advance of the migration involved three key areas:

  • Understanding the tasks I needed to do and the workflows used to do them
  • Mapping my tasks to applications
  • Freeing data from proprietary formats

Let’s look at each of these individually. Keep in mind there’s a pretty fair amount of “fluidity” among these three areas; each area both informs and is informed by the others.

Understanding my Tasks and Workflows

In order to know if you’ll be able to do what you need to do using Linux (or Windows, or macOS; it applies equally regardless of your destination OS), you first need to fully understand what you need to do. I know this sounds simplistic—“How could I not know what I need to do?”—but it can be easy to overlook things when you’ve gotten into a comfortable workflow. On three separate occasions between 2012 (when I first started considering migrating away from macOS) and 2016 (when I actually kicked off a migration), I thoroughly examined the “things I do” (my tasks) as well as how I did those things (my workflows). These examinations exposed potential areas of concern, such as the use of OS-specific scripting functionality (AppleScript) or areas where I had not yet identified a replacement application (see the next section).

I also took this time to identify “critical” tasks—what were the things I absolutely needed to be able to do, and what were the things that would merely be nice to be able to do?

If you’re considering a migration and haven’t yet done something like this, I’d strongly urge you to run through this exercise first. When you do, you’ll also want to consider any tasks that are being done “automatically” for you. For example, when I was on macOS I used a tool called Hazel to automatically organize files in specific folders. This was a task I needed to do, but which was being done automatically. By making sure to include such tasks, I was able to have a clearer view of what life would be like after the migration.

Mapping Tasks to Applications

Once you have a good understanding of the things you need to do, it’s time to determine whether that functionality exists on your target platform (Linux, in my case). So, in addition to examining my tasks and workflows on multiple occasions, I also went through every single application installed on my systems to determine if a Linux equivalent or replacement existed. Early in the timeline (in 2013), I hadn’t yet identified replacement applications for many of my macOS applications. As time progressed and I spent more time researching, I identified more and more replacements (or moved away from certain applications entirely; see the next section). In some cases, I realized I didn’t use an application very much, and my task analysis (described in the previous section) indicated it wasn’t needed for any critical tasks, so I stopped using it altogether. In other cases, I migrated to software-as-a-service (SaaS) alternatives when an equivalent Linux application wasn’t available.

This step—and the next step regarding data formats—is likely going to be fairly time-consuming, and you’ll very likely need to iterate through the process a few times. Of course, it’s probable that a fair amount of this will be driven by your job and your employer, who will dictate certain requirements/standards. Be sure to keep that in mind.

Freeing Data from Proprietary Formats

Proprietary data formats were on my mind as far back as 2011, in advance of my first published mention of migrating away from macOS. It was around that time that I started strongly embracing Markdown and more platform-independent file formats. In looking back over it now, I realize that my embrace of Markdown led to other notable changes in my workflows and applications:

  • The blog migration from WordPress to Jekyll/GitHub Pages was driven largely by my desire to use Markdown for blogging (as well as a desire to more fully include git in my regular workflows). It also had the nice effect of freeing me from a macOS-specific application I’d been using to make blogging easier, and unlocking my blog content into a more portable format.
  • My use of Markdown for presentations (first via Deckset as described here, later via Remark.js as described here) helped lessen my reliance on proprietary presentation applications. I moved all my personal presentations to Markdown (well, variants of Markdown).

It wasn’t just about Markdown, though Markdown was helpful and played a large role. I had to identify every instance where data was being stored in some proprietary, non-portable format and use that information hand-in-hand with the application research to determine the best course of action. In some cases, there was no migration path. In other cases, I could use an intermediate format that served as a “bridge” between the old application and the new application. In yet other cases, I could export proprietary data formats to a standards-based format (often text-based, like OPML or XML) that I could open and manipulate with a text editor (which had now become perhaps the most important application I used). The ideal scenario was when the new application supported importing the old application’s data formats (LibreOffice’s support for working with Microsoft Office documents is a great example).
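To make the “bridge” idea concrete, here’s a minimal Node.js sketch of the sort of one-off conversion I’m describing: pulling the entries out of an OPML export and rewriting them as a Markdown list. The file names are hypothetical, and a real converter would preserve the outline’s nesting and decode XML entities, but it illustrates how a text-based format lets you move your data with just a few lines of code.

```javascript
// opml2md.js: a rough sketch, not a production converter.
// Assumes a flat OPML export named "outline.opml" in the current directory.
const fs = require("fs");

const opml = fs.readFileSync("outline.opml", "utf8");

// OPML stores the visible text of each node in the "text" attribute
// of an <outline> element; grab each one with a simple regex.
const items = [];
const re = /<outline[^>]*\btext="([^"]*)"/g;
let match;
while ((match = re.exec(opml)) !== null) {
  items.push(`- ${match[1]}`);
}

// Write the entries back out as a Markdown bullet list.
fs.writeFileSync("outline.md", items.join("\n") + "\n");
console.log(`Converted ${items.length} outline entries to Markdown.`);
```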

Those of you considering a migration to a new OS will need to undergo a similar process. I hope that those of you thinking about migrating to Linux find the information I’ve shared here and in the rest of the Linux migration series helpful and informative.

Wrapping Up

The gap of four years between when I first mentioned switching away from macOS and when I actually started a migration in earnest was due partly to a lack of urgency on my part, but there was stuff happening during that time. As I’ve described in the previous sections, I did spend quite a bit of time thoroughly exploring how I was spending my time, the workflows I was using, the applications I was running, and the data formats that were in use. Armed with that knowledge, I was then able to identify a path forward—identifying a replacement application and/or data format, migrating to a different application or data format, or completely retiring applications and data formats that were no longer essential for me. Could I have done this in less time? Oh, most certainly! My point, though, is that anyone considering switching to Linux—or to any alternate OS—is going to need to go through a similar set of exercises for their migration to be successful.

If you’re thinking of migrating to Linux, I’d love to hear from you and possibly help answer any questions you may have. Feel free to contact me on Twitter!

A Sandbox for Learning Pulumi

I recently started using Pulumi, a tool that lets you use a general-purpose programming language for infrastructure-as-code projects. I’ve been using Pulumi with JavaScript (I know, some folks would say I should question my life decisions), and while installing Pulumi itself is pretty low-impact (a small group of binaries), there are a number of dependencies that need to be installed when using Pulumi with JavaScript. As I’m a stickler for keeping my primary system very “clean” with regard to installed packages and software, I thought I’d create a way to easily spin up a “sandbox environment” for learning Pulumi.

When creating this sandbox environment, I turned to some tools that are very familiar:

  • I used virtualization (a virtual machine) as the isolation mechanism. The next step is to use a Linux container, like a Docker container, as the isolation mechanism instead, but I thought I’d start with something a bit simpler.
  • Vagrant provides a way of automating the creation/destruction of said VM. Again, Vagrant is well-understood and widely used.
  • Ansible provides the automation to configure the VM with the necessary software (Pulumi and associated dependencies).
  • I also thought that some folks might find it interesting or useful to be able to instantiate AWS instances from a preconfigured AMI, so I also included a Packer build file to enable folks to build their own Pulumi-ready AMI.

Since it seemed reasonable to think that others might find this useful as well, I placed all of this into my GitHub “learning-tools” repository; check the pulumi folder. Obviously, you’ll need a virtualization provider (VMware Fusion, VirtualBox, and Libvirt are all supported), Vagrant, and Ansible installed. If you want to build your own AMI, you’ll need Packer and the associated AWS tooling installed as well.

There’s nothing terribly new or novel here; it uses tools and techniques that are already pretty well-known and well-understood by most folks. My goal here wasn’t to create something entirely new, but rather to make it easier to learn Pulumi while minimizing the impact on my Linux system.

To use the Vagrant environment I created, simply copy the directory (or clone the entire repository) and then run vagrant up. After Vagrant is finished, run ansible-playbook configure.yml to configure the VM for Pulumi, and then you’re good to go.

In AWS, you could create a new AMI using Packer by running packer build packer.json, or create a new EC2 instance and run the configure.yml Ansible playbook against it.

To then make it easier to set up this Pulumi sandbox for various scenarios, I’ll be creating scenario-specific playbooks that you can apply using ansible-playbook. There’s only one there right now; it’s for creating a single EC2 instance using Pulumi and JavaScript. Since I’ve chosen to focus my efforts around JavaScript, future scenarios will also be JavaScript-centric, so keep that in mind.
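To give a sense of where that first scenario ends up, here’s a minimal Pulumi program in JavaScript that launches a single EC2 instance. This is a sketch along the lines of Pulumi’s own getting-started examples (the resource name and AMI filter are illustrative, not the exact contents of the scenario in the repository), and it assumes the @pulumi/pulumi and @pulumi/aws packages are installed and AWS credentials are configured.

```javascript
"use strict";
// index.js: an illustrative sketch, not the repository's exact code.
const aws = require("@pulumi/aws");

// Look up a recent Amazon Linux 2 AMI owned by Amazon.
const ami = aws.ec2.getAmi({
    mostRecent: true,
    owners: ["amazon"],
    filters: [{ name: "name", values: ["amzn2-ami-hvm-*-x86_64-gp2"] }],
});

// Launch a single t2.micro instance from that AMI.
const instance = new aws.ec2.Instance("pulumi-sandbox", {
    instanceType: "t2.micro",
    ami: ami.then(a => a.id),
});

// Export the instance's public IP as a stack output.
exports.publicIp = instance.publicIp;
```

Running pulumi up from the sandbox VM previews and then creates the instance; pulumi destroy tears it down again.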

(By the way, all this information is in the README file in the pulumi directory of my “learning-tools” repository.)

Additional Resources

For more background and context on Pulumi, you may want to check out episode 30 of the Full Stack Journey podcast, which features Luke Hoban, CTO of Pulumi, discussing what Pulumi is and why users should consider using it for their infrastructure-as-code projects. Also, Joe Beda did a TGIK episode on Pulumi; check it out on YouTube.

Technology Short Take 113

Welcome to Technology Short Take #113! I hope the collection of links and articles I’ve gathered contains something useful for you. I think I have a pretty balanced collection this time around; there’s a little bit of something for almost everyone. Who says you can’t please everyone all the time?

Networking

  • Via the Kubernetes blog, Box announced it has open sourced a project called kube-iptables-tailer, which turns packet drops from iptables into Kubernetes events that can be logged for easier troubleshooting. The GitHub repository for the project is here.
  • Via BlueCat Networks, John Capobianco shares his network automation journey. In part 1, John discusses the frameworks/tooling and the goals for his network automation efforts; in part 2, John digs into getting started with Ansible and the initial impact of his efforts.
  • Diógenes Rettori has a comparison of Istio and Linkerd as solutions for service mesh. Personally, I could’ve done without the little product advertisement at the end, but that’s just me.
  • Here’s a good article on packets-per-second limits in EC2.

Servers/Hardware

Security

Cloud Computing/Cloud Management

  • This looks like a handy tool. It was pointed out to me quite some time ago, but I was lax in getting it into a Technology Short Take.
  • This bare metal host management solution for Kubernetes looks interesting, but it seems like something that should be part of the community’s ClusterAPI efforts.
  • This past week Microsoft and VMware announced Azure VMware Solutions, which allows customers to run the VMware software stack on Azure (an arrangement similar in nature to VMware Cloud on AWS, as I understand it). One key difference to keep in mind is that this solution is delivered by Microsoft (not VMware); with VMware Cloud on AWS, the solution is delivered by VMware. Check out Thomas Maurer’s post on the announcement.
  • Henning Jacobs shares his perspective on many Kubernetes clusters vs. fewer Kubernetes clusters.
  • This article on edge triggering vs. level triggering in Kubernetes is really good, and well worth reading.

Operating Systems/Applications

  • This post is from 2012, but is—to me—still as applicable today as ever.
  • Nicolas Fränkel discusses some optimizations that might be possible by building dependencies into your Docker base layer. I could see this making sense in this case, since the dependencies are pretty stable.
  • William Henry takes some time to explain podman and buildah in terms that are familiar for existing Docker users.
  • I’ve said before that just because you can do something doesn’t mean you should do it. Nathaniel Schutta takes that approach to breaking your application down into microservices, and provides six factors to help gauge whether a microservices architecture is the right approach.
  • Rinu Gour wrote an article titled “Kafka for Beginners,” but the article covers a lot of ground (in my opinion)—perhaps a bit too much for true beginners to Kafka (like myself). Still, it’s a useful collection of terminology and Kafka-related links.
  • This article on building DevOps pipelines with open source tools was exactly what a CI/CD beginner like me needed. It’s high-level and not very detailed, but it outlines the basic components of a pipeline and provides options for each component. If you’re new to this space as well, you may also find this article helpful.
  • Lars Kellogg-Stedman has a post on writing Ansible filter plugins.

Storage

  • It’s the end of an era. This, plus the fact that Howard Marks is working for a vendor…what is the world coming to?
  • Jim Handy (“The SSD Guy”) has a four-part series (I think only three parts have been published as of this post) on the two different operating modes for Intel’s Optane DIMMs (persistent memory). Start with part 1, which is here. Thankfully, Jim has done a good job of linking together the different parts of the series.

Virtualization

  • Here’s a great post with more details on concerns over randomness in virtual machines.
  • Although written more from a security perspective (as in helping folks get started with security research), this article on Hyper-V provides some good foundational information that’s useful for just about anyone.

Career/Soft Skills

  • This slightly older post by Jessie Frazelle outlines what she envisions it means to be a “distinguished engineer” (or some other high-level technical individual contributor). There are some great points here that are, in my opinion, well worth considering.
  • Alice Goldfuss’ article on how to get into SRE is also worth reading, in my opinion, even if you aren’t interested in SRE specifically. Why? The guidance she offers is applicable in many cases to any technical specialty you might be interested in pursuing.

Hit me on Twitter with any comments or feedback. Otherwise, enjoy catching up on some technical reading!

Technology Short Take 112

Welcome to Technology Short Take #112! It’s been quite a while since the last one, as life and work have been keeping me busy. I have, however, finally managed to pull together this list of links and articles from around the Internet, and I hope that something I’ve included here proves useful to readers.

Networking

Servers/Hardware

Nothing this time around! I’ll stay alert for content I can include next time.

Security

  • Tim Hinrichs discusses securing the Kubernetes API with Open Policy Agent.
  • Pod Security Policies (PSPs) are an important security feature in Kubernetes. Sysdig explains PSPs, and talks about kube-psp-advisor, a tool to help simplify deploying PSPs.
  • ClusterScope is a handy tool for finding outdated images in your Kubernetes cluster.
  • This article discusses four open source secrets management tools.
  • Many organizations prefer to use two-factor authentication (2FA) to help protect their systems. While this article on how to configure 2FA for SSH on Fedora probably won’t work in many corporate environments (few use Fedora), it may provide enough information to figure out what it would look like in your environment.

Cloud Computing/Cloud Management

  • Bahubali (Bill) Shetti walks through analyzing the cost of a self-managed Kubernetes cluster on AWS using VMware CloudHealth.
  • Ahmet Alp Balkan does a deep dive on the KUBECONFIG file.
  • Lee Briggs writes about his experience with Fargate. I think the key takeaway here is that prior experience always affects our perceptions and how we go about learning new technologies/acquiring new skills. My prior experience with hypervisors (vSphere, then KVM) affected how I learned Docker and containers. Lee’s prior experience with Kubernetes affected how he learned Fargate. Someone who’d worked quite a bit with Fargate would probably have a hard time switching to Kubernetes. An individual’s learning curve is strongly dictated by previous experience and knowledge.
  • Ernese Norelus has an introductory piece on using Terraform and Ansible to enable repeatable infrastructure builds on AWS.
  • Fernand Galiana introduces Popeye, a tool for finding and identifying misconfigurations in your Kubernetes cluster. I haven’t had the chance to give it a try yet, but it looks pretty interesting.
  • Aeva talks a bit about what happened to OpenStack. Key excerpt (for me) from this article was this statement: “…creating a viable, open source, hyperscale cloud software solution was against the best interest of the companies most heavily investing in OpenStack’s development.”

Operating Systems/Applications

Storage

Nothing this time. Have something you think I should share here? Let me know on Twitter.

Virtualization

Career/Soft Skills

  • This blog post from XMind has some nice tips on staying focused in the workplace.
  • I really enjoyed this discussion on deep work and real-time collaboration. Cal Newport’s book is in the “To Read” pile on my desk; guess I need to hurry up and get to it!

That’s all for now—stay tuned for future Tech Short Takes, as I’m striving to be more regular with publishing them. In the meantime, feel free to contact me on Twitter with any comments, suggestions, corrections, or other feedback.

Recent Posts

Using Kubeadm to Add New Control Plane Nodes with AWS Integration

In my recent post on using kubeadm to set up a Kubernetes 1.13 cluster with AWS integration, I mentioned that I was still working out the details on enabling AWS integration (via the AWS cloud provider) while also using new functionality in kubeadm (specifically, the --experimental-control-plane flag) to make it easier to join new control plane nodes to the cluster. In this post, I’ll share with you what I’ve found to make this work.

Read more...

My Team's Blogs

I’m thankful to have the opportunity to work with an amazing team. Many of my teammates also produce some very useful content via their own sites, and so I thought it might be useful to my readers to share a list of links to my teammates’ blogs.

Read more...

Spousetivities at Oktane 2019

It should come as no surprise to anyone that I’m a huge supporter of Spousetivities, and not just because it was my wife, Crystal Lowe, who launched this movement. What started as the gathering of a few folks at VMworld 2008 has grown over the last 11 years, and this year marks the appearance of Spousetivities at an entirely new conference: Oktane 2019!

Read more...

Looking Ahead: My 2019 Projects

It’s been a little while now since I published my 2018 project report card, which assessed my progress against my 2018 project goals. I’ve been giving a fair amount of thought to the areas where I’d like to focus my professional (technical) development this coming year, and I think I’ve come up with some project goals that align both with where I am professionally right now and where I want to be technically as I grow and evolve. This is a really difficult balance to strike, and we’ll see at the end of the year how well I did.

Read more...

Split Tunneling with vpnc

vpnc is a fairly well-known VPN connectivity package available for most Linux distributions. Although the vpnc web site describes it as a client for the Cisco VPN Concentrator, it works with a wide variety of IPSec VPN solutions. I’m using it to connect to a Palo Alto Networks-based solution, for example. In this post, I’d like to share how to set up split tunneling for vpnc.

Read more...

Advanced AMI Filtering with JMESPath

I recently had a need to do some “advanced” filtering of AMIs returned by the AWS CLI. I’d already mastered the use of the --filters parameter, which let me greatly reduce the number of AMIs returned by aws ec2 describe-images. In many cases, using filters alone got me what I needed. In one case, however, I needed to be even more selective in returning results, and this led me to some (slightly more) complex JMESPath queries than I’d used before. I wanted to share them here for the benefit of my readers.

Read more...

Technology Short Take 111

Welcome to Technology Short Take #111! I’m a couple of weeks late on this one; I wanted to publish it earlier, but work has been keeping me busy (lots and lots of interest in Kubernetes and cloud-native technologies out there!). In any event, here you are—I hope you find something useful for you!

Read more...

Thoughts on VPNs for Road Warriors

A few days ago I was talking with a few folks on Twitter, and the topic of using VPNs while traveling came up. For those who travel regularly, using a VPN to bypass traffic restrictions is not uncommon. Prompted by my former manager Martin Casado, I thought I might share a few thoughts on VPN options for road warriors. This is by no means a comprehensive list, but hopefully something I share here will be helpful.

Read more...

Kubernetes, Kubeadm, and the AWS Cloud Provider

Over the last few weeks, I’ve noticed quite a few questions appearing in the Kubernetes Slack channels about how to use kubeadm to configure Kubernetes with the AWS cloud provider. You may recall that I wrote a post about setting up Kubernetes with the AWS cloud provider last September, and that post included a few snippets of YAML for kubeadm config files. Since I wrote that post, the kubeadm API has gone from v1alpha2 (Kubernetes 1.11) to v1alpha3 (Kubernetes 1.12) and now v1beta1 (Kubernetes 1.13). The changes in the kubeadm API result in changes in the configuration files, and so I wanted to write this post to explain how to use kubeadm 1.13 to set up a Kubernetes cluster with the AWS cloud provider.

Read more...

Scraping Envoy Metrics Using the Prometheus Operator

On a recent customer project, I recommended the use of Heptio Contour for ingress on their Kubernetes cluster. For this particular customer, Contour’s support of the IngressRoute CRD and the ability to delegate paths via IngressRoutes made a lot of sense. Of course, the customer wanted to be able to scrape metrics using Prometheus, which meant I not only needed to scrape metrics from Contour but also from Envoy (which provides the data plane for Contour). In this post, I’ll show you how to scrape metrics from Envoy using the Prometheus Operator.

Read more...

Technology Short Take 110

Welcome to Technology Short Take #110! Here’s a look at a few of the articles and posts that have caught my attention over the last few weeks. I hope something I’ve included here is useful for you also!

Read more...

Technology Short Take 109

Welcome to Technology Short Take #109! This is the first Technology Short Take of 2019. It may be confirmation bias, but I’ve noticed a number of sites adding “Short Take”-type posts to their content lineup. I’ll take that as flattery, even if it wasn’t necessarily intended that way. Enjoy!

Read more...

On Thinking About Infrastructure as Code

I just finished reading Cindy Sridharan’s excellent post titled “Effective Mental Models for Code and Systems,” and some of the points Sridharan makes immediately jumped out to me—not for “traditional” code development, but for the development of infrastructure as code. Take a few minutes to go read the post—seriously, it’s really good. Done reading it? Good, now we can proceed.

Read more...

The Linux Migration: December 2018 Progress Report

In December 2016, I kicked off a migration from macOS to Linux as my primary laptop OS. Throughout 2017, I chronicled my progress and challenges along the way; links to all those posts are found here. Although I stopped the migration in August 2017, I restarted it in April 2018 when I left VMware to join Heptio. In this post, I’d like to recap where things stand as of December 2018, after 8 months of full-time use of Linux as my primary laptop OS.

Read more...

Looking Back: 2018 Project Report Card

Over the last five years or so, I’ve shared with my readers an annual list of projects along with—at the year’s end—a “project report card” on how I fared against the projects I’d set for myself. (For example, here’s my project report card for 2017.) Following that same pattern, then, here is my project report card for 2018.

Read more...

Older Posts

Find more posts by browsing the post categories, content tags, or site archives pages. Thanks for visiting!