Scott's Weblog The weblog of an IT pro focusing on cloud computing, Kubernetes, Linux, containers, and networking

Technology Short Take 140

Welcome to Technology Short Take #140! It’s hard to believe it’s already the start of May 2021—my how time flies! In this Technology Short Take, I’ve gathered some links for you covering topics like Azure and AWS networking, moving from macOS to Linux (and back again), and more. Let’s jump right into the content!

Security

  • Peyton Smith and Mitchell Moser share seven common Microsoft Active Directory misconfigurations that adversaries tend to abuse.
  • Paulos Yibelo describes exploiting macOS with a text file.
  • The folks at Netskope have a pair of blog posts on GCP OAuth token hijacking in Google Cloud (part 1, part 2). These are older posts, from August 2020, and I honestly don’t know if the vulnerability still exists (or if it has been patched). If you’re a Google Cloud user, this may be worth a closer examination to make sure your accounts are safe.
  • Most of this was beyond my comprehension, but I found the tale fascinating to read nevertheless.

Cloud Computing/Cloud Management

  • Stefan B├╝ringer talks about optimizing Open Policy Agent (OPA)-based Kubernetes authorization. Note that this is a slightly older post (about 2 years old), so some of it may no longer apply to the latest versions of OPA and Gatekeeper.
  • This post by “xssfox” takes an interesting (to me) look at a security hole created through the use of an automated code pipeline deploying to a production website.
  • I’ve seen several pundits/experts note the transformational nature of AWS Lambda, and the impact it is having (and will have) on AWS and its offerings. The introduction of S3 Object Lambda is just the latest example, it seems.
  • Chris Evans examines the pricing of virtual instances compared to managed service offerings as he ponders how hyper-scalers like AWS, Azure, and Google are going about optimizing service density (i.e., maximizing revenue per hardware instance). It’s an interesting observation, for sure (at least, it’s interesting to me).
  • Marco Lancini discusses security logging in AWS environments.
  • Pulumi recently released version 3; get more details on the latest release in this blog post.

Operating Systems/Applications

  • Justin Garrison shares some thoughts on whiteboarding software (and hardware).
  • Here is a reminder why time synchronization remains important.
  • Carlos Fenollosa has a series of articles describing his attempt to move to Linux from macOS, and why he came back. Part 3 of the series, found here, describes some of the challenges with desktop Linux and why, in his words, “the grass is not greener on the other side.”
  • Paddy Kelly shows how to filter JSON data in Ansible using json_query.
  • Ivan Pepelnjak’s mention of Network to Code’s Schema Enforcer tool sent me down the rabbit hole of JSON Schema and validation. Don’t be surprised if you see a blog post on this topic pop up soon.
  • If you’re new to vim, this post may be helpful.

Virtualization

  • William Lam outlines some enhancements for USB NIC-only installations that appeared in ESXi 7.0 Update 2.

Career/Soft Skills

That’s all for now! I hope that I have shared something useful with you. If you have feedback, or if you just want to say hi, feel free to hit me on Twitter, or find me on one of the various Slack communities I frequent. Have a great weekend!

Making Firefox on Linux use Private Browsing by Default

While there are a couple different methods to make Firefox use private browsing by default (see this page for a couple methods), these methods essentially force private browsing and disable the ability to use “regular” (non-private) browsing. In this post, I’ll describe what I consider to be a better way of achieving this, at least on Linux.

It’s possible this method will also work on Windows, but I haven’t tested it. If anyone gets a chance to test it and let me know, I’ll update this post and credit you accordingly. Just hit me on Twitter and let me know what you’ve found in your testing. I’ve also only tested this on Fedora, but it should be the same or very similar for any distribution that uses GNOME.

GNOME uses the idea of “desktop files” (typically found in /usr/share/applications or ~/.local/share/applications) to enable the launching of applications via the Activities screen or other mechanisms. (For more information on desktop files, see here.) These desktop files specify where the executable is found, what command-line parameters to use, what icon to use, what name the application should go by, etc. Desktop files also allow application developers or users to define additional actions, such as opening a new window.

Firefox’s desktop file is (at least on Fedora) found at /usr/share/applications/firefox.desktop. In that file, the Exec line in the [Desktop Entry] section instructs how to launch Firefox. Farther down, several actions are defined, one of which is opening a new private window. Each of these actions also has an Exec line. Looking at the Exec line for opening a private window versus the Exec line for opening a new window, you’ll note that Firefox uses a --private-window parameter to control this behavior.
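For reference, the relevant portions of the desktop file look roughly like this (an abbreviated sketch; the exact contents vary by distribution and Firefox version, so check your own file):

```ini
[Desktop Entry]
Name=Firefox
Exec=firefox %u
Terminal=false
Type=Application
Actions=new-window;new-private-window;

[Desktop Action new-window]
Name=Open a New Window
Exec=firefox --new-window %u

[Desktop Action new-private-window]
Name=Open a New Private Window
Exec=firefox --private-window %u
```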

The trick here is to add --private-window to the Exec line in the [Desktop Entry] section of the desktop file, so that it looks like the Exec line in the section for opening a new private window. When you do this, launching Firefox will still open a “regular” browser window, but clicking on a link in any other application—e-mail, editor, terminal, whatever—will automatically open a new private browsing window. If a private browsing window is already open, it will open a new tab in that window.

So, to summarize:

  1. Change the /usr/share/applications/firefox.desktop file to add --private-window to the command specified on the Exec line of the [Desktop Entry] section.
  2. Firefox will still open a regular browser window when it is launched.
  3. Links outside of Firefox will open a new private browsing window (or a new tab in an existing private browsing window).

The advantage of this approach versus some of the others is that you still have access to regular browser windows if/when they are needed. This configuration doesn’t force private browsing all the time; rather, it just makes private browsing the default when opening links outside of Firefox. To me, that’s a much more user-friendly experience than forcing private browsing for all sites.

One caveat to this approach is that your changes to the Firefox desktop file get overwritten any time dnf update installs an update for Firefox. I’m sure there’s probably a workaround for this, but I haven’t found it yet.
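One likely workaround, which I'll flag as an assumption since I haven't verified it myself: GNOME gives per-user desktop files in ~/.local/share/applications precedence over the system-wide copies, and dnf never touches the per-user directory.

```shell
# Copy the system desktop file to the per-user location (the guard is there
# because the source file only exists on systems with Firefox installed),
# then make the --private-window edit in the copy; package updates should
# leave the per-user copy alone.
mkdir -p ~/.local/share/applications
if [ -f /usr/share/applications/firefox.desktop ]; then
  cp /usr/share/applications/firefox.desktop ~/.local/share/applications/
fi
```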

(By the way, the reason I say this might work on Windows is because command-line parameters are exposed on Windows as well as on Linux through the use of shortcuts on the Start Menu. macOS does expose command-line parameters to a limited extent, but this functionality doesn’t appear usable in any practical way.)

I hope this information is helpful to someone. Feel free to contact me on Twitter if you have any feedback, corrections, or suggestions for improvement.

Technology Short Take 139

Welcome to Technology Short Take #139! This Technology Short Take is a bit heavy on cloud, OS, and programming topics, but there should be enough other interesting links to be useful to plenty of folks. (At least, I hope that’s the case!) Now, let’s get on to the content!


Networking

  • Tony Mackay has a tutorial showing how to use Traefik to rate-limit requests to a WordPress instance.
  • Ali Al Idrees has a post on using NSX ALB (formerly Avi Networks) with Kubernetes clusters in a vSphere with Tanzu environment.
  • This post provides some examples of shared control planes (and thus shared failure domains) within networking.
  • In this post, Jakub Sitnicki digs way deep into the Linux kernel to uncover the answer to the question, “Why are there no entries in the conntrack table for SYN packets dropped by the firewall?” Get ready to get nerdy!
  • This article on eBPF and Isovalent (the company behind the Cilium CNI plugin for Kubernetes) has some statements with which I agree, and some that don’t make sense to me. For example, I agree with the statement that the “impact eBPF will have on networking, security and observability will be widespread”. However, I don’t understand how eBPF will “reduce reliance on legacy network overlays”. I could see how eBPF will change how network overlays are implemented, sure, but reduce the reliance on network overlays? I’m not sure about that. If you have strong feelings about this, hit me on Twitter and let’s discuss.


Security

  • Linux malware is getting more sophisticated.
  • A browser-based side-channel attack? Even worse, this isn’t just limited to Intel chips, but may also affect ARM-based systems like Apple’s M1 CPUs. Further, turning off JavaScript doesn’t help. Ugh.
  • Given the prevalence of VMware’s ESXi hypervisor, I suppose it was only a matter of time before the bad guys really started targeting it in a major way. This time, they’re exploiting a weakness that VMware can’t patch: people.
  • A while ago I chatted with the folks at Indeni about Cloudrail, a security solution for infrastructure-as-code environments.

Cloud Computing/Cloud Management

Operating Systems/Applications


Storage

  • This post from Enterprise Storage Forum attempts to provide a comparison of cloud storage between AWS and Google Cloud. Frankly, though, I found the article to be a bit unfocused; it also discusses other cloud services instead of really concentrating on comparing cloud storage offerings. Maybe that’s just me, though.


Virtualization

  • Mike Foley shares details on a new feature in vSphere 7 Update 2 that leverages AMD-specific functionality to create what are called “Confidential Containers.”

Happy reading and learning! If you have any questions, comments, suggestions for improvement, or other feedback, I’m always happy to hear from you. Contact me on Twitter and let’s chat!

Using WireGuard on macOS

A short while ago I published a post on setting up WireGuard for AWS VPC access. In that post, I focused on the use of Linux on both the server side (on an EC2 instance in your AWS VPC) as well as on the client side (using the GNOME Network Manager interface). However, WireGuard is not limited to Linux, and I recently configured one of my macOS systems to take advantage of this WireGuard infrastructure for access to the private subnets in my AWS VPC. In this post, I’ll walk readers through configuring macOS to use WireGuard.

The first step is installing WireGuard on macOS. This is remarkably easy; just go to the App Store and install the WireGuard app for macOS. (Hopefully this link will take you directly there if you’re on a macOS-based system.)

Once the app is installed, the next step is to configure the WireGuard tunnels. I found this to be a bit confusing at first, but only because I wasn’t clear on the relationship between the WireGuard app and the Network pane in System Preferences. In this case, you need to use the WireGuard app to create the tunnels, which will show up as connections (interfaces) in the Network pane of System Preferences.

Running the WireGuard app will put an icon on your menu bar, and should bring up the “Manage WireGuard Tunnels” window. If not, select “Manage Tunnels” from the WireGuard menu icon; this will open the “Manage WireGuard Tunnels” window. From there, you can select the small plus in the lower left corner and select “Add Empty Tunnel…” from the menu. This displays a dialog box something like this:

Add Empty Tunnel dialog box

This will automatically create a new set of public and private keys, and auto-populate the start of a new WireGuard interface configuration. (Don’t worry; these keys aren’t valid for any interfaces/connections.) It’s up to you to finish the configuration by adding directives such as Endpoint, AllowedIPs, and Address. It’s an interesting sort of mismatch to have such a well-designed graphical application, but not provide any guidance or structure to the user on how to configure the interface/tunnel.

Ultimately, you’d need the configuration to look something like this:

[Interface]
PrivateKey = <some private key here>
Address = <IP address of WireGuard interface>

[Peer]
PublicKey = <public key of WireGuard peer interface>
AllowedIPs = <IP addresses or CIDR ranges to be routed across the tunnel>
Endpoint = <peer endpoint IP and port>

If you’re behind a NAT, you may also want to add the PersistentKeepalive = 25 value to the configuration as well (see the “NAT and Firewall Traversal Persistence” section of this page). Next you’ll need to configure the peer with the appropriate configuration. If the peer is Linux-based, you can use the information in my earlier blog post; if the peer is macOS, then use the instructions in this post. For anything else, refer to the WireGuard web site.
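As a concrete sketch, a client-side configuration for reaching a private VPC subnet might look like the following (every value here is made up for illustration; the key placeholders would be the base64-encoded keys generated by the WireGuard app and its peer):

```ini
[Interface]
PrivateKey = <base64 private key generated by the WireGuard app>
Address = 10.100.0.2/32

[Peer]
PublicKey = <base64 public key of the EC2 instance's WireGuard interface>
AllowedIPs = 10.0.0.0/16
Endpoint = 203.0.113.10:51820
PersistentKeepalive = 25
```

With AllowedIPs set to the VPC CIDR, only traffic destined for the private subnets is routed across the tunnel; everything else continues to use the normal default route.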

Once both sides of the connection are configured, then you should be able to activate the tunnel and start passing traffic. If traffic won’t pass successfully, then check the interface configuration on both sides, and make sure any firewalls along the path allow the traffic. The WireGuard connection will look like this in the Network pane of the System Preferences app (IP addresses have been blacked out):

Network pane

Repeat this process to add more tunnels/connections; each of them will show up as a menu item in the WireGuard menu icon, and you can select them to activate/deactivate the connection. Also note that selecting the “On-Demand” option when creating the tunnel will let WireGuard automatically establish the tunnel when you start passing traffic (assuming both ends are configured).

I hope this information helps. I found the interface to be a bit unintuitive, but after working with it for a little while it doesn’t feel so awkward now. Hopefully this walkthrough will make getting WireGuard set up and configured on macOS a bit easier for others. Thanks for reading, and hit me on Twitter if you have any questions, comments, or other feedback!

Adding a MachineHealthCheck using Kustomize

MachineHealthChecks are a powerful feature in the Kubernetes Cluster API (CAPI), and something I played around with not too long ago on TGIK 143. Recently, I was helping to document the use of kustomize with Cluster API for inclusion in the upstream CAPI documentation, and I learned a simple trick with kustomize that I’d apparently overlooked in the past. If you’ve used kustomize for any great length of time, you probably already know and have used the functionality I’ll describe in this post; but if you’re new to kustomize or, like me, a user of kustomize who hasn’t had time to dig into all of its functionality, then read on and see how you can use kustomize to add a MachineHealthCheck to a CAPI workload cluster.

If you’re not familiar with kustomize, then reading my introduction to kustomize may be useful before continuing on with the rest of this article.

In this use case—adding a MachineHealthCheck to a workload cluster in CAPI—I’ll work from the assumption that you have a “base” CAPI workload cluster definition (perhaps one you’ve generated using clusterctl config cluster). In the directory where this workload cluster manifest exists, you’d need to add a kustomization.yaml to specify resources. It would look something like this:

apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - base.yaml

Now, let’s say you want to add a MachineHealthCheck for this workload cluster. You’d create a kustomize overlay directory, and in that overlay directory you’d place (at least) two files:

  1. Another kustomization.yaml file (more on that in a moment)
  2. A YAML manifest for a MachineHealthCheck

(I say “at least” two files because you could also place other patches or other resources in the directory as well.)

The YAML manifest for the MachineHealthCheck is straightforward; I’ll only point out that you should make sure to specify the correct cluster name and machine deployment name, taking into account any “namePrefix” or “nameSuffix” directives you may be using.
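As an illustrative sketch only (the workload names, the 40% threshold, and the timeouts are my own choices, and the API version shown is the v1alpha3 version current around the CAPI v0.3 releases—check your CAPI release for the correct one), the manifest might look like:

```yaml
apiVersion: cluster.x-k8s.io/v1alpha3
kind: MachineHealthCheck
metadata:
  name: workload-mhc
spec:
  clusterName: workload        # must match the cluster name, including any namePrefix/nameSuffix
  maxUnhealthy: 40%
  selector:
    matchLabels:
      cluster.x-k8s.io/deployment-name: workload-md-0   # must match the MachineDeployment name
  unhealthyConditions:
    - type: Ready
      status: Unknown
      timeout: 300s
    - type: Ready
      status: "False"
      timeout: 300s
```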

The kustomization.yaml would look something like this:

apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - ../../base
  - workload-mhc.yaml

Now, you may also include various other directives, but the key here is in the “resources” section. It does, of course, specify the base configuration, but it also lists the MachineHealthCheck manifest that resides in this overlay directory. When you run kustomize build ., kustomize will combine the specified resources together. In this case, that means it will combine the base workload cluster manifest and the MachineHealthCheck manifest, and the end result—when you feed this to kubectl apply—will be a new workload cluster and a MachineHealthCheck to go along with it.
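To tie the pieces together, the directory layout all of this implies can be sketched like so (the capi-cluster, base, and overlays/mhc names are my own illustration, not from any official layout):

```shell
# base/ holds the generated workload cluster manifest plus its kustomization.yaml;
# overlays/mhc/ holds the overlay kustomization.yaml and the MHC manifest.
mkdir -p capi-cluster/base capi-cluster/overlays/mhc

cat > capi-cluster/base/kustomization.yaml <<'EOF'
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - base.yaml
EOF

cat > capi-cluster/overlays/mhc/kustomization.yaml <<'EOF'
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - ../../base
  - workload-mhc.yaml
EOF

# Build and apply (requires kustomize and kubectl):
# kustomize build capi-cluster/overlays/mhc | kubectl apply -f -
```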

The functionality of combining resources in an overlay is a core part of the functionality of kustomize, but for some reason I hadn’t leveraged it yet. Kudos to the Cluster API Provider for Azure (CAPZ) team for illustrating this use case in the creation of workload cluster template “flavors.” Now that I know it’s there, I can begin to see other potential use cases, such as adding extra MachineDeployments to a base workload cluster configuration.

I hope this information is useful. As I said, if you’re a long-time kustomize user, this is probably not news to you, but for others who are still exploring all the various pieces of functionality that kustomize offers I hope this opens up some new possibilities for you. I welcome all constructive feedback; feel free to reach out to me on the Kubernetes Slack instance or contact me on Twitter.

Recent Posts

Technology Short Take 138

Welcome to Technology Short Take #138. I have what I hope is an interesting and useful set of links to share with everyone this time around. I didn’t do so well on storage links; apologies to my storage-focused friends! However, there should be something for most everyone else. Enjoy!


Deploying a CNI Automatically with a ClusterResourceSet

Not too long ago I hosted an episode of TGIK8s, where I explored some features of Cluster API. One of the features I explored on the show was ClusterResourceSet, an experimental feature that allows users to automatically install additional components onto workload clusters when the workload clusters are provisioned. In this post, I’ll show how to deploy a CNI plugin automatically using a ClusterResourceSet.


Setting up WireGuard for AWS VPC Access

Seeking more streamlined access to AWS EC2 instances on private subnets, I recently implemented WireGuard for VPN access. WireGuard, if you’re not familiar, is a relatively new solution that is baked into recent Linux kernels. (There is also support for other OSes.) In this post, I’ll share what I learned in setting up WireGuard for VPN access to my AWS environments.


Closing out the Tokyo Assignment

In late 2019, I announced that I would be temporarily relocating to Tokyo for a six-month assignment to build out a team focused on cloud-native services and offerings. A few months later, I was still in Colorado, and I explained what was happening in a status update on the Tokyo assignment. I’ve had a few folks ask me about it, so I thought I’d go ahead and share that the Tokyo assignment did not happen and will not happen.


Technology Short Take 137

Welcome to Technology Short Take #137! I’ve got a wide range of topics for you this time around—eBPF, Falco, Snort, Kyverno, etcd, VMware Code Stream, and more. Hopefully one of these links will prove useful to you. Enjoy!


Technology Short Take 136

Welcome to Technology Short Take #136, the first Short Take of 2021! The content this time around seems to be a bit more security-focused, but I’ve still managed to include a few links in other areas. Here’s hoping you find something useful!


Using Velero to Protect Cluster API

Cluster API (also known as CAPI) is, as you may already know, an effort within the upstream Kubernetes community to apply Kubernetes-style APIs to cluster lifecycle management—in short, to use Kubernetes to manage the lifecycle of Kubernetes clusters. If you’re unfamiliar with CAPI, I’d encourage you to check out my introduction to Cluster API before proceeding. In this post, I’m going to show you how to use Velero (formerly Heptio Ark) to backup and restore Cluster API objects so as to protect your organization against an unrecoverable issue on your Cluster API management cluster.


Details on the New Desk Layout

Over the holiday break I made some time to work on my desk layout, something I’d been wanting to do for quite a while. I’d been wanting to “up my game,” so to speak, with regard to producing more content, including some video content. Inspired by—and heavily borrowing from—this YouTube video, I decided I wanted to create a similar arrangement for my desk. In this post, I’ll share more details on my setup.


Technology Short Take 135

Welcome to Technology Short Take #135! This will likely be the last Technology Short Take of 2020, so it’s a tad longer than usual. Sorry about that! You know me—I just want to make sure everyone has plenty of technical content to read during the holidays. And speaking of holidays…whatever holidays you do (or don’t) celebrate, I hope that the rest of the year is a good one for you. Now, on to the content!


Bootstrapping a Cluster API Management Cluster

Cluster API is, if you’re not already familiar, an effort to bring declarative Kubernetes-style APIs to Kubernetes cluster lifecycle management. (I encourage you to check out my introduction to Cluster API post if you’re new to Cluster API.) Given that it is using Kubernetes-style APIs to manage Kubernetes clusters, there must be a management cluster with the Cluster API components installed. But how does one establish that management cluster? This is a question I’ve seen pop up several times in the Kubernetes Slack community. In this post, I’ll walk you through one way of bootstrapping a Cluster API management cluster.


Some Site Updates

For the last three years, the site has been largely unchanged with regard to the structure and overall function even while I continue to work to provide quality technical content. However, time was beginning to take its toll, and some “under the hood” work was needed. Over the Thanksgiving holiday, I spent some time updating the site, and there are a few changes I wanted to mention.


Assigning Node Labels During Kubernetes Cluster Bootstrapping

Given that Kubernetes is a primary focus of my day-to-day work, I spend a fair amount of time in the Kubernetes Slack community, trying to answer questions from users and generally be helpful. Recently, someone asked about assigning node labels while bootstrapping a cluster with kubeadm. I answered the question, but afterward started thinking that it might be a good idea to also share that same information via a blog post—my thinking being that others who also had the same question aren’t likely to be able to find my answer on Slack, but would be more likely to find a published blog post. So, in this post, I’ll show how to assign node labels while bootstrapping a Kubernetes cluster.


Pausing Cluster API Reconciliation

Cluster API is a topic I’ve discussed here in a number of posts. If you’re not already familiar with Cluster API (also known as CAPI), I’d encourage you to check out my introductory post on Cluster API first; you can also visit the official Cluster API site for more details. In this short post, I’m going to show you how to pause the reconciliation of Cluster API cluster objects, a task that may be necessary for a variety of reasons (including backing up the Cluster API objects in your management cluster).


Technology Short Take 134

Welcome to Technology Short Take #134! I’m publishing a bit early this time due to the Thanksgiving holiday in the US. So, for all my US readers, here’s some content to peruse while enjoying some turkey (or whatever you’re having this year). For my international readers, here’s some content to peruse while enjoying dramatically lower volumes of e-mail because the US is on holiday. See, something for everyone!


Review: CPLAY2air Wireless CarPlay Adapter

In late September, I was given a CPLAY2air wireless CarPlay adapter as a gift. Neither of my vehicles support wireless CarPlay, and so I was looking forward to using the CPLAY2air device to enable the use of CarPlay without having to have my phone plugged into a cable. Here’s my feedback on the CPLAY2air device after about six weeks of use.


Older Posts

Find more posts by browsing the post categories, content tags, or site archives pages. Thanks for visiting!