Scott's Weblog
The weblog of an IT pro focusing on cloud computing, Kubernetes, Linux, containers, and networking

Using the External AWS Cloud Provider for Kubernetes

In 2018, after finding a dearth of information on setting up Kubernetes with AWS integration/support, I set out to try to establish some level of documentation on this topic. That effort resulted in a few different blog posts, but ultimately culminated in this post on setting up an AWS-integrated Kubernetes cluster using kubeadm. Although originally written for Kubernetes 1.15, the process described in that post is still accurate for newer versions of Kubernetes. With the release of Kubernetes 1.22, though, the in-tree AWS cloud provider—which is what is used/described in the post linked above—has been deprecated in favor of the external cloud provider. In this post, I’ll show how to set up an AWS-integrated Kubernetes cluster using the external AWS cloud provider.

In addition to the post I linked above, there were a number of other articles I published on this topic:

Most of the information in these posts, if not all of it, is found in the latest iteration, but I wanted to include these links here for some additional context. Also, all of these focus on the now-deprecated in-tree AWS cloud provider.

Although all of these prior posts focus on the in-tree provider, they are helpful because many of the same prerequisites/requirements for the in-tree provider are still—as far as I know—applicable for the external AWS cloud provider:

  1. The hostname of each node must match the EC2 Private DNS entry for the instance (by default, this is something like ip-10-11-12-13.us-west-2.compute.internal or similar). Note that I haven’t explicitly tested/verified this requirement in a while, so it’s possible that this has changed. As soon as I am able, I’ll conduct some additional testing and update this post.
  2. Each node needs to have an IAM instance profile that grants it access to an IAM role and policy with permissions to the AWS API.
  3. Specific resources used by the cluster must have certain AWS tags assigned to them. As with the hostname requirement, this is an area where I haven’t done extensive testing of the external cloud provider against the in-tree provider.
  4. Specific entries are needed in the kubeadm configuration file used to bootstrap the cluster, join control plane nodes, and join worker nodes.

The following sections describe each of these four areas in a bit more detail.

Setting Node Hostnames

Based on my testing—see my disclaimer in #1 above—the hostname for the OS needs to match the EC2 Private DNS entry for that particular instance. By default, this is typically something like ip-10-11-12-13.us-west-2.compute.internal (change the numbers and the region to reflect the private IP address and region of the instance, and be aware that the us-east-1 AWS region uses the ec2.internal DNS suffix). The fastest/easiest way I’ve found to make sure this is the case is with this command:

sudo hostnamectl set-hostname \
$(curl -s http://169.254.169.254/latest/meta-data/local-hostname)

Be sure to set the hostname before starting the bootstrapping process. I’ve seen references to putting this command in the user data for the instance so that it runs automatically, but I have not specifically tested this approach.
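
If you do want to experiment with the user data route, a minimal sketch might look something like the script below. This is untested on my part, and note that user data scripts already run as root, so sudo isn’t needed:

#!/bin/bash
# Set the hostname to the EC2 Private DNS name at boot
hostnamectl set-hostname \
"$(curl -s http://169.254.169.254/latest/meta-data/local-hostname)"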

Creating and Assigning the IAM Instance Profile

The nodes in the cluster need access to the AWS APIs in order for the AWS cloud provider to function properly. The “Prerequisites” page on the Kubernetes AWS Cloud Provider site has a sample policy for both control plane nodes and worker nodes. Consider these sample policies to be starting points; test and modify them to make sure they work for your specific implementation. Once you’ve created IAM instance profiles that reference the appropriate roles and policies, be sure to specify the IAM instance profile when launching your instances. All the major IaC tools (including both Pulumi and Terraform) have support for specifying the IAM instance profile in code.
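
If you’re launching instances by hand rather than through an IaC tool, a rough sketch of the AWS CLI flow might look like this (the profile, role, and AMI names here are hypothetical, and the role and its policy must already exist):

# Create an instance profile and attach an existing role to it
aws iam create-instance-profile --instance-profile-name k8s-node-profile
aws iam add-role-to-instance-profile --instance-profile-name k8s-node-profile \
--role-name k8s-node-role

# Reference the instance profile when launching the instance
aws ec2 run-instances --image-id ami-0123456789abcdef0 --instance-type t3.large \
--iam-instance-profile Name=k8s-node-profile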

Tagging Cluster Resources

While the documentation for the cloud provider is improving, this is one area that could still use some additional work. The “Getting Started” page on the Kubernetes AWS Cloud Provider site only says this about tags:

Add the tag kubernetes.io/cluster/<your_cluster_id>: owned (if resources are owned and managed by the cluster) or kubernetes.io/cluster/<your_cluster_id>: shared (if resources are shared between clusters, and should not be destroyed if the cluster is destroyed) to your instances.

Based on my knowledge of the in-tree provider and the testing I’ve done with the external provider, this is correct. However, additional tags are typically needed (see the AWS CLI example after this list):

  • Public (Internet-facing) subnets need a kubernetes.io/role/elb: 1 tag, while private subnets need a kubernetes.io/role/internal-elb: 1 tag.
  • All subnets need the kubernetes.io/cluster/<your_cluster_id>: owned|shared tag.
  • If the cloud controller manager isn’t started with --configure-cloud-routes=false, then the route tables also need to be tagged like the subnets.
  • At least one security group—one which the nodes should be a member of—needs the kubernetes.io/cluster/<your_cluster_id>: owned|shared tag.
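
As an example, tagging a public subnet, a route table, and a security group for a cluster named foo might look something like this with the AWS CLI (the resource IDs are hypothetical):

# Tag a public subnet with the cluster tag and the ELB role tag
aws ec2 create-tags --resources subnet-0123456789abcdef0 \
--tags Key=kubernetes.io/cluster/foo,Value=owned Key=kubernetes.io/role/elb,Value=1

# Tag the route table and node security group with the cluster tag
aws ec2 create-tags --resources rtb-0123456789abcdef0 sg-0123456789abcdef0 \
--tags Key=kubernetes.io/cluster/foo,Value=owned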

Failure to have things properly tagged results in odd failure modes: for example, an ELB gets created automatically in response to a Service object of type LoadBalancer, but instances are never registered with the ELB. Another failure I’ve seen is the kubelet failing to start if the nodes aren’t properly tagged. Unfortunately, the failure modes of the external cloud provider aren’t any better documented than those of the in-tree provider, which can make troubleshooting a bit challenging.

Using Kubeadm Configuration Files

The final piece is adding the correct values to your kubeadm configuration files so that the cluster is bootstrapped properly. I tested the configurations shown below using Kubernetes 1.22.

Three different configuration files are needed:

  1. A configuration file to be used to bootstrap the first control plane node
  2. A configuration file used to join any additional control plane nodes
  3. A configuration file used to join worker nodes

I’ll begin with the natural starting point: the configuration file for bootstrapping the first/initial control plane node.

Bootstrapping the First Control Plane Node

A kubeadm configuration file you could use to bootstrap your first control plane node with the external AWS cloud provider might look something like this:

---
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
apiServer:
  extraArgs:
    cloud-provider: external
clusterName: foo
controllerManager:
  extraArgs:
    cloud-provider: external
kubernetesVersion: v1.22.2 # can use "stable"
networking:
  dnsDomain: cluster.local
  podSubnet: 192.168.0.0/16
  serviceSubnet: 10.96.0.0/12
---
apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
nodeRegistration:
  name: ip-10-11-12-13.us-west-2.compute.internal
  kubeletExtraArgs:
    cloud-provider: external

Note that this does not take into account configuration settings related to setting up a highly-available control plane; refer to the kubeadm v1beta3 API docs for details on what additional settings are needed. (For sure the controlPlaneEndpoint field should be added, but there may be additional settings that are necessary for your specific environment.)
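
As a minimal sketch, reusing the load balancer DNS name from the join configurations shown later in this post, the addition to the ClusterConfiguration section would look like this:

---
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
controlPlaneEndpoint: "cp-lb.us-west-2.elb.amazonaws.com:6443"
# ...plus the other ClusterConfiguration fields shown above...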

The big change from previous kubeadm configurations I’ve shared is that cloud-provider: aws is now cloud-provider: external. Otherwise, the configuration remains largely unchanged. Note the absence of configure-cloud-routes; that setting has moved to the AWS cloud controller manager itself.

After you’ve bootstrapped the first control plane node (using kubeadm init --config <filename>.yaml) but before you add any other nodes—control plane or otherwise—you’ll need to install the AWS cloud controller manager. Manifests are available, but you’ll need to use kustomize to build them out:

kustomize build 'github.com/kubernetes/cloud-provider-aws/manifests/overlays/superset-role/?ref=master'

Review the output (to ensure the values supplied are correct for your environment), then send the results to your cluster by piping them into kubectl apply -f -.
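
Putting those two steps together, the combined command looks like this:

kustomize build 'github.com/kubernetes/cloud-provider-aws/manifests/overlays/superset-role/?ref=master' | kubectl apply -f -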

You’ll also want to go ahead and install the CNI plugin of your choice.

Adding More Control Plane Nodes

If you are building a highly-available control plane, then a kubeadm configuration similar to the one shown below would work with the external AWS cloud provider:

---
apiVersion: kubeadm.k8s.io/v1beta3
kind: JoinConfiguration
discovery:
  bootstrapToken:
    token: 123456.a4v4ii39rupz51j3
    apiServerEndpoint: "cp-lb.us-west-2.elb.amazonaws.com:6443"
    caCertHashes: ["sha256:193feed98fb5fd2b497472fb7d9553414e27ff7eeb7b919c82ff3a08fdf5782f"]
nodeRegistration:
  name: ip-10-14-18-22.us-west-2.compute.internal
  kubeletExtraArgs:
    cloud-provider: external
controlPlane:
  localAPIEndpoint:
    advertiseAddress: 10.14.18.22
  certificateKey: "f6fcb672782d6f0581a1060cf135920acde6736ef12562ddbdc4515d1315b518"

You’d want to adjust the values for token, apiServerEndpoint, caCertHashes, and certificateKey as appropriate based on the output of kubeadm init when bootstrapping the first control plane node. Also, refer to the “Adding More Control Plane Nodes” section of the previous post for a few notes regarding tokens, the SHA256 hash, and the certificate encryption key (there are ways to recover/recreate this information if you don’t have it).
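
If you no longer have the output from kubeadm init, the following commands, run on an existing control plane node, will recreate the necessary values; this is a sketch based on the upstream kubeadm documentation:

# Create a new bootstrap token (and print a complete join command)
kubeadm token create --print-join-command

# Calculate the SHA256 hash of the CA certificate
openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt | \
openssl rsa -pubin -outform der 2>/dev/null | \
openssl dgst -sha256 -hex | sed 's/^.* //'

# Re-upload the control plane certificates and generate a new certificate key
sudo kubeadm init phase upload-certs --upload-certs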

Use your final configuration file with kubeadm join --config <filename>.yaml to join the cluster as an additional control plane node.

Adding Worker Nodes

The final step is to add worker nodes. You’d do this with kubeadm join --config <filename>.yaml, where the specified YAML file might look something like this:

---
apiVersion: kubeadm.k8s.io/v1beta3
kind: JoinConfiguration
discovery:
  bootstrapToken:
    token: 123456.a4v4ii39rupz51j3
    apiServerEndpoint: "cp-lb.us-west-2.elb.amazonaws.com:6443"
    caCertHashes:
      - "sha256:193feed98fb5fd2b497472fb7d9553414e27ff7eeb7b919c82ff3a08fdf5782f"
nodeRegistration:
  name: ip-10-12-14-16.us-west-2.compute.internal
  kubeletExtraArgs:
    cloud-provider: external

As noted earlier, be sure to specify a correct and valid bootstrap token and the SHA256 hash of the CA certificate.

Wrapping Up

At this point, you should have a (mostly) functional Kubernetes cluster. You’ll probably still want some sort of storage solution; see here for more details on the AWS EBS CSI driver.

If you run into problems or issues getting this to work, please feel free to reach out to me. You can find me on the Kubernetes Slack community, or you can contact me on Twitter (DMs are open). Also, if you’re well-versed in this area and have corrections, clarifications, or suggestions for how I can improve this article, I welcome all constructive feedback. Thanks!

Kustomize Transformer Configurations for Cluster API v1beta1

Combining kustomize with Cluster API (CAPI) is a topic I’ve touched on several times over the last 18-24 months. I first touched on it in November 2019 with a post on using kustomize with CAPI manifests. A short while later, I discovered a way to change the configurations for the kustomize transformers to make it easier to use with CAPI. That resulted in two posts on changing the kustomize transformers: one for v1alpha2 and one for v1alpha3 (since there were changes to the API between versions). In this post, I’ll revisit kustomize transformer configurations again, this time for CAPI v1beta1 (the API version corresponding to the CAPI 1.0 release).

In the v1alpha2 post (the first post on modifying kustomize transformer configurations), I mentioned that changes were needed to the NameReference and CommonLabels transformers. In the v1alpha3 post, I mentioned that the changes to the CommonLabels transformer became largely optional; if you are planning on adding additional labels to MachineDeployments, then the change to the CommonLabels transformer is required, but otherwise you can probably get by without it.

For v1beta1, the necessary changes are very similar to v1alpha3, and (for the most part) are focused on the NameReference transformer. The NameReference transformer tracks references between objects, so that if the name of an object changes—perhaps due to use of the namePrefix or nameSuffix directives in the kustomization.yaml file—references to that object are also appropriately renamed.

Here are the CAPI-related changes needed for the NameReference transformer:

- kind: Cluster
  group: cluster.x-k8s.io
  version: v1beta1
  fieldSpecs:
  - path: spec/clusterName
    kind: MachineDeployment
  - path: spec/template/spec/clusterName
    kind: MachineDeployment

- kind: AWSCluster
  group: infrastructure.cluster.x-k8s.io
  version: v1beta1
  fieldSpecs:
  - path: spec/infrastructureRef/name
    kind: Cluster

- kind: KubeadmControlPlane
  group: controlplane.cluster.x-k8s.io
  version: v1beta1
  fieldSpecs:
  - path: spec/controlPlaneRef/name
    kind: Cluster

- kind: AWSMachine
  group: infrastructure.cluster.x-k8s.io
  version: v1beta1
  fieldSpecs:
  - path: spec/infrastructureRef/name
    kind: Machine

- kind: KubeadmConfig
  group: bootstrap.cluster.x-k8s.io
  version: v1beta1
  fieldSpecs:
  - path: spec/bootstrap/configRef/name
    kind: Machine

- kind: AWSMachineTemplate
  group: infrastructure.cluster.x-k8s.io
  version: v1beta1
  fieldSpecs:
  - path: spec/template/spec/infrastructureRef/name
    kind: MachineDeployment
  - path: spec/machineTemplate/infrastructureRef/name
    kind: KubeadmControlPlane

- kind: KubeadmConfigTemplate
  group: bootstrap.cluster.x-k8s.io
  version: v1beta1
  fieldSpecs:
  - path: spec/template/spec/bootstrap/configRef/name
    kind: MachineDeployment

Generally, you’d append this content to the default NameReference transformer configuration, which you’d obtain using kustomize config save. However, somewhere in the Kustomize 3.8.4 release timeframe, the kustomize config save command for extracting the default transformer configurations was removed, and I have yet to figure out another way of getting this information. In theory, when using kustomize with CAPI manifests, you wouldn’t need any of the default NameReference transformer configurations, but I haven’t conducted any thorough testing of that theory (yet).

Aside from replacing all instances of v1alpha3 with v1beta1, the only other difference in the YAML shown above compared to the YAML in the v1alpha3 post is a change to the fieldSpecs list for AWSMachineTemplate. Previously, the KubeadmControlPlane referenced an underlying AWSMachineTemplate at the path spec/infrastructureTemplate/name. In v1beta1, the KubeadmControlPlane object now references an AWSMachineTemplate at the path spec/machineTemplate/infrastructureRef/name.

As mentioned in both of the previous posts, you’ll need to put this content in a file (I use namereference.yaml) and then specify the path to this configuration in kustomization.yaml, like this:

configurations:
  - /path/to/customized/namereference.yaml
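
For context, a trimmed-down kustomization.yaml that uses a namePrefix together with the customized transformer configuration might look something like this (the resource file name and prefix are just examples):

---
namePrefix: dev-
resources:
  - capi-cluster.yaml
configurations:
  - namereference.yaml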

I hope this information is useful to readers. Feel free to find me on the Kubernetes Slack instance if you have questions, and I’ll do my best to help answer them. You’re also welcome to contact me on Twitter (DMs are open). Thanks!

Technology Short Take 146

Welcome to Technology Short Take #146! Over the last couple of weeks, I’ve gathered a few technology-related links for you all. There’s some networking stuff, a few security links, and even a hardware-related article. But enough with the introduction—let’s get into the content!

Networking

Servers/Hardware

  • Chris Mellor speculates that Cisco UCS may be on the way out; Kevin Houston responds with an “I don’t think so.” Who will be correct? I guess we will just have to wait and see.

Security

Cloud Computing/Cloud Management

Operating Systems/Applications

Storage

  • Cloudflare recently introduced its own object storage offering, announced in this blog post. Cloudflare’s offering, called R2, offers an S3-compatible API and no egress fees, among other features.

Virtualization

Although this Tech Short Take is a tad shorter than usual, I still hope that you found something useful in here. Feel free to hit me up on Twitter if you have any feedback. Enjoy!

Installing Cilium via a ClusterResourceSet

In this post, I’m going to walk you through how to install Cilium onto a Cluster API-managed workload cluster using a ClusterResourceSet. It’s reasonable to consider this post a follow-up to my earlier post that walked you through using a ClusterResourceSet to install Calico. There’s no need to read the earlier post, though, as this post includes all the information (or links to the information) you need. Ready? Let’s jump in!

Prerequisites

If you aren’t already familiar with Cluster API—hereafter just referred to as CAPI—then I would encourage you to read my introduction to Cluster API post. Although it is a bit dated (it was written in the very early days of the project, which recently released version 1.0) and some of the commands referenced in that post have changed, the underlying concepts remain valid. If you’re not familiar with Cilium, check out their introduction to the project for more information. Finally, if you’re not familiar at all with the idea of ClusterResourceSets, you can read my earlier post or check out the ClusterResourceSet CAEP document.

Installing Cilium via a ClusterResourceSet

If you want to install Cilium via a ClusterResourceSet, the process looks something like this:

  1. Create a ConfigMap with the instructions for installing Cilium.
  2. Create a ClusterResourceSet that references the ConfigMap.
  3. Profit (when you deploy matching workload clusters).

Let’s look at these steps in a bit more detail.

Creating the Installation ConfigMap

The Cilium docs generally recommend the use of the cilium CLI tool to install Cilium. The reasoning behind this, as I understand it, is that the cilium CLI tool can interrogate the Kubernetes cluster to gather information and then attempt to pick the best configuration options for you. Using Helm is another option recommended by the docs. For our purposes, however, neither of those approaches will work—using a ClusterResourceSet means you need to be able to supply YAML manifests.

Fortunately, the fact that Cilium supports Helm gives us a path forward via the use of helm template to render the templates locally. As per the docs on helm template, there are some caveats/considerations, but this was the only way I found to create YAML manifests for installing Cilium.

So, the first step to creating the ConfigMap you need is to set up the Helm repository:

helm repo add cilium https://helm.cilium.io

Then render the templates locally:

helm template cilium cilium/cilium --version 1.10.4 \
--namespace kube-system > cilium-1.10.4.yaml

You may need to specify additional options/values as needed in the above command in order to accommodate your specific environment or requirements, of course.

Once you have the templates rendered, then create the ConfigMap that the ClusterResourceSet needs:

kubectl create configmap cilium-crs-cm --from-file=cilium-1.10.4.yaml

This ConfigMap should be created on the appropriate CAPI management cluster, so ensure your Kubernetes context is set correctly.
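
If you juggle multiple contexts, something like this (the context name here is hypothetical) makes sure you’re pointed at the management cluster before creating the ConfigMap:

# Switch to the CAPI management cluster context and verify it
kubectl config use-context capi-mgmt
kubectl config current-context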

Creating the ClusterResourceSet

Now you’re ready to create the ClusterResourceSet. Here’s an example you could use as a starting point:

---
apiVersion: addons.cluster.x-k8s.io/v1alpha4
kind: ClusterResourceSet
metadata:
  name: cilium-crs
  namespace: default
spec:
  clusterSelector:
    matchLabels:
      cni: cilium 
  resources:
  - kind: ConfigMap
    name: cilium-crs-cm

You can see that the ClusterResourceSet references the ConfigMap, which in turn contains the YAML to install Cilium (and that’s the YAML applied against matching workload clusters).

Deploy a Matching Workload Cluster

What determines a matching workload cluster? The clusterSelector portion of the ClusterResourceSet. In the example above, the ClusterResourceSet’s clusterSelector specifies that workload clusters should have the cni: cilium label attached.

The label should be part of CAPI’s Cluster object, like this:

---
apiVersion: cluster.x-k8s.io/v1alpha4
kind: Cluster
metadata:
  name: cilium-cluster
  namespace: cilium-test
  labels:
    cni: cilium

When CAPI creates a workload cluster with that label and value, as shown in the example CAPI manifest above, then the ClusterResourceSet will automatically apply the contents of the ConfigMap against the cluster after it has been fully provisioned. The result: Cilium gets installed automatically on the new workload cluster!
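
If the Cluster object already exists without the label, my understanding is that you can also add it after the fact with kubectl against the management cluster, and the ClusterResourceSet should then reconcile; a quick sketch:

kubectl label cluster cilium-cluster cni=cilium --namespace cilium-test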

I hope this post is useful. If you have any questions or any feedback, I’d love to hear from you! Feel free to find me on Twitter, or connect with me on the Kubernetes Slack instance. For more Cilium information and assistance, I’d encourage you to check out the Cilium Slack community.

Technology Short Take 145

Welcome to Technology Short Take #145! What will you find in this Tech Short Take? Well, let’s see…stuff on Envoy, network automation, network designs, M1 chips (and potential open source variants!), a bevy of security articles (including a couple on very severe vulnerabilities), Kubernetes, AWS IAM, and so much more! I hope that you find something useful here. Enjoy!

Networking

Servers/Hardware

  • Howard Oakley of The Eclectic Light Company discusses some details on Apple’s M1 chip and what it does differently than other chips. Also included in this post are links to other articles with even more details—very helpful.
  • Are open source M1-style chips a possibility? This article seems to think so.

Security

  • The last several weeks haven’t been very nice to Azure with respect to security issues. First there was a vulnerability in the CosmosDB database that, according to this Reuters article, exposed “keys that control access to databases held by thousands of companies.” Following that incident came news of “Azurescape,” billed as the first cross-account container takeover in the public cloud. Finally, I recently saw this news about a “minor privilege escalation” within Azure AD.
  • Colm MacCárthaigh discusses AWS SIGv4 and SIGv4A and some of the details and differences between the two.
  • The AWS WorkSpaces client had a remote code execution flaw (versions before 3.1.9 are affected). See more details here.
  • This isn’t good. Better patch your vCenter Server instances, as VMware released a security advisory with a long list of CVEs, including one with a severity score of 9.8/10.

Cloud Computing/Cloud Management

Operating Systems/Applications

Storage

  • Rather than trying to curate my own list of storage-related links this time around, I’ll point you to this list instead, curated by none other than Dr. J Metz himself.

Virtualization

That’s all for this time around! If you have any feedback for me—additional sites I should monitor for content, or other topics I don’t cover that you think would be useful to readers—I’d love to hear from you! The easiest way to get in touch with me is via Twitter, but I’m also accessible via e-mail (my address isn’t too hard to find) or Slack (I frequent several different Slack communities). Feel free to reach out.

Recent Posts

Technology Short Take 144

Welcome to Technology Short Take #144! I have a fairly diverse set of links for readers this time around, covering topics from microchips to improving your writing, with stops along the way in topics like Kubernetes, virtualization, Linux, and the popular JSON-parsing tool jq. I hope you find something useful!

Read more...

Establishing VPC Peering with Pulumi and Go

I use Pulumi to manage my lab infrastructure on AWS (I shared some of the details in this April 2020 blog post published on the Pulumi site). Originally I started with TypeScript, but later switched to Go. Recently I had a need to add some VPC peering relationships to my lab configuration. I was concerned that this may pose some problems—due entirely to the way I structure my Pulumi projects and stacks—but as it turned out it was more straightforward than I expected. In this post, I’ll share some example code and explain what I learned in the process of writing it.

Read more...

Using the AWS CLI to Tag Groups of AWS Resources

To conduct some testing, I recently needed to spin up a group of Kubernetes clusters on AWS. Generally speaking, my “weapon of choice” for something like this is Cluster API (CAPI) with the AWS provider. Normally this would be enormously simple. In this particular case—for reasons that I won’t bother going into here—I needed to spin up all these clusters in a single VPC. This presents a problem for the Cluster API Provider for AWS (CAPA), as it currently doesn’t add some required tags to existing AWS infrastructure (see this issue). The fix is to add the tags manually, so in this post I’ll share how I used the AWS CLI to add the necessary tags.

Read more...

Technology Short Take 143

Welcome to Technology Short Take #143! I have what I think is an interesting list of links to share with you this time around. Since taking my new job at Kong, I’ve been spending more time with Envoy, so you’ll see some Envoy-related content showing up in this Technology Short Take. I hope this collection of links has something useful for you!

Read more...

Starting WireGuard Interfaces Automatically with Launchd on macOS

In late June of this year, I wrote a piece on using WireGuard on macOS via the CLI, where I walked readers using macOS through how to configure and use the WireGuard VPN from the terminal (as opposed to using the GUI client, which I discussed here). In that post, I briefly mentioned that I was planning to explore how to have macOS' launchd automatically start WireGuard interfaces. In this post, I’ll show you how to do exactly that.

Read more...

An Alternate Approach to etcd Certificate Generation with Kubeadm

I’ve written a fair amount about kubeadm, which was my preferred way of bootstrapping Kubernetes clusters until Cluster API arrived. Along the way, I’ve also discussed using kubeadm to assist with setting up etcd, the distributed key-value store leveraged by the Kubernetes control plane (see here, here, and here). In this post, I’d like to revisit the topic of using kubeadm to set up an etcd cluster once again, this time taking a look at an alternate approach to generating the necessary TLS certificates than what the official documentation describes.

Read more...

Technology Short Take 142

Welcome to Technology Short Take #142! This time around, the Networking section is a bit light, but I’ve got plenty of cloud computing links and articles for you to enjoy, along with some stuff on OSes and applications, programming, and soft skills. Hopefully there’s something useful here for you!

Read more...

Adding Multiple Items Using Kustomize JSON 6902 Patches

Recently, I needed to deploy a Kubernetes cluster via Cluster API (CAPI) into a pre-existing AWS VPC. As I outlined in this post from September 2019, this entails modifying the CAPI manifest to include the VPC ID and any associated subnet IDs, as well as referencing existing security groups where needed. I knew that I could use the kustomize tool to make these changes in a declarative way, as I’d explored using kustomize with Cluster API manifests some time ago. This time, though, I needed to add a list of items, not just modify an existing value. In this post, I’ll show you how I used a JSON 6902 patch with kustomize to add a list of items to a CAPI manifest.

Read more...

Using WireGuard on macOS via the CLI

I’ve written a few different posts on WireGuard, the “simple yet fast and modern VPN” (as described by the WireGuard web site) that aims to supplant tools like IPSec and OpenVPN. My first post on WireGuard showed how to configure WireGuard on Linux, both on the client side as well as on the server side. After that, I followed it up with posts on using the GUI WireGuard app to configure WireGuard on macOS and—most recently—making WireGuard from Homebrew work on an M1-based Mac. In this post, I’m going to take a look at using WireGuard on macOS again, but this time via the CLI.

Read more...

Installing Older Versions of Kumactl on an M1 Mac

The Kuma community recently released version 1.2.0 of the open source Kuma service mesh, and along with it a corresponding version of kumactl, the command-line utility for interacting with Kuma. To make it easy for macOS users to get kumactl, the Kuma community maintains a Homebrew formula for the CLI utility. That includes providing M1-native (ARM64) macOS binaries for kumactl. Unfortunately, installing an earlier version of kumactl on an M1-based Mac using Homebrew is somewhat less than ideal. Here’s one way—probably not the only way—to work around some of the challenges.

Read more...

Making WireGuard from Homebrew Work on an M1 Mac

After writing the post on using WireGuard on macOS (using the official WireGuard GUI app from the Mac App Store), I found the GUI app’s behavior to be less than ideal. For example, tunnels marked as on-demand would later show up as no longer configured as an on-demand tunnel. When I decided to set up WireGuard on my M1-based MacBook Pro (see my review of the M1 MacBook Pro), I didn’t want to use the GUI app. Fortunately, Homebrew has formulas for WireGuard. Unfortunately, the WireGuard tools as installed by Homebrew on an M1-based Mac won’t work. Here’s how to fix that.

Read more...

Kubernetes Port Names and Terminating HTTPS Traffic on AWS

I recently came across something that wasn’t immediately intuitive with regard to terminating HTTPS traffic on an AWS Elastic Load Balancer (ELB) when using Kubernetes on AWS. At least, it wasn’t intuitive to me, and I’m guessing that it may not be intuitive to some other readers as well. Kudos to my teammates Hart Hoover and Brent Yarger for identifying the resolution, which I’m going to call out in this post.

Read more...

Technology Short Take 141

Welcome to Technology Short Take #141! This is the first Technology Short Take compiled, written, and published entirely on my M1-based MacBook Pro (see my review here). The collection of links shared below covers a fairly wide range of topics, from old Sun hardware to working with serverless frameworks in the public cloud. I hope that you find something useful here. Enjoy!

Read more...

Review: Logitech Ergo K860 Ergonomic Keyboard

As part of an ongoing effort to refine my work environment, several months ago I switched to a Logitech Ergo K860 ergonomic keyboard. While I’m not a “keyboard snob,” I am somewhat particular about the feel of my keyboard, so I wasn’t sure how I would like the K860. In this post, I’ll provide my feedback, and provide some information on how well the keyboard works with both Linux and macOS.

Read more...

Review: 2020 M1-Based MacBook Pro

I hadn’t done a personal hardware refresh in a while; my laptop was a 2017-era MacBook Pro (with the much-disliked butterfly keyboard) and my tablet was a 2014-era iPad Air 2. Both were serviceable but starting to show their age, especially with regard to battery life. So, a little under a month ago, I placed an order for some new Apple equipment. Included in that order was a new 2020 13" MacBook Pro with the Apple-designed M1 CPU. In this post, I’d like to provide a brief review of the 2020 M1-based MacBook Pro based on the past month of usage.

Read more...

Older Posts

Find more posts by browsing the post categories, content tags, or site archives pages. Thanks for visiting!