Scott's Weblog The weblog of an IT pro focusing on cloud computing, Kubernetes, Linux, containers, and networking

Technology Short Take 120

Welcome to Technology Short Take #120! Wow…hard to believe it’s been almost two months since the last Tech Short Take. Sorry about that! Hopefully something I share here in this Tech Short Take is useful or helpful to readers. On to the content!

Networking

Servers/Hardware

I don’t have anything to share this time around, but I’ll stay alert for content to include in future Tech Short Takes.

Security

Cloud Computing/Cloud Management

  • Marc Boorshtein has a write-up on authentication in Kubernetes identity management. I’m glad the author explicitly called out that Kubernetes doesn’t directly connect to any kind of user store; this seems to be a point of confusion for new Kubernetes users with whom I’ve spoken.
  • Yan Cui with Lumigo tackles the concept of serverless vendor lock-in. I have to say, I agree with a lot of the statements Cui makes regarding lock-in, coupling, risk, and the role of data in lock-in. Much of what’s said in this article applies to all forms of “lock-in,” to be honest.
  • I’m glad I read this article from Michael Gasch. I understand why it’s titled the way it is, but the title does not do the excellent content justice. If you’re seeking to deepen your understanding of the Kubernetes architecture, I can heartily recommend this article. (Not to mention there’s a great set of links at the end with even more information.)
  • Jack Lindamood writes about how he regrets switching from CloudFormation to Terraform, and shares lessons learned—both good and bad—about each of these two options for infrastructure as code.
  • I love this article by Liz Fong-Jones. The sentence from which the title is taken captures it all: “Even the experts on the team were afraid to touch our Terraform configs, treating them like a haunted graveyard in which to seldom tread.” The idea of using CI for infrastructure as code is something that’s really been on my mind for the last few weeks, and so I’m glad I came across this article with lessons learned from the Honeycomb team.
  • Ryan Matteson provides some very useful information on creating Kubernetes manifests. The “TL;DR” is that kubectl explain is one of your best friends.
  • This article by Alibaba Cloud on how they scaled Kubernetes to 10,000 nodes has some useful information in it. It’s extremely unlikely that any of the steps this team took would be needed by other organizations, but the information shared does help illuminate some of the inner workings of Kubernetes—and knowing more about Kubernetes can be helpful to anyone supporting or implementing it.
  • I enjoyed this article on describing fault domains by Will Larson. In fact, I spent several hours a couple months ago playing around with dot to visually describe fault domains.

Operating Systems/Applications

Storage

Nothing this time! If you happen to find something you think other readers would find useful, send it my way and I’ll see about including it in a future Tech Short Take.

Virtualization

Career/Soft Skills

  • I’ll put this here, since it’s most closely aligned to career: Gustavo Franco and Matt Brown, both Customer Reliability Engineers with Google, discuss potential SRE team organization. I could see this article being helpful for organizations—or individuals—who are starting down the SRE path.
  • So long, Datanauts! It turns out the show is ending, and Nick Korte takes some time to write a tribute to Datanauts.

That’s all for now; I’ll have more links and articles in the next Tech Short Take. Feel free to contact me on Twitter to share any feedback you may have on this or other content here on the site. Thanks for reading!

KubeCon 2019 Day 3 and Event Summary

Keynotes

Bryan Liles kicked off the day 3 morning keynotes with a discussion of “finding Kubernetes’ Rails moment”—basically focusing on how Kubernetes enables folks to work on/solve higher-level problems. Key phrase from Bryan’s discussion (which, as usual, incorporated the humor I love to see from Bryan): “Kubernetes isn’t the destination. Kubernetes is the vehicle that takes us to the destination.” Ian Coldwater delivered a talk on looking at Kubernetes from the attacker’s point of view, and using that perspective to secure and harden Kubernetes. Two folks from Walmart also discussed their use case, which involves running Kubernetes clusters in retail locations to support a point-of-sale (POS) application at the check-out register. Finally, there was a discussion of chaos engineering from folks at Gremlin and Target.

No Breakout Sessions

Due to booth duty and my flight home, I wasn’t able to attend any breakout sessions today.

Event Summary

If I’m completely honest, I didn’t get as much out of the event as I’d hoped. I’m not yet sure if that is because I didn’t get to attend as many sessions as I’d hoped/planned (due to problems with sessions being moved/rescheduled or whatever), if my choice of sessions was just poor, if it was just me/my personality, if I’m not the target audience for this event, or if it is some combination of these factors. I did enjoy seeing other folks in the community, meeting some folks that I hadn’t met before, and being able to help answer some questions. From that perspective, the event was a success.

Feel free to contact me on Twitter if you’d like to discuss anything in more detail.

KubeCon 2019 Day 2 Summary

Keynotes

This morning’s keynotes were, in my opinion, better than yesterday’s morning keynotes. (I missed the closing keynotes yesterday due to customer meetings and calls.) Only a couple of keynotes really stuck out. Vicki Cheung provided some useful suggestions for tools that are helping to “close the gap” on user experience, and there was an interesting (but a bit overly long) session with a live demo on running a 5G mobile core on Kubernetes.

Running Large-Scale Stateful Workloads

Due to some power outages at the conference venue resulting from rain in San Diego, the Prometheus session I had planned to attend got moved to a different time. As a result, I sat in this session by Lyft instead. The topic was about running large-scale stateful workloads, but the content was really about a custom solution Lyft built (called Flyte) that leveraged CRDs and custom controllers to help manage stateful workloads. While it’s awesome that companies like Lyft can extend Kubernetes to address their specific needs, this session isn’t helpful to more “ordinary” companies that are trying to figure out how to run their stateful workloads on Kubernetes. I’d really like the CNCF and the conference committee to try to promote talks that are more applicable to the general audience instead of these “Look what we built!” sessions.

Leveling Up your CD: Unlocking Progressive Delivery on Kubernetes

This was a useful session covering Argo and some functionality being added to Argo to support what’s being called progressive delivery. The presenters defined progressive delivery as continuous delivery with fine-grained control over the blast radius, and they demonstrated new Argo features that provide that control over rollouts and canary deployments (including a feature whereby Prometheus metrics can be used to determine the validity of a canary deployment and roll it back if unsuccessful). Argo was already on my “list of things to dig into,” but now I think it’s moved up a few spots on that list.

I had a talk on Krane on my schedule, but the room had changed from what my schedule showed (I don’t know whether this was related to the power outages in other areas of the conference venue). By the time I realized this and had wandered around trying to find the new room, I’d missed too much of the session, so I gave up.

That’s it for day 2. Tune in tomorrow for a recap on day 3, the final day of the event.

KubeCon 2019 Day 1 Summary

This week I’m in San Diego for KubeCon + CloudNativeCon. Instead of liveblogging each session individually, I thought I might instead attempt a “daily summary” post that captures highlights from all the sessions each day. Here’s my recap of day 1 at KubeCon + CloudNativeCon.

Keynotes

KubeCon + CloudNativeCon doesn’t have “one” keynote; it uses a series of shorter keynotes by various speakers. This has advantages and disadvantages; one key advantage is that there is more variety, and the attendees are more likely to stay engaged. I particularly enjoyed Bryan Liles’ CNCF project updates; I like Bryan’s sense of humor, and getting updates on some of the CNCF projects is always useful. As for some of the other keynotes, those that were thinly-disguised vendor sales pitches were generally pretty poor.

Introduction to the Virtual Kubelet

I was running late for the start of this session due to booth duty, and I guess the stuff I needed most was presented in that portion I missed. Most of what I saw was about Netflix Titus, and how the Netflix team ported Titus from Mesos to Virtual Kubelet. However, that information was so specific to Netflix’s particular use of Virtual Kubelet that it wasn’t all that helpful (for me, anyway).

TGIK Live

Joe Beda did a “TGIK Live” session on the show floor, discussing CUE. CUE’s been on my list to evaluate more closely (and possibly blog about). Joe’s demonstration of CUE helped provide a bit of a foundation for CUE, but I still have a lot of learning to do before I’ll be ready to write about it.

Russian Doll: Extending Containers with Nested Processes

Due to customer meetings and such, this was the only afternoon session I was able to attend in its entirety (I did manage to catch part of the session on Vitess). This session was presented by a couple of Google employees and focused on efforts to run containers within a Pod sequentially (in ways more complex than initContainers support). The use case was to support tasks in Tekton, and the solution (or hack?) was to use a Go binary that “waits” on previous containers before running the original container ENTRYPOINT. I understand the use case, but I also have to wonder if a CustomResourceDefinition wouldn’t have been more appropriate for this.

Vitess: Stateless Storage in the Cloud

I was only able to capture the first portion of this session, which was about the Vitess project (a MySQL-compatible, scale-out, cloud-native database). The Vitess project, which is hosted by CNCF, announced today that the project has graduated (putting it on par with Kubernetes and Prometheus and others). The presenter shared the story—often with humorous anecdotes—of how Vitess came to be. I was hoping for some technical depth, but didn’t get any before I had to step out due to a customer meeting.

That’s it for day 1. Tune in tomorrow for a day 2 recap!

Using Kustomize with Cluster API Manifests

A topic that’s been in the back of my mind since writing the Cluster API introduction post is how someone could use kustomize to modify the Cluster API manifests. Fortunately, this is reasonably straightforward. It doesn’t require any “hacks” like those needed to use kustomize with kubeadm configuration files, but, as with kubeadm configuration files, you’ll generally need to use the patching functionality of kustomize when working with Cluster API manifests. In this post, I’d like to take a fairly detailed look at how someone might go about using kustomize with Cluster API.

By the way, readers who are unfamiliar with kustomize should probably read this introductory post first, and then read the post on using kustomize with kubeadm configuration files. I suggest reading the latter post because it provides an overview of how to use kustomize to patch a specific portion of a manifest, and you’ll use that functionality again when modifying Cluster API manifests.

A Fictional Use Case

For this post, I’m going to build out a fictional use case/scenario for the use of kustomize and Cluster API. Here are the key points to this fictional use case:

  1. Three different clusters on AWS are needed. The management cluster already exists.
  2. Two of these clusters will run in the AWS “us-west-2” region, while the third will run in the “us-east-2” region.
  3. One of the two “us-west-2” clusters will use larger instance types to accommodate more resource-intensive workloads.
  4. All three clusters need to be highly available, with multiple control plane nodes.

With this fictional use case in place, you’re now ready to set up a directory structure to support using Cluster API with kustomize to satisfy this use case.

Setting up the Directory Structure

To accommodate this fictional use case, you’ll need to use a directory structure that supports the use of kustomize overlays. Therefore, I’d propose a directory structure that looks something like this:

(parent)
 |- base
 |- overlays
     |- usw2-cluster1
     |- usw2-cluster2
     |- use2-cluster1

The base directory will store the “starting” point for the final Cluster API manifests, as well as a kustomization.yaml file that identifies these Cluster API manifests as resources for kustomize to use.

Each of the overlay subdirectories will also have a kustomization.yaml file and various patch files that will be applied against the base resources to produce the final manifests.
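
To make this more concrete, here’s one possible file layout for the base directory and the “usw2-cluster1” overlay, using the file names referenced later in this post (the other overlays would follow the same pattern):

(parent)
 |- base
 |   |- kustomization.yaml
 |   |- cluster.yaml
 |   |- controlplane.yaml
 |   |- workers.yaml
 |- overlays
     |- usw2-cluster1
         |- kustomization.yaml
         |- cluster-patch.json
         |- awscluster-patch.json
         |- machine-labels.yaml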

Creating the Base Configuration

The base configuration (found in the base directory of the directory structure described above) will contain complete, but fairly generic, configurations for Cluster API:

  • Definitions of the Cluster and AWSCluster objects
  • Definitions of the Machine and AWSMachine objects for the control plane along with associated KubeadmConfig objects
  • Definitions of the Machine, AWSMachine, and KubeadmConfig (or MachineDeployment, AWSMachineTemplate, and KubeadmConfigTemplate) objects for the worker nodes

To make your job easier with the kustomize overlays, modify the base configuration so that it accommodates the majority of your deployments; this means fewer patches will be needed by kustomize later. In this fictional scenario, two of the clusters will run in “us-west-2”, so you should specify “us-west-2” as the region in the base configuration. Similarly, if you were planning on using the same SSH key for all the clusters (not recommended), you could bake that setting into the base configuration.
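
As a rough sketch, the region setting in the base cluster.yaml might look like this (the field names follow the CAPA v1alpha2 AWSCluster type, the object name comes from the CAPI Quick Start manifests, and the values are placeholders):

---
apiVersion: infrastructure.cluster.x-k8s.io/v1alpha2
kind: AWSCluster
metadata:
  name: capi-quickstart
spec:
  # Baked into the base because two of the three clusters use this region;
  # the us-east-2 overlay will patch this value.
  region: us-west-2
  # Only set here if you are comfortable sharing one SSH key across clusters (not recommended).
  sshKeyName: default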

One final piece is needed, and that is a kustomization.yaml file in the base directory that identifies the resources available to kustomize. Assuming that your files were named cluster.yaml (for the Cluster and AWSCluster objects), controlplane.yaml (for objects related to the control plane), and workers.yaml (for objects related to worker nodes), then your kustomization.yaml might look like this:

---
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - cluster.yaml
  - controlplane.yaml
  - workers.yaml

With the base configuration done, you’re now ready to move on to the overlays.

Creating the Overlays

The overlays are where things start to get interesting. Each cluster will get its own directory in the overlays directory, where you’ll provide the cluster-specific patches kustomize will use to generate the YAML for that particular cluster.

Let’s start with the “usw2-cluster1” overlay. To know what the overlay must contain, you first need to understand what changes have to be made to the base configuration to produce the desired configuration for this particular cluster. So what changes are needed?

  1. The metadata.name for the Cluster and AWSCluster objects needs to be modified. To keep the link between the Cluster and AWSCluster objects, the spec.infrastructureRef.name field for the Cluster object needs to be modified to use the correct value pointing to the AWSCluster object.
  2. The spec.sshKeyName field of the AWSCluster object needs to have the correct SSH key name specified.
  3. Similarly, the metadata.name fields for the Machine, AWSMachine, and KubeadmConfig objects also need to be modified to use the correct names for the control plane objects and worker node objects. Since the metadata.name fields of the AWSMachine and KubeadmConfig objects are being modified, you’ll also need to update the spec.infrastructureRef.name and spec.bootstrap.configRef.name fields of the corresponding Machine object, respectively, with the correct values.
  4. If you’re instead using a MachineDeployment for the worker nodes, the metadata.name fields of the MachineDeployment, AWSMachineTemplate, and KubeadmConfigTemplate objects need to be updated. As with the previous item, the references in the spec.template.spec.bootstrap.configRef.name and spec.template.spec.infrastructureRef.name fields need to be updated for the MachineDeployment object. Finally, the spec.template.spec.sshKeyName field needs to be updated for the AWSMachineTemplate object, so that the correct SSH key is used.
  5. All labels referencing the cluster name (such as the labels assigned to any Machine objects, assigned to any MachineDeployment objects, or referenced in the template of any MachineDeployment objects) need to be updated to refer to the correct cluster name. This would also include labels in the spec.selector.matchLabels field of a MachineDeployment.

Now that you have an idea of what changes need to be made to a set of Cluster API manifests, let’s explore how we might go about making those changes with kustomize. I won’t go over all the changes, but rather illustrate a couple of different ways these changes could be implemented.

Using JSON Patches

One way of patching individual fields within a manifest is using JSON 6902 patches (so named because they are described in RFC 6902). As an example, I will explore using JSON 6902 patches to address #1 from the list of changes described above.

The first part of a JSON 6902 patch is the reference to the patch file itself, which must be placed in the kustomization.yaml file:

patchesJson6902:
  - target:
      group: cluster.x-k8s.io
      version: v1alpha2
      kind: Cluster
      name: capi-quickstart
    path: cluster-patch.json

This tells kustomize where the patch file is, and against which object(s) the patch file should be applied. Since I am using the manifests from the CAPI Quick Start as the base configuration, you can see the patch is specified to operate against the Cluster object named “capi-quickstart”.

The second part is the patch itself, which can be formatted as either YAML or JSON. I’ll use JSON in this example, but this section of the kubectl book provides an example of a YAML-formatted patch.

Here’s a JSON 6902 patch encoded as JSON:

[
  { "op": "replace",
    "path": "/metadata/name",
    "value": "usw2-cluster-1" },
  { "op": "replace",
    "path": "/spec/infrastructureRef/name",
    "value": "usw2-cluster-1" }
]

(This example, as well as other examples in this post, are wrapped for readability; it is perfectly acceptable to have each operation formatted as a single line.)

In this example, the patches are provided in a JSON list (denoted by the brackets), and each patch is a JSON object with three properties: op, path, and value. (Readers who are unfamiliar with JSON may find this post helpful.) This patch makes two changes to the original manifest. First, it modifies the metadata.name field to use “usw2-cluster1” as the value. Second, it modifies the spec.infrastructureRef.name field to also use “usw2-cluster1” as the value.

This patch addresses the Cluster object, but you also need to address the AWSCluster object. For that, you’ll need a separate patch file referenced by a separate section in kustomization.yaml.

The reference in kustomization.yaml would look like this:

patchesJson6902:
  - target:
      group: infrastructure.cluster.x-k8s.io
      version: v1alpha2
      kind: AWSCluster
      name: capi-quickstart
    path: awscluster-patch.json

And the corresponding patch file would look like this:

[
  { "op": "replace",
    "path": "/metadata/name",
    "value": "usw2-cluster-1" }
]

Note that I haven’t mentioned that the kustomization.yaml file in this directory also needs to have a reference to the base configuration; I’m only discussing the patch configuration. Refer to the kustomize documentation for full details, or refer back to my introductory post on kustomize.
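
For reference, here’s a minimal sketch of what the complete kustomization.yaml for the “usw2-cluster1” overlay might look like, assuming the directory layout shown earlier (the bases entry and its relative path are assumptions based on that layout):

---
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
bases:
  - ../../base
patchesJson6902:
  - target:
      group: cluster.x-k8s.io
      version: v1alpha2
      kind: Cluster
      name: capi-quickstart
    path: cluster-patch.json
  - target:
      group: infrastructure.cluster.x-k8s.io
      version: v1alpha2
      kind: AWSCluster
      name: capi-quickstart
    path: awscluster-patch.json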

Assuming a properly-configured kustomization.yaml file in this overlay directory referencing these two JSON 6902 patches, running kustomize build . would generate a customized set of manifests where the Cluster and AWSCluster objects have values specific for this particular workload cluster.
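
To illustrate, the Cluster object in the generated output would look something like this (abbreviated; the clusterNetwork block is simply carried over unchanged from the Quick Start base and is shown only for context):

---
apiVersion: cluster.x-k8s.io/v1alpha2
kind: Cluster
metadata:
  name: usw2-cluster1
spec:
  clusterNetwork:
    pods:
      cidrBlocks: ["192.168.0.0/16"]
  infrastructureRef:
    apiVersion: infrastructure.cluster.x-k8s.io/v1alpha2
    kind: AWSCluster
    name: usw2-cluster1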

You can replicate this approach to make some of the other changes listed above, but in some cases using a JSON 6902 patch may not be the most effective method (this is especially true when a number of different fields are being modified).

Using Strategic Merge Patches

Instead of using a JSON 6902 patch, the other alternative is to use a strategic merge patch. This allows you to easily modify a number of different fields in a single manifest by “overriding” the values that are already present (if any).

As with a JSON 6902 patch, the first part of a strategic merge patch involves adding a reference to the overlay’s kustomization.yaml file:

patches:
  - target:
      group: cluster.x-k8s.io
      version: v1alpha2
      kind: Machine
      name: .*
    path: machine-labels.yaml

This is very much like the reference shown earlier for a JSON 6902 patch, but in this case I’ll draw your attention to the fact that this uses a regular expression (regex) for the name field. This allows you to create a patch that will apply to multiple objects (as long as the objects match the group, version, and kind selectors). In this particular example, we’re referencing a patch that should apply to all Machine objects.

The second part is the patch itself, which is now a YAML file that contains the values to override in the base configuration as well as any additional values that should be added to the base configuration. In this example, I’ll only modify an existing value.

Here’s the contents of the patch file referenced above:

---
apiVersion: cluster.x-k8s.io/v1alpha2
kind: Machine
metadata:
  name: .*
  labels:
    cluster.x-k8s.io/cluster-name: "usw2-cluster1"

Here again you see the use of a regex to capture all Machine objects regardless of name, and then a value for labels that will overwrite (in this case) the existing value in the base configuration. If you wanted to add additional labels, you could simply specify the additional labels right here in the patch. kustomize would then handle replacing existing values and adding new values.
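
For example, adding a hypothetical environment label alongside the cluster name label would only require extending the same patch (the environment key and its value here are made up purely for illustration):

---
apiVersion: cluster.x-k8s.io/v1alpha2
kind: Machine
metadata:
  name: .*
  labels:
    cluster.x-k8s.io/cluster-name: "usw2-cluster1"
    # Hypothetical additional label; kustomize merges it in alongside the existing labels.
    environment: "production"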

Running kustomize build . with these changes present would result in all Machine objects being modified to use the label specified above, which is part of change #5 listed above (note that we haven’t addressed changes affecting the use of a MachineDeployment, only individual Machine objects).

This example, however, doesn’t really illustrate the difference between a JSON 6902 patch and a strategic merge patch. I’ll use another example that addresses the rest of change #5 by modifying a MachineDeployment’s labels.

For this final example, you’d again need both a reference to the patch file in kustomization.yaml as well as the patch file itself. I won’t repeat the entry in kustomization.yaml as you’ve seen a couple of times already; it would look a lot like the one for modifying Machine objects, but pointing to MachineDeployment objects instead.

The actual patch better illustrates how you can make multiple changes to a base manifest with a single patch file:

---
apiVersion: cluster.x-k8s.io/v1alpha2
kind: MachineDeployment
metadata:
  name: .*
  labels:
    cluster.x-k8s.io/cluster-name: "usw2-cluster1"
spec:
  selector:
    matchLabels:
      cluster.x-k8s.io/cluster-name: "usw2-cluster1"
  template:
    metadata:
      labels:
        cluster.x-k8s.io/cluster-name: "usw2-cluster1"

Here a single patch file is making three separate (but related) changes to MachineDeployment resources in the base manifests. In this case, replicating this functionality with a JSON 6902 patch wouldn’t be terribly difficult, but users may find that the readability of this approach makes it easier to reason about what kustomize is doing as it generates manifests.

There are a number of other changes that would be necessary to fully implement the fictional scenario, but in the interest of (reasonable) brevity I won’t include or describe all the necessary changes in this post. See the next section for information on where you can see an example of all the changes needed to implement the fictional scenario described in this post.

Additional Resources

In the event you’d like to use the fictional scenario described here to help with your own learning, I’ve created this exact scenario in my GitHub “learning-tools” repository, found in the kubernetes/capi-kustomize directory of the repository. There you’ll find example YAML files, overlays, JSON patches, and related materials—all based on the fictional scenario described in this post—for you to use in your own experiments in combining Cluster API with kustomize.

If you have any questions, corrections (in the event I’ve made an error), or suggestions for improvement, please don’t hesitate to contact me on Twitter.

Recent Posts

Programmatically Creating Kubernetes Manifests

A while ago I came across a utility named jk, which purported to be able to create structured text files—in JSON, YAML, or HCL—using JavaScript (or TypeScript that has been transpiled into JavaScript). One of the use cases was creating Kubernetes manifests. The GitHub repository for jk describes it as “a data templating tool”, and that’s accurate for simple use cases. In more complex use cases, the use of a general-purpose programming language like JavaScript in jk reveals that the tool has the potential to be much more than just a data templating tool—if you have the JavaScript expertise to unlock that potential.

Read more...

Spousetivities in Barcelona at VMworld EMEA 2019

Barcelona is probably my favorite city in Europe—which works out well, since VMware seems to have settled on Barcelona as the destination for VMworld EMEA. VMworld is back in Barcelona again this year, and I’m fortunate enough to be able to attend. VMworld in Barcelona wouldn’t be the same without Spousetivities, though, and I’m happy to report that Spousetivities will be in Barcelona. In fact, registration is already open!

Read more...

Using Kustomize with Kubeadm Configuration Files

Last week I had a crazy idea: if kustomize can be used to modify YAML files like Kubernetes manifests, then could one use kustomize to modify a kubeadm configuration file, which is also a YAML manifest? So I asked about it in one of the Kubernetes-related channels in Slack at work, and as it turns out it’s not such a crazy idea after all! So, in this post, I’ll show you how to use kustomize to modify kubeadm configuration files.

Read more...

Technology Short Take 119

Welcome to Technology Short Take #119! As usual, I’ve collected some articles and links from around the Internet pertaining to various data center- and cloud-related topics. This installment in the Tech Short Takes series is much shorter than usual, but hopefully I’ve managed to find something that proves to be helpful or informative! Now, on to the content!

Read more...

Exploring Cluster API v1alpha2 Manifests

The Kubernetes community recently released v1alpha2 of Cluster API (a monumental effort, congrats to everyone involved!), and with it comes a number of fairly significant changes. Aside from the new Quick Start, there isn’t (yet) a great deal of documentation on Cluster API (hereafter just called CAPI) v1alpha2, so in this post I’d like to explore the structure of the CAPI v1alpha2 YAML manifests, along with links back to the files that define the fields for the manifests. I’ll focus on the CAPI provider for AWS (affectionately known as CAPA).

Read more...

An Introduction to Kustomize

kustomize is a tool designed to let users “customize raw, template-free YAML files for multiple purposes, leaving the original YAML untouched and usable as is” (wording taken directly from the kustomize GitHub repository). Users can run kustomize directly, or—starting with Kubernetes 1.14—use the -k flag with kubectl (as in kubectl apply -k) to access the functionality (although the standalone binary is newer than the functionality built into kubectl as of the Kubernetes 1.15 release). In this post, I’d like to provide an introduction to kustomize.

Read more...

Consuming Pre-Existing AWS Infrastructure with Cluster API

All the posts I’ve published so far about Kubernetes Cluster API (CAPI) assume that the underlying infrastructure needs to be created. This is fine, because generally speaking that’s part of the value of CAPI—it will create new cloud infrastructure for every Kubernetes cluster it instantiates. In the case of AWS, this includes VPCs, subnets, route tables, Internet gateways, NAT gateways, Elastic IPs, security groups, load balancers, and (of course) EC2 instances. But what if you didn’t want CAPA to create AWS infrastructure? In this post, I’ll show you how to consume pre-existing AWS infrastructure with Cluster API for AWS (CAPA).

Read more...

Highly Available Kubernetes Clusters on AWS with Cluster API

In my previous post on Kubernetes Cluster API, I showed readers how to use the Cluster API provider for AWS (referred to as CAPA) to instantiate a Kubernetes cluster on AWS. Readers who worked through the instructions in that post may note that CAPA places all the nodes for a given cluster in a single AWS availability zone (AZ) by default. While multi-AZ Kubernetes deployments are not without their own considerations, it’s generally considered beneficial to deploy across multiple AZs for higher availability. In this post, I’ll share how to deploy highly available Kubernetes clusters—defined as having multiple control plane nodes distributed across multiple AZs—using Cluster API for AWS (CAPA).

Read more...

VMworld 2019 Vendor Meeting: Lightbits Labs

Last week at VMworld, I had the opportunity to meet with Lightbits Labs, a relatively new startup working on what they called “disaggregated storage.” As it turns out, their product is actually quite interesting, and has relevance not only in “traditional” VMware vSphere environments but also in environments more focused on cloud-native technologies like Kubernetes.

Read more...

Bootstrapping a Kubernetes Cluster on AWS with Cluster API

Yesterday I published a high-level overview of Cluster API (CAPI) that provides an introduction to some of the concepts and terminology in CAPI. In this post, I’d like to walk readers through actually using CAPI to bootstrap a Kubernetes cluster on AWS. This walkthrough is for the v1alpha1 release of CAPI (a walkthrough for CAPI v1alpha2 is coming).

Read more...

An Introduction to Kubernetes Cluster API

In this post, I’d like to provide a high-level introduction to the Kubernetes Cluster API. The aim of Cluster API (CAPI, for short) is, as outlined in the project’s GitHub repository, “a Kubernetes project to bring declarative, Kubernetes-style APIs to cluster creation, configuration, and management”. This high-level introduction serves to establish some core terminology and concepts upon which I’ll build in future posts about CAPI.

Read more...

Liveblog: VMworld 2019 Day 1 General Session

This is the liveblog from the day 1 general session at VMworld 2019. This year the event is back at Moscone Center in San Francisco, and VMware has already released some juicy news (see here, here, and here) in advance of the keynote this morning, foreshadowing what Pat is expected to talk about.

Read more...

Technology Short Take 118

Welcome to Technology Short Take #118! Next week is VMworld US in San Francisco, CA, and I’ll be there live-blogging and meeting up with folks to discuss all things Kubernetes. If you’re going to be there, look me up! Otherwise, I leave you with this list of links and articles from around the Internet to keep you busy. Enjoy!

Read more...

Creating Tagged Subnets Across AWS AZs Using Pulumi

As I mentioned back in May in this post on creating a sandbox for learning Pulumi, I’ve started using Pulumi for more and more of my infrastructure-as-code needs. I did switch from JavaScript to TypeScript (which I know compiles to JavaScript on the back-end, but the strong typing helps a new programmer like me). Recently I had a need to create some resources in AWS using Pulumi, and—for reasons I’ll explain shortly—many of the “canned” Pulumi examples didn’t cut it for my use case. In this post, I’ll share how I created tagged subnets across AWS availability zones (AZs) using Pulumi.

Read more...

Reconstructing the Join Command for Kubeadm

If you’ve used kubeadm to bootstrap a Kubernetes cluster, you probably know that at the end of the kubeadm init command to bootstrap the first node in the cluster, kubeadm prints out a bunch of information: how to copy over the admin Kubeconfig file, and how to join both control plane nodes and worker nodes to the cluster you just created. But what if you didn’t write these values down after the first kubeadm init command? How does one go about reconstructing the proper kubeadm join command?

Read more...

Older Posts

Find more posts by browsing the post categories, content tags, or site archives pages. Thanks for visiting!