Scott's Weblog The weblog of an IT pro focusing on cloud computing, Kubernetes, Linux, containers, and networking

An Introduction to Kustomize

kustomize is a tool designed to let users “customize raw, template-free YAML files for multiple purposes, leaving the original YAML untouched and usable as is” (wording taken directly from the kustomize GitHub repository). Users can run kustomize directly or, starting with Kubernetes 1.14, use the -k flag with kubectl (for example, kubectl apply -k) to access the same functionality (although the standalone kustomize binary is newer than the version built into kubectl as of the Kubernetes 1.15 release). In this post, I’d like to provide an introduction to kustomize.

At its simplest, kustomize operates on a set of resources (YAML files that define Kubernetes objects like Deployments, Services, etc.) plus a set of instructions describing the changes to be made to those resources. Similar to the way make leverages a file named Makefile to define its build targets or the way Docker uses a Dockerfile to build an image, kustomize uses a file named kustomization.yaml to store the instructions on the changes the user wants made to a set of resources.

Here’s a simple kustomization.yaml file:

resources:
- deployment.yaml
- service.yaml
namePrefix: dev-
namespace: development
commonLabels:
  environment: development

This article won’t attempt to explain all the various fields that could be present in a kustomization.yaml file (that’s well handled here), but here’s a quick explanation of this particular example:

  • The resources field specifies which things (resources) kustomize will modify. In this case, it will look for resources inside the deployment.yaml and service.yaml files in the same directory (full or relative paths can be specified as needed here).
  • The namePrefix field instructs kustomize to prefix the name attribute of all resources defined in the resources field with the specified value (in this case, “dev-”). So, if the Deployment specified a name of “nginx-deployment”, then kustomize would change the value to “dev-nginx-deployment”.
  • The namespace field instructs kustomize to add a namespace value to all resources. In this case, the Deployment and the Service are modified to be placed into the “development” namespace.
  • Finally, the commonLabels field includes a set of labels that will be added to all resources. In this example, kustomize will label the resources with the label name “environment” and a value of “development”.
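
To make this concrete, here’s a hypothetical (and deliberately abbreviated) deployment.yaml that could serve as one of the referenced resources; the nginx name, image, and labels are placeholders for illustration only:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  replicas: 2
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.17
        ports:
        - containerPort: 80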

When a user runs kustomize build . in the directory with the kustomization.yaml and the referenced resources (the files deployment.yaml and service.yaml), the output is the customized text with the changes found in the kustomization.yaml file. Users can redirect the output if they want to capture the changes:

kustomize build . > custom-config.yaml
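
Continuing the hypothetical Deployment shown earlier, the corresponding portion of the build output would look something like this (trimmed for brevity). Note that kustomize applies commonLabels not only to the object’s metadata but also to the selector and Pod template labels:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: dev-nginx-deployment
  namespace: development
  labels:
    environment: development
spec:
  replicas: 2
  selector:
    matchLabels:
      app: nginx
      environment: development
  template:
    metadata:
      labels:
        app: nginx
        environment: development
    spec:
      containers:
      - name: nginx
        image: nginx:1.17
        ports:
        - containerPort: 80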

The output is deterministic (given the same inputs, the output will always be the same), so it may not be necessary to capture the output in a file. Instead, users could pipe the output into another command:

kustomize build . | kubectl apply -f -

Users can also invoke kustomize functionality via kubectl with the -k flag (as of Kubernetes 1.14). However, be aware that the standalone kustomize binary is more recent than the functionality bundled into kubectl (as of the Kubernetes 1.15 release).

Readers may be thinking, “Why go through this trouble instead of just editing the files directly?” That’s a fair question. In this example, users could modify the deployment.yaml and service.yaml files directly, but what if the files were a fork of someone else’s content? Modifying the files directly makes it difficult, if not impossible, to rebase the fork when changes are made to the origin/source. However, using kustomize allows users to centralize those changes in the kustomization.yaml file, leaving the original files untouched and thereby facilitating the ability to rebase the source files if needed.

The benefits of kustomize become more apparent in more complex use cases. In the example shown above, the kustomization.yaml and the resources are in the same directory. However, kustomize supports use cases where there is a “base configuration” and multiple “variants”, also known as overlays. Say a user wanted to take this simple Nginx Deployment and Service I’ve been using as an example and create development, staging, and production versions (or variants) of those files. Using overlays with shared base resources would accomplish this.

To help illustrate the idea of overlays with base resources, let’s assume the following directory structure:

- base
  - deployment.yaml
  - service.yaml
  - kustomization.yaml
- overlays
  - dev
    - kustomization.yaml
  - staging
    - kustomization.yaml
  - prod
    - kustomization.yaml

In the base/kustomization.yaml file, users would simply declare the resources that should be included by kustomize using the resources field.
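
For instance, the base/kustomization.yaml might contain nothing more than this (a minimal sketch):

resources:
- deployment.yaml
- service.yaml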

In each of the overlays/{dev,staging,prod}/kustomization.yaml files, users would reference the base configuration in the resources field, and then specify the particular changes for that environment. For example, the overlays/dev/kustomization.yaml file might look like the example shown earlier:

resources:
- ../../base
namePrefix: dev-
namespace: development
commonLabels:
  environment: development

However, the overlays/prod/kustomization.yaml file could look very different:

resources:
- ../../base
namePrefix: prod-
namespace: production
commonLabels:
  environment: production
  sre-team: blue

When a user runs kustomize build . in the overlays/dev directory, kustomize will generate a development variant. However, when a user runs kustomize build . in the overlays/prod directory, a production variant is generated. All without any changes to the original (base) files, and all in a declarative and deterministic way. Users can commit the base configuration and the overlay directories into source control, knowing that repeatable configurations can be generated from the files in source control.

There’s a lot more to kustomize than what I’ve touched upon in this post, but hopefully this gives enough of an introduction to get folks started.

Additional Resources

There are quite a few good articles and posts written about kustomize; here are a few that I found helpful:

Change base YAML config for different environments prod/test using Kustomize

Kustomize - The right way to do templating in Kubernetes

Declarative Management of Kubernetes Objects Using Kustomize

Customizing Upstream Helm Charts with Kustomize

If anyone has questions or suggestions for improving this post, I’m always open to reader feedback. Feel free to contact me via Twitter, or hit me up on the Kubernetes Slack instance. Have fun customizing your manifests with kustomize!

Consuming Pre-Existing AWS Infrastructure with Cluster API

All the posts I’ve published so far about Kubernetes Cluster API (CAPI) assume that the underlying infrastructure needs to be created. This is fine, because generally speaking that’s part of the value of CAPI—it will create new cloud infrastructure for every Kubernetes cluster it instantiates. In the case of AWS, this includes VPCs, subnets, route tables, Internet gateways, NAT gateways, Elastic IPs, security groups, load balancers, and (of course) EC2 instances. But what if you didn’t want the Cluster API Provider for AWS (CAPA) to create this infrastructure for you? In this post, I’ll show you how to consume pre-existing AWS infrastructure with CAPA.

Why would one not want CAPA to create the necessary AWS infrastructure? There are a variety of reasons, but the one that jumps to my mind immediately is that an organization may have established/proven expertise and a process around the use of infrastructure-as-code (IaC) tooling like Terraform, CloudFormation, or Pulumi. In cases like this, such organizations would very likely prefer to continue to use the tooling they already know and with which they are already familiar, instead of relying on CAPA. Further, the use of third-party IaC tooling may allow for greater customization of the infrastructure than CAPA would allow.

Fortunately, CAPA makes it reasonably straightforward to use pre-existing infrastructure. The key here is the networkSpec object in a Cluster definition. Readers who read the post on highly available clusters on AWS with Cluster API saw how to use the networkSpec object to tell CAPA how to create subnets across multiple availability zones (AZs) for greater availability. The networkSpec object can also be used to tell CAPA how to use pre-existing infrastructure.

For this second use case (having CAPA use pre-existing infrastructure), users will need to add a networkSpec object to the Cluster definition that provides the IDs of the VPC and the subnets within the VPC that CAPA should use. Here’s an example (this example assumes CAPI v1alpha1; the full “path” to where networkSpec should be specified changes in CAPI v1alpha2):

spec:
  providerSpec:
    value:
      networkSpec:
        vpc:
          id: vpc-0425c335226437144
        subnets:
          - id: subnet-07758a9bc904d06af
          - id: subnet-0a3507a5ad2c5c8c3
          - id: subnet-02ad6429dd0532452
          - id: subnet-02b300779e9d895cf
          - id: subnet-03d8f353b289b025f
          - id: subnet-0a2fe03b0d88fa078

This example provides CAPA with the ID of a VPC to use, as well as the IDs for public and private subnets across three different AZs. Note that these need to be fully functional subnets, so users need to be sure to have created not only the VPC and the subnets but also the necessary Internet gateways (for public subnets), NAT gateways (for private subnets), Elastic IP addresses (for the NAT gateways), route tables, and route table associations. Cluster API will take care of security groups, load balancers, and EC2 instances; these do not need to be created in advance.

In addition to ensuring that the infrastructure is fully functional, users must be sure that the appropriate AWS tags are added to all objects. When CAPA creates an AWS infrastructure object, it adds the sigs.k8s.io/cluster-api-provider-aws/cluster/<cluster-name> tag with a value of “managed”, and the sigs.k8s.io/cluster-api-provider-aws/role tag with a value of “common”. Users will want to ensure that whatever tooling they use to create the infrastructure consumed by CAPA applies those tags, although the values may be different (it’s not required to use “managed” and “common”, respectively).
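
As an illustration, here’s how those tags might look on a subnet defined in a CloudFormation template. This is just a sketch; the logical resource name, VPC reference, CIDR block, and availability zone are hypothetical placeholders:

PrivateSubnetA:
  Type: AWS::EC2::Subnet
  Properties:
    VpcId: !Ref ClusterVPC          # hypothetical reference to the VPC resource
    CidrBlock: 10.0.64.0/20
    AvailabilityZone: us-west-2a
    Tags:
      - Key: sigs.k8s.io/cluster-api-provider-aws/cluster/<cluster-name>
        Value: managed
      - Key: sigs.k8s.io/cluster-api-provider-aws/role
        Value: common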

With the infrastructure created and tagged, and the networkSpec information in place, applying the Cluster YAML definition against the management cluster using kubectl will generate a new Kubernetes cluster that will leverage the specified VPC and subnets instead of creating a new VPC and subnets. (Users could also use a configuration like this with clusterctl when creating a new management cluster.)

Users who are configuring CAPA to consume pre-existing infrastructure still have the option of instructing CAPA to distribute EC2 instances across multiple availability zones (AZs) for greater availability (as described here). In this case, users have two options for providing CAPA with the necessary information to distribute the instances across multiple AZs:

  1. By specifying an AZ in the Machine specification
  2. By specifying a subnet ID in the Machine specification

The first option is exactly as described in the previous post on HA clusters, so I won’t repeat it here.

The second option is only available in the case of having CAPA consume pre-existing infrastructure, because only in such cases will the subnet IDs be known in advance. Adding the subnet ID to the Machine YAML specification would look something like this (this example is for CAPI v1alpha1; the full “path” in the YAML manifest changes for CAPI v1alpha2):

spec:
  providerSpec:
    value:
      subnet:
        id: subnet-0a3507a5ad2c5c8c3

This is a bit more tedious than specifying an AZ; here, the user must manually look up the subnet IDs and determine in which AZ each subnet resides. Further, since both public and private subnets are needed, users must manually determine which subnet IDs belong to public subnets and which belong to private subnets. (As noted in this post, placing instances across multiple AZs is not the answer for all availability needs—see the “Disclaimer” portion of the post.)

Finally, users will need to provide their own bastion host for accessing the EC2 instances that CAPA places on private subnets. When CAPA is configured to use pre-existing infrastructure, it will not create a bastion host automatically.

If anyone has any feedback, corrections, or suggestions for improving this post, please feel free to contact me via Twitter. All feedback is welcome!

UPDATE 13 September 2019: I’ve updated this post to point out that the examples provided are based on CAPI v1alpha1. The examples will work with CAPI v1alpha2, but the “path” within the YAML manifest changes with CAPI v1alpha2.

Highly Available Kubernetes Clusters on AWS with Cluster API

In my previous post on Kubernetes Cluster API, I showed readers how to use the Cluster API Provider for AWS (referred to as CAPA) to instantiate a Kubernetes cluster on AWS. Readers who followed the instructions in that post may have noted that CAPA places all the nodes for a given cluster in a single AWS availability zone (AZ) by default. While multi-AZ Kubernetes deployments are not without their own considerations, it’s generally considered beneficial to deploy across multiple AZs for higher availability. In this post, I’ll share how to deploy highly available Kubernetes clusters—defined as having multiple control plane nodes distributed across multiple AZs—using CAPA.

This post assumes that you have already deployed a management cluster, so the examples may mention using kubectl to apply CAPA manifests against the management cluster to deploy a highly-available workload cluster. However, the information needed in the CAPA manifests would also work with clusterctl in order to deploy a highly-available management cluster, although users should keep in mind that clusterctl is deprecated with the CAPI v1alpha2 release. (Not familiar with what I mean when I say “management cluster” or “workload cluster”? Be sure to go read the introduction to Cluster API post first.)

Also, this post was written with CAPI v1alpha1 in mind. Although the CAPI v1alpha2 release does change quite a bit with regard to the YAML manifests and the fields contained therein, it appears that the specific sections/fields needed to deploy highly-available clusters remain the same between v1alpha1 and v1alpha2.

Two changes are needed to a set of CAPA manifests in order to deploy a highly-available cluster:

  1. The YAML manifest for the Cluster object has to provide the specification for how to arrange the subnets and IP addresses across AZs.
  2. The YAML manifest for the control plane nodes has to specify the AZ for each control plane node (thus allowing the user/operator to distribute them across AZs).

The first of these changes is modifying the YAML manifest for the Cluster object. By default, the Cluster YAML manifest doesn’t provide any information on how to assign subnets or IP addresses across multiple AZs, so CAPA does everything within the first AZ of a region. In order to make CAPA deploy across multiple AZs, users must extend the Cluster YAML definition to include that information via a networkSpec (a Network specification). Since the generate-yaml.sh script included with v1alpha1 of CAPA doesn’t do this, the addition of network information has to be done by the user/operator.

Here’s an example YAML snippet that would lay out subnets across three AZs in the AWS “us-west-2” region (this example assumes v1alpha1; the full “path” to where networkSpec should be specified changes in v1alpha2):

spec:
  providerSpec:
    value:
      networkSpec:
        vpc:
          cidrBlock: "10.20.0.0/16"
        subnets:
        - availabilityZone: us-west-2a
          cidrBlock: "10.20.0.0/20"
          isPublic: true
        - availabilityZone: us-west-2a
          cidrBlock: "10.20.16.0/20"
        - availabilityZone: us-west-2b
          cidrBlock: "10.20.32.0/20"
          isPublic: true
        - availabilityZone: us-west-2b
          cidrBlock: "10.20.48.0/20"
        - availabilityZone: us-west-2c
          cidrBlock: "10.20.64.0/20"
          isPublic: true
        - availabilityZone: us-west-2c
          cidrBlock: "10.20.80.0/20"

This section of YAML provides the overall CIDR block for the VPC that CAPA creates, as well as specifying the CIDR blocks for each subnet in each AZ. This does mean that users will have to “manually” break down the CIDR across the AZs and subnets. This YAML shows both public and private subnets in each AZ, which are required in order for CAPA to create a NAT gateway (for private instances) in each AZ.

When this YAML is applied to the management cluster using kubectl, CAPA will create the VPC, subnets, routes and route tables, and Internet or NAT gateways (for public or private subnets, respectively).

So far, so good! This is only half the picture, though—users also need to tell CAPA how to deploy nodes across the AZs. That’s the second required change: adding AZ information to the Machine specification for the control plane nodes. Additionally, users should make sure they are defining multiple control plane nodes using a MachineList specification, instead of defining only a single control plane node with a simple Machine specification. The generate-yaml.sh script supplied with the CAPA example manifests generates an example MachineList for the control plane as controlplane-machines-ha.yaml, which users can use as a starting point/basis for an HA control plane manifest.

To specify an AZ for instances created by CAPA, users would add this snippet of YAML in a Machine definition (this example assumes v1alpha1; the full “path” to where users specify this changes in v1alpha2):

spec:
  providerSpec:
    value:
      availabilityZone: "us-west-2a"

When this snippet of YAML is applied to a management cluster as part of a Machine or MachineList specification, CAPA will look up the subnet(s) associated with that AZ and instantiate the instance in the appropriate subnet. Presto! Users now have a Kubernetes control plane that contains multiple nodes distributed across multiple AZs.

This same change (adding AZ information) can also be used to distribute worker nodes across multiple AZs. This would not work with a single MachineDeployment (as all the Machines created by a MachineDeployment are identical), but users could use separate MachineDeployments for each AZ, or use a MachineList and provide the AZ information for each Machine in the MachineList specification, as illustrated in the sketch below.
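
For example, a heavily abbreviated MachineList for worker nodes spread across two AZs might look like the following. The names are placeholders, and the real entries generated by generate-yaml.sh would also include Kubernetes versions, instance details, and the other provider-specific fields omitted here (this sketch assumes the v1alpha1 API group):

apiVersion: cluster.k8s.io/v1alpha1
kind: MachineList
items:
- apiVersion: cluster.k8s.io/v1alpha1
  kind: Machine
  metadata:
    name: worker-a
  spec:
    providerSpec:
      value:
        availabilityZone: "us-west-2a"
- apiVersion: cluster.k8s.io/v1alpha1
  kind: Machine
  metadata:
    name: worker-b
  spec:
    providerSpec:
      value:
        availabilityZone: "us-west-2b"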

Disclaimer

Readers need to be aware that deploying across multiple AZs is not a panacea to cure all availability ills. Although the loss of a single AZ will not (generally) render the cluster unavailable—etcd will maintain a quorum so the API server will continue to function—the control plane may be flooded with the demands of rescheduling Pods, and remaining active nodes may not be able to support the resource requirements of the Pods being rescheduled. The sizing and overall utilization of the cluster will greatly affect the behavior of the cluster and the workloads hosted there in the event of an AZ failure. Careful planning is needed to maximize the availability of the cluster even in the face of an AZ failure.

Have questions? Spotted an error in this post? Feel free to contact me via Twitter; all feedback is welcome.

UPDATE 13 September 2019: I’ve updated this post with some information from the CAPI v1alpha2 release.

VMworld 2019 Vendor Meeting: Lightbits Labs

Last week at VMworld, I had the opportunity to meet with Lightbits Labs, a relatively new startup working on what they called “disaggregated storage.” As it turns out, their product is actually quite interesting, and has relevance not only in “traditional” VMware vSphere environments but also in environments more focused on cloud-native technologies like Kubernetes.

So what is “disaggregated storage”? It’s one of the first questions I asked the Lightbits team. The basic premise behind Lightbits’ solution is that by taking the storage out of nodes—by decoupling storage from compute and memory—they can provide more efficient scaling. Frankly, it’s the same basic premise behind storage area networks (SANs), although I think Lightbits wants to distance themselves from that terminology.

Instead of Fibre Channel, Fibre Channel over Ethernet (FCoE), or iSCSI, Lightbits uses NVMe over TCP. This provides good performance over 25, 50, or 100Gbps links with low latency (typically less than 300 microseconds). Disks appear “local” to the node, which allows for some interesting concepts when used in conjunction with hyperconverged platforms (more on that in a moment).

Lightbits has their own operating system, LightOS, which runs on industry-standard x64 servers from Dell, HP, Lenovo, etc. To further enhance the performance of a server running LightOS (a “brick” in their terminology), Lightbits also offers a hardware acceleration card called the LightField. This card offers a number of benefits:

  • 100Gbps compression/decompression at wire speed
  • NVMe/TCP acceleration
  • Global FTL (Flash Translation Layer) acceleration
  • Encryption
  • Deduplication

Lightbits is currently working on certifying their NVMe/TCP implementation for vSphere, at which point vSphere users would be able to use Lightbits bricks as storage for vSphere hypervisors. The Lightbits folks also discussed the idea of using Lightbits with vSAN in what was called “disaggregated hyperconverged storage.” (How’s that for a marketing term!) In this arrangement, the disks used by vSAN would actually be volumes from a Lightbits brick (not local disks). It solves the storage portion of the basic problem behind hyperconverged platforms, which is the inability to independently scale compute, memory, and storage. I must admit, however, that it took a bit of head-scratching to understand what at first seems to be a counter-intuitive arrangement.

With regard to Kubernetes, Lightbits offers a CSI plugin, which allows Kubernetes clusters to directly access and provision Persistent Volumes (PVs) on a Lightbits brick. The Lightbits team stated that NVMe/TCP support is already present in the upstream Linux kernel, although I haven’t verified this and I don’t have any information as to what version of the kernel added this support.
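
To give a sense of what consuming Lightbits storage from Kubernetes might look like, here’s a purely illustrative StorageClass and PersistentVolumeClaim. The provisioner name and sizes are hypothetical placeholders, not Lightbits’ actual values; consult the Lightbits documentation for the real provisioner name and any supported parameters:

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: lightbits-nvme
provisioner: csi.example.lightbitslabs.com   # hypothetical provisioner name
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: data-volume
spec:
  accessModes:
  - ReadWriteOnce
  storageClassName: lightbits-nvme
  resources:
    requests:
      storage: 50Gi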

More information is available from the Lightbits Labs web site.

Feel free to hit me up on Twitter if you have any questions or comments.

Bootstrapping a Kubernetes Cluster on AWS with Cluster API

Yesterday I published a high-level overview of Cluster API (CAPI) that provides an introduction to some of the concepts and terminology in CAPI. In this post, I’d like to walk readers through actually using CAPI to bootstrap a Kubernetes cluster on AWS. This walkthrough is for the v1alpha1 release of CAPI (a walkthrough for CAPI v1alpha2 is coming).

It’s important to note that all of the information shared here is also found in the “Getting Started” guide in the AWS provider’s GitHub repository. My purpose here is to provide an additional walkthrough that supplements that official documentation, not to supplant it, and to spread the word about how the process works. As mentioned earlier, this is all based on the 0.3.x release of the AWS provider, which adheres to the v1alpha1 revision of CAPI.

Four basic steps are involved in bootstrapping a Kubernetes cluster on AWS using CAPI:

  1. Installing the necessary tools (a one-time task)
  2. Preparing the AWS account with the correct IAM roles and policies (this is a one-time task)
  3. Creating a management cluster (not required every single time)
  4. Creating a workload cluster

The following sections take a look at each of these steps in a bit more detail. First, though, I think it’s important to mention that CAPI is still in its early days (this post is based on v1alpha1). As such, it’s possible that commands may (will) change, and API specifications may (will) change as further development occurs. In fact, there are changes in the process between v1alpha1 (which is what this post is based upon) and the v1alpha2 release.

With that caveat/warning provided, let’s dig into the details.

Installing the Necessary Tools

There are (generally) three tools you’ll want to install on your local system:

  1. kind (available via GitHub)
  2. clusterawsadm and clusterctl for the CAPI AWS provider (available here; this post was written using the 0.3.7 release)
  3. kubectl (see instructions here for installing)

You’ll (generally) also want to have the AWS CLI installed and configured, although this isn’t a hard requirement.

Once all the necessary tools are installed, users can proceed with the next step of preparing their AWS account for CAPA.

Preparing the AWS Account

Before users can use CAPI to stand up clusters on AWS, the AWS account first has to be prepared with an appropriate set of IAM roles and policies. These IAM entities are needed in order to allow the bootstrap and management clusters to create resources on AWS (the bootstrap cluster will create resources consumed by the management cluster; the management cluster will create resources consumed by the workload clusters).

The Cluster API AWS Provider (referred to as CAPA) provides a binary tool that helps with this task, named clusterawsadm.

To prepare the account, you’d run clusterawsadm alpha bootstrap create-stack. This does assume that you have either a) an AWS CLI profile configured that sets AWS region and credentials, or b) set the appropriate environment variables (AWS_REGION, AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY, and AWS_SESSION_TOKEN if you are using multi-factor authentication). With one of those two conditions satisfied, clusterawsadm will generate a CloudFormation stack that creates all the necessary IAM roles and policies.

If you’re interested in having a deeper understanding of exactly what clusterawsadm is doing, you can instead run clusterawsadm alpha bootstrap generate-cloudformation <aws-account-id> (it appears that this portion of functionality from clusterawsadm does not respect/recognize AWS CLI profiles). This command will generate a YAML template for CloudFormation that, if applied, would create all the necessary IAM roles and policies.
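
Assuming the generated template is written to standard output, one way to put it to use would be to capture it in a file and then create the stack with the AWS CLI. This is just a sketch; the file and stack names are arbitrary:

clusterawsadm alpha bootstrap generate-cloudformation <aws-account-id> > bootstrap-iam.yaml
aws cloudformation create-stack \
  --stack-name capa-iam-bootstrap \
  --template-body file://bootstrap-iam.yaml \
  --capabilities CAPABILITY_IAM CAPABILITY_NAMED_IAM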

Users can create the roles and policies manually, but this isn’t generally recommended.

Creating a Management Cluster

To create any cluster using CAPA—even the management cluster—users first have to create the YAML manifests that describe the desired state of the cluster they’re creating. To make this process a little easier, the CAPA maintainers have created some templates and a shell script to build out manifests. In the “Getting Started” guide in the GitHub repository, the steps under “Generating cluster manifests and example cluster” walk users through downloading these templates and the associated shell script (appropriately named generate-yaml.sh).

Before users run generate-yaml.sh to create manifests for their management cluster, they can make life a bit easier for themselves by first declaring some environment variables. The shell script uses these environment variables to customize the manifests. Here are some variables to declare before running generate-yaml.sh (an example of setting them follows the list):

  • CLUSTER_NAME is the name users want to assign to the cluster (this must be unique within an AWS region)
  • AWS_REGION is, naturally, the AWS region in which the cluster should be created
  • SSH_KEY_NAME is the name of the AWS SSH keypair in the target region that users want injected into the EC2 instances (for SSH access)
  • CONTROL_PLANE_MACHINE_TYPE specifies the instance type used for the control plane nodes
  • NODE_MACHINE_TYPE provides the instance type used for the worker nodes in the cluster
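
Here’s a hypothetical example of declaring these variables; the values shown are placeholders, so substitute a cluster name, region, keypair, and instance types appropriate for your environment:

export CLUSTER_NAME="capi-test"
export AWS_REGION="us-west-2"
export SSH_KEY_NAME="my-aws-keypair"
export CONTROL_PLANE_MACHINE_TYPE="t3.medium"
export NODE_MACHINE_TYPE="t3.medium"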

Once these variables are defined, running generate-yaml.sh will generate a customized set of manifests into an out subdirectory of the current directory. In that out directory you’ll find a number of files:

  • cluster.yaml contains the definition of the cluster that will be created. This includes things like CIDR ranges and DNS domain.
  • controlplane-machine.yaml contains a specification for a non-redundant (single node) control plane.
  • controlplane-machines-ha.yaml contains a specification for a redundant (multi-node) control plane. This isn’t necessarily truly HA (highly available), though, as all the nodes are deployed into the same AWS AZ.
  • machine-deployment.yaml is for a MachineDeployment (a mechanism for running multiple nodes) for worker nodes.
  • machines.yaml contains specifications for worker nodes.
  • provider-components.yaml contains the CRDs for CAPI, as well as a Kubernetes Secret that contains the AWS credentials. (As such, treat this file with extreme care/caution.)
  • addons.yaml contains specifications for installing the CNI plugin (Calico by default).

If needed, users can further customize these files (like changing the version of Kubernetes they’re deploying, for example, but be aware that only specific versions are supported with CAPI).

Users also need to be aware that they don’t need all these files (users can just delete the ones they don’t need):

  • For the control plane, users should use either controlplane-machine.yaml or controlplane-machines-ha.yaml.
  • For the worker nodes, users should (at first) use either machine-deployment.yaml or machines.yaml.

Once the manifests are ready (they’ve been appropriately customized and unnecessary files removed), users are ready to create the management cluster with this command:

clusterctl create cluster -v 3 \
--bootstrap-type kind \
--provider aws \
-c ./out/cluster.yaml \
-m ./out/machines.yaml \
-p ./out/provider-components.yaml \
-a ./out/addons.yaml

This will take a fair amount of time to complete (I haven’t timed it exactly, but I’d say somewhere between 10 and 15 minutes). If you want more verbose output, increase the value of the -v parameter. Once it is finished, clusterctl will generate a Kubeconfig file in the current directory named (surprise, surprise) kubeconfig. Running kubectl --kubeconfig ./kubeconfig get nodes should report back on the status of the cluster. (I prefer to copy the file into ~/.kube and use ktx to switch between clusters.) Be aware that clusterctl will report “complete” before all the nodes in the cluster are finished provisioning.

Congratulations! You’ve established a management cluster, which is now prepared with CAPI components to provision and manage workload clusters. From this point on—aside from creating additional management clusters—users will use kubectl to interact with CAPI. If you need to provision additional management clusters, you would use this same process with clusterctl.

Creating a Workload Cluster

Once a management cluster has been established, users can use CAPI via that management cluster to instantiate one or more workload clusters. Overall, the process for creating a workload cluster is very simple once you have a management cluster up and running:

  1. Generate the YAML manifests that describe the workload cluster you want instantiated.
  2. Use kubectl to apply those manifests to the management cluster.

Users can use the same generate-yaml.sh script and associated environment variables, or users can copy the YAML files for the management cluster and edit them manually. (I’m working on a post about using kustomize to help with this process.)

Once the manifests are ready, users just use kubectl like this (users should substitute the correct filename and path in the commands below, of course):

  1. To create the cluster structure and configuration, run kubectl apply -f cluster.yaml.
  2. To create the control plane, run kubectl apply -f controlplane.yaml.
  3. To install a CNI, follow the instructions for that particular CNI.
  4. To create the worker nodes, run kubectl apply -f machines.yaml.

At each step, CAPA (the AWS provider for CAPI) will create and configure the required AWS resources (VPCs, subnets, load balancers, instances, security groups, etc.). Once the process is finished, there will be a Kubeconfig file named kubeconfig in the current directory that users can use to access this new workload cluster. Remember that the steps above should be performed against the management cluster. This does seem a bit odd, but just remember that it is the CAPI controllers in the management cluster that will create the resources and elements for the workload cluster.

And that’s it—you’ve created a management cluster on AWS, and then used that management cluster to bootstrap a Kubernetes workload cluster on AWS. You can now use that management cluster to stamp out as many additional workload clusters as needed. The nice thing about using CAPA is that it handles all the AWS-specific details needed to get the AWS cloud provider working (all the stuff described here).

Got questions, comments, or corrections? I’d love to hear them. Hit me up on Twitter and let me know. Looking for a walkthrough using v1alpha2 (a newer release) of CAPI? It will be available soon.

UPDATE 13 September 2019: I’ve updated this post to make it clearer it is based on CAPI v1alpha1, and that separate content will be generated for CAPI v1alpha2.

Recent Posts

An Introduction to Kubernetes Cluster API

In this post, I’d like to provide a high-level introduction to the Kubernetes Cluster API. The aim of Cluster API (CAPI, for short) is, as outlined in the project’s GitHub repository, “a Kubernetes project to bring declarative, Kubernetes-style APIs to cluster creation, configuration, and management”. This high-level introduction serves to establish some core terminology and concepts upon which I’ll build in future posts about CAPI.

Read more...

Liveblog: VMworld 2019 Day 1 General Session

This is the liveblog from the day 1 general session at VMworld 2019. This year the event is back at Moscone Center in San Francisco, and VMware has already released some juicy news (see here, here, and here) in advance of the keynote this morning, foreshadowing what Pat is expected to talk about.

Read more...

Technology Short Take 118

Welcome to Technology Short Take #118! Next week is VMworld US in San Francisco, CA, and I’ll be there live-blogging and meeting up with folks to discuss all things Kubernetes. If you’re going to be there, look me up! Otherwise, I leave you with this list of links and articles from around the Internet to keep you busy. Enjoy!

Read more...

Creating Tagged Subnets Across AWS AZs Using Pulumi

As I mentioned back in May in this post on creating a sandbox for learning Pulumi, I’ve started using Pulumi for more and more of my infrastructure-as-code needs. I did switch from JavaScript to TypeScript (which I know compiles to JavaScript on the back-end, but the strong typing helps a new programmer like me). Recently I had a need to create some resources in AWS using Pulumi, and—for reasons I’ll explain shortly—many of the “canned” Pulumi examples didn’t cut it for my use case. In this post, I’ll share how I created tagged subnets across AWS availability zones (AZs) using Pulumi.

Read more...

Reconstructing the Join Command for Kubeadm

If you’ve used kubeadm to bootstrap a Kubernetes cluster, you probably know that at the end of the kubeadm init command to bootstrap the first node in the cluster, kubeadm prints out a bunch of information: how to copy over the admin Kubeconfig file, and how to join both control plane nodes and worker nodes to the cluster you just created. But what if you didn’t write these values down after the first kubeadm init command? How does one go about reconstructing the proper kubeadm join command?

Read more...

Setting up an AWS-Integrated Kubernetes 1.15 Cluster with Kubeadm

In this post, I’d like to walk through setting up an AWS-integrated Kubernetes 1.15 cluster using kubeadm. Over the last year or so, the power and utility of kubeadm has vastly improved (thank you to all the contributors who have spent countless hours!), and it is now—in my opinion, at least—at a point where setting up a well-configured, highly available Kubernetes cluster is pretty straightforward.

Read more...

Converting Kubernetes to an HA Control Plane

While hanging out in the Kubernetes Slack community, one question I’ve seen asked multiple times involves switching a Kubernetes cluster from a non-HA control plane (single control plane node) to an HA control plane (multiple control plane nodes). As far as I am aware, this isn’t documented upstream, so I thought I’d walk readers through what this process looks like.

Read more...

Technology Short Take 117

Welcome to Technology Short Take #117! Here’s my latest gathering of links and articles from around the World Wide Web (an “old school” reference for you right there). I’ve got a little bit of something for most everyone, except for the storage nerds (I’m leaving that to my friend J Metz this time around). Here’s hoping you find something useful!

Read more...

Accessing the Docker Daemon via an SSH Bastion Host

Today I came across this article, which informed me that (as of the 18.09 release) you can use SSH to connect to a Docker daemon remotely. That’s handy! The article uses docker-machine (a useful but underrated tool, I think) to demonstrate, but the first question in my mind was this: can I do this through an SSH bastion host? Read on for the answer.

Read more...

Decoding a Kubernetes Service Account Token

Recently, while troubleshooting a separate issue, I had a need to get more information about the token used by Kubernetes Service Accounts. In this post, I’ll share a quick command-line that can fully decode a Service Account token.

Read more...

Adding a Name to the Kubernetes API Server Certificate

In this post, I’m going to walk you through how to add a name (specifically, a Subject Alternative Name) to the TLS certificate used by the Kubernetes API server. Updating the certificate to include a name that wasn’t originally included could be useful in a few different scenarios. A couple of situations come to mind, such as adding a load balancer in front of the control plane, or using a new or different URL/hostname to access the API server (both situations taking place after the cluster was bootstrapped).

Read more...

VMworld 2019 Prayer Time

For the last several years, I’ve organized a brief morning prayer time at VMworld. I didn’t attend the conference last year, but organized a prayer time nevertheless (and was able to join one morning for prayer). This year, now that I’m back at VMware (via the Heptio acquisition) and speaking at the conference, I’d once again like to coordinate a time for believers to meet. So, if you’re a Christian interested in gathering together with other Christians for a brief time of prayer, here are the details.

Read more...

Spousetivities at VMworld 2019

This year VMworld—VMware’s annual user conference—moves back to San Francisco from Las Vegas. Returning to the Bay Area with VMworld is Spousetivities, which is happening at VMworld for the 11th year. Better get your tickets sooner rather than later; there’s quite a good chance these activities will sell out!

Read more...

Calculating the CA Certificate Hash for Kubeadm

When using kubeadm to set up a new Kubernetes cluster, the output of the kubeadm init command that sets up the control plane for the first time contains some important information on joining additional nodes to the cluster. One piece of information in there that (until now) I hadn’t figured out how to replicate was the CA certificate hash. (Primarily I hadn’t figured it out because I hadn’t tried.) In this post, I’ll share how to calculate the CA certificate hash for kubeadm to use when joining additional nodes to an existing cluster.

Read more...

Building Jsonnet from Source

I recently decided to start working with jsonnet, a data templating language and associated command-line interface (CLI) tool for manipulating and/or generating various data formats (like JSON, YAML, or other formats; see the Jsonnet web site for more information). However, I found that there are no prebuilt binaries for jsonnet (at least, not that I could find), and so I thought I’d share here the process for building jsonnet from source. It’s not hard or complicated, but hopefully sharing this information will streamline the process for others.

Read more...

Older Posts

Find more posts by browsing the post categories, content tags, or site archives pages. Thanks for visiting!