Scott's Weblog
The weblog of an IT pro focusing on cloud computing, Kubernetes, Linux, containers, and networking

Examining X.509 Certificates Embedded in Kubeconfig Files

While exploring some of the intricacies around the use of X.509v3 certificates in Kubernetes, I found myself wanting to be able to view the details of a certificate embedded in a kubeconfig file. (See this page if you’re unfamiliar with what a kubeconfig file is.) In this post, I’ll share with you the commands I used to accomplish this task.

First, you'll want to extract the certificate data from the kubeconfig file. For the purposes of this post, I'll use a kubeconfig file named config, located in the .kube subdirectory of your home directory. Assuming there's only a single certificate embedded in the file, you can use a simple grep command to isolate this information:

grep 'client-certificate-data' $HOME/.kube/config

Combine that with awk to isolate only the certificate data:

grep 'client-certificate-data' $HOME/.kube/config | awk '{print $2}'

This data is Base64-encoded, so the next step is to decode it (I'll wrap the command with backslashes for readability now that it's grown a bit longer):

grep 'client-certificate-data' $HOME/.kube/config | \
awk '{print $2}' | base64 -d

You could, at this stage, redirect the output into a file (like certificate.crt) if so desired; the data you have is a valid X.509v3 certificate. It lacks the private key, of course.
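
For example, saving the decoded certificate to a file for later inspection might look something like this (the filename is arbitrary):

grep 'client-certificate-data' $HOME/.kube/config | \
awk '{print $2}' | base64 -d > certificate.crt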

However, if you’re only interested in viewing the properties of the certificate, as I was, there’s no need to redirect the output to a file. Instead, just pipe the output into openssl:

grep 'client-certificate-data' $HOME/.kube/config | \
awk '{print $2}' | base64 -d | openssl x509 -text

The output of this command should be a decoded breakdown of the data in the X.509 certificate. A notable piece of information in this context is the Subject, which identifies the user being authenticated to Kubernetes with this certificate:

Certificate:
    Data:
        Version: 3 (0x2)
        Serial Number: 8264125584782928183 (0x72b0126f24342937)
    Signature Algorithm: sha256WithRSAEncryption
        Issuer: CN=kubernetes
        Validity
            Not Before: Jun 13 01:52:46 2018 GMT
            Not After : Jun 13 01:53:17 2019 GMT
        Subject: CN=system:kube-controller-manager

Also of interest is the X509v3 Extended Key Usage (which indicates the certificate is used for client authentication, i.e., “TLS Web Client Authentication”):

        X509v3 extensions:
            X509v3 Key Usage: critical
                Digital Signature, Key Encipherment
            X509v3 Extended Key Usage: 
                TLS Web Client Authentication

Note that this certificate isn't what encrypts the connection to Kubernetes; that function is handled by a different certificate (the API server's serving certificate), and this one is used only for client authentication.
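
If you're only after a specific field or two rather than the full text dump, openssl x509 can print just those pieces; here's a quick example using a couple of its standard flags:

grep 'client-certificate-data' $HOME/.kube/config | \
awk '{print $2}' | base64 -d | \
openssl x509 -noout -subject -dates

This prints only the Subject and the validity dates, which covers the common case of checking who a certificate identifies and whether it has expired.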

This is probably nothing new for experienced Kubernetes folks, but I thought it might prove useful to a few people out there. Feel free to hit me up on Twitter with any corrections, clarifications, or questions. Have fun examining certificate data!

Using Variables in AWS Tags with Terraform

I've been working to deepen my Terraform skills recently, and one avenue I've been using to help in this area is expanding my use of Terraform modules. If you're unfamiliar with the idea of Terraform modules, you can liken them to Ansible roles: a re-usable, heavily parameterized abstraction that can be called/invoked as needed. Recently I wanted to add support for tagging AWS instances in a module I was building, and I found out that you can't use variable interpolation in the normal way for AWS tag keys. Here's a workaround I found in my research and testing.

Normally, variable interpolation in Terraform would allow one to do something like this (this is taken from the aws_instance resource):

tags {
    Name = "${var.name}-${count.index}"
    role = "${var.role}"
}

This approach works, creating tags whose keys are “Name” and “role” and whose values are the interpolated variables. (I am, in fact, using this exact snippet of code in some of my Terraform modules.) Given that this works, I decided to extend it in a way that would allow the code calling the module to supply both the key as well as the value, thus providing more flexibility in the module. I arrived at this snippet:

tags {
    Name = "${var.name}-${count.index}"
    role = "${var.role}"
    "${var.opt_tag_name}" = "${var.opt_tag_value}"
}

The idea here is that the opt_tag_name variable contains a tag key, and the opt_tag_value contains the associated tag value.

Unfortunately, this doesn't work: instead of interpolating the value of opt_tag_name as the tag key, the string is applied literally, and the value isn't interpolated at all (it comes through blank). I'm not really sure why this is the case, but after some searching I came across this GitHub issue that provides a workaround.

The workaround has two parts:

  1. A local definition
  2. Mapping the tags into the aws_instance resource

The local definition looks like this:

locals {
    common_tags = "${map(
        "${var.opt_tag_name}", "${var.opt_tag_value}",
        "role", "${var.role}"
    )}"
}

This sets up a "local value" within the module (it's scoped to the module); in this case, it's a map of keys and values. One key is determined by interpolating the opt_tag_name variable, and its value comes from the opt_tag_value variable. The second key is "role", and its value comes from interpolating the role variable.
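
For reference, the corresponding variable declarations inside the module might look something like this (the descriptions and defaults are purely illustrative):

variable "opt_tag_name" {
    description = "Key for an optional tag applied to the instances"
    default     = "environment"
}

variable "opt_tag_value" {
    description = "Value for the optional tag"
    default     = "testing"
}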

The second step references this local definition. In the aws_instance resource itself, you’ll reference the local definition along with any additional tags like this:

tags = "${merge(
    local.common_tags,
    map(
        "Name", "${var.name}-${count.index}"
    )
)}"

This snippet of code sets up a new map that defines the "Name" tag, whose value is taken from the name variable and suffixed with the instance's index (count.index, derived from the number of instances being created). It then uses the merge function to create a union of the two maps, which the AWS provider uses to set the tags on the AWS instance.

Taken together, this allows me to provide both the tag name and the tag value to the module, which will then pass those along to the AWS instances created by the module. Now I just need to figure out how to make this truly optional, so that the module doesn't try to create the tag if no values are passed to the module. I haven't figured that part out (yet).
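
As a quick illustration of providing these values, a call to the module might look something like this (the module source path and the specific tag key/value here are just placeholders):

module "instances" {
    source        = "./modules/aws-instance"
    name          = "web"
    role          = "webserver"
    opt_tag_name  = "environment"
    opt_tag_value = "testing"
}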

More Resources

Here’s the Terraform documentation on modules, though—to be honest—I haven’t found it to be as helpful as I’d like.

A Quadruple-Provider Vagrant Environment

In October 2016 I wrote about a triple-provider Vagrant environment I’d created that worked with VirtualBox, AWS, and the VMware provider (tested with VMware Fusion). Since that time, I’ve incorporated Linux (Fedora, specifically) into my computing landscape, and I started using the Libvirt provider for Vagrant (see my write-up here). With that in mind, I updated the triple-provider environment to add support for Libvirt and make it a quadruple-provider environment.

To set expectations, I’ll start out by saying there isn’t a whole lot here that is dramatically different than the triple-provider setup that I shared back in October 2016. Obviously, it supports more providers, and I’ve improved the setup so that no changes to the Vagrantfile are needed (everything is parameterized).

With that in mind, let’s take a closer look. First, let’s look at the Vagrantfile itself:

# Specify minimum Vagrant version and Vagrant API version
Vagrant.require_version '>= 1.6.0'
VAGRANTFILE_API_VERSION = '2'

# Require 'yaml' module
require 'yaml'

# Read YAML file with VM details (box, CPU, and RAM)
machines = YAML.load_file(File.join(File.dirname(__FILE__), 'machines.yml'))

# Create and configure the VMs
Vagrant.configure(VAGRANTFILE_API_VERSION) do |config|

  # Always use Vagrant's default insecure key
  config.ssh.insert_key = false

  # Iterate through entries in YAML file to create VMs
  machines.each do |machine|

    # Configure the AWS provider
    config.vm.provider 'aws' do |aws|

      # Specify default AWS key pair
      aws.keypair_name = machine['aws']['keypair']

      # Specify default region
      aws.region = machine['aws']['region']
    end # config.vm.provider 'aws'

    config.vm.define machine['name'] do |srv|

      # Don't check for box updates
      srv.vm.box_check_update = false

      # Set machine's hostname
      srv.vm.hostname = machine['name']

      # Use dummy AWS box by default (override per-provider)
      srv.vm.box = 'aws-dummy'

      # Configure default synced folder (disable by default)
      if machine['sync_disabled'] != nil
        srv.vm.synced_folder '.', '/vagrant', disabled: machine['sync_disabled']
      else
        srv.vm.synced_folder '.', '/vagrant', disabled: true
      end #if machine['sync_disabled']

      # Iterate through networks as per settings in machines.yml
      machine['nics'].each do |net|
        if net['ip_addr'] == 'dhcp'
          srv.vm.network net['type'], type: net['ip_addr']
        else
          srv.vm.network net['type'], ip: net['ip_addr']
        end # if net['ip_addr']
      end # machine['nics'].each

      # Configure CPU & RAM per settings in machines.yml (Fusion)
      srv.vm.provider 'vmware_fusion' do |vmw, override|
        vmw.vmx['memsize'] = machine['ram']
        vmw.vmx['numvcpus'] = machine['vcpu']
        override.vm.box = machine['box']['vmw']
        if machine['nested'] == true
          vmw.vmx['vhv.enable'] = 'TRUE'
        end #if machine['nested']
      end # srv.vm.provider 'vmware_fusion'

      # Configure CPU & RAM per settings in machines.yml (VirtualBox)
      srv.vm.provider 'virtualbox' do |vb, override|
        vb.memory = machine['ram']
        vb.cpus = machine['vcpu']
        override.vm.box = machine['box']['vb']
        vb.customize ['modifyvm', :id, '--nictype1', 'virtio']
        vb.customize ['modifyvm', :id, '--nictype2', 'virtio']
      end # srv.vm.provider 'virtualbox'

      # Configure CPU & RAM per settings in machines.yml (Libvirt)
      srv.vm.provider 'libvirt' do |lv, override|
        lv.memory = machine['ram']
        lv.cpus = machine['vcpu']
        override.vm.box = machine['box']['lv']
        if machine['nested'] == true
          lv.nested = true
        end # if machine['nested']
      end # srv.vm.provider 'libvirt'

      # Configure per-machine AWS provider/instance overrides
      srv.vm.provider 'aws' do |aws, override|
        override.ssh.private_key_path = machine['aws']['key_path']
        override.ssh.username = machine['aws']['user']
        aws.instance_type = machine['aws']['type']
        aws.ami = machine['box']['aws']
        aws.security_groups = machine['aws']['security_groups']
      end # srv.vm.provider 'aws'
    end # config.vm.define
  end # machines.each
end # Vagrant.configure

A couple of notes about the above Vagrantfile:

  • All the data is pulled from an external YAML file named machines.yml; more information on that shortly.
  • The “magic,” if you will, is in the provider overrides. HashiCorp recommends against provider overrides, but in my experience they’re a necessity when working with multi-provider setups. Within each provider override block, we set provider-specific details and adjust the box needed (because finding boxes that support multiple platforms is downright impossible in many cases).
  • The machine['nics'].each do |net| section works for the local virtualization providers (VirtualBox, VMware, and Libvirt) but is silently ignored for AWS. That made writing the Vagrantfile much easier, in my opinion. Note that the last time I really tested the Libvirt provider there was some weirdness with the network configuration; the configuration shown above works as expected, but other configurations may not.

Now, let’s look at the external YAML data file that feeds Vagrant the information it needs:

- aws:
    type: "t2.medium"
    user: "ubuntu"
    key_path: "~/.ssh/id_rsa"
    security_groups:
      - "default"
      - "test"
    keypair: "ssh_keypair"
    region: "us-west-2"
  box:
    aws: "ami-db710fa3"
    lv: "generic/ubuntu1604"
    vb: "ubuntu/xenial64"
    vmw: "bento/ubuntu-16.04"
  name: "xenial-01"
  nested: false
  nics:
    - type: "private_network"
      ip_addr: "dhcp"
  ram: "512"
  sync_disabled: true
  vcpu: "1"

This is pretty straightforward YAML. This configuration does support multiple VMs/instances, with one interesting twist: when working with multiple AWS instances, you only need to specify the AWS keypair and AWS region on the last instance defined in the YAML file. You can include them for all instances if you like, but only the values on the last instance will actually apply. I may toy around with supporting multi-region configurations, but that is kind of far down my priority list. The other AWS-specific values (type, user, and path to private key) need to be specified for all instances in the YAML file.

To use this environment, you only need to edit the external YAML file with the appropriate values and make sure authentication against AWS is working as expected. I recommend installing and configuring the AWS CLI to verify this; alternately, you could use something like aws-vault.
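
For example, something along these lines is usually enough to confirm your credentials are in place (assuming the AWS CLI is installed):

aws configure
aws sts get-caller-identity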

Then it’s just a matter of running the appropriate command for your particular environment:

vagrant up --provider=aws (to spin up instances on AWS)
vagrant up --provider=virtualbox (to spin up VirtualBox VMs locally)
vagrant up --provider=vmware_fusion (to use Fusion to create local VMs)
vagrant up --provider=libvirt (to create Libvirt guest domains locally)

Using this sort of technique to support multiple providers in a single Vagrant environment provides a clean, consistent workflow regardless of backend provider. Naturally, this could be extended to include other providers using the same basic techniques I've used here; I'll leave that as an exercise for the reader.

My Use Case

You might be wondering, “Why did you put effort into this?” It’s pretty simple, really. I’m working on a project where I needed to be able to quickly and easily spin up a few instances on AWS. I felt like Terraform was a bit too “heavy” for this, as all I really needed was the ability to launch an instance or two, interact with the instances, then tear them down. Yes, I could have done this with the AWS CLI, but…really? I knew that Vagrant worked with AWS, and I already use Vagrant for other purposes. It seemed pretty natural to incorporate the AWS support in Vagrant into my existing environments, and this quadruple-provider environment was the result. Enjoy!

Technology Short Take 101

Welcome to Technology Short Take #101! I have (hopefully) crafted an interesting and varied collection of links for you today, spanning all the major areas of modern data center technology. Now you have some reading material for this weekend!

Networking

Servers/Hardware

  • AWS adds local NVMe storage to the M5 instance family; more details here. What I found interesting is that the local NVMe storage is also hardware encrypted. AWS also mentions that these M5d instances are powered by (in their words) “Custom Intel Xeon Platinum” processors, which just goes to confirm the long-known fact that AWS is leveraging custom Intel CPUs in their stuff (as are all the major cloud providers, I’m sure).

Security

Cloud Computing/Cloud Management

Operating Systems/Applications

  • Speaking of EKS, here’s a new command-line interface for EKS, courtesy of Weaveworks.
  • Along with the GA of EKS, HashiCorp has a release of the Terraform AWS provider that has EKS support. More details are available here.
  • Google recently announced kustomize, a tool that provides a new approach to customizing Kubernetes object configuration.
  • Following after a recent post involving parsing AWS instance data with jq, a Twitter follower pointed out jid (the JSON incremental digger). Handy tool!
  • I’m seeing a fair amount of attention on podman, a tool primarily backed by Red Hat that aims to replace Docker as the client-side tool of choice. The latest was this post. Anyone else spent any quality time with this tool and have some feedback?
  • Nick Janetakis has a collection of quick “Docker tips” that you may find useful; the latest one shows how to see all your container’s environment variables.

Storage

  • This is an interesting announcement from a few weeks ago that I missed—Dell EMC will offer Isilon on Google Cloud Platform. See Chris Evans’ article here. (I’ll leave the analysis and pontificating to Chris, who’s much better at it than I am.)

Virtualization

Career/Soft Skills

  • I found this two-part series (part 1, part 2) on how to process information so you can get organized to be an interesting (and quick) read. It's been my experience that improving your organizational skills often reaps benefits in other areas.

OK, that’s all this time around. I hope you found something useful in this post. As always, your feedback is welcome; feel free to hit me up on Twitter.

Exploring Kubernetes with Kubeadm, Part 1: Introduction

I recently started using kubeadm more extensively than I had in the past, making it the primary tool by which I stand up Kubernetes clusters. As part of this process, I also discovered the kubeadm alpha phase subcommand, which exposes the different sections (phases) of the process that kubeadm init follows when bootstrapping a cluster. In this blog post, I'd like to kick off a series of posts that explore how one could use the kubeadm alpha phase command to better understand the different components within Kubernetes, the relationships between components, and some of the configuration items involved.

Before I go any further, I’d like to point readers to this URL that provides an overview of kubeadm and using it to bootstrap a cluster. If you’re new to kubeadm, go read that before continuing on here.

Quick side note: it's my understanding that at some point the intent is to move kubeadm alpha phase out of alpha, at which point the command might look more like kubeadm phase or similar (that hasn't been fully determined yet as far as I know). If you're reading this at some point in the future, just make note that this was written back in the Kubernetes 1.10 timeframe when this was still an alpha feature.

Done? OK, let's proceed. At this point, you should know that kubeadm init is a simple and pretty straightforward way to set up a minimally viable Kubernetes cluster. This is all well and good if that is your goal. If, on the other hand, your goal is to better understand Kubernetes, then this is where I recommend spending some time with the kubeadm alpha phase subcommands. Why? Instead of automating the entire process, these subcommands let you look at the individual steps and the configuration artifacts each step produces. Want to see how the API server is configured? You can look at the static Pod manifest that kubeadm generates. Want to see how certificates are used by Kubernetes to secure communications between the different components? This, too, is visible via the kubeadm alpha phase subcommands. In my opinion, this makes kubeadm alpha phase a valuable learning tool for those seeking to deepen their knowledge of Kubernetes.
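
As a quick taste of what's to come, here's roughly what invoking a couple of these phases looks like (the exact subcommand names vary somewhat between kubeadm releases, so treat these as illustrative):

# List the available phases and their subcommands
kubeadm alpha phase --help

# Generate just the PKI assets (certificates and keys)
kubeadm alpha phase certs all

# Generate only the static Pod manifests for the control plane components
kubeadm alpha phase controlplane all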

Currently, kubeadm is only available for Linux. However, even if you're working on a Linux-based desktop (as I am, for example; I run Fedora 27 as my primary OS), the best place to experiment with kubeadm alpha phase is in a Linux VM. Why? Some of the paths where kubeadm will output files (certificates, Pod manifests, etc.) are hard-coded within kubeadm itself, and you may not have (or want to create) that directory structure on your Linux desktop. Fortunately, it's easy to spin up a Linux VM using any number of tools (Vagrant comes to mind for me), and installing kubeadm is also straightforward (see these instructions for more details).
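
For example, getting a throwaway VM with Vagrant is about as simple as it gets (the box name below is just an example):

vagrant init bento/ubuntu-16.04
vagrant up
vagrant ssh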

In future parts of this series, I’ll dig into specific sections of the kubeadm alpha phase subcommands and review what the subcommand is doing, what some of the configuration artifacts are, and explore how this fits into Kubernetes as a whole. I’ll update this post with links to subsequent posts as the series evolves.

Along the way, if you find anything incorrect—I’m human and will make mistakes—feel free to hit me up on Twitter or submit a pull request fixing the error. This will help make this resource more useful for everyone over time.

Recent Posts

Book Review: Infrastructure as Code

As part of my 2018 projects, I committed to reading and reviewing more technical books this year. As part of that effort, I recently finished reading Infrastructure as Code, authored by Kief Morris and published in September 2015 by O’Reilly (more details here). Infrastructure as code is very relevant to my current job function and is an area of great personal interest, and I’d been half-heartedly working my way through the book for some time. Now that I’ve completed it, here are my thoughts.


Technology Short Take 100

Wow! This marks 100 posts in the Technology Short Take series! For almost eight years (Technology Short Take #1 was published in August 2010), I’ve been collecting and sharing links and articles from around the web related to major data center technologies. Time really flies when you’re having fun! Anyway, here is Technology Short Take 100…I hope you enjoy!


Quick Post: Parsing AWS Instance Data with JQ

I recently had a need to get a specific subset of information about some AWS instances. Naturally, I turned to the CLI and some CLI tools to help. In this post, I’ll share the command I used to parse the AWS instance data down using the ever-so-handy jq tool.


Posts from the Past, May 2018

This month (May 2018) marks thirteen years that I've been generating content here on this site. It's been a phenomenal thirteen years, and I've enjoyed the opportunity to share information with readers around the world. To celebrate, I thought I'd do a quick "Posts from the Past" and highlight some content from previous years. Enjoy!


DockerCon SF 18 and Spousetivities

DockerCon SF 18 is set to kick off in San Francisco at the Moscone Center from June 12 to June 15. This marks the return of DockerCon to San Francisco after being held in other venues for the last couple of years. Also returning to San Francisco is Spousetivities, which has organized activities for spouses, significant others/domestic partners, friends, and family members traveling with conference attendees!


Manually Installing Firefox 60 on Fedora 27

Mozilla recently released version 60 of Firefox, which contains a number of pretty important enhancements (as outlined here). However, the Fedora repositories don’t (yet) contain Firefox 60 (at least not for Fedora 27), so you can’t just do a dnf update to get the latest release. With that in mind, here are some instructions for manually installing Firefox 60 on Fedora 27.


One Week Until Spousetivities in Vancouver

Only one week remains until Spousetivities kicks off in Vancouver at the OpenStack Summit! If you are traveling to the Summit with a spouse, significant other, family member, or friend, I’d encourage you to take a look at the great activities Crystal has arranged during the Summit.


Technology Short Take 99

Welcome to Technology Short Take 99! What follows below is a collection of various links and articles about (mostly) data center-related technologies. Hopefully something I’ve included will be useful. Here goes!


Installing GitKraken on Fedora 27

GitKraken is a full-featured graphical Git client with support for multiple platforms. Given that I’m trying to live a multi-platform life, it made sense for me to give this a try and see whether it is worth making part of my (evolving and updated) multi-platform toolbelt. Along the way, though, I found that GitKraken doesn’t provide an RPM package for Fedora, and that the installation isn’t as straightforward as one might hope. I’m documenting the procedure here in the hope of helping others.


An Updated Look at My Multi-Platform Toolbelt

In early 2017 I posted about my (evolving) multi-platform toolbelt, describing some of the applications, standards, and services that I use across my Linux and macOS systems. In this post, I’d like to provide an updated review of that toolbelt.


Technology Short Take 98

Welcome to Technology Short Take #98! Now that I’m starting to get settled into my new role at Heptio, I’ve managed to find some time to pull together another collection of links and articles pertaining to various data center technologies. Feedback is always welcome!


List of Kubernetes Folks on Twitter

Earlier this morning, I asked on Twitter about good individuals to follow on Twitter for Kubernetes information. I received quite a few good responses (thank you!), and I thought it might be useful to share the list of the folks that were recommended across all those responses.


Review: Lenovo ThinkPad X1 Carbon

As part of the transition into my new role at Heptio (see here for more information), I had to select a new corporate laptop. Given that my last attempt at running Linux full-time was thwarted due primarily to work-specific collaboration issues that would no longer apply (see here), and given that other members of my team (the Field Engineering team) are also running Linux full-time, I thought I’d give it another go. Accordingly, I’ve started working on a Lenovo ThinkPad X1 Carbon (5th generation). Here are my thoughts on this laptop.


The Future is Containerized

Last week I announced my departure from VMware, and my intention to step away from VMware’s products and platforms to focus on a new technology area moving forward. Today marks the “official” start of a journey that’s been building for a couple years, a journey that will take me into a future that’s containerized. That journey starts in Seattle, Washington.


Technology Short Take 97

Welcome to Technology Short Take 97! This Tech Short Take marks the end of an era (sort of); it’s the last Tech Short Take published while I’m a VMware employee (today is my last day; see here for more details). But enough about me—let’s talk some tech! This Short Take may be a bit longer than some, so buckle up.


Older Posts

Find more posts by browsing the post categories, content tags, or site archives pages. Thanks for visiting!